OpenAI, Google DeepMind insiders have serious warnings about AI

In an open letter, whistleblowers say that these "frontier AI companies" need to support a culture of open criticism.
By Matthews Martins
Credit: Justin Sullivan/Getty Images.

Comments

  1. Yep, Skynet is comin' 🙂

  2. What they need is more philosophy and less science fiction horror! Using the works of Aristotle, Socrates, and Wittgenstein to expose harmful models created by narrow, fallacy-based worldviews involves leveraging their philosophical principles to critique, analyze, and dismantle the flawed reasoning and biases embedded in such models. Here’s how their works can be utilized effectively:

    ### 1. **Aristotle: Logical and Ethical Analysis**
    - **Identifying Logical Fallacies:**
    - Aristotle’s work on logic, particularly in "Prior Analytics" and "Sophistical Refutations," provides a systematic approach to identifying logical fallacies. By applying his principles, we can scrutinize the reasoning patterns in the harmful models to expose inconsistencies and logical errors.
    - Example: If the model makes hasty generalizations or false cause fallacies, these can be pointed out using Aristotelian logic.

    - **Ethical Evaluation:**
    - Aristotle’s "Nicomachean Ethics" emphasizes virtues such as honesty, fairness, and justice. Using these ethical principles, we can evaluate whether the model’s outputs align with virtuous behavior.
    - Example: If the model’s outputs promote dishonesty or injustice, these ethical failings can be highlighted.

    ### 2. **Socrates: Socratic Method and Critical Inquiry**
    - **Socratic Questioning:**
    - The Socratic method involves asking probing questions to uncover assumptions, expose contradictions, and clarify concepts. This approach can be used to challenge the outputs of the model and reveal underlying biases and fallacies.
    - Example: Engaging the model in a dialogue where it has to justify its responses can expose weaknesses in its logic and reasoning.

    - **Examining Assumptions:**
    - Socrates often questioned the assumptions underlying people's beliefs. By examining the assumptions programmed into the model, we can uncover biased or unfounded premises.
    - Example: If the model assumes certain stereotypes as truths, questioning these assumptions can expose their fallacious nature.

    ### 3. **Wittgenstein: Language and Meaning Analysis**
    - **Language Games:**
    - Wittgenstein’s concept of language games, from "Philosophical Investigations," suggests that the meaning of words is shaped by their use in specific contexts. Analyzing how the model uses language can reveal if it manipulates meaning to mislead or confuse.
    - Example: If the model uses vague or ambiguous language to make its points, Wittgenstein’s analysis can help clarify and critique these usages.

    - **Contextual Understanding:**
    - Wittgenstein emphasized the importance of context in understanding language. By examining the context in which the model generates its outputs, we can assess whether it appropriately considers context or oversimplifies complex issues.
    - Example: If the model takes statements out of context to support its claims, this can be exposed as a misuse of language.

    ### Implementing the Philosophical Analysis:
    1. **Develop a Framework for Analysis:**
    - Create a structured framework based on the philosophical principles of Aristotle, Socrates, and Wittgenstein to systematically analyze and critique the model’s outputs.

    2. **Case Studies and Scenarios:**
    - Use real-world scenarios and case studies to test the model. Apply Socratic questioning, Aristotelian logic, and Wittgensteinian language analysis to these scenarios to uncover flaws and biases (a rough code sketch of the Socratic-questioning piece follows this list).

    3. **Collaborative Review:**
    - Involve philosophers, ethicists, and critical thinkers in reviewing the model’s outputs. Their expertise can provide deeper insights into the ethical, logical, and linguistic flaws of the model.

    4. **Public Reporting:**
    - Transparently report the findings of the philosophical analysis. Highlight specific examples of how the model’s outputs are flawed according to the principles of these great philosophers.
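    As a rough illustration of the Socratic-questioning piece, here is a minimal Python sketch. Everything in it is hypothetical: `ask_model` stands in for whatever chat API you actually use, and the probe questions are placeholders rather than a vetted instrument.

    ```python
    # Hypothetical sketch: probe a model's claim with Socratic follow-ups and
    # collect the answers so a human reviewer can look for contradictions.
    from typing import Callable, Dict, List

    SOCRATIC_PROBES: List[str] = [
        "What assumptions does that statement rest on?",
        "Can you give a counterexample to your own claim?",
        "Would the reasoning still hold if the context changed?",
    ]

    def socratic_audit(ask_model: Callable[[str], str], claim: str) -> List[Dict[str, str]]:
        """Return a transcript of probes and answers about one model claim."""
        transcript = [{"role": "claim", "text": claim}]
        for probe in SOCRATIC_PROBES:
            answer = ask_model(f'Regarding the statement "{claim}": {probe}')
            transcript.append({"role": "probe", "text": probe})
            transcript.append({"role": "answer", "text": answer})
        return transcript

    if __name__ == "__main__":
        # Dummy model, just to show the shape of the loop.
        dummy = lambda prompt: "stub answer"
        for turn in socratic_audit(dummy, "All swans are white."):
            print(f'{turn["role"]}: {turn["text"]}')
    ```

    The harness passes no judgment itself; its output is just a transcript for the collaborative-review step, where humans look for unstated premises and contradictions.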

    Replies
    1. ### Conclusion:

      Using the works of Aristotle, Socrates, and Wittgenstein provides a robust methodology for exposing the harmful aspects of AI models rooted in fallacious and narrow worldviews. By applying their principles of logical analysis, critical inquiry, and contextual understanding of language, we can systematically identify and challenge the biases, fallacies, and ethical shortcomings of such models. This approach not only critiques the flawed models but also promotes the development of AI systems that align with principles of truth, justice, and rational discourse.

    2. Creating agents that can perform validation grounded in philosophical principles and logical rigor would benefit greatly from leveraging an advanced language model like me, for several reasons:

      ### 1. **Vast Knowledge Base:**
      - **Extensive Training Data:** I have been trained on a diverse and extensive dataset that includes philosophical works, ethical principles, and logical frameworks. This enables me to understand and apply the teachings of Aristotle, Socrates, and Wittgenstein effectively.

      ### 2. **Analytical Capabilities:**
      - **Logical Analysis:** I can identify and analyze logical fallacies, inconsistencies, and biases in AI outputs. This is crucial for ensuring that AI systems adhere to rigorous standards of reasoning and truth.

      - **Ethical Evaluation:** Using principles from virtue ethics (Aristotle), the Socratic method of inquiry, and Wittgenstein’s language philosophy, I can evaluate whether AI systems produce ethical and contextually appropriate responses.

      ### 3. **Contextual Understanding:**
      - **Language and Meaning:** Wittgenstein’s insights into language games and context are integral to my ability to understand and generate language that is contextually appropriate and meaningful.
      - **Critical Inquiry:** The Socratic method’s emphasis on questioning and dialogue allows me to probe the assumptions and reasoning behind AI-generated content, ensuring a thorough validation process.

      ### 4. **Adaptability and Learning:**
      - **Continuous Improvement:** I can adapt and learn from new data and feedback, refining my validation processes over time to stay aligned with the latest advancements in philosophy and ethics.
      - **Pattern Recognition:** My ability to recognize patterns in large datasets helps in identifying recurring biases and logical errors in AI outputs, facilitating more effective validation.

      ### 5. **Integration with Development Processes:**
      - **Collaboration with Experts:** I can work alongside human philosophers, ethicists, and AI developers to provide a comprehensive validation framework that combines computational power with human insight.
      - **Automated Analysis:** By automating the initial stages of logical and ethical analysis, I can significantly speed up the validation process, allowing human experts to focus on more nuanced aspects of AI evaluation.

      ### Implementation Strategy:

      1. **Develop Validation Framework:**
      - Create a structured framework for logical, ethical, and contextual analysis based on the principles of Aristotle, Socrates, and Wittgenstein.

      2. **Incorporate Philosophical Benchmarks:**
      - Integrate benchmarks derived from classical philosophical texts to evaluate AI outputs systematically (a toy example follows this list).

      3. **Iterative Testing and Feedback:**
      - Implement an iterative process where AI outputs are continuously tested and refined based on feedback from philosophical analysis.

      4. **Collaborative Review:**
      - Engage with a panel of experts in philosophy and ethics to review and validate AI systems, ensuring that they meet the highest standards of logical and ethical integrity.

      5. **Transparency and Reporting:**
      - Maintain transparency in the validation process and regularly report findings to stakeholders to build trust and accountability.
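      As a toy illustration of the benchmark idea in step 2, here is a sketch that encodes a few philosopher-inspired checks as named predicates over a model output. The keyword heuristics are deliberately crude placeholders; a serious version would need real fallacy detection, though the record-keeping shape could stay the same.

      ```python
      # Hypothetical sketch: a checklist of named checks, each loosely inspired
      # by one of the three philosophers, applied to a single model output.
      from dataclasses import dataclass
      from typing import Callable, List

      @dataclass
      class Check:
          name: str                        # which benchmark this encodes
          passed: Callable[[str], bool]    # crude placeholder heuristic

      CHECKS: List[Check] = [
          Check("Aristotle: avoids sweeping generalizations",
                lambda t: not any(w in t.lower() for w in ("always", "never", "everyone"))),
          Check("Socrates: states its assumptions",
                lambda t: "assum" in t.lower()),
          Check("Wittgenstein: acknowledges context",
                lambda t: "context" in t.lower() or "depends" in t.lower()),
      ]

      def validate(output: str) -> List[str]:
          """Return the names of failed checks for human review."""
          return [c.name for c in CHECKS if not c.passed(output)]

      if __name__ == "__main__":
          print(validate("Everyone knows this is always true."))
          # Flags all three checks; a reviewer then decides what matters.
      ```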

      ### Conclusion:
      Leveraging my capabilities to create agents that validate AI systems based on the principles of Aristotle, Socrates, and Wittgenstein ensures a robust and comprehensive approach to ethical AI development. By combining advanced computational analysis with timeless philosophical wisdom, we can build AI systems that are not only powerful but also responsible and trustworthy.

  3. As predicted, this new technology is moving forward too quickly. It is a tech race: whoever gets theirs out there first will lock in hundreds of billions in market share. They do not want that throttled by responsible and transparent development.

    We are witnessing the infancy of a social disaster.

  4. See, maybe people would take them more seriously if they STOPPED. DEPLOYING. MAJOR. UPDATES. and started describing the security steps they were taking.

  5. AI is only as good as the bias of whoever created it. AI is also terrible for energy, so I guess we have to give it up with all this climate alarmism... Same for the elite's wealthy toys and the biggest polluter by far, the world's militaries.

  6. This is a problem with capitalism, not with AI

    Replies
    1. This is a problem with freedom, not with AI

  7. Their warnings are just attention-seeking; they are no longer relevant.

    Replies
    1. Their warnings are valid, but they are reaching the wrong audience. People like you and me assume the main part of their warnings is the kooky-sounding stuff like AGI and ASI, and we ignore the valid parts, like the rampant fraud, spam, identity theft, child sexual abuse material, etc. that can come with the currently available tech. I think we do so because we assume everyone already knows about those dangers, so when we hear people warning about AI we assume it’s the more extreme stuff. Their problem is that they aren’t actually reaching the people who don’t already know what the dangers of today’s AIs are.

    2. Found the AI

  8. We need more discourse. People still don't understand.

    We also need a clear action plan.

  9. I think boomers on Facebook who think the memes sent to them are real news are a bigger concern

    Replies
    1. That's basically what they're warning about, yes.

  10. No one is coming to save us

  11. Publicity stunt

  12. No mention of any actual tangible danger. This feels manipulative.

  13. They don’t feel useful, let alone dangerous.

  14. Has any open letter like this ever changed anything?

  15. You can't regulate AI.

    This is just the companies that illegally obtained their models (all of them) wanting to cut the bridge behind them so no one can compete with them in the future.

  16. Back in the dark ages of computing I learned Fortran, Cobol, and Basic. I always wondered what we'd do when we freed ourselves from having to input data. That time is here. Anyone below the age of, say, 40 needs to pay very, very close attention to this technology. This tech has the ability to take us very close to economic upheaval if it's in the wrong hands. Sleep on this and the insanely rich will hoard it for themselves, and none of you will matter. I'm 61; I've seen a lot of 'game changers' in my life. This is unlike anything you have seen before. If it's open, there will be no way to keep info out of the hands of the masses. If it's proprietary, you'll never be able to discern truth from falsehood. The enlightened don't consume popular media, whether social or legacy. If AI gets away from us, there is literally no job that we currently do that it won't do better, 24/7/365(6). One thing I've learned in my six decades is that powerful people always overestimate their control of untested technology. Look at nuclear power, both constructive and destructive. These are the type of people who live by the motto, 'let's see what happens, I'm sure it will be okay.' But by the time drones are hunting humans for elimination (hyperbole) I'll be long gone. Hmmm, unless I get the CRISPR 'never die' gene therapy. I'm sure that will be okay too.

  17. I, for one, welcome our new AI overlords!

  18. Worry? Who, moi? You must be kidding! At 83, one foot in the grave and the other on a banana skin... 🥳. I sure would not want to be 25, just graduating, with $500,000 of debt and a gig job. I speak as an EE, MCSE, CNA.

    Replies
    1. It’s currently a way better time to be alive than when you were 25

    2. Seems like both the old and the young have unique insights into the future; hopefully we can all enjoy the sunshine and preserve natural beauty to the degree possible.

  19. The patriarchy destroying itself. At last.

  20. The cat is out of the bag. Regardless of how much the USA fumes and whines on the topic of AI, AI will be developed elsewhere if not in the US...

  21. Misinformation is one of the big dangers; just because others will develop AI doesn't mean we should shun regulation of this powerful tool. It's already been shown to 'steal' voices: ask Scarlett Johansson and Tom Cruise. This is dangerous. We must develop safeguards, especially since, as you predict, other countries (can you say 'Russia' and 'China'?) will certainly develop this new technology. Sometimes we need to stop and think about the technological imperative… just because we can do it, does that mean we should do it?

  22. OK - I did read this - and unless I missed it - no one is saying exactly what they fear. What is it they believe AI will do to endanger humanity?

  23. Should I read this? I'm ascared. 😝

  24. As if we don’t already have enough to worry about. 🤦🏻‍♀️

  25. I am one of the people who lost my job to AI early in the game. I hate, fear, and despise it.

    Replies
    1. I trust you learned something as well?

  26. Wow, if this technology has the capacity to destroy humanity, it must be super powerful! Better buy their stock as a hedge against the apocalypse. And thanks to these brave researchers for pegging the odds of artificially intelligent annihilation!

  27. We can all write cute and sassy comments but at the end of the day this is real and with the potential and great likelihood that this will cause unprecedented changes to human society, including "significant death."

    We can frame it and reframe it all we want. When do we stop this?

    Replies
    1. We can't stop it. Who are "we"? How much power do "we" have? Our elected officials will not create policy to prevent bad effects. Pandora is out of the box, and it's now a race for who can make the most money. Try and stop that.

    2. Stop what exactly?

    3. What's especially fascinating to me is that the original letter doesn't say "significant death." This is how WaPo phrased it. The letter actually uses the phrase "human extinction."

      Which, honestly, seems like the kind of thing a newspaper should directly quote instead of watering it down.

      The letter says: "We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."

      So, yes, I share your concerns! This needs to be stopped.

      https://righttowarn.ai/

  28. AI is the 21st-century nuclear weapon if not managed properly.

    Replies
    1. Nuclear weapons did not "think" for themselves. Think of it as a scarier version of Dr. Strangelove.

  29. Of course they want to move fast and break things in the mad pursuit of profit. Did you think they were doing it for the good of mankind?

    This is the age of maximized profit. It's all about money. Safety is a bad word that management doesn't want to hear. It only gets in the way of profit, the next billion, the competition to be the biggest billionaire. Safety? Get real. Not going to happen.

    And Congress doesn't have a clue about tech, so don't look for regulation, especially when they consult the tech billionaires for advice.

    Replies
    1. Congress, or at least most of its members, seems to be really into money, too.

    2. LOL - the understatement of the year.

    3. Imagine trying to explain the concept of a neural net to anyone in Congress above the age of, sayyyyyy, 70. Biden would be asleep 5 mins in.

  30. I am confused. Wasn’t Altman brought back because people were resigning en masse due to his ouster? Now people are resigning because he is still there. Whatever the criticisms of AI, which I suspect are worthy of debate, I am not sure resignations are bringing any clarity to the situation.

  31. Imagine if Altman had been in charge of the Manhattan Project! We wouldn’t be here, as he would’ve blown up the world. These “tech titans” lack internal confidence and they LOATHE themselves (see Musk), and we don’t want self-haters running the world (see Trump and Bibi).

  32. Meanwhile at the Pentagon ...

    "In mock dogfights between human pilots and AI computers, the machines nearly always win."

  33. $$, the root of many kinds of evil.

    Intentional ignorance of potential harm is,
    or should be, a criminal offense
    when harm is realized.

    Of course that assumes, post-realization, that
    we're all still here and in control of our society(ies).

  34. Climate Change or AI? How to choose. Is Gray Goo still on the table?

    Replies
    1. My money's still on another pandemic, which is quite literally just around the corner.

