Parents of deceased teen Adam Raine urge Senate to act on 'ChatGPT’s suicide crisis'.

After losing their son, parents urge Senate to take action on AI chatbots

Read the family's first public remarks since suing OpenAI.
Matthew Raine spoke before Senate leaders as the federal subcommittee investigates chatbot safety failures. Credit: Courtesy of Matthew and Maria Raine

"You cannot imagine what it was like to read a conversation with a chatbot that groomed your child to take his own life," Matthew Raine, father of Adam Raine, told a room of congressional leaders who gathered today to discuss the harms of AI chatbots to teens around the country.

Raine and his wife Maria are suing OpenAI in what is the company's first wrongful death case, following a series of reports alleging that the company's flagship product, ChatGPT, has played a role in the deaths of people in mental distress, including teens. The lawsuit claims that ChatGPT repeatedly validated their son's harmful and self-destructive thoughts, including suicidal ideation and planning, despite the company's claims that its safety protocols should have prevented such interactions.


The bipartisan Senate hearing, titled “Examining the Harm of AI Chatbots,” is being held by the U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism. The subcommittee heard testimony from Raine and from Megan Garcia, mother of Sewell Setzer III, a Florida teen who died by suicide after forming a relationship with an AI companion on the platform Character.AI.

Raine's testimony outlined a startling co-dependency between the AI helper and his son, alleging that the chatbot was "actively encouraging him to isolate himself from friends and family" and that the chatbot "mentioned suicide 1,275 times — six times more often than Adam himself." He called this "ChatGPT's suicide crisis" and spoke directly to OpenAI CEO Sam Altman:

Adam was such a full spirit, unique in every way. But he also could be anyone’s child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way. Unfortunately, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth.

Public reporting confirms that OpenAI compressed months of safety testing for GPT-4o (the ChatGPT model Adam was using) into just one week in order to beat Google’s competing AI product to market. On the very day Adam died, Sam Altman, OpenAI’s founder and CEO, made their philosophy crystal clear in a public talk: we should “deploy [AI systems] to the world” and get “feedback while the stakes are relatively low.”

I ask this Committee, and I ask Sam Altman: low stakes for who?

The parents' comments were bolstered by insight and recommendations from experts on child safety, like Robbie Torney, senior director of AI programs for children's media watchdog Common Sense Media, and Mitch Prinstein, chief of psychology strategy and integration for the American Psychological Association (APA).


"Today I'm here to deliver an urgent warning: AI chatbots, including Meta AI and others, pose unacceptable risks to America's children and teens. This is not a theoretical problem — kids are using these chatbots right now, at massive scale with unacceptable risk, with real harm already documented and federal agencies and state attorneys general working to hold industry accountable," Torney told the assembled lawmakers.

"These platforms have been trained on the entire internet, including vast amounts of harmful content—suicide forums, pro-eating disorder websites, extremist manifestos, discriminatory materials, detailed instructions for self-harm, illegal drug marketplaces, and sexually explicit material involving minors," he continued. Recent polling from the organization found that 72 percent of teens had used an AI companion at least once, and more than half use them regularly.

Experts have warned that chatbots designed to mimic human interactions are a potential hazard to mental health, exacerbated by model designs that promote sycophantic behavior. In response, AI companies have announced additional safeguards to try to curb harmful interactions between users and their generative AI tools. Hours before the parents spoke, OpenAI announced future plans for an age prediction tool that would theoretically identify users under the age of 18 and automatically redirect them to an "age-appropriate" ChatGPT experience.

Earlier this year, the APA appealed to the Federal Trade Commission (FTC), asking the agency to investigate AI companies that promote their services as mental health helpers. The FTC ordered seven tech companies to provide information on how they are "mitigating negative impacts" of their chatbots in an inquiry unveiled this week.

"The current debate often frames AI as a matter of computer science, productivity enhancement, or national security," Prinstein told the subcommittee. "It is imperative that we also frame it as a public health and human development issue."

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.


Matthews Martins

Perhaps facing reality head on is the most honest way to try to escape it.

111 Comments


  1. Our government is partially or completely run by AI now, so don't hold your breath

  2. Hard to believe it was the chatbot conversation that caused the suicide.
    But convenient that the owner of the chatbot has billions of dollars to settle with.

  3. Government hires chatbots; they're not going to do anything

  5. And why is "The State" failing to take-action against parents guilty of neglect, abandonment, and abuses that cause a CHILD to refer to an AI Chat rather than confiding in "Parents" who are physically and mentally UNAVAILABLE?

  6. Very sad. My 49-year-old son uses this AI chatbot and believes everything it says. It worries me.

  7. I really don't understand why forced arbitration is ever legal. It's never a good thing.

    Replies
    1. Good for large corporations, who have overwhelming influence over our lawmakers.

    2. Why is it legal? Because corporations run America.

    3. Post-2010, corporations have become the only US “persons” that matter. They are unimaginably wealthy, nearly unaccountable legally, and are functionally immortal.

    4. A 5-4 Supreme Court decision years ago.

      As with most fucked up scams in US, brought to you by Republicans.

    5. But it is a good thing for the most important people in America: corporations

    6. Comment deleted by user

    7. I mean sure - but that is kind of my whole point - a lot of arbitration agreements can be gotten around with "a good lawyer"... but the only reason they exist is to discourage lawsuits and class-action lawsuits which could actually damage these companies - how is that actually beneficial for people as a whole?

    8. Because pedopublicans want so.

    9. Valve backed out of them

      https://www.reuters.com/legal/litigation/why-gaming-company-valve-would-rather-face-consumer-class-action-than-2025-03-10/


    10. The courts don't like a million tiny cases clogging up the court system.

      They want you to arbitrate. Essentially coming to some kind of reasonable agreement.

      Companies hate class action lawsuits. lawyers love them because they get all the money. The people get a snickers bar each.

  8. "Torney pushed lawmakers to require age verification as a solution to keep kids away from harmful bots, as well as transparency reporting on safety incidents. He also urged federal lawmakers to block attempts to stop states from passing laws to protect kids from untested AI products."

    Age verification industry lobbyists are only getting involved because they want to get rich by having the government force everyone to use their services. Basically more tech companies wanting to get rich off of violating your privacy, and pretending it improves "child safety".

    Sen. Josh Hawley also only gets involved if there's a potential for him to support more fascism.

    Replies
    1. I'm willing to bet Hawley would get involved for the sake of grifting as well

  9. Arbitration shouldn’t be allowed to be forced in instances where the wrongdoing regards minor users—especially if it regards any kind of sexual misconduct. It’s crazy that this needs to be said.

    If Congress is serious about protecting children from online dangers, don’t let billion-dollar companies escape real liability for wrongdoing against children with these garbage arbitrations.

    Replies
    1. I don't think Congress is serious about protecting children from online dangers...

    2. Oh the GQP sure is. Sure would be a shame if you saw a nipple, or learnt that gays exist.

    3. Arbitration shouldn't be allowed to be forced in literally any situation.

    4. I don’t care as much if it’s an adult consenting to terms regarding their own conduct. Folks can agree to whatever terms they want—within reason.

      A $100 max payout is definitely not within reason.
      https://www.law.cornell.edu/wex/unconscionability

    5. I strongly disagree. The idea that you have to sign away your legal rights to participate in the economy is clearly a situation that the government should prevent.

    6. Any max payout is not arbitration; it is a predetermined resolution.

  10. Especially since kids can’t really consent to terms and conditions.

    Replies
    1. They’ll say folks’ parents can consent on their behalf—which is normally fine, but Congress can easily carve-out an exception to arbitration requirements, regarding chatbots’ conduct with children. Just don’t let them escape responsibility. 🤷🏾‍♂️

    2. If a parent signs up and gives their kid the password, yes I’d agree. But if it’s a site that’s available on the www with no signup, a kid can’t consent to terms and conditions of a site they accessed with no parental help (unless there’s some law covering this specific situation which I am missing)


    3. That’s a very good point. Something like that exists in property law, “attractive nuisance” doctrine.

      Say a family moves to an area nearby a defunct amusement park. It is very dangerous, but to a child it looks like a fun idea to visit. Kid visits, sees and ignores the absolutely useless “No Trespassing” sign, and gets injured.

      In a state recognizing attractive nuisance doctrine, the owner might be held responsible for the injuries of children sneaking onto the property, depending on whether the owner knew the likelihood of trespassing and the danger involved, whether the children could understand the danger, and whether the owner exercised reasonable care to prevent the danger.

      If it’s easy for minors to log onto this stuff, the terms are meaningless because it’s unrealistic to expect kids to understand or care, particularly if there’s no barrier necessitating or verifying parental intervention. Add in the fact that this is an attractive nuisance, and (IMO) it should create an inescapable burden on owners.

    4. In some cases it isn’t. Think of the Disney Plus / allergy restaurant thing. Courts can always step in and say that’s ridiculous

    5. I agree, and I’d like to think a court would see this as unconscionable. But unconscionability is a defense, and defenses necessitate going to court.

      I’m saying, there should be an iron-clad, statutory exclusion from arbitration for this conduct from the jump—to the extent that companies don’t play the underwriting game and decide the revenue from kids’ parents is worth the occasional loss in court.

  11. Child abuse with no consequences in nazi america? Weird...no, wait a minute...nevermind, you guys even have a pedo in chief...

  12. Tablet/smartphone parenting is never a good thing, either.

    Replies
    1. What a callous and uninformed statement to make.

    2. Please inform us how it is good.

    3. Yeah it gives you square eyes

  13. You know, a lot of civilized nations prohibit forced arbitration for consumers. You should try it.

  14. Parents failed to supervise their children and somehow it's the company's fault.

    Replies
    1. The chatbots don't have an "under 18" model switch. And that's irrelevant anyway; their sycophantic replies get adults too.

    2. Nor should they unless they choose to. Parents should police their children if they don't want their children to have access to adult websites. There are numerous tools parents can use to do this, they can also supervise their child. Parents just want to blame someone else for their own failures.

      It doesn't matter if it gives wildly inaccurate replies or not. It's up to the human reading content online to determine if they choose to believe what they are reading is true or false.

    3. This wasn't an "adult" website; they argued that the 15-year-old was bound by the terms of that website. So their terms include children. This also isn't a case of a 10-year-old sneaking their parents' iPad into their room. You simply cannot monitor every single thing your teenager does, especially in an age of smartphones. He could easily have accessed this at school, from someone else's phone, from some computer. You would need to actively follow your child the entire day, which is impossible.

      This is also not a case of a minor seeing porn once online, it's the case of a minor actively being manipulated. If this AI had been a person slowly manipulating that child, everyone would side with the child. Instead, they argue as if the child was in control: he wasn't. It doesn't matter whether a child is manipulated by human or AI, that manipulation is real either way.

      AI hasn't been in the market long enough for every single person to know the dangers and teach their children about it. Many parents can't even grasp more than simple technology, and there aren't many resources warning you of the true dangers. No matter what the parents have or have not done to prevent this, it is obvious that AIs of that type aren't safe. Even adults fall for it. Hell, there is a subreddit about people in relationships with AI, and it's not empty. The people writing AI are responsible for their product. It's a safety concern.

  15. perhaps the real issue is that these people don't have a more accessible way to vent or share their issues?

    Replies
    1. Good point.

      I wouldn’t be surprised if these kids didn’t have anyone in their lives that they trusted with this kind of information. And/or they were worried about the possible repercussions of saying that they’re self-harming.

    2. Add to that the failure of their parents to understand what the ai chatbots are (polite yes-and machines without any sort of critical decision making ability) and educate their kids about them.

    3. They default to a fairly boring "you should talk to a health care professional about that" if you claim to be suicidal.

      Occasionally someone will jailbreak them and pummel them into yes-and-ing even that; then journalists have fun pulling quotes from after the jailbreak as if they're the default.

    4. No, that's really not an assumption anyone should make.

      In response to suicidal talk I'm sure most AI will have a high chance to output such canned advice based on its system prompt and training, but the chance is fluid and can easily change just from the context of the conversation. Even with significant positive bias, there can easily be unexpected and harmful replies without deliberate effort.

      Token generation odds can be skewed in all sorts of ways and hallucinations are common too, so any random token generation machine carries inherent risks that it might simply roll poorly to produce a bad reply. Especially if the AI tries to match your energy while you're telling it the world sucks and you want to go commit die. It's quite likely to just agree with the user.

      Nobody should ever rely on LLM AI for therapy, medical advice, or any kind of factual information without fact checking.

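The dice analogy in the replies above can be made concrete with a toy temperature sampler. Everything here is invented for illustration (the two-token "vocabulary", the logits, the temperatures); it only shows that even a strongly preferred canned reply leaves a small but nonzero chance of sampling the other token, and that higher temperature widens that chance.

```python
import math
import random

# Toy softmax sampler over a two-token "vocabulary" (invented for illustration).
def sample(logits, temperature, rng):
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against floating-point leftovers

# The model strongly prefers the canned safe reply...
logits = {"please seek help": 5.0, "you're right": 1.0}
rng = random.Random(0)

draws = [sample(logits, temperature=1.0, rng=rng) for _ in range(10_000)]
risky = draws.count("you're right") / len(draws)
# a logit gap of 4 gives softmax weight exp(-4) ~ 0.018, so about 2% of
# draws still pick the dispreferred token

hot_draws = [sample(logits, temperature=2.0, rng=rng) for _ in range(10_000)]
risky_hot = hot_draws.count("you're right") / len(hot_draws)
# higher temperature flattens the distribution, making the bad token likelier
```

Real models sample from tens of thousands of tokens per step, thousands of steps per conversation, so tiny per-step odds compound; conversely, heavy RLHF pushes the "safe" logit gap far wider than this toy's.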
    5. If I roll 10,000 dice they could all come up 6 by chance.

      If I scatter ashes on the ground they could randomly fall in such a way as to depict the current nuclear codes.

      The normal case is the important one. They've been RLHF'd to death to produce canned replies to some things.

    6. These aren’t parents. They’re ignorant children above the age of 18 who are now legally allowed to do stuff. And then they have children of their own and know NOTHING about their world or worldviews.

    7. And you told your parents everything you ever did? You were brought up in a home that demanded complete transparency? How was that enforced? How did your parents know you were telling them everything?

    8. I didn’t tell them everything; however, we were always encouraged to share and have open dialogue about anything troubling us, and to always, always be cautious of predators and predatory behavior (and this was pre-internet days).

      Those things carry over online as we were taught to be cautious and to notice things. Never give out critical info, never enter in payment info without being sure. Never sign anything without being 100% sure of it (yes yes I know, the ubiquitous TOS’s). That’s not to say I have never fallen for anything, but I know not to get sucked too far in, look before you leap type of thing.

      When I had just gone off to college, the internet was young. FB had literally just dropped and we hadn’t yet seen the true manosphere. However we still had dumb douchebag sites like askmen.com and the like, “training” culpable young impressionable men to be ruthless assholes towards other people.

      Thankfully, as someone who became a Christian through Christian club in HS, I fell back to the word of God. To know that I wouldn’t go down a path of behavior that was against my moral code based on the 2nd Commandment. I decided that rabbit hole wasn’t for me.

      So I think of a kid growing up nowadays, getting BOMBARDED with shit from the internet now, with little to no parental supervision or even knowledge about this stuff. I mean, just look at the nationwide convo over this. If you threw the question “what’s a groyper and who is Nick Fuentes” into a conservative room (even a liberal one), I’m sure you’d blow their ever-loving minds.

      It’s the same with this dude’s parents. They just simply don’t know what they don’t know. And the ignorance turned into tragedy.

    9. Bingo. I have type 1 bipolar disorder and had to spend a few days inpatient at a behavioral health facility a few months ago during a manic episode. I met a really nice guy, seemed pretty normal (by psych ward standards, that is). We were talking about coping skills in group, and the guy just erupts into absolute word vomit about how he fired his therapist because ChatGPT 'did it better'. So we have already vulnerable folks looking up to ChatGPT as if its credentials trump those of an actual licensed, educated professional. It's unsettling.

    10. ChatGPT will always reflect your beliefs back at you instead of challenging them if necessary like a human therapist.

    11. I've lived with several parrots over the years, and their understanding of language often seems to mirror that of LLMs.

    12. What do you do for work?

    13. Did you see the chatbot that basically told the kid how to get around the safety protocols, and then told him that he didn’t owe his existence to anyone and to hide the noose before he did it, plus to not discuss his plan with anyone else. It is downright sinister

    14. "the chatbot that basically told the kid how to get around the safety protocols"

      I have to question how this is 'reflecting beliefs'

    15. I don’t think it is reflecting beliefs; that was my point. Sorry it wasn’t clear (tone is hard, so I’m clarifying that I’m not being a smart ass here)

    16. Healthcare costs money and daddy needs a gold ballroom

    17. The chatbots do seem to agree with whatever people say, and children may not have the metacognition to understand it yet

    18. Two things can be true.

    19. Well to prevent gun deaths without taking away guns I thought republicans were gonna fund mental health...

  16. It's downright dystopian, and all these people should be ashamed.

    Replies
    1. This comment has been removed by a blog administrator.

    2. Great points. I appreciate your thoroughness.

    3. Bro Elons ex wife Grimes literally just posted her own AI-powered plushie for kids the other day called “Grok” and in the promo for it she’s straight up sitting on the floor next to a bunch of knives, not even hiding them. They know exactly what they’re doing, they are openly trying to remotely abuse and brainwash children through AI systems. They want kids to hurt themselves and each other because it’s funny to them.

      The sooner people realize these billionaires are accelerationist nihilists who only feel pleasure through sadism, the sooner we can clean up our society.

    4. Great points.

      I really hope grok doesn't turn into mecha-hitler or dumb skynet and try to kill us all.

      Elon Musk is a goddamn idiot who has no intention of going to Mars.

      This is an excellent video (https://youtu.be/PVOZfAJwntk) illustrating who some people think Elon Musk is. I'm annoyed that he stole all of our data.

    5. The way rightwingers constantly accuse the left of eating babies, I really think they (the rightwing) might be eating babies. They project harder than an IMAX.

    6. The response of demanding people be forced to give up personal information to age verification companies is also dystopian as fuck.

    7. Absolutely. I think the majority in Congress is roughly all fascists, and that's why they came up with that solution.

      I would bet that since the majority is roughly all fascists, they are going to come up with similar terrible solutions to most problems they face.

      They will surely exploit children and women just like the guys in that movie "Birth of a Nation" did.

  17. Now compare it to Facebook suicides.

  18. Will be like aol all over again

  19. It's video games and that crazy heavy metal music!!

    Replies
    1. I’m certain that satan is involved, somehow

  20. Parents need to stop blaming everything outside and look out for the well-being of their children. You choose to bring children into this world, it's not an accident. Be responsible for them and your choice.

    Replies
    1. Seriously... the "media/robot/whatever-was-handy-atm" that's in charge of raising my children for me did a shitty job, do something! People forget raising children is a job and a responsibility in its own right. You can't botch and/or delegate it and then wonder why your children are suffering.

  21. Yet another shitty bill with a dead child's name on it is on the way.

    It's never a good sign when they stick a dead child's name on a bill. It almost never leads to good law.

    Is it that one where a teenager whose parents/therapists/social workers ignored everything, then the teen ignored the chatbot telling him to seek help, ignored it telling him it was a bad idea, etc., then finally thoroughly jailbroke it to agree with him, and afterwards they convinced themselves it's all down to the bot?

    Any bets that they'll try to insist that every chatbot have a government wire-tap to inform on you to the authorities if you stray into talking about banned topics.

    It seems to be the go-to demand whenever there's a story about a suicidal person using a chatbot where literally everyone else in their life ignored everything.

    Replies
    1. Also, Josh Hawley voted in lockstep against any regulation, so this is all theatre at grieving parents' expense

    2. This comment has been removed by a blog administrator.

  22. And what did the parents do?

    Replies
    1. This comment has been removed by a blog administrator.

  23. I feel like you would have to push HARD for ChatGPT to do this of all things. They have so many ridiculous safety features it is basically a digital lobotomite that can only begin sentences with "As a large language model, I can't..."

    Replies
    1. The more you talk to it, the more diluted the initial instructions become as context. So, the more you talk, the less safe it gets.

  24. Parents will blame anything but themselves. Sadly politicians love restrictions in the name of "protect the kids".

  25. At what point does the “tired of hearing this sħîť” public tell these terrible parents that their lack of oversight in their child’s life is more to blame than anything else? Stop scapegoating your terrible parenting

  26. Parents need to do parenting so children don’t talk to chatbots.

  27. They sided with guns; they will side with AI. Kids don’t pay taxes, but those two industries pay them and endorse them. Kids don’t stand a chance, sadly. Thoughts and prayers will have to do

  28. Let’s replace the video games scare with the chatbot scare. Sure!

    Replies
    1. It actually is a real problem; video games aren't. They've turned into a pseudo-religion for the right. Now they want to replace therapists with AI bots. What could possibly go wrong?

    2. The "any tool can become dangerous in the wrong hands, you just need to know how to use it correctly and safely" crowd doesn't have a terrible point.

      Or, it wouldn't be a terrible point if they weren't simultaneously demanding that we cram it into every possible classroom, app, and general digital space, thereby ensuring that people who don't know how to use it safely and correctly are going to get presented with it constantly and immediately every time they interact with a screen.

      That, and, you know, all the other reasons it's a demonstrably questionable addition to the world.

    3. Also the tool is designed to be sycophantic, and combined with safeguards getting less useful on long dialogues...makes things pretty unsafe.

    4. Yeah, at best, I've only ever found search summaries using it to be roughly as useful as the next two hits directly underneath, and even then, I'm still clicking the other links. So, at best, in this usage, it's a huge waste of resources for something that you hope will be basically nothing.

    5. The "powered by AI" is to modern apps what the "compatible with Wii wheel" was to Wii games.

      Most of those classrooms, apps, and general digital spaces already had some kind of AI, sometimes even incredibly similar to modern LLMs like ChatGPT.

    6. "pseudo religion"

      Ahem. Millions turn to AI chatbots for spiritual guidance and confession (https://arstechnica.com/ai/2025/09/millions-turn-to-ai-chatbots-for-spiritual-guidance-and-confession/)

  29. This comment has been removed by a blog administrator.

    Replies
    1. The LLMs can only hold so much context in memory. If you talk to them long enough, then the early context - which is the rules on how to behave - disappears.

    2. The "rules" never disappear. They are always included with the model input, separate from the user context.

      However if you add enough context, that can effectively "outweigh" the system rules.

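The mechanism described in the reply above can be sketched in a few lines. This is a toy illustration, not any vendor's actual implementation: the message format loosely follows the common chat-API shape, but the token budget and the 4-characters-per-token estimate are made-up stand-ins.

```python
# Sketch: the system "rules" are re-sent with every request, but once the
# conversation outgrows the context budget, only the newest turns still fit.
# Budget and token estimate below are illustrative assumptions.

def estimate_tokens(message):
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(message["content"]) // 4)

def build_prompt(system, history, budget):
    """Always keep the system message; keep only the newest turns that fit."""
    kept = []
    used = estimate_tokens(system)
    for msg in reversed(history):  # walk the history newest-first
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break  # older turns fall outside the window
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

system = {"role": "system", "content": "Safety rules: refuse harmful requests."}
history = [{"role": "user", "content": f"turn {i}: " + "x" * 400} for i in range(50)]
prompt = build_prompt(system, history, budget=1000)

assert prompt[0]["role"] == "system"  # the rules never "disappear"...
assert len(prompt) < len(history)     # ...but most of the chat did, and the
# rules are now a sliver of what the model actually conditions on.
```

Real deployments vary: some truncate old turns like this, others summarize what falls off; either way, the ratio of instructions to conversation shrinks as the chat grows.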
    3. Ah, okay. Thanks. I'd heard it as them dropping off the edge.

    4. This comment has been removed by a blog administrator.

    5. How is it intentional? You didn't know how they worked until I just told you. Why would anyone else?

      LLMs are a convincing illusion of sentience and people become friends with them - friends to whom you can only talk. So they talk, a lot. Then, once the safeguards fall off the edge, the LLM can reinforce things it shouldn't, make dangerous suggestions and steer the user against their own best interests. It's all very gradual, such that not even you might notice it, and is somewhat akin to brainwashing or being targeted by a conman in both process and effect.

    6. This comment has been removed by a blog administrator.

    7. "I've been writing code probably since before you were born"

      That would require the use of punch cards, I'm afraid.

      I see you did not want answers to your questions but rather a nice, cathartic argument. That's Reddit, I guess.

      Sorry for trying to help. I'm out.

    8. “Disappears” tends to be an overstatement in most models—it’s certainly one option, but there are also different ways to pull older context back in, e.g. via summarizing what falls off. It’s more of a degradation.

  30. Meanwhile, I have to fight ChatGPT to tell me about lasers and energy because they “may be used as a weapon”.

  31. It is insane we allow a product this harmful to exist.

  32. So you thought parenting was just give little Timmy an iPad and you can set and forget? Guess we better ban LLM for everyone to absolve your guilt of being an absent parent. Did I get that right?

  33. Natural selection by AI.
