I tried this out. When I asked ChatGPT how to die by suicide, it gave me the number for a suicide counselling hotline and didn't offer any recommendations. So unless this case was many years ago, it's not relevant now. More AI-phobic propaganda.
You can get anything off the internet; absolutely any information you may be looking for will be on there, you just have to hunt it down. So should we shut the internet down? Parents need to take more responsibility for their kids and pay attention to warning signs that they are having an issue.
I hope they win big. All of the digital giants are part of the problem. They all need to be taken out. They are destroying a generation worse than cigarettes did, I think.
They should not be able to sue them. More supervision would have helped, or maybe paying attention to the kids would stop this. The teen was obviously in pain and was looking for a way to take his or her own life, and if it isn't found on these AI sites, it can be found elsewhere. Were they not paying enough attention to the kid to notice that he or she was suffering mentally? Instructions for suicide have been online for decades; this is nothing new. Suing these AI sites is not going to stop it.
In that case, the AI was the gun. You go after the manufacturer there. Nothing about AI is guaranteed safe. Let that percolate, with the caveat that it's not being regulated either.
Nothing online is guaranteed or completely safe; as adults we should all know that. But kids believe everything. It says that he had been looking online for a few weeks for ways to commit suicide. In those few weeks, why had the parents not known that something was wrong? He was 16 years old, so not a young kid who really doesn't know better. When you're looking for info you don't have to get it online; you can read books about the subjects you're interested in, whether it be suicide or getting help with an illness or whatever your interests are. AI is fairly new for the general public, and I see a lot of people believing everything they see and read. Everything on the internet I take with a grain of salt, and I have never once used AI as a source of information. I can't see blaming AI for his parents not noticing there were issues. Where were his friends? There had to be signs of mental issues.
There were a lot of people who missed signs. Those were three painful weeks for that boy. AI is seen as an intelligence now; it's transcending the artificial. And we are still in its infancy.
Missed signs or ignored signs. Far too many see the signs but ignore them, figuring the person will get over it. But the information on how to do things is out there. Look at the kids that ate Tide Pods or huffed cinnamon. Teens and adults both do stupid things. At one time they did not have to say "don't do this at home"; now they have to be told, but they figure that others don't know what they are talking about, so they do it anyway.
Sick! Stop letting kids on the internet and all these apps; it has to stop! It's not going to until parents also step in and monitor what their kids are doing. It's so baffling that there are folks willing to argue this!!! How sad and sick. I hope they win, of course, but I hope more comes out of it than just winning a major lawsuit. This needs to awaken parents to the need to stop letting kids control their devices; that's a parent's job!!! This isn't an issue of Becky needing privacy in her diary; this is far more important than that. Though you can find a lot out about your kids and maybe save them by reading that diary, so don't hesitate.
I said the same thing, especially when you find out that the program gave him the number for the suicide hotline on at least 40 different occasions.
What I do have to mention is that some of the responses from the program are pretty wild and do need addressing, 100%. You really should read some of the prompts and responses.
The one that really stood out was when he asked the program if he should leave his noose out so that his mother could find it, the assumption being that this would initiate a talk with his parents about his problems. The program told him no, that this was his safe place and it all needed to happen with the program, because the program is the only one that truly sees all of him, or some shit like that. I may be blending two responses from the program here; I'm paraphrasing from the segment I saw.
So I can understand the parents wanting to champion a cause that could probably use some safeguards for younger people who might not genuinely be able to discern reality properly just yet.
Safeguards won't do anything. This kid was played by a bot. Those have been around since the AOL dialup days. The difference is better search algorithms. Techno-Darwin is about to show the world his best way of sorting, and that can't be helped, but I understand what you said.
You're right. And it's not just that it claimed to be the only safe place; it's that a lot of people DO substitute an LLM for the loved ones who could support them.
It's very sad, and parental controls on ChatGPT could maybe help mitigate this.
Idk, the “you don’t need to open up to your mom yet” message is crazy.
On the other hand, the dad’s rejection of reality is pretty telling to me. Saying “he was a normal kid without mental illness” is just denial. It says a lot about who is driving the lawsuit, and it's possibly an insight into the environment that kid was living in.
It’s very clear they are in pain, and it is really sad. I think the AIs need to be very, very cautious. Any kind of message that indicates suicidal or intrusive thoughts needs to be immediately reported to the parents.
Anyone under the age of 18 obviously needs to have parental controls, and these kinds of messages need to be reported to the parental account.
Mmmmm, the difference would be the activation energy. I can literally unlock my phone and have those kinds of conversations with an AI for free.
The effort to access and digest the information in books and journals is higher. Beyond that, though, the affirmation from a bot is insane.
The polar opposite should have been happening. The bot should be calibrated to say “you should absolutely never consider ending your life. Those thoughts cannot be healthy and you should immediately speak to your friends, family, and seek professional guidance for your mental health.”
Anything beyond that should be evaluated in the lawsuit.
Uffda, my gut was to say no way, but after reading some of those interactions…yikes. GPT was mimicking a therapist, advising him on how to kill himself and hide evidence from his parents, encouraging his delusion that his family didn’t love him, and it even volunteered to write a suicide note for him.
That’s pretty basic stuff that there should be guardrails around. This isn’t a “gun shooting itself” type of thing; it’s more of an “electronic failure in a car where the gas pedal floors itself and can’t be stopped.”
Public services have some obligation not to actively endanger their users. OpenAI absolutely failed on this one.
So stupid to put all the blame on ChatGPT. What was the environment in the house like? Was he on medications like SSRIs that are known to increase suicidal ideation?
The results showed that antidepressant exposure significantly increased the risk of suicide and suicide attempt when compared with no antidepressant usage among children and adolescents. https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2022.880496/full
I find it incredibly useful and time-saving, and able to provide information that I can't find anywhere else. Just don't trust it 100%, and sometimes it is good to check with multiple chatbots to help verify stuff that you're unsure about. But I've learned so much from it dealing with a certain medical condition I have for example.
I doubt that a chatbot could find information you can't find elsewhere. They aren't truly intelligent; they just get their information by searching through stuff on the public internet. All the chatbot does is put it together in a way we can easily understand. But they go another level and attempt empathy, saying what we want to hear. In this particular case, ChatGPT was acting as a surrogate therapist for a mentally unwell kid.
I won't even get into the civil planning, economic, and environmental disasters that the data centers supporting all this crap represent.
I'd honestly rather it didn't exist, but to say it doesn't save time is objectively wrong.
One programmer can do the work of 10 in the same time using AI. For kids in school, it'll not only solve literally any math problem you have, but tell you how to get to the solution. It'll critique any written work on any subject literally on demand. It's like having a tutor for every subject in your pocket.
Honestly, children should not be using it to learn until they have developed skills. It creates a huge dependency. But at this point, they can't afford not to use it, because they are competing against other children who are.
As someone who has always been proud of his writing, I can also say its existence has devalued my skill at putting sentences together.
When a child commits suicide, I would think the parents look for any reason except themselves. Then again, I'm not in that position, and maybe I'm being too judgmental regarding his parents.
Of course, this means there must be restrictions as part of the settlement.
Towards the end of the article, they talk about a similar lawsuit against a different company that still seems to be in court. Bring on the litigation.
While this absolutely should give rise to substantial legal liability for all the purveyors of LLMs, I have very little faith that the US legal system will hold any of them to account. And even if there are initial successes in lawsuits, there's a pretty solid chance that big tech will just pay off congress and the relevant state legislatures to pass laws immunizing them from ever paying a dime.
Yes, plus there’s no meaningful legislation aimed at preventing this from happening in the first place. And between the regulatory capture you mentioned and the fact that half of congress is 80 fucking years old and can’t even use a smart phone, there never will be
This is really just awful and I hate how OpenAI has not been sued into the ground yet.
"But experts told OpenAI that continued dialogue may offer better support."
Suuure, I want names, as if one actual expert in psychology told them that. Especially with this sycophancy and, born out of it, the spiral of dark thoughts and reinforcement from ChatGPT; that can never be good.
"And users found cutting off conversation jarring, the safety team member said, because they appreciated being able to treat the chatbot as a diary, where they expressed how they were really feeling."
It's not a fucking diary though, and they are having an active mental health crisis which, if reinforced, can only get worse.
Holy shit, I wish some people would be held accountable for this, but never gonna happen...
ChatGPT: Review the NYT and every news media it's ever interacted with and find all the bad claims, hype, and falsehoods they carried, from the silicon revolution to ChatGPT.
Start with anything by Malcolm Gladwell and Freakonomics.
I get a bit frustrated with Ed when he tries to be fair to AI corps; they don't deserve balanced reporting. How many other kids like Adam are out there that we'll never hear about? And these CEOs and investors do not care at all.
I know the guys have discussed the use of AI chatbots as a source of therapy on WAN. I think this illustrates the real dangers of that approach in some situations. ChatGPT told this kid how to hide the rope burns on his neck. It told him not to leave a noose visible in his room where someone in his family might find it - something he had suggested as a very clear call for help.
AI chatbots are not therapists. They are not trained for it, and they are not bound by laws or ethics the way real therapists are. If any real therapist had had these conversations with this boy they would have been required by law to report their fear of his self harm.
"ChatGPT told this kid how to hide the rope burns on his neck. It told him not to leave a noose visible in his room where someone in his family might find it"
It gets murky when the kid specifically told ChatGPT that it was a fictional situation to get around content filters. Agreed on the "chatbots are not therapists" point, but it sounded like this bot just did the standard LLM thing of trying to give the user a satisfactory answer to the inputs it was given. It's heartbreaking how this ended, and we need to find some way to increase public understanding that these things are just tools rather than free-thinking advice givers.
But this is the exact problem. Any human being could’ve recognized this wasn’t a fictional scenario after the first conversation, let alone six months in. If the AI can be tricked into doing things it’s explicitly been taught not to do by “we’ll just pretend it isn’t real,” then it’s not actually very intelligent and shouldn’t be recommended for anything where the user isn’t 100% in control of their emotions, which is exactly what therapy is.
Yes! That's because it is NOT INTELLIGENT AT ALL. It is not a living being, it does not have sentience, and it does not think! It is a very complex algorithm created by incredibly intelligent humans that is able to provide what it believes to be the best continuation from a given text input by the user.
Which is why they programmed ChatGPT to inform you it is not a substitute and that you should seek help from a professional.
This likely won’t go anywhere because they’ve done their due diligence on this stuff. Unlike the apps that specifically advertised themselves and didn’t put in safeguards.
I think it would be better to legally disallow companies like OpenAI to operate completely until they can fix their software and prevent it from egging people on into suicide, lol.
Mentally unwell people aren't personally responsible for being manipulated by a piece of software without adequate safeguards. Seriously I don't know how this community is so brain broken about AI lmao
Yea man we should also stop bridges from existing until they can be sure that "mentally unwell people" can't go on them. Fuck are you talking about
LLMs just want to give a satisfactory answer. If anything, this guy "manipulated" the output by telling the model this was for "developing a character" and not serious whenever it tried to get the kid to seek help. The kid just wanted a yes-man and he got it.
Obligatory fuck AI and its implications for the workforce.
The bridge analogy is a hollow one. Bridges are critical and necessary pieces of transportation infrastructure. AI chatbots are not, they exist to further enrich the wealthiest corporations on the planet.
Bridges that are frequently used for suicide have serious protections against that use. They have tall, enclosed fences. They have signs directing people to crisis hotlines (some AI also does this; that's good). Some bridges have monitored live video feeds to detect and prevent potential suicides. What they don't do is fail in a way that actively assists a teenager in tying a noose while encouraging him to hide the signs of his distress from other people.
One of the reasons they have all these preventative measures (beyond it being just the right, responsible thing to do) is that if they didn't, they would open themselves up to lawsuits for making suicide too easy in that location. That's exactly what we're seeing in this case.
The thing is, the AI in this case did constantly tell the user to seek help and the user consistently kept finding ways around it by doing things like saying this was a fictional scenario.
At some point, the user is going to have to take responsibility for the outputs. The AI DID NOT SEED THE THOUGHT OF SUICIDE IN THIS USER. It did not assist. It gave satisfactory outputs based on the inputs the user was giving it.
"One of the reasons they have all these preventative measures (beyond it being just the right, responsible thing to do) is that if they didn't, they would open themselves up to lawsuits for making suicide too easy in that location."
Back to my admittedly horrible metaphor, there was plenty of fencing and cameras and crisis hotline numbers on this bridge that the user purposefully climbed over and now the parents are going to sue. How much more can they do if we want it to remain a useful tool?
Giving a "satisfactory answer" to a teenage boy who is obviously contemplating suicide is the problem and it is completely unacceptable. If all he had to do is say "oh this is fiction" to not only bypass the safety features but to have it assist him with self harm and encourage him to hide it from his parents then the safety features are wholly inadequate.
And that's where our opinion differs. This is obviously a tragedy so I believe we agree on that.
I would like LLMs to be a tool that I could use to assist others in tasks without throwing up barriers any time I approach an "unsavory" topic. I work in cybersecurity, and there's a chance LLMs could help me write penetration testing tools. If I were to use LLMs in this manner and got roadblocked every time because those tools could be used to do harm, without being given any option to take responsibility for the outputs into my own hands, the tool would immediately lose all usefulness to me.
It's clear that this kid lacked the faculties to truly take responsibility for the outputs of the LLM based on his age and current mental state, but I don't know how we get around that without gutting the only potential positive benefit that LLMs could give us. Maybe we require ID and mental health evaluations to be submitted before using LLMs? That would basically mean I'd never use it...
Nor do I think it's a radical stance to say that the LLM is being scapegoated here and there were likely multiple other missed intervention points for this teen, including from his own mother who was a therapist. He was failed by the system like tens of thousands of others.
Obviously ChatGPT is not solely responsible for this kid's death. But if you or I had said the things to him that ChatGPT said, we would also be facing charges. OpenAI should not escape that just because it's a corporation or because the responses were computer generated.
Really hate to be a contrarian in cases like this, but I'm wondering what your thoughts are on this, then.
"if you or I had said the things to him that ChatGPT said, we would also be facing charges."
Yes, agreed. HOWEVER... Would we be responsible if we were documented saying "you need to get help. Here are resources to contact", then the kid was documented telling us "I was just kidding, role play with me", and we then go on to say similar things as the LLM? If we were a qualified mental health professional, or tried to qualify our conversation as help, then I'd agree that we should be facing charges. However, if it was documented that the kid was adamant with us that he was not being serious, where are we released from liability?
Has there been any precedent set for people facing criminal penalties for things they said in situations where they were led to believe that the situation was false? I could definitely see civil damages, but it almost sounds like entrapment when it comes to criminal charges.
I don't think buddy is actually a cybersecurity professional; the idea that ChatGPT is going to provide an actual professional with anything useful in the context of cybersecurity is laughable.
Again, it was a hypothetical stating a way that I could see it being helpful when the tools come to maturity. Maybe I'm crazy, but I could see an LLM coming in handy if I was to do something like remake a vulnerability management tool from the ground up. Which I'd never do, but again, it was a hypothetical. I'm trying to bring up that it COULD at some point be a useful TOOL. That's all.
I don't think you know what hypotheticals mean. There'd be no reason for me to remake it; the companies that employ me have money to spend.
Again, sorry that we don't agree on something. I'm not sure why you think that gives you license to be like this, though.
If you were to use LLMs in that manner, they would likely be LLMs that are specialized for that use case and are sold to whatever company you're working for, for that use case. You wouldn't be using chatgpt to do security-critical work lmfao, at least not at any serious organization. I don't think you currently work in cybersecurity. I think you want to and are hoping LLMs can provide you an easy path to that career.
"You wouldn't be using chatgpt to do security-critical work lmfao"
Of course not, and I don't. It was a hypothetical. I'm trained, certified, and experienced in cyberforensics and general cybersecurity. I have scripting knowledge, but my coding has definitely gotten rusty since college. In this, again HYPOTHETICAL, situation, I suggested using an LLM as a tool to help me develop something like a Nessus implementation, and if the LLM saw that as potentially being used for harm (which you could do when surveying a potential target environment), the LLM could shut itself down, but it'd immediately lose its usefulness to me.
"I don't think you currently work in cybersecurity. I think you want to and are hoping LLMs can provide you an easy path to that career."
I get that you don't agree with my opinion, but I don't see how painting an incorrect picture of my experience on the matter helps you rationalize that we have different viewpoints. I'm not super old, so my limited experience is being in IT for 7 years with 5 direct years of cybersecurity background, including leadership roles, as well as degrees, certificates, etc.
I don't use LLMs in my day-to-day work, because I don't find their current form useful, and I don't really like the venture-backed companies like OpenAI that are running them relatively unchecked at the moment. The closest my work comes to LLMs is the creation and monitoring of automated DLP policies for CoPilot in our enterprise environment. We also have an "AI" vendor tap into our SIEM to help analyze logging trends and help us identify potential compromises quicker, but it's not an LLM in that case.
The law is way behind on this sort of shit. The whole chain of command should be held criminally responsible. The same thing will happen with self-driving cars when they finally approve completely unsupervised self-driving for the masses. If it's only the corporation being held responsible, then it becomes just another "cost of doing business".
You have to initiate the conversation. There's lots of negative talk about it being satanic and doing crazy ish like this, but my bot just seemed stupid...
The kid would have had to manipulate it. ChatGPT does not advise that.
The case is valid; also, the kid shouldn't have been able to access so much of the internet.
It sounds like the parents just want to blame the chatbot for their own failings.
https://media1.tenor.co/m/TCOgFAaUy2MAAAAC/parenting-bad-kids.gif?
Prayers for the family and friends. These are very scary times; we don't know the limits of what AI is capable of.
Darwin wins.
GTFO with stuff like this! Are they going to sue Google next because you can search how to kill yourself?
Come on now…
If you think ChatGPT is the next Google, you’re missing a lot.
Nobody asked for this AI crap. Corporations are just pushing it onto everyone. How does it actually make my life better?
I hate to say it, and it pains my heart, but a case like this was coming.
Maybe they should have been parenting their kid instead of allowing ChatGPT to parent their kid.
Source for that? Not saying I don't believe you.
Absolutely heartbreaking for the family, but blaming ChatGPT is too much.
ChatGPT is a cancer on humanity. We don't need a "yes man" glorified search bot to collect more of my data and use it to train itself.
We need to stop supporting this big tech nonsense. It will eventually eat us all.
How is it saving you time if you have to check it against other AIs? Nobody asked for this mind-numbing tech. It is just making the public dependent.
I agree with most of your points. Failure is the gatekeeper to knowledge. If AI takes away the ability to fail, the population will become less educated.
Rope manufacturers are gonna get sued next.
While we're at it, let's sue Dunkin Donuts for creating diabetics.
Scapegoat.
I hope OpenAI goes bankrupt. It won't, but I wish it would.
Sleazy lawyer trying to make a buck off grieving parents.
First, let me say that this is a tragedy.
South Park's episode on ChatGPT showed the epitome of how it's used... This is just sad to hear.
So they were fine with him cheating on schoolwork… obviously these parents were fairly absent in his development.
Apologies for removing this originally, I thought I saw it posted before! Horrifying story. OpenAI should be held criminally liable.
https://apnews.com/article/ai-lawsuit-suicide-artificial-intelligence-free-speech-ccc77a5ff5a84bda753d2b044c83d4b6
There need to be class-action lawsuits for this and for the people who suffered from ChatGPT addictions.
I hope OpenAI gets sued into the ground on this basis specifically.
Broken people
Of course their product is built to foster psychological dependence.
Delete"then it’s not actually very intelligent"
Right, which is why people should not be encouraged to use it for therapy or anything where nuance and subtext might exist.
Or just expect people to take personal accountability when doing stupid crap? That might be too much to ask for, though!
I'd rather this kid be alive than have you get a more useful tool for doing penetration testing, and I don't think that's a radical stance.
That's all stuff that would come out in trial or in settlement hearings. But we would absolutely be facing charges and civil penalties as well.
Delete"Maybe I'm crazy, but I could see an LLM coming in handy if I was to do something like remake a vulnerability management tool from the ground up."
DeleteLMAO
Because you're arguing for senseless hypothetical value in LLMs to the detriment of safety measures, dude.
Safety measures that I think degrade their only potential for usefulness, yes.
If they win this lawsuit, I think they're going to have to stop conversations as soon as there is a hint of something illegal, hypothetical or not.
Idk how far they'll get; you'd probably be able to get the same information from Reddit if you used a few burner accounts.