ChatGPT faces privacy complaint over false murder allegations

A Norwegian man claims the chatbot wrongly accused him of killing his children, sparking concerns over AI's accuracy and privacy protections
Credit: Abdullah Guclu/Anadolu via Getty Images

ChatGPT, like many chatbots, is known for sometimes getting things wrong or even fabricating information. However, a new privacy complaint alleges that OpenAI’s chatbot went a step further by falsely accusing a user of murder, causing serious consequences.

The privacy rights group Noyb is supporting a Norwegian man who claims that ChatGPT repeatedly returned false information stating that he had killed two of his children and attempted to murder a third. The complaint was brought under the European Union's General Data Protection Regulation (GDPR).

"The GDPR is clear: Personal data has to be accurate," said Joakim Söderberg, a data protection lawyer at Noyb, in a statement to TechCrunch. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and, in the end, add a small disclaimer saying that everything you said may just not be true."

The complaint stems from a simple question: "Who is Arve Hjalmar Holmen?" The response, generated by ChatGPT, included a fabricated account of a murder case involving two children. TechCrunch reported that Noyb has filed the complaint with the Norwegian data protection authority, hoping it will spark an investigation into the matter.

Chatbots like ChatGPT and other AI tools have been criticized for their inability to reliably deliver accurate information, with a disturbing tendency to invent false claims.

For instance, a recent study from the Columbia Journalism Review found that AI search tools got information wrong 60 percent of the time when asked to identify an article's headline, original publisher, publication date, and URL via an excerpt of the story. That's a concerning level of mistakes for such a simple task.

In light of these issues, it’s important to remember: don’t believe everything you read on the internet, especially when AI is involved.

Comments

  2. "But LLMs are still good somehow! I promise!" - idiots

    ReplyDelete
  3. I tried looking for my name on ChatGPT, Copilot, and Gemini at the state level.
    All of them had nothing on me personally unless I added my town name.
    However, the first response from Gemini was my name, saying they were accused of murder.

    ReplyDelete
  4. Why do people keep anthropomorphizing ChatGPT? I think it's pretty clear that it's just a GENERATIVE tool. It's literally autocomplete on steroids. Everything it says is just random words poured out one after another. Stop acting like it's a person giving you the news.

    ReplyDelete
    Replies
    1. Broadly speaking I agree with you, but the onus needs to be firmly on the companies designing and marketing generative AI systems. They're the ones intentionally building the hype and designing tools that explicitly disguise the fact that they're basically fancy autocomplete. The fundamental innovation of ChatGPT wasn't the model; it was anthropomorphization.

      As long as that remains how these tools are positioned and viewed, it seems reasonable to hold them to the same standards to which we hold non-generative systems. These are socio-technical systems, and we can't ignore their role in society on purely technical grounds.

      Delete
  5. This is pretty concerning, especially considering how a lot of search seems to be moving to AI. Maybe there need to be more guardrails to ensure models don't write about real people.

    ReplyDelete
  6. Oh yeah, now it's accusing someone else of the same crime.
    https://chatgpt.com/share/67dd7181-70d4-8010-bb57-574ba25a29e5

    ReplyDelete
  7. Shouldn’t the source that ChatGPT pulled the info from be the one at fault?

    ReplyDelete
    Replies
    1. What if it is a "hallucination"? Also, ChatGPT is displaying the information, so it must be accountable for it.

      Delete
    2. There is no such source. And even if there was, no. Section 230-like protections don’t (or at least shouldn’t) cover transformations like this.

      Delete
    3. The buck stops somewhere and that somewhere, legally, is OpenAI. OpenAI's free to sue their sources if they want, but that's a different lawsuit with a different plaintiff and they're certainly not going to win.

      Delete
  8. My professional response when someone reports an issue starting with...

    "I was trying to do 'x' so I asked ChatGPT for help..."

    ...has become "OK, what did you actually do?" followed by "This is going to take days to fix."

    AI is the same as a person who believes themselves to be omniscient: if the person doesn't know, they make something up that sounds good, because admitting "I don't know" is a devastating blow to their psyche.

    ReplyDelete
    Replies
    1. In general though you are coping. More and more it simply is helping people overcome the rote things you were hired to do in the past.

      Your exaggerated sighs and judgement are exactly what I’ve seen from soon to be extinct fields in the past.

      Delete
    2. I'd rather answer a simple question than fix a stupid mistake.

      "More and more it simply is helping people overcome the rote things you were hired to do in the past."

      If ChatGPT would stop generating an answer that sounds good, I would be a lot happier with it. I would much rather get a response of "I don't know" than complete BS, from a human or an AI.

      Delete
    3. I mean that’s kind of the whole principle of LLMs though. They only generate what seems best. ChatGPT won’t “know” when it doesn’t “know” the answer.

      Delete
    4. insert 'they don't know we know they know we know' friends gif

      Delete
    5. When it needs to 'generate a source' for verification, it knows it doesn't know.

      Delete
    6. ChatGPT and tools like it lack the ability to "know". They can't know. Literally.

      Delete
    7. Interestingly, it isn't that hard to program a tool that does "know" when it's lacking information.

      The core components would be a vector database and three language models.

      The first language model would be fine-tuned to convert the user input into a query of the vector database to retrieve information related to the query. The second model would classify whether the returned information is relevant to answer the query, and based on that, decide whether to return an answer or "I don't know". In case of "answer", the third model would convert the retrieved information into a concise answer that's returned to the user.

      If one feels daring, one may combine this with functionality to update the vector database, either through user input, or by web crawling.
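
      A minimal sketch of the three-stage design described above, assuming nothing beyond the Python standard library: the toy knowledge base, the similarity threshold, and all function names are made up, and crude bag-of-words cosine similarity stands in for real embedding models and a vector database.

```python
# Toy sketch of the "retrieve, check relevance, answer" pipeline described
# in the comment above. Real systems would use embedding models and a
# vector database; here bag-of-words cosine similarity stands in for all
# three "models". Knowledge base and threshold are purely illustrative.
import math
from collections import Counter

KNOWLEDGE_BASE = [
    "The GDPR requires that personal data be accurate.",
    "Complaints can be filed with a national data protection authority.",
]

def vectorize(text):
    """Stage-1 stand-in: turn text into a bag-of-words 'embedding'."""
    return Counter(word.strip(".,?!") for word in text.lower().split())

def cosine(a, b):
    """Similarity between two bag-of-words vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def is_relevant(query_vec, doc_vec, threshold=0.3):
    """Stage-2 stand-in: does the retrieved text actually answer the query?"""
    return cosine(query_vec, doc_vec) >= threshold

def answer(query):
    """Stage-3 stand-in: return the best match, or admit ignorance."""
    query_vec = vectorize(query)
    best = max(KNOWLEDGE_BASE, key=lambda doc: cosine(query_vec, vectorize(doc)))
    if is_relevant(query_vec, vectorize(best)):
        return best
    return "I don't know"

print(answer("What does the GDPR require about personal data?"))
print(answer("Who won the 1994 World Cup?"))
```

      Toy as it is, it shows the design point: the middle "is this relevant?" classifier is what lets the system answer "I don't know" instead of generating something plausible.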

      Delete
    8. So we need 3 wasteful LLMs to do the job one educated person can do in about 15 minutes? Wow! LLMs are worthless.

      Delete
    9. Since all three of these models would be specialized for their unique purposes, they could be much smaller than an LLM like ChatGPT - so for knowledge retrieval it's more efficient, but it wouldn't have the same reasoning ability.

      I get your resentment towards AI, and I think it's way overhyped and misused frequently. However, I think it does have its place. I use GitHub Copilot daily - that's a language model specialized for programming. While I rarely use the Chat functionality, the AI autocomplete drastically speeds up my coding. It makes my workflow much more efficient. I would argue that it's less wasteful to use a language model than to waste resources and my time as a developer by not using it. It means that I, an educated person, can do the same task that would have taken 15 minutes without AI, in five minutes.

      Delete
    10. Lol no. It means you will be fidgeting with a useless AI for 20 minutes, instead of doing in 10 minutes the work you could've done in five. It's a complete waste, and you'd be more efficient if you just went and did whatever you wanted to do yourself.

      Delete
    11. I am a professional developer and bioinformatics researcher. I literally use AI for coding all day, every day. I have been developing software long before language models were a thing, I have years of experience in programming without AI, and about two years of experience programming with AI.

      You can trust me in my judgement of how AI impacts my daily work.

      Delete
    12. When it needs to generate a source because it doesn't have one, it knows that a source doesn't exist.

      Delete
    13. No, it doesn't. That would take actual capabilities to assess information, and GenAI doesn't have that; it's just a content generator that doesn't understand a thing about the content being generated. It's like a more complex and refined version of the autocomplete feature on your smartphone's keyboard app. When ChatGPT gives you a source, it's not because it understands what a source is. It does that simply because it parsed a bunch of texts that have something labelled as "sources", and the AI imitates that in the output, without really knowing what a source even is.

      Delete
    14. But wouldn't it be able to tell the difference between repeating information it found as opposed to generating it itself?

      Delete
    15. No, it doesn't remember how it learns. All of its learning capacity is dedicated to being a good predictor of the next token (word/letter/etc.), which is how it can be convincing. It's not forming a stable identity or a sense of history, which is the basis of learning.

      Delete
    16. I think you have to first define what "understands" means to make that claim. What if I told you that you are simply a next token predictor, with your memory being referenced for each token?

      Delete
    17. You would be telling a falsehood. That's not even remotely how human interactions work. Nobody is out there having conversations by trying to figure out what the most likely thing would be for someone to say. Not to mention that humans don't reduce words to tokens.

      Delete
    18. You are attributing too much agency to a tool that just looks at words and produces the most appropriate words to go next.

      It can never know anything. It's not built to know. It's built to shuffle words around in a way that reflects the order it's previously seen them.
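
      The "shuffle words around in the order it's previously seen them" idea can be made concrete with a toy bigram model. It's vastly cruder than a real LLM, and the training sentence here is made up, but the objective (emit the statistically likeliest next word) is the same family.

```python
# Toy bigram "next word" predictor: for each word, remember which words
# followed it in the training text, then always emit the most frequent
# successor. A real LLM is enormously more sophisticated, but shares this
# core next-token objective.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Most frequent successor seen in training, or None if never seen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

def generate(start, n=5):
    """Greedily chain next-word predictions from a starting word."""
    out = [start]
    for _ in range(n):
        w = next_word(out[-1])
        if w is None:
            break
        out.append(w)
    return " ".join(out)

print(generate("the"))
```

      Nothing in `follows` knows what a cat or a mat is; it only records word order, which is exactly the point being made above.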

      Delete
    19. I have seen multiple examples of it generating made-up sources that looked so realistic the users assumed it was their mistake for not finding them.

      (Scientific papers, with relevant-sounding names, in existing and relevant journals, with valid issue numbers and page numbers... but that just did not exist at the specified location in that journal. Or anywhere else, for that matter.)

      I wouldn't be surprised to eventually see it cite actual documents that are relevant to the topic but just don't actually support the claims it makes.

      Properly fact-checking its answers is just as much work as doing the bibliography yourself, if not more.

      Delete
    20. That’s not in response to knowing where its knowledge is lacking though. You’re assigning human characteristics to a machine that effectively just guesses everything. It can’t self assess knowledge. The source is made up because everything is made up. It’s all patterns.

      If ChatGPT has offered a source, it's either because you've asked for it or you've questioned its previous response. It will come up with the most plausible source reference it can based on training data, to give off credibility. It doesn't know if the source exists; it's just a best guess at a feasible book name.

      Delete
    21. You are literally the only one coping.

      Delete
    22. The whole thing with “rote” things is that people do them without thinking or understanding what they’re doing. Using ChatGPT to do something for you, rather than using it as a tool to help you to understand it, is the very literal definition of rote. People are increasingly trying to use it to attempt very skilled work that they don’t understand, without knowing whether the output will be complete slop or not. I’m pretty sure that is the sort of shit that this guy is talking about. And it seems like you’re encouraging that attitude too.

      Also, it really doesn't help that LLMs hallucinate answers but make them convincing enough to sound true. So it's kind of terrifying that some people are starting to use ChatGPT as an alternative to Google. And it really doesn't help that Google is now integrating their often wildly inaccurate AI slop into search results.

      Delete
    23. Lol, do you live your entire life doing “rote things”?

      The rest of us need some actual intelligent work to be done, which the current generation of AI is woefully incapable of doing.

      The standard AI slop you seem to love so much for your Reddit posts is just regurgitated nonsense that has some pretty glaring flaws due to statistical limitations in its training data.

      The people who rely on the slop need to have low standards, or be willing to bridge the gap between what the model produces and what they actually expected.

      Current models, even with Chain of Thought and Reasoning improvements, are capable of being extremely inaccurate, and need external fact checking or verification. Also, good luck getting them to generate functional code when the use case is more complex than “Here’s my Leetcode problem”.

      Delete
    24. This entire rant sounds like a rote LLM creation. The irony is overwhelming.

      Delete
    25. Of course it sounds like a rote LLM creation to someone that isn’t smart. They’re the perfect audience to think that using big words and proper grammar are the same thing as knowing what you’re actually talking about.

      Delete
    26. You seem easily overwhelmed and impressed. If only you were curious as well.

      Delete
    27. this dude: "hurd derp dur derpty hurdy dur extinct fields in the past 🤪"

      Delete
    28. Sorry, doesn't change anything.

      Delete
    29. Of course it doesn't. That's not the point.

      Delete
    30. You should look more into how these systems work. Sure, there are situations where it's helpful, but until they make a lot more progress you should NEVER trust ChatGPT at its word about any fact you need to be 100% sure is true.

      Delete
    31. I invented core parts of these systems at Google 😂

      Delete
    32. Ah, so you already know it just makes up plausible-sounding sentences without regard for truth?

      Delete

    33. Plausibility ultimately trends towards truth with reward models and settles at coherence.

      Like, you simply wildly overestimate how true your own brain's output is.

      It's like you're some kid that read a newspaper article on this.

      Delete
    34. ChatGPT ass answer

      Delete
    35. Look at this fucking mark

      Delete
    36. Interesting, I had no idea this sub was full of uninformed luddites who have no idea what GPT can do. This was like massively downvoting a comment saying the sky is blue.

      Delete
    37. Says the guy who still slings luddite half a decade after bottoshop released

      Delete
    38. Yikes, you've barely played with ChatGPT, haven't you? It will literally spout false info at you. It even has issues with basic logic and sometimes math. I know because I've played around with it. It's about as bad as calling YouTube videos and opinion articles from Joe Nobody off the street a good, trustworthy source of info.

      Delete
    39. I've done significantly more than play around with it. I've used it extensively for the past two years, tracking its evolution and charting its improvements. Try exploring the capabilities of the latest and most intelligent models rather than forming an opinion off of last year's free version.

      Delete
    40. Lmao, I didn't use last year's free version, but you go off 😂

      Delete
    41. You sound like the kind of person who no one throws a retirement party for

      Delete
    42. You guys really hate AI. I don't get it, it's super helpful. Is there some grassroots movement that's trying to turn people against AI or something?

      Delete
  9. This comment has been removed by a blog administrator.

    ReplyDelete
  10. Please stop asking AI questions.

    Please stop using AI as a google search.

    Please stop using AI as a therapist.

    Please stop trusting AI.

    ReplyDelete
    Replies
    1. Google is pretty much another AI at this point - the whole top part is often their version of ChatGPT, and the next five options are paid ads. It's sad watching Google not be as useful as it once was.

      Delete
    2. Type a curse word into your google search and the AI disappears. It's a fantastic trick (and great stress release to type "fucking [whatever it is I'm looking for]").

      Edit: Looks like the ads go away too!

      Delete
    3. I did this and it went so so so wrong. I was trying to look up my 3 year old daughter's height and weight percentiles and I decided to add fuck to the key words. Google gave me a warning about child abuse instead of AI free search results.

      Delete
    4. THANK YOU!

      No ads, No broke AI.

      I'm gonna #$@# and $@#$ my way through every google search from now on.

      Tip for anyone: just put a minus before the curse word so it doesn't interfere with the search.

      Delete
    5. uBlock Origin and zapper mode.

      You can hide all that crap easily.

      Delete
    6. And don't use Chrome!

      Mozilla went to all that work to get us off Internet Exploder, and then people went to Chrome :(

      Delete
    7. Using AI is fine.

      Understand what LLMs are good at, and what they are not.

      Validate key facts elsewhere before making decisions on anything.

      Delete
    8. Please stop asking AI questions.

      Please stop using AI as a google search.

      Please stop using AI as a therapist.

      Please stop trusting AI.

      But still, use AI responsibly....

      Delete
    9. Please stop asking AI questions.

      Please stop using AI as a google search.

      Please stop using AI as a therapist.

      Please stop trusting AI.

      But still, use AI responsibly....

      But also, please stop trusting AI

      Delete
    10. You sound like a person screaming “Please stop using cars” when they were initially introduced.

      Delete
    11. I think it sounds more like the people who screamed about being forced to wear seatbelts because "I would never drive unsafely." Just replace "drive unsafely" with "use AI clumsily."

      Then we look at traffic death statistics and put 2+2 together.

      Delete
    12. No. I live in this day and age.

      Delete
  11. That's terrible. "When accused, you lose." "You can't unring a bell."

    How sad; this man's reputation was just smashed, and they are basically just responding with "yeah, glitches, we are working on them."

    I hope he sues them.

    ReplyDelete
    Replies

    1. Imagine losing your job or friends due to an AI calling you a murderer or rapist, and other AIs making fake news based on that, even with AI images.

      AI has been transformed from a great predictive tool for medicine and weather into a hellish capitalistic dystopia tool :(

      And it's my nemesis at work, due to it making the least optimized code ever and lazy devs trying to use it.

      Delete
    2. Think of it as a decade of being able to blame AI. A workman can blame the tools provided by a foreman. Also, dropping malicious code into AI libraries is going to be a goldmine for the scammers. High priority is going to be cleaning forever.

      Delete
    3. Transformed? It was always a hellish capitalistic dystopia tool from inception. All the "wonderful" things they promised like advancing medicine were just attempts to win you over and get their foot in the door before you realized the truth.

      Delete
    4. Machine learning was already in use in specific areas for specific use cases, like computer vision, for years without much issue before consumer LLMs and image generators popped up. It's not the tech that's the problem, or necessarily even a misinformed user (ask who misinformed them); it's the false and inflated promises and the wilful, complete lack of accountability.

      Delete
    5. His reputation wasn't touched in any way, what are you guys on about? He's claiming this was a random comment that GPT made to him in the privacy of his own home. This wasn't published in the newspapers until he decided to file a lawsuit about it.

      Delete
    6. But, like, the only person who was told this was him. He's a nobody. Nobody was asking about him.

      Delete
    7. Reputation smashed? Did I miss something?

      Delete
    8. Exactly, this had zero impact on his reputation. And you got downvoted for saying the obvious? I guess a dumbass article about a dumbass lawsuit led to a bunch of dumbass comments.

      Delete
    9. No one reads the article here so you got downvoted

      Delete
    10. Well, maybe if he hadn't sued OpenAI and made a news article about it, nobody would have known the wrong info ChatGPT gave about him? I doubt many people would have asked that question unprompted.

      The Streisand effect is a bitch.

      Delete
    11. The Streisand effect is generally about a public entity trying to hide an embarrassing truth; the info that gets amplified is the embarrassing truth. What's being amplified here is this guy being wronged by ChatGPT. Nothing else.

      Delete
  12. If you ask it the same question now you get a very different response!

    ReplyDelete
    Replies
    1. Yes, lawsuits do have that effect...

      Delete
  13. I wonder what the X AI bot would say about its owner.

    ReplyDelete
  14. How does saying his name lead to those words as the "next logical words"?

    ReplyDelete
  15. Why would this guy even ask an AI something about himself? Seems weird.

    ReplyDelete
    Replies
    1. Same reason someone would google their own name, just slightly dumber

      Delete
  16. B-b-but we fed the thing behind the curtain everything we could. We cannot reverse-engineer what conclusions it makes because the algorithms are by nature too complex, so we’re standing by as this is the new reality now, sorry.

    Possibly if you pay us enough we’ll feed opposing opinions into the wood chipper and see what comes.

    2 out of 10 plutocrats agree.

    ReplyDelete
  17. Something that can't unlearn false information isn't intelligent.

    ReplyDelete
  18. Perhaps everyone should run the test this guy ran, and if the output contains hallucinations about themselves, also file suit for defamation.
    It looks like OpenAI is just feigning addressing this problem with huge if(){}else(){} chains as complaints arise.

    ReplyDelete
  19. Hope he wins and they have to pay out the nose. If it can’t even be factually correct 99.9% of the time it has no business being a purveyor of truth, news, or even search results. Keep it in the box as a funny chat bot until it works with logic.

    ReplyDelete
  20. Interesting, I hadn't really considered the intersection between GDPR and these LLMs before. I worked at a large tech firm when the GDPR first went into effect, and I still remember the kinds of processes we had to make available for EU citizens who wanted to scrub their data from our systems. It's not immediately apparent to me how those kinds of tools could be fashioned for a traditional large language model like the kinds ChatGPT uses.

    ReplyDelete
  21. I don't think, given the way the transformer works, that there's an actual 'link' between this Norwegian man and the murder story. One thing I've noticed about LLMs: there's not much internal consistency to prompt responses. The hallucination is probably just the model seeing common elements between this random individual, about whom it scraped demographic information, and a murder story, and linking the two disparate elements by common themes. LLMs are incapable of context, so they see a few related concepts and shit out nonsense. It is a fundamental flaw that goes beyond training data. The AI companies are attempting to supplant context with strict prompting, but it's thus far an easily cracked facade. That's not to say they won't overcome this limitation eventually, but LLMs are never going to be capable of reasoning, so they'll always string bullshit together absent the prompting telling them not to be stupid.

    ReplyDelete
  22. I'll have to say - this timeline is unraveling and enshittifying at an astonishing speed...

    ReplyDelete
  23. I kind of wonder how much the amount of publicly available data on a given person matters for output related to that person. Like, if someone isn’t on social media and doesn’t have much information out there to scrape, are these models more prone to hallucinate?

    ReplyDelete
  24. $1billion / day fines and I'll bet they can fix this quick.
    Backdated.

    ReplyDelete
  25. Checked my name in ChatGPT. Seems like it just scraped some info from LinkedIn.

    It would be interesting to know how ChatGPT could hallucinate that someone was a murderer.

    ReplyDelete
  26. Artificial? Sure.
    Intelligence? Not so much.

    FFsake, let this AI bubble burst already. It's perfect when you ask it to hallucinate ("imagine a double-decker bus built by Bentley"), but it is absolutely useless on anything factual, especially when it comes to interpreting and interconnecting facts.

    ReplyDelete
  27. I'd just like to give a hand to whatever adtech guy ensured that I was served an "Unlocking the potential of GenAI" ad along with this article. What's that they say about a gaffe being a moment of inadvertent honesty?

    ReplyDelete
  28. Did you know Sam Altman's hobby is to strangle little kittens? I'm basing this on information from many reliable sources, including the New York Times and public court documents. It's definitely true that Sam Altman has killed many defenceless kittens. If anyone wants to know about Sam Altman the fact that he is a kitten killer is one of the most important facts about him.

    ReplyDelete
  29. If we have to ensure that our models don't randomly accuse people of being murderers, then the race is already lost.

    - Sam Altman (probably)

    ReplyDelete
  30. "Don't you think if we could stop it from hallucinating we would?!?!" 😭

    —Sam Altman (probably)

    ReplyDelete
  31. I feel sorry for the guy. On the bright side of things, this is exactly the kind of case the GDPR was designed for, not just the misinformation but also the right to be forgotten. Here is to hoping that the law will hold OpenAI accountable for the kind of garbage they have unleashed.

    ReplyDelete
  32. If this was spewed by Grok, I know a cunt that'd tweet "I bet he did it"

    ReplyDelete
  33. I'm hopeful that the EU courts impose some meaningful fines on OpenAI as a way to encourage them (and other AI companies) to put some sort of fact-checking on outputs.
    Given that there are effectively an infinite number of ways that an LLM can confabulate false information about real people, I can't see this problem being solved by specific exceptions when a complaint is made.

    ReplyDelete
