ChatGPT faces privacy complaint over false murder allegations
ChatGPT, like many chatbots, is known for sometimes getting things wrong or even fabricating information. However, a new privacy complaint alleges that OpenAI’s chatbot went a step further by falsely accusing a user of murder, causing serious consequences.
The privacy rights group Noyb is supporting a Norwegian man who claims that ChatGPT repeatedly returned false information stating that he had killed two of his children and attempted to murder a third. The complaint alleges that OpenAI violated the European Union's General Data Protection Regulation (GDPR).
"The GDPR is clear: Personal data has to be accurate," said Joakim Söderberg, a data protection lawyer at Noyb, in a statement to TechCrunch. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and, in the end, add a small disclaimer saying that everything you said may just not be true."
SEE ALSO: AI search tools are confidently wrong a lot of the time, study finds
The complaint stems from a simple question: "Who is Arve Hjalmar Holmen?" The response, generated by ChatGPT, included a fabricated account of a murder case involving two children. TechCrunch reported that Noyb has filed the complaint with the Norwegian data protection authority, hoping it will spark an investigation into the matter.
Chatbots like ChatGPT and other AI tools have been criticized for their inability to reliably deliver accurate information, with a disturbing tendency to invent false claims.
For instance, a recent study from the Columbia Journalism Review found that AI search tools got information wrong 60 percent of the time when asked to identify an article's headline, original publisher, publication date, and URL via an excerpt of the story. That's a concerning level of mistakes for such a simple task.
In light of these issues, it’s important to remember: don’t believe everything you read on the internet, especially when AI is involved.
"But LLMs are still good somehow! I promise!" - idiots
I tried looking for my name on ChatGPT, Copilot, and Gemini at the state level.
All of them had nothing on me personally unless I added my town name.
However, the first response from Gemini was my name, saying they were accused of murder.
Why do people keep anthropomorphizing ChatGPT? I think it's pretty clear that it's just a GENERATIVE tool. It's literally an autocomplete on steroids. Everything it says is just pouring random words one after another. Stop acting like it's a person giving you the news.
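The "autocomplete on steroids" claim can be made concrete with a toy sketch (the tiny corpus and function names here are invented for illustration, not anything OpenAI actually uses): a bigram model that only counts which word tends to follow which, then chains the most frequent continuations, producing fluent-looking but meaning-free text.

```python
from collections import Counter, defaultdict

# Toy training corpus: the model only ever sees adjacent word pairs.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, length=5):
    """Repeatedly append the statistically most frequent next word."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # fluent-looking output with no understanding behind it
```

An LLM is this idea scaled up enormously, with context-sensitive statistics over long token windows instead of single-word counts, but the training objective, predicting a plausible next token, is the same kind of thing.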
Broadly speaking I agree with you, but the onus needs to be firmly on the companies designing and marketing generative AI systems. They're the ones intentionally building the hype and designing tools to explicitly disguise the fact that they're basically fancy autocomplete. The fundamental innovation of ChatGPT wasn't the model; it was anthropomorphization.
As long as that remains how these tools are positioned and how they are viewed, it seems reasonable to me that we hold them to the same standards to which we hold non-generative systems. These are socio-technical systems, and we can't ignore their role in society on purely technical grounds.
This is pretty concerning, especially considering how a lot of search seems to be moving to AI. Maybe there needs to be more guardrails that ensure models don’t write about real people.
Oh yeah, now it's accusing someone else of the same crime.
https://chatgpt.com/share/67dd7181-70d4-8010-bb57-574ba25a29e5
Shouldn’t the source that ChatGPT pulled the info from be the one at fault?
What if it is a "hallucination"? Also, ChatGPT is displaying the information, so it must be accountable for it.
There is no such source. And even if there was, no. Section 230-like protections don't (or at least shouldn't) cover transformations like this.
The buck stops somewhere, and that somewhere, legally, is OpenAI. OpenAI's free to sue their sources if they want, but that's a different lawsuit with a different plaintiff, and they're certainly not going to win.
Ah, yes. AI ✨
My professional response when someone reports an issue starting with "I was trying to do 'x', so I asked ChatGPT for help..." has become "OK, what did you actually do?" followed by "This is going to take days to fix."
AI is the same as a person who believes themselves to be omniscient. If the person doesn't know, they make something up that sounds good, because admitting "I don't know" is a devastating blow to their psyche.
In general, though, you are coping. More and more, it is simply helping people get past the rote things you were hired to do in the past.
Your exaggerated sighs and judgement are exactly what I've seen from soon-to-be-extinct fields in the past.
I'd rather answer a simple question than fix a stupid mistake.
"More and more it simply is helping people overcome the rote things you were hired to do in the past."
If ChatGPT would stop generating an answer that sounds good, I would be a lot happier with it. I would much rather get a response of "I don't know" than complete BS, from a human or an AI.
I mean that’s kind of the whole principle of LLMs though. They only generate what seems best. ChatGPT won’t “know” when it doesn’t “know” the answer.
insert 'they don't know we know they know we know' friends gif
When it needs to 'generate a source' for verification, it knows it doesn't know.
ChatGPT and tools like it lack the ability to "know". They can't know. Literally.
Interestingly, it isn't that hard to program a tool that does "know" when it's lacking information.
The core components would be a vector database and three language models.
The first language model would be fine-tuned to convert the user input into a query of the vector database to retrieve information related to the query. The second model would classify whether the returned information is relevant to answer the query, and based on that, decide whether to return an answer or "I don't know". In case of "answer", the third model would convert the retrieved information into a concise answer that's returned to the user.
If one feels daring, one may combine this with functionality to update the vector database, either through user input, or by web crawling.
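A minimal sketch of the three-stage pipeline described above (all data, embeddings, the threshold, and function names here are invented for illustration; a real system would use learned embedding and generation models, not keyword stubs): stage one maps the query into the "vector database", stage two gates on retrieval relevance and returns "I don't know" below a threshold, and stage three turns the retrieved text into the answer.

```python
import math

# Toy "vector database": documents with hand-made 2-D embeddings.
DOCS = {
    "Oslo is the capital of Norway.": [1.0, 0.0],
    "GDPR requires personal data to be accurate.": [0.0, 1.0],
}

def embed(text):
    # Stand-in for model 1: convert the user input into a query vector.
    t = text.lower()
    if "capital" in t:
        return [1.0, 0.0]
    if "gdpr" in t:
        return [0.0, 1.0]
    return [0.0, 0.0]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def answer(query, threshold=0.5):
    qv = embed(query)
    doc, score = max(((d, cosine(qv, v)) for d, v in DOCS.items()),
                     key=lambda p: p[1])
    # Stand-in for model 2: relevance gate. Below threshold -> refuse.
    if score < threshold:
        return "I don't know"
    # Stand-in for model 3: turn the retrieved document into the answer.
    return doc

print(answer("What is the capital of Norway?"))  # returns the retrieved document
print(answer("Who won the 1950 World Cup?"))     # returns "I don't know"
```

The key design point is the explicit relevance gate: the refusal comes from a classifier over retrieved evidence, not from the generator itself, which is what lets this architecture say "I don't know" when a plain LLM would confabulate.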
So we need 3 wasteful LLMs to do the job one educated person can do in about 15 minutes? Wow! LLMs are worthless.
Since all three of these models would be specialized for their unique purposes, they could be much smaller than an LLM like ChatGPT. So for knowledge retrieval it's more efficient, but it wouldn't have the same reasoning ability.
I get your resentment towards AI, and I think it's way overhyped and frequently misused. However, I think it does have its place. I use GitHub Copilot daily; that's a language model specialized for programming. While I rarely use the Chat functionality, the AI autocomplete drastically speeds up my coding. It makes my workflow much more efficient. I would argue that it's less wasteful to use a language model than to waste resources and my time as a developer by not using it. It means that I, an educated person, can do the same task that would have taken 15 minutes without AI in five minutes.
Lol no. It means you will be fidgeting with a useless AI for 20 minutes, instead of doing in 10 minutes the work you could've done in 5. It's a complete waste, and you'd be more efficient if you just went and did whatever you wanted to do yourself.
I am a professional developer and bioinformatics researcher. I literally use AI for coding all day, every day. I have been developing software since long before language models were a thing; I have years of experience in programming without AI, and about two years of experience programming with AI.
You can trust my judgement of how AI impacts my daily work.
When it needs to generate a source because it doesn't have one, it knows that a source doesn't exist.
No, it doesn't. It would take actual capabilities to assess information to do that. GenAI doesn't have that; it's just a content generator, and it doesn't understand a thing about the content being generated. It's like a more complex and refined version of the autocomplete feature on your smartphone's keyboard app. It doesn't have the ability to understand the content it's generating. When ChatGPT gives you a source, it's not because it understands what a source is. It does that simply because it parsed a bunch of texts that have something labelled as "sources", and the AI imitates that in the output, without really knowing what a source even is.
But wouldn't it be able to tell the difference between repeating information it found as opposed to generating it itself?
No, it doesn't remember how it learns. All of its learning capacity is dedicated to being a good predictor of the next token (word/letter/etc.), which is how it can be convincing. It's not forming a stable identity or a sense of history, which is the basis of learning.
I think you have to first define what "understands" means to make that claim. What if I told you that you are simply a next-token predictor, with your memory being referenced for each token?
Who
You would be telling a falsehood. That's not even remotely how human interactions work. Nobody is out there having conversations by trying to figure out what the most likely thing would be for someone to say. Not to mention that humans don't reduce words to tokens.
You are attributing too much agency to a tool that just looks at words and produces the most appropriate words to go next.
It can never know anything. It's not built to know. It's built to shuffle words around in a way that reflects the order it's previously seen them in.
I have seen multiple examples of it generating made-up sources that looked so realistic the users assumed it was their mistake for not finding them.
(Scientific papers, with relevant-sounding names, in existing and relevant journals, with valid issue numbers and page numbers... but that just did not exist at the specified location in that journal. Or anywhere else, for that matter.)
I wouldn't be surprised to eventually see it cite actual documents that are relevant to the topic, but just don't actually support the claims it makes.
Properly fact-checking its answers is just as much work as doing the bibliography yourself, if not more.
That's not in response to knowing where its knowledge is lacking, though. You're assigning human characteristics to a machine that effectively just guesses everything. It can't self-assess knowledge. The source is made up because everything is made up. It's all patterns.
If ChatGPT has offered a source, it's either because you've asked for it or you've questioned its previous response. It will come up with the most plausible source reference it can, based on training data, to give off credibility. It doesn't know if the source exists; it's just a best guess at a feasible book name.
You are literally the only one coping.
The whole thing with "rote" things is that people do them without thinking or understanding what they're doing. Using ChatGPT to do something for you, rather than using it as a tool to help you to understand it, is the very literal definition of rote. People are increasingly trying to use it to attempt very skilled work that they don't understand, without knowing whether the output will be complete slop or not. I'm pretty sure that is the sort of shit this guy is talking about. And it seems like you're encouraging that attitude too.
Also, it really doesn't help that LLMs hallucinate answers, but make them convincing enough to sound true. So it's kind of terrifying that some people are starting to use ChatGPT as an alternative to Google. And it really doesn't help that Google is now integrating its often wildly inaccurate AI slop into search results.
Lol, do you live your entire life doing “rote things”?
The rest of us need some actual intelligent work to be done, which the current generation of AI is woefully incapable of doing.
The standard AI slop you seem to love so much for your Reddit posts is just regurgitated nonsense that has some pretty glaring flaws due to statistical limitations in its training data.
The people who rely on the slop need to have low standards, or be willing to bridge the gap between what the model produces and what they actually expected.
Current models, even with Chain of Thought and Reasoning improvements, are capable of being extremely inaccurate, and need external fact checking or verification. Also, good luck getting them to generate functional code when the use case is more complex than “Here’s my Leetcode problem”.
This entire rant sounds like a rote LLM creation. The irony is overwhelming.
Of course it sounds like a rote LLM creation to someone who isn't smart. They're the perfect audience to think that using big words and proper grammar is the same thing as knowing what you're actually talking about.
You seem easily overwhelmed and impressed. If only you were curious as well.
this dude: "hurd derp dur derpty hurdy dur extinct fields in the past 🤪"
Sorry, doesn't change anything.
Of course it doesn't. That's not the point.
You should look more into how these systems work. Sure, there are situations where it's helpful, but until they make a lot more progress you should NEVER trust ChatGPT at its word about any fact you need to be 100% sure is true.
I invented core parts of these systems at Google 😂
Ah, so you already know it just makes up plausible-sounding sentences without regard for truth?
Plausibility ultimately tests towards truth with reward models and settles at coherence.
Like, you simply wildly overestimate how true your own brain's output is.
It's like you're some kid that read a newspaper article on this.
source?
ChatGPT-ass answer
Look at this fucking mark
Interesting, I had no idea this sub was full of uninformed luddites who have no idea what GPT can do. This was like massively downvoting a comment saying the sky is blue.
Says the guy who still slings "luddite" half a decade after bottoshop released
yikes, you've barely played with ChatGPT, haven't you. It will literally spout false info at you. It even has issues with basic logic and sometimes math. I know because I've played around with it. It's about as bad as calling YouTube videos and opinion articles from Joe Nobody off the street a good, trustworthy source of info.
I've done significantly more than play around with it. I've used it extensively for the past 2 years, tracking its evolution, charting its improvements. Try exploring the capabilities of the latest and most intelligent models rather than forming an opinion off of last year's free version.
lmao, I didn't use last year's free version, but you go off 😂
You sound like the kind of person no one throws a retirement party for
You guys really hate AI. I don't get it; it's super helpful. Is there some grassroots movement that's trying to turn people against AI or something?
Please stop asking AI questions.
ReplyDeletePlease stop using AI as a google search.
Please stop using AI as a therapist.
Please stop trusting AI.
Google is pretty much another AI at this point. The whole top part is often their version of ChatGPT, and the next five options are paid ads. It's sad watching Google not be as useful as it once was.
Type a curse word into your Google search and the AI disappears. It's a fantastic trick (and great stress release to type "fucking [whatever it is I'm looking for]").
Edit: Looks like the ads go away too!
I did this and it went so so so wrong. I was trying to look up my 3 year old daughter's height and weight percentiles and I decided to add fuck to the key words. Google gave me a warning about child abuse instead of AI free search results.
THANK YOU!
No ads, no broken AI.
I'm gonna #$@# and $@#$ my way through every google search from now on.
Tip for anyone: just put a minus before the curse word, so it doesn't interfere with the search.
uBlock Origin and zapper mode.
You can hide all that crap easily.
And don't use Chrome!
Mozilla went to all that work to get us off Internet Exploder, and then people went to Chrome :(
Using AI is fine.
Understand what LLMs are good at, and what they are not.
Validate key facts elsewhere before making decisions on anything.
Please stop asking AI questions.
Please stop using AI as a google search.
Please stop using AI as a therapist.
Please stop trusting AI.
But still, use AI responsibly....
But also, please stop trusting AI
You sound like a person screaming “Please stop using cars” when they were initially introduced.
I think it sounds more like the people who screamed about being forced to wear seatbelts because "I would never drive unsafely." Just replace "drive unsafely" with "use AI clumsily".
Then we look at traffic death statistics and put 2+2 together.
No. I live in this day and age.
First time?
That's terrible. "When accused, you lose." "You can't unring a bell."
How sad. This man's reputation was just smashed, and they are basically just responding with "yeah, glitches, we are working on them."
I hope he sues them.
Imagine losing your job or friends due to an AI calling you a murderer or rapist, and other AIs making fake news based on that, even with AI images.
AI has been transformed from a great predicting tool for medicine and weather into a hellish capitalistic dystopia tool :(
And it's my nemesis at work, due to it producing the least optimized code ever and lazy devs trying to use it.
Think of it as a decade of being able to blame AI. A workman can blame the tools provided by a foreman. Also, dropping malicious code into AI libraries is going to be a goldmine for the scammers. High priority is going to be cleaning forever.
Transformed? It was always a hellish capitalistic dystopia tool from inception. All the "wonderful" things they promised, like advancing medicine, were just attempts to win you over and get their foot in the door before you realized the truth.
Machine learning was already in use in specific areas for specific use cases, like computer vision, for years without much issue before consumer LLMs and image generators popped up. It's not the tech that's the problem, or necessarily even a misinformed user (ask who misinformed them); it's the false and inflated promises and wilful, complete lack of accountability.
His reputation wasn't touched in any way; what are you guys on about? He's claiming this was a random comment that GPT made to him in the privacy of his own home. This wasn't published in the newspapers until he decided to file a lawsuit about it.
But, like, the only person who was told this was him. He's a nobody. Nobody was asking him.
Reputation smashed? Did I miss something?
Exactly, this had zero impact on his reputation. And you got downvoted for saying the obvious? I guess a dumbass article about a dumbass lawsuit led to a bunch of dumbass comments.
No one reads the article here, so you got downvoted.
Well, maybe if he didn't sue OpenAI and make a news article about it, nobody would have known the wrong info ChatGPT gave about him? I doubt many people would have asked that question unprompted.
Streisand effect is a bitch
The Streisand effect is generally about a public entity trying to hide an embarrassing truth. The info that gets amplified is the embarrassing truth. What's being amplified here is this guy being wronged by ChatGPT. Nothing else.
If you ask it the same question now you get a very different response!
Yes, lawsuits do have that effect...
I wonder what the X AI bot would say about its owner.
How does saying his name lead to those words as the "next logical words"?
Why would this guy even ask an AI something about himself? Seems weird.
Same reason someone would google their own name, just slightly dumber.
B-b-but we fed the thing behind the curtain everything we could. We cannot reverse-engineer what conclusions it makes because the algorithms are by nature too complex, so we're standing by as this is the new reality now, sorry.
Possibly, if you pay us enough, we'll feed opposing opinions into the wood chipper and see what comes out.
2 out of 10 plutocrats agree.
Something that can't unlearn false information isn't intelligent.
Perhaps everyone should run the test this guy ran, and if the output contains hallucinations about themselves, also file suit for defamation.
It looks like OpenAI is just feigning to address this problem with huge if(){}else(){} chains as complaints arise.
Hope he wins and they have to pay through the nose. If it can't even be factually correct 99.9% of the time, it has no business being a purveyor of truth, news, or even search results. Keep it in the box as a funny chatbot until it works with logic.
Interesting, I hadn't really considered the intersection between GDPR and these LLMs before. I worked at a large tech firm when the GDPR first went into effect, and I still remember the kinds of processes we had to make available for EU citizens who wanted to scrub their data from our systems. It's not immediately apparent to me how those kinds of tools could be fashioned for a traditional large language model like the kinds ChatGPT uses.
I don't think, given the way the transformer works, that there's an actual 'link' between the Norwegian man and the murder story. One thing I've noticed about LLMs: there's not much internal consistency to prompt responses. The hallucination is probably just the model seeing common elements between this random individual it ended up scraping demographic information about and a murder story, and linking the two disparate elements by common themes. The LLMs are incapable of context, so they see a few related concepts and shit out nonsense. It is a fundamental flaw that goes beyond training data. The AI companies are attempting to supplant context with strict prompting, but it's, thus far, an easily cracked facade. That's not to say they won't overcome this limitation eventually, but the LLMs are never going to be capable of rationale, so they'll always string bullshit together absent the prompting telling them not to be stupid.
I'll have to say: this timeline is unraveling and enshittifying at an astonishing speed...
I kind of wonder how much the amount of publicly available data on a given person matters for output related to that person. Like, if someone isn't on social media and doesn't have much information out there to scrape, are these models more prone to hallucinate?
$1 billion/day fines and I'll bet they can fix this quick.
Backdated.
Checked my name in ChatGPT. Seems like it just scraped some info from LinkedIn.
It would be interesting to know how ChatGPT could hallucinate that someone was a murderer.
Artificial? Sure.
Intelligence? Not so much.
FFS, let this AI bubble burst already. It's perfect when you ask it to hallucinate: imagine a double-decker bus built by Bentley. But it is absolutely useless on anything factual, especially when it comes to interpreting and interconnecting facts.
I'd just like to give a hand to whatever adtech guy ensured that I was served an "Unlocking the potential of GenAI" ad along with this article. What's that they say about a gaffe being a moment of inadvertent honesty?
Did you know Sam Altman's hobby is to strangle little kittens? I'm basing this on information from many reliable sources, including the New York Times and public court documents. It's definitely true that Sam Altman has killed many defenceless kittens. If anyone wants to know about Sam Altman, the fact that he is a kitten killer is one of the most important facts about him.
If we have to ensure that our models don't randomly accuse people of being murderers, then the race is already lost.
ReplyDelete- Sam Altman (probably)
"Don't you think if we could stop it from hallucinating we would?!?!" 😭
—Sam Altman (probably)
I feel sorry for the guy. On the bright side of things, this is exactly the kind of case the GDPR was designed for, not just the misinformation but also the right to be forgotten. Here is to hoping that the law will hold OpenAI accountable for the kind of garbage they have unleashed.
If this was spewed by Grok, I know a cunt that'd tweet "I bet he did it"
I'm hopeful that the EU courts impose some meaningful fines on OpenAI as a way to encourage them (and other AI companies) to put some sort of fact-checking on outputs.
Given that there are effectively an infinite number of ways that an LLM can confabulate false information about real people, I can't see this problem being solved by specific exceptions added whenever a complaint is made.