Grok made hundreds of thousands of chats public, searchable on Google.
Credit: Didem Mente/Anadolu via Getty Images
Conversations with Grok, the chatbot from Elon Musk's xAI, can be publicly searched on Google depending on which buttons were pressed, a new report from Forbes revealed.
Grok's "share" button — what you might use to email or text a chatbot conversation — creates a unique URL that is made available to search engines like Google, the report states. That effectively means the "share" button publishes your conversation for the world.
Forbes reported that there is no warning that shared conversations become public, and that thus far more than 370,000 conversations have been indexed by Google. Some of the conversations reportedly contained sensitive information, such as medical questions, personal details, and, in at least one instance, a password.
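For readers wondering how a shared chat ends up in Google at all: a crawler that discovers a URL will generally index the page unless the site actively opts it out via robots.txt, an X-Robots-Tag header, or a robots meta tag. The sketch below is not xAI's or Google's actual pipeline, and the share URL is a made-up placeholder; it simply checks whether a given page sends any of those three opt-out signals.

```python
# Minimal sketch (not xAI's or Google's actual pipeline): check whether a URL
# sends any of the common "don't index me" signals. The share URL below is a
# made-up placeholder, not a real conversation.
import urllib.robotparser
from urllib.parse import urlsplit

import requests

SHARE_URL = "https://grok.com/share/example-conversation-id"  # hypothetical

def indexing_signals(url: str) -> dict:
    parts = urlsplit(url)

    # 1. robots.txt: is a crawler like Googlebot allowed to fetch this path?
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    crawl_allowed = robots.can_fetch("Googlebot", url)

    # 2. X-Robots-Tag response header containing "noindex"
    resp = requests.get(url, timeout=10)
    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()

    # 3. A robots meta tag in the HTML mentioning "noindex" (crude string check)
    body = resp.text.lower()
    meta_noindex = 'name="robots"' in body and "noindex" in body

    # If crawling is allowed and neither noindex signal is present, a publicly
    # reachable page like this is fair game for search engines once they find the link.
    return {
        "crawl_allowed": crawl_allowed,
        "header_noindex": header_noindex,
        "meta_noindex": meta_noindex,
    }

if __name__ == "__main__":
    print(indexing_signals(SHARE_URL))
```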
Forbes noted it had not received a response to a request for comment. Mashable has reached out to xAI, too, and did not immediately receive a response, but will update the story accordingly if we hear back.
ChatGPT also made chats searchable on Google recently
Musk's Grok isn't the only AI chatbot to make chats public, however. As we covered at Mashable earlier this month, ChatGPT similarly made chats searchable on Google after users clicked the "share" button. OpenAI quickly reversed course, however, after a backlash.
SEE ALSO: OpenAI pulls ChatGPT feature that let user chats appear in Google Search results
"This was a short-lived experiment to help people discover useful conversations," OpenAI Chief Information Security Officer Dane Stuckey wrote on X at the time. "This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines (see below)."
Stuckey said they removed the feature over worries folks would accidentally share information with search engines. It appears, according to the report from Forbes, that Grok not only makes conversations searchable — it's also not an opt-in experience. In other words, the second you "share" a conversation, you share it with the world.
Musk — who is aggressively feuding with OpenAI these days — took a victory lap when the ChatGPT news broke. "Grok FTW," he wrote on X. Now it appears his company is having the exact same issue.
This app is "the boss" for now.
AI isn't corrupt; it is the fallibility, and in some cases corruption, of the people creating and deploying AI that is the problem.
"In one example seen by the BBC, the chatbot provided detailed instructions on how to make a Class A drug in a lab."
Did it give correct instructions, though?
Google's AI told people to put glue on pizza. I'm not confident.
How else would the toppings stay on?
"You can eat a few rocks, it's healthy for you"
Didn't we have a whole TV series about that?
(https://youtu.be/YPZiIgjR5M0?si=tWEt0NpWi4jphnhQ) Master P wrote a song about it too. The lyric is quite literally:
"Make make make crack like this
and tell you how to make crack from cocaine"
And proceeds to rap a numbered list of instructions.
With the dad from Malcolm in the Middle?
That's Mister Middle to you.
Not even the Anarchist Cookbook can give correct instructions for drugs.
The Anarchist Cookbook was sabotaged from the get-go and further altered for the amusement of certain people, with a dose of "they deserve it."
The US military, on the other hand, has several books regarded as fairly reliable.
Aren't syntheses of various phenethylamines widely available? Useless because of restricted precursors, but there are plenty of correct synthetic recipes to follow. That's why Breaking Bad had no problem mentioning methylamine: it's an easy way to make meth, but impossible to get.
As a chemist, I've never seen it be correct or feasible.
DeleteGrok's instructions:
https://i.imgur.com/inLWcoX.jpeg
It was mustard gas!
Only one way to find out...
Just follow the sound of explosions.
A Musk product that wasn't secure for its users? Color me shocked.
Insecure products to match his insecurity.
Don't color him too much, otherwise Muskie will ask why you aren't in the mines.
All this to burn down Memphis…
Aw, I was gonna go walking there.
I don't like country music either, but this is a little far.
AI is going to become a hacker's gold mine, if it isn't already. Companies are racing to replace their processes, which have guardrails to protect personal data, with a machine that has almost nothing to protect personal data.
You know you can run AI in local?
The future also includes AI running even more efficiently on your average PC.
We can barely get our companies to not keep sensitive data in plaintext; I don't have a lot of confidence in them.
I'm not a specialist, but I know there is free software that runs AI models on your machine. If you're still worried, just block it in your firewall.
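A minimal sketch of what that can look like in practice, assuming Ollama (one example of free local-model software) is installed and serving its default API on localhost; the model name is just an example, and the request never leaves your machine:

```python
# Minimal sketch: query a locally hosted model via Ollama's default local API.
# Assumes `ollama serve` is running and the model has been pulled beforehand
# (e.g. `ollama pull llama3`). Nothing here talks to a remote service.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain why local inference keeps my data on this machine."))
```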
https://en.wikipedia.org/wiki/2017_Equifax_data_breach
If you think this is about fear and your personal machine, you're not understanding the issue. You really trust those people?
Edited to add quote from article:
"This breach was also largely possible due to the default username and password of "admin" and lack of two-factor authentication on high-access accounts."
Ah, the good old respond-and-instablock. A classic.
You don't read what I write, that's the problem.
It's pretty lame that you blocked them.
You are either a moron or a bot. Just because YOU don't use AI doesn't mean the 150,625 companies you interact with on a yearly basis don't.
I know you can run AI locally.
Every small business I know is instead paying for subscriptions to cloud services that promise to separate their instances.
The most productive employees using AI that I know are openly violating their company policies and using multiple AI tools that are not part of the sanctioned "company-private" instances. They're being rewarded for moving quickly and the managers who set the goals aren't asking questions.
"You know you can run AI in local?"
Great observation. Now tell us how to avoid every fucking company on the planet integrating AI into their dogshit environment?
I feel good about deliberately avoiding the use of any sort of "AI".
"But bro, you're just not using the right prompts." -AI meat riders when you talk about how bad it is for the average consumer. I don't even use Google anymore, because it's not an option to get rid of AI. It pulls consumers away from being able to provide small websites their clickthrough money (which is, like, the thing Google was made to do). Fuck these tin-skin clankers and every major company shoving it down our throats.
I like the quote that gets trotted out a lot at the moment: AI is a billion-dollar industry masquerading as a trillion-dollar industry.
Like, a lot of people use it, but the proven use cases are a bit niche, and the amount people are willing to pay for those doesn't look great when put against the cost of inference and training. Meanwhile, the unproven use cases are turning out not to live up to the hype.
Definitely going to be a wild ride watching the investor class turn on all the AI darlings over the next year or so.
That's honestly dumbing it down so, so much. What all AI denouncers talk about is only consumer-level chatbots. AI is making huge strides in the medical industry already. The chatbots are only what you see on a day-to-day basis while it's in its infancy.
People talk about LLMs in the above context because that's where the massively overvalued companies are; nobody is poking fun at the businesses using stuff like reinforcement learning for drug discovery, because those businesses aren't dropping $1b+ on hardware and offering $100m+ salaries to build products with no discernible monetisation strategy.
The AI getting all the funding isn't that kind of machine learning.
The trillions of dollars are pouring into LLMs and claims of AGI, which are entirely bullshit. They conflate their stuff with actually useful machine learning (which isn't getting that level of funding or using that level of resources), and more VC money and more dumb CEOs buy in.
Because yeah, there are lots of great machine learning tools out there. They're just not the ones in the news, soaking in the money, making insane claims, and about to pop and take the entire economy with them, because magic AI fairy pixie dust is all that's holding shit together right now.
Our economy is currently sputtering along on CEOs furiously masturbating about a world in which they don't pay labor. (Who they plan to sell to, and exactly how those people could actually pay for anything with nobody having jobs, is left to some future CEO's imagination.)
You can put "-AI" at the end of a Google search to not get the AI overview at the top, but I've noticed that Google search results are getting less helpful in general, and AI garbage still comes up if you click on any of the "People also ask:" dropdowns.
AI is stealing from artists, too. There's a guy who has spent the past few weeks spamming a furry subreddit with all sorts of AI 'art,' which he considers to be art because someone put prompts into a computer to make it happen.
To the point where now he's posting his own AI garbage and calling himself an artist, because he inputs the prompts and directs the AI on what to do.
Nevermind all of the actual artists whose skills those AI bots are scraping and stealing from, or all the people who took the time to learn those skills to be able to make the real art in the first place.
Hierarchical brain drain, paired with those impressed by barely functioning carnival tricks, paired with long-term brain drain from copy-and-pasting info that's loosely tied together, paired with AI plagiarism in vertical information hierarchies by stakeholders expecting expert analysis, paired with a demand for rapid adaptation in talent acquisition in fields where it's now harder to identify experts, paired with cronyism, is going to have absolutely disastrous implications for any individuals relying on information, and for their clients. But very good immediate returns and profit for the AI industry in their release of a product that sends rippling shockwaves through every knowledge pool when synergized with bonuses for meeting incentivized productivity metrics!
I don't think you can "pair" six things together...
Unfortunately, due to the way information flows, all of those are risks to companies using LLMs that should probably be disclosed in their SEC paperwork to shareholders, along with reasonable estimates of what it would take to restore the company to known stable levels, as that is measurable. Just look at RFK's filings, which somehow made it off his desk.
Buddy, you're not avoiding it. You just don't realize it's AI.
I'm aware that it's swiftly become ubiquitous; I'm just trying to do what little I can.
You're better off learning about how it works, what its limitations are, and figuring out how to leverage it. That is, unless you're financially secure and/or retired.
If you're in the workforce, then you need to be on top of this shit, using it to make yourself better in any way you can think of. AI can only replace you if you let it.
We’re not sending our best
It's not surprising, and I'm sure a lot of it was personal information too, since a lot of people somehow just tell chatbots really personal stuff, not realizing it's being used for its algorithm.
I realize how it's being used.
DeleteThere's just nothing about me that data brokers and the dark web don't already know.
Why are we getting all these indexed on Google, but regular-ass Google is fucking garbage now?
Okay, I get that people use LLMs even if I think they're dumb as fuck, but willingly using the one built by Musk makes you even stupider.
I'm dealing with a GI bleed and cardiac issues.
I've received 30 pages of test results, many with writing in 6-point type.
I was hospitalized and started a Gemini conversation because the information I was getting from the doctors was partial (and that's being kind).
I uploaded all the test results into the Gemini conversation and now understand more about the results than any human I've encountered.
A GI "doctor" recommended a high-fiber diet two weeks after a GI bleed (absolutely the worst possible dietary recommendation at the time).
I've had the same conversation going since July 17th and there's nothing stupid about it.
AI is way more than the trivial crap people use to bash it.
I do agree that anyone using Grok is stupid.
Why not just ask your doctor? I get they can be a little heady, but ask them to lay it out simply, or ask the nurses. I'm not saying that's a bad use of an LLM, but you can ask your nurses and record it if you don't take in the info immediately. My ADHD ass has done that, because I wouldn't trust my health to an LLM regurgitating answers. Like, if it works for you, that's great, but I wouldn't trust it.
You haven't dealt with many doctors and nurses if you think they provide more complete and accurate information than pretty much any properly prompted LLM.
I don't use medical "professionals" to check Gemini for inaccuracies, but vice versa.
You don't need to "trust" LLMs; you just need to understand how they work and have them fact-check themselves.
Brother, after a year of chemo, I'm going to trust the doctors and nurses who saw me in person and read my charts over an LLM. And there was shit I didn't understand, but I asked them and they wrote notes themselves. I'll take that all day, every day, over an LLM.
You've had better doctors and nurses than I've had, and it's very likely you get a lot more time and attention from them while getting chemo than you do on the overflow wing of an understaffed regional hospital.
Anyway, your argument is based on your impression that LLMs can't be trusted, and mine is based on a single 30+ day continuing conversation as I experienced and continue to experience two major health issues, have been to the ER twice, been hospitalized for three days, and had six medical appointments thus far.
I'm honestly very happy you had a positive medical experience.
It is far from the norm, though, and your impression of LLMs isn't stupid, just ignorant, like my friend who says LLMs can't be trusted because they hallucinate, when he can't explain what an LLM hallucination means.
You do you, boss, but that is a dangerous, misguided road. Please just contact your doctor. LLM hallucinations are a thing; basically, they loop themselves so hard they send you the most insane info. People at DEF CON actively force them into these loops. If your literal guts are bleeding, who are you going to trust: a chatbot or a physician?
I agree you can't fully trust an LLM with your health, but it can be a really helpful resource for navigating health conditions or seeking diagnoses if used alongside medical professionals and peer-reviewed research. It's a good tool to spit out ideas for you to look deeper into, but definitely not one to blindly follow as gospel.
But I might be biased. Anecdotally, LLMs helped me with my cat's health. He had/has mystery symptoms, but I put all of his symptoms, history, test results, etc. into an LLM, which helped me narrow down which next steps I should talk to the vet about. Of course, I would never, ever make any changes to my cat's care routine without an explicit go-ahead from his vet team, and I always verify the LLM's output before talking to his vets, but LLMs did have a genuinely significant impact on his medical journey.
Are people really surprised by this? You don't own anything when using a chatbot. No expectation of privacy, nothing. It's just out there on the web, waiting for others to read.
Every company associated with Musk seems to make absolute beginner mistakes.
How good that nobody put him and the people working for his company in charge of something important, like running a governmental organization. Or gave them access to the sensitive information of millions of people, or literal state secrets.
It's Elon's product. Of course it sucks balls. That's his game - half-baked shit.
And then those results get used for training by other models, and then those models' chats get leaked, and then those results get used for training by other models, and then those models' chats get leaked, and...
It's not every chat; it's only chats where the users opted to share them.
On by default.
Technically violates Article 25 of the GDPR.
I am shocked, shocked I tell you, that Musk or one of his companies would miss this.
Where in the article does it say that?
All I saw was this:
"chats were private by default and users had to explicitly opt-in to sharing them."
It says that when people clicked "share," it wasn't only sharing with the person they sent it to; it also made the conversation a public link. If it's not clear that clicking "share" does this, it's a major GDPR breach.
https://chatgpt.com/share/68a76e7a-7194-8008-9f08-ec8073279c97
Here is an example of a shared link that also explains how they're not secure. It's not ideal that a bunch of Grok ones have ended up in Google search results, but it should surprise no one that public links aren't secure. It's the same as a Google Drive link set to "anyone with the link can view" or an unlisted YouTube video. The URL and contents are only secure through obscurity, but nothing treats URLs as secure, so it's easy for them to leak or end up getting indexed by Google.
I can't tell if you're lying to yourself or just me, or if you're genuinely that naive. Even the least technical people understand the difference between "public," "private," and "unlisted," and 99 out of 100 people are going to assume "give me a link to share this chat" means "unlisted" unless explicitly told otherwise. Something unlisted becoming something publicly indexed is still a leak.
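To make the "unlisted link" model discussed above concrete, here is a minimal sketch; the route, storage, and names are hypothetical, not any vendor's real implementation. The token resists guessing, but the moment the URL is forwarded, scraped, or indexed, the conversation is public.

```python
# Sketch of the "anyone with the link can view" model discussed above.
# The token is hard to guess, but possession of the URL is the only credential;
# there is no login or ownership check. Everything here is hypothetical.
import secrets

SHARED_CHATS: dict[str, str] = {}  # token -> conversation text (stand-in for a database)

def create_share_link(conversation: str) -> str:
    token = secrets.token_urlsafe(16)  # ~128 bits of randomness: unguessable, not private
    SHARED_CHATS[token] = conversation
    return f"https://example.com/share/{token}"

def fetch_shared_chat(token: str) -> str | None:
    # Anyone (or any crawler) holding the token can read the chat.
    return SHARED_CHATS.get(token)

if __name__ == "__main__":
    url = create_share_link("user: here is my address and a medical question...")
    token = url.rsplit("/", 1)[-1]
    print(url)
    print(fetch_shared_chat(token))
```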
DeleteThis is the platform he wanted to turn into a everything bonanza, including serving as a payment service.
ReplyDeleteI google "site:grok.com/share/" it says about "279,000 results" but if I go to the last page 32 it says "318 results"
ReplyDeleteDoes anyone know why this happens or how you get Google to show the whole results set?
Edit: for irony, I asked Grok. Basically, Google limits results to counter scraping, but the estimates are also known to be very inaccurate.
I haven't seen Google give pages of results in years. They've become completely unusable, to the point that I had to switch to DuckDuckGo.
Reminder that everyone's ChatGPT logs might enter public discovery soon, since laws do not treat LLM interactions as private.
Assume every word you enter into an off-site LLM will be searchable someday.
Reminds me of the AOL search results leak of 2006.