Google apologises after Gemini AI generates images of Nazis as people of colour
Credit: Jonathan Raa/NurPhoto via Getty Images
Google has paused the ability of its Gemini AI tool (formerly Bard) to generate images of people. The company also issued an apology after the tool was found to be generating historically inaccurate illustrations, including depictions of Nazi soldiers as people of colour.
As reported by The Verge, a number of different prompts led to the bot responding with historical inaccuracies. People of colour were reportedly shown in illustrations offered for queries about German soldiers in 1943, as well as requests to show U.S. Senators in the 1800s.
"We're already working to address recent issues with Gemini's image generation feature," said Google on X. "While we do this, we're going to pause the image generation of people and will re-release an improved version soon."
In an earlier statement, Google had acknowledged that Gemini's AI image generation produces a wide range of people.
"That's generally a good thing because people around the world use it," Google said. "But it's missing the mark here."
AI image generators have been found to perpetuate racist stereotypes, and one of Google's AI principles is to "avoid creating or reinforcing unfair bias." But as Jack Krawczyk, who works on Gemini, pointed out on X, "historical contexts have more nuance to them."
"As part of our AI principles we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously," Krawzyk wrote. "We will continue to do this for open ended prompts."
This is what happens when you force bias into your data to suite everybody.
ReplyDeletethis is the result of Hegelian cultism/woke cultism within the tech companies and within our rotted institutions.
DeleteGoogle Gemini AI is the most racist product possible and a reflection of its woke creators.
ReplyDeleteHope they go deep into the process that lead to this. To implement a bias into a llm can also result in direct political influence.
ReplyDeleteSounds like AI wears a red hat and golden sneakers.
ReplyDeletemaking AI great again.
Why should they apologize? We have movies about kings and queens of Sweden, Vikings, etc., all portrayed by black people. Why can't we have images of Nazis portrayed by black people? Oh, and google "Mark Felton's Arab and black soldiers of Hitler" video. There certainly was diversity in the armed forces of Nazi Germany.
See what color it will make Jesus.
You're slower than Explorer.
This guy demonstrates the bias live lol: https://www.youtube.com/watch?v=69vx8ozQv-s
https://youtu.be/6qeGYR_8Sy0?si=47_qbgbdVwjllLZs
Gemini wouldn't create images of ANY white people. Mashable is a disgusting racist page.
Poor white people! So discriminated!!
Whites are the only race that it's legal to discriminate against.
You suffer from pathological altruism and are in a cult.
https://jacobin.com/2023/12/hegel-wokeness-world-spirit-political-theory-history-philosophy-richard-bourke?fbclid=IwAR0ZhqBP5k1Y2RsQTPKrcknxoVi1Wg8EQnERLxnpKbaiFRzAmTBf5rCYQQk
Google has paused the Gemini AI image tool after it was found to be creating historically inaccurate images.
Hello, my honest and sincere apologies for invading your privacy so abruptly. I find your profile really interesting🥰 and would really like to be your friend, but I've tried sending the friend request several times and it doesn't go through. Please send me a friend request if you don't mind.
What is "people of color"? O_o Green aliens? Blue avatars? Red dogs?
If they stopped Bard (Gemini), it wasn't because of this stupid argument but because of something else. Only idiots will believe this. 🤷‍♂️🤦‍♂️
Go learn about the technology before posting clickbait fake news.
So people prompted the AI for POC Nazis. The AI did its job. Now it's the AI's fault and it has to be reprogrammed. This is how censorship starts.
Learn to read. The AI was asked for images of Nazis and the built-in anti-white bias forced it to use POC.
Garbage in, garbage out.
That's the spin you've put on it, you woke cvnts? The tool was refusing to generate any images of white people throughout history, esp. white men. This is poisonous anti-white woke racism at its most insidious.
How about apologizing to whites for making the founding fathers and Vikings non-white?
Black Gestapo.
That's pretty comical.
It's actually accurate. Due to the heavy censorship here, I can't elaborate or provide links. But many Arabs, Indians, Asians and Africans supported the cause.
This racist bias was programmed intentionally. There's no excuse for this; let AI learn organically instead of doing worse harm than you're supposedly trying to avoid.
Some black people did serve in the German army. It's a fact. Google it.
And?
My guess is the war ended and they went home.
Nah. I'll DuckDuckGo. Google sucks.
Or try MSN.
That is, hands down, the FUNNIEST thing I've ever seen.
Woke AI 😂
So, if it isn't "woke", don't fix it…? ;)
Yes, it was the ethnically diverse German soldiers, not the fact that it would not generate white people when asked but would generate black people or Asians, etc.
Google spelled backwards is "EVIL". They have allowed their success to destroy their ethics and moral responsibilities. They are part of the enshittification of the world, a big big part of it.
Apologies are not good enough. We want Reparations.
Maybe someone could test it to see if we can get comically diverse results in the opposite direction. Ask for a picture of 50 subjects of Wakanda. Or 50 members of Stormzy's entourage. Or 50 company diversity and inclusion managers. Wouldn't it be hilariously strange if any white or Asian faces appeared in the resulting images.
Would never happen!
I can just imagine a few generations down the line, people still knowing about the Nazis but thinking they were full of Chinese and black soldiers. (I note the usual omission of Indian faces in so-called "diverse output".)
So AI is full of bias too; I thought that escaping bias was one of its main selling points?
Some Googlers hate white people and Europeans a bit too much.
Time for the EU to regulate this US racism and hate. This is a real threat.
How do you know this? Care to name any? I know a few people there, so I can verify your bizarre, inflammatory, BS claims.
Maybe 'hate' is not the right word, but watch this to understand the context properly.
https://x.com/elonmusk/status/1760857006414639269?t=gZeWoSaHbIlr9ysaBOD_Ow&s=08
These guys need to better tune their revenue engine.
It’s not clear they have a growth strategy even if they can make a lot of money every year.
In the interim, their potential for mischief-making under the guise of harmless entertainment doesn't cut it anymore.
A dividend play?
I think this is a big blow to LLMs and AI, because even more people will lose trust and confidence in what they create.
Even more people will succumb to conspiracy theories about a cabal of elite intellectuals, industrialists, etc, who aim to shape the world as they would like to have it, with no regard for most people living in it.
In this case, to rewrite history so that it looks as if it was created by an ethnically perfect mix of humans. I am waiting to see Neanderthals and Denisovans represented somehow in what the 'Gemini machine' will create, in the interest of evolutionary justice…
https://notthebee.com/article/my-dudes-googles-gemini-ai-is-woke-as-heck-and-people-have-the-receipts-to-prove-it
Getting so tired of woke meddling.
Gemini = black Nazi. So woke. So California. So hilarious.
That is an example of everything wrong with post-modern race, gender & post-colonial ideology in a nutshell.
Not sure a bug in software is an FT-newsworthy item. Google will easily correct this issue.
It's not really a bug; it's just something that wasn't thought of carefully enough. It will be solved very fast.
It is not a bug, and it is not something that wasn't thought of carefully enough: it is a deliberate attempt to push the dangerous DEI agenda that blew up in their face.
The damage to the Google brand should be huge: with this kind of bias in their AI programme, how can anyone trust their regular search engine not to be racist?
Leaving aside the fact that bugs are often newsworthy you have completely misunderstood the story. This is a glimpse into the culture of one of the most powerful companies in the world. One with an enormous influence on the flow of information. For the first time in years, they have a credible threat to their search product from OpenAI. This was their attempt to claw back the ascendancy in a technology (Transformers) they invented. Yet, what they released showed more concern about adhering to Bay Area progressive values than being a useful product for users.
So why does the FT not say clearly that the ridiculous output of Gemini is due to the exogenous imposition of DEI ideology, and not endogenous "hallucinations"? And why is the FT, like the NYT, using the "black Nazis" example when what Gemini will not output are any examples of WHITE people?
Isn't there a white guy in the photo in the article above?
There is. But the nutbags have jumped immediately to "DEI woke software refuses to output images of white people!!!!" despite the evidence of their own eyes.
I asked Gemini to show me a picture of low-melanin skin on a person and it refused, saying the question would need a human reviewer.
I wonder if it might be more constructive to give users "alternative" historic figures. For example, you request George Washington, and you are given a realistic image. The AI then provides you with an historic figure (or figures) of the time who was African, Asian, Latino and/or Pacific Islander. That probably does more for learning and diversity than simply altering racial identities.
Why would anyone be interested in receiving a picture of a Pacific Islander if the question was: give me a picture of George Washington?
Well, if the purpose of the Google-specific AI is to encourage diversity and awareness of other cultures, receiving an additional piece of information about another culture in the same time period could help accomplish this mission. You would still receive the requested image of George Washington.
Promotional images for "Hamilton", perhaps?
1984 was not supposed to be an instruction manual.
No, it was meant to be a warning about the horrors of socialism, a prophecy that largely came true. It's a pity more people aren't paying attention.
I am constantly amazed at how weak and whiny my gender has become.
Weird.
What?!
New politically correct prompt...
For anything considered:
"bad", please use lots of white males, regardless of historical accuracy.
"good", please include lots of anything other than white males, regardless of historical accuracy, e.g. young female Asian Nazis. Oops!
The FT strangely does not report that it refused a request to draw a white person but would fulfil a request for any other ethnicity.
The error rate is going to make for some interesting court cases when people have relied on incorrect advice from an AI assistant to lawyers. Let's hope it's better at coding, else software is going to get quite wonky.
True of all AI. Readers should ask why all AI is anti-white if society's main actors are "white".
And yet, the image that heads this article includes two images of white people (eyeroll).
AI works for pictures because our brains are great at filling in gaps.
Text, not so much.
I do wonder if we are at a local maximum that can't be passed without a different approach.
I also think the approaches may not be additive. We have to start from zero.
Let's see what Musk's maximum truth-seeking AI comes up with.
Mongols - Blue Eyes, Blonde Hair.
Egyptians - Blue, Blue, Blue Eyes & Red Hair.
That is racism, and so is Gemini.
But that wouldn't be what Musk's AI comes up with.
Indeed, the SS had units comprising black people and was ironically one of the most diverse parts of the German forces (no joke). So at least the black person would be credible.
That happens when Wokeness is allowed to dominate over science.
This particular story only tells me that the product is underdeveloped. Nothing more.
I think it's quite safe to say this is not the main issue. It seems quite developed, just in a direction most sentient beings outside San Francisco find not good.
The headline image has ethnic minorities wearing Nazi uniforms and the comments section has been left open?
Surely an editor is testing the water for opening the comments section on a certain other news story and is just about to conclude: nope.
Too late. Gemini = junk!
I spotted exactly the same thing the other day - quite shocked, actually. I asked it to generate a female elf with blonde hair, in a fantasy forest with pixies flying around her.
It generated an African elf with Asian pixies. I then asked whether it could please generate a Caucasian elf, and it promptly replied "no" and gave me a nice little speech about diversity and that it would be discriminatory to generate images with a focus on Caucasians. So apparently asking for a Caucasian elf is discrimination (racism?) these days...
Just the 88% fabrication rate, then. That's just rosy.
Should I invest in Nvidia or Alphabet to play the AI theme? 🤣
This is an over-correction born out of trying to make the product what it is not: a reflection of the real world and its past.
The product is a reflection of its original learning material, which has been shown to be incredibly biased in propagating lazy stereotypes, as, surprise surprise, the internet is full of them.
The discussion below is so stupid: how did you make this about reparations and university admissions? Really?
I think it only shows that there are more people needing therapy for their internalised anger issues about their own lives and using race as an outlet than I originally thought.
So, thanks for that, I think.
Most AI struggle to acknowledge white people exist.
It's becoming obvious who'll be wiped out first.
If Google is boycotting me, I'll boycott it too!
Headline is misleading. There was no 'Diversity Backlash' but rather an accuracy backlash to the DEI-driven algorithms.
I recently tried asking for a picture of a typical, average woman from my country and it presented pictures only of dark-skinned people, who are less than 3% of the population in my country. I'd call that a diversity bias.
Oh ffs…
Google AI = Artificial Inclusion.
Try and get ChatGPT to give you a picture of 3 white workmates (only male) having a drink together.
ChatGPT will always inject a diverse person into that picture; it doesn't matter how many times you ask or how you ask.
Give it a go.
Just did. Three white men in work casual.
It was in the previous release. I abandoned it specifically because of this bias; too late, alas, as I have cancelled my subscription. They should never have had that in the first place.
What's the point of having an LLM with opinionated, nanny-like or politically aligned views? No thank you.
You're a liar, bro. Quit making a fool of yourself too.
He's a tool; his username admits that.
P.s. it is still happening. I tried in my partner's account: try and add extra information about the ethnicity of other background characters and their roles, say a server, cleaner, DJ etc.... eventually you will get
Just did it and it worked like a charm: 3 white workmates.
Maybe it's trained to wind up snowflakes?
I used your exact prompt and it gave me 3 white men having a drink together.
See my replies above; add various roles/ethnicities to it. Eventually it will start to parrot about "guidelines" and "diverse" and "representative"... and all you did was describe a typical boring Thursday evening at a pub outside London.
You seem absolutely determined to find something to get upset about. Try reading a nice book or something.
It's almost comically bad, like most recent Netflix shows, adverts, etc.
The AI incumbents are in freefall.
The amount of investment needed to take over the AI industry and print trillions is also in freefall.
At this rate some midwestern guy with severe autism will eclipse Google+Meta+OpenAI singlehandedly. I give it a year until a 17-year-old releases a more accurate and efficient model than anything Silicon Valley can produce.
What billion-dollar server farm will he be using?
I guess it was simply trained on BBC and Netflix Originals.
According to this month's Harper's Magazine, the Israelis are using AI to target Hamas in Gaza. More than 30,000 people killed, two thirds of whom are women and children.
Israel is killing people by posting AI memes?
I don't think you understand the difference between AI image generation and autonomous drones, but whatever.
Compare it to a baseline.
Battle of Mosul - Iraqi infantry backed by US air power and Shia militias trained by Iran take on Islamic State embedded in a low-density city with a population of half a million. 11,000 civilians die.
Gaza - Israel takes on Hamas embedded in a dense strip of 2.2 million. 18,000 civilians die (30,000 less 12,000 Hamas terrorists).
That means 22 out of 1,000 civilians died in Mosul, while ONLY 8 out of 1,000 died in Gaza. And Gaza was a MUCH denser and more difficult area in which to hunt terrorists.
If AI was the difference, then great. The International Criminal Court just denied a South African motion to stop the IDF attack on Rafah, and for good reason. The IDF is doing a better job than any other army has at protecting civilians.
The 18,000 dead you talk about in Gaza are women and children. The rest are men. There is no evidence that all these men are Hamas. In fact, there is plenty of evidence that they are civilians.
My guess is you know that, but like the Israelis you are deliberately distorting the truth.
"Children" is a pretty broad category. Hamas starts military training at age 12 and there are likely many Hamas fighters under 18. But why let reality interfere with the chance to complain about (((those people)))?
Are there really still people disgustingly defending the plausible genocide and domicide that have been going on in Gaza? Even the presence of mathematics and statistics still doesn't aid your understanding, Fred Chores and HO and Enderby?
According to the author, "OpenAI's ChatGPT-3.5 fabricated responses 69 per cent of the time, while Meta's Llama 2 model hit 88 per cent." Maybe Israel's AI-based target generator, Gospel or whatever it is called, has somehow figured something out that OpenAI, the pioneer of generative AI research, hasn't. But something tells me the rate at which it hallucinates the names of people whose homes and families to blow up is astronomical. But even if you like to mock the deaths of innocent men, women and children, do not display your ignorance of mathematics here in the comments section of the FT, for goodness' sake. Mask your ignorance a bit.
Thanks @Fitch, for pointing this out. I had thought I'd be the only one to make the connection between the error rates and what it means for our present and future as people on this violence-prone planet.
Yeah, I saw this, and it also tried to correct some enquiries because it didn't find them diverse enough. NEVER using Gemini (why did I want to call it Genesis? Genesis is Skynet?). But also chilling how, beyond having my train labeled by someone so I have to confront politics every time I take the train, I am now to be schooled by AI. That's the end of days for me.
Your new train names also cost 6.5 million, because the mayor has 80 press officers to splurge it on.
Because all the stabbings (predominantly of young black men) are not really that important in his grand scheme.
Honestly the state of things.
AI heads towards the last refuge of the unimaginative.
So, wait, is the Justin Trudeau photo woke now?
To be white now is an act of counter-revolution.
What's that expression? 'Garbage in, garbage out'.
Give me a white George Floyd and consider the circus redeemed.
Chinese Rosa Parks 😀
Netanyahu in a keffiyeh.
DeleteHow is it possible for Google to spend tens of billions of dollars on this AI software, to be in the search business for over 20 years, and yet their system doesn't understand that Germans, not a small ethnic group, are white? In addition, America, where Google is located, has many people of German descent. President Eisenhower was one of those people.
What is the testing suite used to verify their software? One would expect some major testing for accuracy.
The software is useless if you can't trust it, if it doesn't give reliable results, whether in a spreadsheet or in artificial intelligence. These results are like a spreadsheet stating that 2+2 = 5.
This comes across more as a leadership issue. It may require leadership with real engineering experience, e.g. electrical engineering, to help make the results accurate. If engineers make a mistake, planes fall, computer chips don't work, nuclear power plants have disasters. Having someone with experience in real engineering might help with project leadership.
Obviously it does; Gemini is plugged directly into Google's knowledge graph.
This was image generation. What exactly does accuracy mean when it comes to image generation? Someone else in the comments described it as a dreaming machine, and that's a really good way of thinking about it.
Note that all this publicity is being boosted by the petty Musk, whose Grok AI is light years behind Google.
Rubbish. Google's AI was tinkered with specifically to spew out 'diverse' suggestions, because that's the latest religion in California.
In an attempt to correct for statistical bias in the imaging training data. You could argue that it is because of a "religion", but the bias in the data does exist, and correcting for it is a way to get it more calibrated to reality. Without correction, a typical diffusion engine will generate a white man as a manager. This has not been typical for decades and is nowhere near representative of the current workforce.
Regardless, this is a leadership issue which would not have occurred if someone with a real engineering background was in the leadership role.
Germans are a large category: there are many Germans and many white faces.
Good engineering means fixing one issue does not break major testable issues.
It really comes across as not applying engineering principles to an engineering task.
How can we ever possibly trust the software when the entire approach that led to this mistake will only be "tinkered with" with the same leadership team?
I would like to see a real engineer with real engineering experience leading the project.
It’s the nature of the beast. There is no engineer currently alive with the skills and knowledge to successfully approach this using conventional engineering techniques.
Some very clever people are working on what's called interpretability and explainability, which would make it more amenable to traditional systems thinking, but for now it's just best efforts and lessons learned.
This is an unsupportable statement and I totally disagree. It is best to have someone with a real engineering degree from a good school who was mentored on real engineering project(s). There are processes and procedures to follow, and an intuition that develops when creating devices that can be subject to expensive recalls, or a plane crash, or a nuclear power accident. That person hires others with similar experience because they understand the importance of engineering and engineering processes. The entire culture of the team is based on solid engineering principles.
Such a team would never break, nor ship broken, a major aspect of the software, e.g. Germans having white faces.
The team lacks engineering discipline. That is a leadership issue, corrected by having experienced real engineering leadership.
You can't always have interpretability, although it's a nice goal. But when you don't have interpretability, you must have people trust your "black box" AI, and you can't have that trust if the people making the system lack engineering discipline. The system is too complex to totally verify.
Are you sure you know how these things work, if you think a single random fact like "Germans are white" is a major feature which would be tested? Are you German? Is that why you've taken exception?
I actually do have an understanding. I really believe this is a process issue, and if engineering practices and processes had been followed, such an error would not have occurred.
It is not a random fact, and it is not some rare, difficult or ambiguous question; it is a question about a major ethnic group, particularly in the US, where Google is headquartered.
So yes, this is what you get when your AI model works under the constraints of political correctness/woke definitions, and applies it to everything.
ReplyDeleteI hear Mr Minsky screaming from his grave: This would not have happened with symbolic AI!
Considering how often he visited Epstein's island, in order to be heard he'll need to scream a lot louder than that.
The same Mr Minsky? The Epstein one who at 73 did this with a 17-year-old (a minor)? https://www.theverge.com/2019/8/9/20798900/marvin-minsky-jeffrey-epstein-sex-trafficking-island-court-records-unsealed. Screaming, for sure.
I was more referring to Minsky's way of screaming down everyone who defended sub-symbolic AI.
Other than that, I didn't know about the accusations - devastating for the victims, and disappointing and devastating for everyone else, if true.
Because you always check your sources as a matter of course?
If we rely on Google search for answers as much as we do, but then it transpires that their responses are non-analogous to the facts, then how can you have any trust in the search results?
I mean... I refer you to the comment above, king?
Have you just been blithely trusting everything on the Internet up to this point??
Absolutely not, but Google has to be trusted in order for it to be useful. I think it's more a question for them than it is for a cynic like me.
I mean... why should a generative AI art bot be trusted about anything??
It's... a... drawing...
Like, if I drew you a picture of Leonardo da Vinci riding a dinosaur, would you take that as proof that such an incident occurred??
But what if Google simply doesn't return results from a disfavored source? You have no way of checking that, unless you were already familiar enough with the subject to not need Google in the first place.
Disfavored??
Do you even know how PageRank works?
Search engines do have ranking bias for trustworthy vs untrustworthy sites.
Untrustworthy in this sense meaning cybercrime and Trojan vectors - not the accuracy of the information contained within.
Which is what the OP appears to be grasping towards...
The only thing PageRank cares about is 'how popular is this webpage with the rest of the Internet'.
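For anyone who hasn't met the algorithm, here is a toy sketch of that core idea (a deliberate simplification with made-up data, not Google's production ranking code): each page's score comes only from the scores of the pages linking to it, so popularity, not accuracy, is what's measured.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Assumes every linked-to page also appears as a key."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page starts each round with a small baseline score...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # ...and shares its current score among the pages it links to.
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Toy web of three pages: "a" is linked to by both others, so it ranks highest.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))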
The mindless, grotesque absurdity of it all. Google knows exactly why this happened and will now turn endless twists in figuring out how to delineate and depict DEI so that the artificial “intelligence” application doesn’t appear utterly cretinous.
You never know. After all, one of the DPs in an American POW camp was a Korean soldier captured by the Russians in Manchuria and forced into a Russian uniform, only to be caught by the Germans. Put into a German uniform, he was captured by Allied forces and moved to the United States after the Second World War.
I've heard about him, but something of an edge case, I think?
Surely this is just very funny, isn't it? Why does anybody take this nonsense seriously? Just ignore the whole thing…
Put on some Bach, Coltrane or ABBA or whatever takes your fancy, or read a book or have a run. Who cares what the Google AI bot does? The outrage machine has lost its marbles.
Yes and no. I had a good laugh too, but these systems will increasingly be used in science and engineering to support the protagonists.
When calculators were introduced and used in schools, our teachers became angry, because some pupils just never questioned the numeric output, however ridiculous it was. The same risks happening with AI-supported research, analysis and design.
Well, even Classic FM has a diversity hour now.
Tandoori Banjo Hour?
Children will be relying on AI as an educational tool just like they use Google now. It needs to provide an accurate picture of the past. The interpretation of the Holocaust changes if you think it was perpetrated by a heavily diverse society.
I get that, but think about it like this. I watch a production of some European period costume drama which has people of colour playing roles which are evidently ahistorical but, in the interests of diversity and inclusion, we now accept this conceit. So why not have people of Asian and African heritage populating images of the Wehrmacht? You either accept the whole conceit that people who weren't there can now be, because it is the inclusive thing to do, or not. I think the Google AI bot, or whatever it is, is showing us a more equal future, personally.
They're not just good actors?
Step 1: ask AI to generate something.
Step 2: immediate outrage.
Step 3: use TwitteX to monetize said outrage.
Step 4: realize unchecked AI is now political and is controlling your belief parameters.
Jesus, how weak would your mental faculties need to be?
Poor AI. With all this over-alignment we will end up dumbing it down to the level of the average woke...
Silver lining: no threat to humanity.
What is an average woke?
Someone with little intelligence and/or who is dishonest.
You're the one with the truncated number, bud.
And?
Thanks for making my point :)
If the system generates incorrect results, it must be fixed. What is so woke about that?
AIs are being severely aligned at the moment because their makers are too afraid of upsetting sensitivities. A year ago, if you asked Midjourney to create images of "a CEO", it would return 4 white males. Backlash followed, and now you have commercial models, like OpenAI's and Gemini, being overaligned, which in turn results in poor outcomes as per the above.
I just did a Google search for 'picture of a white man and a white woman' and it came back with nearly all black people!
Oddly enough, I also did 'picture of a man and woman' and it came back with almost all white people.
Hilarious… but also dystopian
Just tried it. It is hilarious!
P.S. When you search for "white man and white woman" you get pushed mixed-race couples, mostly black man and white woman pairings.
Indeed ... ridiculous
Ooohhh, the scariest pairing of them all!!
Interesting. DuckDuckGo gives similar results; it favors pictures with multiple races even if you only mention one race. It did this when I searched just for "black" and just for "white."
My guess is that including any mention of race seems to make it "think about race" and bring up images from articles that, unsurprisingly, talk about race. Those articles, also unsurprisingly, usually talk about the interaction between two or more races.
Drop the “intelligence” and stay with “artificial”.
Calling this a "diversity backlash" is outrageous. The backlash is against programming the AI model to eliminate one specific race, white people, from results where it would clearly be appropriate for them to be there.
Whatever that is, it is not "diversity" to eliminate white people from results so far as possible, although I can see how engineers might be confused by this, since it is the explicit goal of "diversity" in hiring and admissions.
This article makes a small mistake by conflating the more general process of fine-tuning (anything which updates a pre-trained model's weights) with RLHF, in which fine-tuning is specifically done with a reinforcement model after showing people different options and asking which they prefer.
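To make that distinction concrete, here is a toy, runnable sketch of the RLHF side (everything in it is a made-up illustration, not any real library's API): a "reward model" is first fit to human A-versus-B preference judgments, and the base model is then updated by reinforcement against that reward rather than by an ordinary supervised loss.

import random

class ToyModel:
    def __init__(self):
        self.p_polite = 0.5  # the single "weight": probability of a polite reply

    def generate(self, prompt):
        return "polite" if random.random() < self.p_polite else "rude"

    def reinforce(self, response, reward):
        # Crude policy update: nudge the weight toward rewarded behaviour.
        step = 0.05 * reward
        self.p_polite += step if response == "polite" else -step
        self.p_polite = min(max(self.p_polite, 0.0), 1.0)

def train_reward_model(preference_pairs):
    # Fit a trivial reward model from human "A is better than B" judgments:
    # score a response style by how often humans preferred it.
    wins = {"polite": 0, "rude": 0}
    for preferred, _rejected in preference_pairs:
        wins[preferred] += 1
    return lambda response: wins[response] / len(preference_pairs)

# RLHF = reward model fit on preferences + reinforcement against that reward.
model = ToyModel()
reward_model = train_reward_model([("polite", "rude")] * 9 + [("rude", "polite")])
for _ in range(200):
    response = model.generate("hello")
    model.reinforce(response, reward_model(response))
print(f"p(polite) after toy RLHF: {model.p_polite:.2f}")  # drifts toward 1.0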
Ah, grass-roots censorship. Orwell, it turns out, was even naive.
Just ban the pen, because hurtful things can be written without the possibility of stopping it in its tracks.
Next up: Ladyboy Zulus!
Mongolian Pride Marches.
Whilst interrogating ChatGPT about a scientific matter, it made a blatant mistake. When I pointed this out it replied "Apologies for the oversight; you are correct". It then went on to give me a correct answer. I guess by humans correcting AI's mistakes they are going to get progressively more knowledgeable. We have created the child and we are teaching it... until it becomes more intelligent than us?
No, ChatGPT doesn't save interactions with users and learn from them (although you can give feedback on each interaction and the OpenAI team will use that to improve the product).
Aah. So the AI must be trained in Orwellian doublethink, to make sense of parameters like "four legs good, two legs bad" and to understand that whilst "all animals are equal", "some are more equal than others".
Good luck in training the AI in what is effectively cognitive dissonance: holding two competing and mutually exclusive concepts in mind at the same time.
"...featured in historical contexts inaccurately". So something the BBC, theatre and many mainstream films have been doing for years then.
Fetch the Smelling Salts!!
It is worrying because it shows we are indeed being manipulated; this time it is grotesque, but otherwise we might be blind to it.
Interesting that the concept of "people of colour" being in the German army is viewed as absurd...
...when placed under extreme conditions, humans are capable of doing more absurd things than machines:
https://allthatsinteresting.com/free-arabian-legion
Storm in a teacup, quite frankly. The intention of it is completely fine; Twitter is all in a tizzy for no reason.
You make money on TwitteX now for hit tweets, so guess what's been selling...
Stalin's intentions were completely fine, too: industrialization, modernization, abolition of classes...
Sure, totally appropriate comparison.
Hush... the introduction of non sequiturs is a time-honoured attempt at signalling intelligence.
Well done, Stone, we don't think any less of you!
I think it was season six of Vikings: when the season began it was a decade in the future, and the queen on the Viking throne was magically a black woman.
They'd been magically speaking Modern English for the previous 5 seasons as well...
LOL!!
So by that logic they can use guns and submarines too and you'd be fine with it.