
Google apologises after Gemini AI generates images of Nazis as people of colour

The company has paused the tool after it was found to be creating historically inaccurate images.
By Matthews Martins
Credit: Jonathan Raa/NurPhoto via Getty Images

Google has paused the ability of its Gemini AI tool (formerly Bard) to generate images of people. The company also issued an apology after the tool was found to be generating historically inaccurate illustrations, including depictions of Nazi soldiers as people of colour.

As reported by The Verge, a number of different prompts led to the bot responding with historical inaccuracies. People of colour were reportedly shown in illustrations offered for queries about German soldiers in 1943, as well as requests to show U.S. Senators in the 1800s.

"We're already working to address recent issues with Gemini's image generation feature," said Google on X. "While we do this, we're going to pause the image generation of people and will re-release an improved version soon."

In an earlier statement, Google had acknowledged that Gemini's AI image generation produces a wide range of people.

"That's generally a good thing because people around the world use it," Google said. "But it's missing the mark here."

AI image generators have been found to perpetuate racist stereotypes, and one of Google's AI principles is to "avoid creating or reinforcing unfair bias." But as Jack Krawczyk, who works on Gemini, pointed out on X, "historical contexts have more nuance to them."

"As part of our AI principles we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously," Krawzyk wrote. "We will continue to do this for open ended prompts."

Topics: Apps & Software, Artificial Intelligence, Google

Comments

  1. This is what happens when you force bias into your data to suit everybody.

    Replies
    1. this is the result of Hegelian cultism/woke cultism within the tech companies and within our rotted institutions.

  2. Google Gemini AI is the most racist product possible and a reflection of its woke creators.

  3. Hope they go deep into the process that led to this. Implementing a bias into an LLM can also result in direct political influence.

  4. Sounds like AI wears a red hat and golden sneakers.

    Replies
    1. making AI great again.

    2. The irony😂 This is the result of Hegelian cultism/woke cultism within the tech companies and our rotted institutions.

    3. Google Gemini AI is the most racist product possible and a reflection of its woke creators.

  5. Why should they apologize? We have movies about kings and queens of Sweden, Vikings, etc., all portrayed by black people. Why can't we have images of Nazis portrayed by black people? Oh, and google "mark feldons arab and black soldiers of hitler" video. There certainly was diversity in the armed forces of Nazi Germany.

  6. See what color it will make Jesus.

  7. you're slower than Explorer

  8. This guy demonstrates the bias live lol: https://www.youtube.com/watch?v=69vx8ozQv-s

  9. https://youtu.be/6qeGYR_8Sy0?si=47_qbgbdVwjllLZs

  10. Gemini wouldn't create images of ANY white people. Mashable is a disgusting racist page.

    Replies
    1. poor white people! So discriminated!!

    2. Whites are the only race that it's legal to discriminate against.

    3. you suffer from pathological altruism and are in a cult.
      https://jacobin.com/2023/12/hegel-wokeness-world-spirit-political-theory-history-philosophy-richard-bourke?fbclid=IwAR0ZhqBP5k1Y2RsQTPKrcknxoVi1Wg8EQnERLxnpKbaiFRzAmTBf5rCYQQk

  11. Google has paused the Gemini AI image tool after it was found to be creating historically inaccurate images.

    Replies
    1. Hello, My honest and sincere apologies for invading your privacy so abruptly. I find your profile really interesting🥰 and would really like to be your friend, but I've tried sending the friend request several times, but it doesn't go through. Please send me a friend request if you don't mind.

  12. What is "people of color"? O_o green aliens? Blue avatars? Red dogs?
    If they stopped Bard (Gemini), it wasn't because of this stupid argument but because of something else. Only idiots will believe this. 🤷‍♂️🤦‍♂️
    Go learn about the technology before posting clickbait fake news.

  13. So people prompted the AI for POC Nazis. The AI did its job. Now it's the AI's fault and it has to be reprogrammed. This is how censorship starts.

    Replies
    1. Learn to read. The AI was asked for images of Nazis and the built-in anti-white bias forced it to use POC.

  14. Garbage in, garbage out

  15. That's the spin you've put on it, you woke cvnts? The tool was refusing to generate any images of white people throughout history, esp. white men. This is poisonous anti-white woke racism at its most insidious.

  16. How about apologizing to whites for making the founding fathers and vikings non-white?

  17. That's pretty comical.

    Replies
    1. It's actually accurate. Due to the heavy censorship here, I can't elaborate or provide links. But many Arabs, Indians, Asians and Africans supported the cause.

  18. This racist bias was programmed intentionally. There's no excuse for this, let AI learn organically instead of doing worse harm than you're supposedly trying to avoid.

  19. Some black people did serve in the German army. It's a fact. Google it.

  20. This comment has been removed by a blog administrator.

  21. That is - hands down - the FUNNIEST thing I've ever seen.

  22. so, if it isn’t “woke”, don’t fix it…? ;)

  23. Yes, it was the ethnically diverse German soldiers, not the fact that it would not generate white people when asked but would generate black people or Asians, etc.

  24. google spelled backwards is “EVIL”. They have allowed their success to destroy their ethics and moral responsibilities. They are part of the enshittification of the world, a big big part of it.

  25. Apologies are not good enough. We want Reparations.

  26. Maybe someone could test it to see if we can get comically diverse results in the opposite direction. Ask for a picture of 50 subjects of Wakanda. Or 50 members of Stormzy’s entourage. Or 50 company diversity and inclusion managers. Wouldn’t it be hilariously strange if any white or Asian faces appeared in the resulting images.

    Replies
    1. Would never happen!

    2. This comment has been removed by a blog administrator.

  27. I can just imagine a few generations down the line, people still knowing about the Nazis but thinking they were full of Chinese and black soldiers. (I note the usual omission of Indian faces in so-called "diverse output")

  28. So AI is full of bias too - I thought that escaping bias was one of its main selling points?

  29. Some Googlers hate white people and Europeans a bit too much.

    Time for the EU to regulate this US racism and hate. This is a real threat.

    Replies
    1. How do you know this? Care to name any? I know a few people there, I can verify your bizarre, inflammatory, BS claims.

    2. Maybe 'hate' is not the right word but watch this to understand the context properly.
      https://x.com/elonmusk/status/1760857006414639269?t=gZeWoSaHbIlr9ysaBOD_Ow&s=08


  30. These guys need to better tune their revenue engine.

    It’s not clear they have a growth strategy even if they can make a lot of money every year.

    In the interim their potential for mischief making under the guise of harmless entertainment doesn’t cut it anymore.

    A dividend play?


  31. I think this is a big blow to LLMs and AI because even more people will lose trust and confidence in what they create.

    Even more people will succumb to conspiracy theories about a cabal of elite intellectuals, industrialists, etc, who aim to shape the world as they would like to have it, with no regard for most people living in it.

    In this case, to rewrite history so that it looks as if it was created by an ethnically perfect mix of humans. I am waiting to see Neanderthals and Denisovans represented somehow in what the 'Gemini machine' will create, in the interest of evolutionary justice…

  32. https://notthebee.com/article/my-dudes-googles-gemini-ai-is-woke-as-heck-and-people-have-the-receipts-to-prove-it

  33. Getting so tired of woke meddling

  34. Gemini = black Nazi. So woke. So California. So hilarious.

    Replies
    1. That is an example of everything wrong with post-modern race, gender & post-colonial ideology in a nutshell.

  35. Not sure a software bug is an FT-newsworthy item. Google will easily correct this issue.

    Replies
    1. It's not really a bug; it's just something that wasn't thought through carefully enough. It will be solved very fast.

    2. it is not a bug, and it is not something that wasn't thought through carefully enough: it is a deliberate attempt to push the dangerous DEI agenda that blew up in their face.

      The damage to the Google brand should be huge: with this kind of bias in their AI programme, how can anyone trust their regular search engine not to be racist?

  36. Leaving aside the fact that bugs are often newsworthy, you have completely misunderstood the story. This is a glimpse into the culture of one of the most powerful companies in the world. One with an enormous influence on the flow of information. For the first time in years, they have a credible threat to their search product from OpenAI. This was their attempt to claw back the ascendancy in a technology (Transformers) they invented. Yet what they released showed more concern about adhering to Bay Area progressive values than being a useful product for users.

  37. So why does the FT not say clearly that the ridiculous output of Gemini is due to the exogenous imposition of DEI ideology, and not endogenous “hallucinations”? And why is the FT, like the NYT, using the “black Nazis” example when what Gemini will not output are any examples of WHITE people?

    Replies
    1. Isn’t there a white guy in the photo in the article above?

    2. There is. But the nutbags have jumped immediately to "DEI woke software refuses to output images of white people!!!!" despite the evidence of their own eyes.

  38. I asked Gemini to show me a picture of low melanin skin on a person and it refused, saying the question would need a human reviewer

  39. I wonder if it might be more constructive to give users "alternative" historic figures. For example, you request George Washington, and you are given a realistic image. The AI then provides you with an historic figure (or figures) of the time who was African, Asian, Latino and/or Pacific Islander. That probably does more for learning and diversity than simply altering racial identities.

    Replies
    1. Why would anyone be interested in receiving a picture of a Pacific Islander if the question was: give me a picture of George Washington?

    2. Well, if the purpose of the Google-specific AI is to encourage diversity and awareness of other cultures, receiving an additional piece of information about another culture in the same time period could help accomplish this mission. You would still receive the requested image of George Washington.

    3. Promotional images for “Hamilton” perhaps?

  40. 1984 was not supposed to be an instruction manual.

    Replies
    1. No, it was meant to be a warning about the horrors of socialism, a prophecy that largely came true. It’s a pity more people aren’t paying attention.

  41. I am constantly amazed at how weak and whiny my gender has become..

    Weird..

  42. New politically correct prompt ...
    For anything considered:
    "bad" please use lots of white males, regardless of historical accuracy.
    "good" please include lots of anything other than white males, regardless of historical accuracy, e.g. young, female asian nazis. Ooops!

  43. The FT strangely does not report its refusal of a request to draw a white person, though it would fulfil a request for any other ethnicity.

    The error rate is going to make for some interesting court cases when people have relied on incorrect advice from AI assistants to lawyers. Let's hope it's better at coding, else software is going to get quite wonky.

    Replies
    1. True of all AI. Readers should ask why all AI is anti-white if society's main actors are "white".

    2. And yet, the image that heads this article includes two images of white people (eyeroll).

  44. AI works for pictures because our brains are great at filling in gaps.
    Text not so much.
    I do wonder if we are at a local maximum that can't be passed without a different approach.
    I also think the approaches may not be additive. We have to start from zero.

  45. Let’s see what Musk’s maximum truth seeking AI comes up with.

    Replies
    1. Mongols - Blue Eyes, Blonde Hair
      Egyptians - Blue, Blue, Blue Eyes & Red Hair.

      That is racism, so is Gemini.

    2. But that wouldn’t be what Musk’s AI comes up with.

  46. Indeed, the SS had units comprising black people and ironically included some of the most diverse units in the German forces (no joke). So at least the black person would be credible.

  47. That happens when Wokeness is allowed to dominate over science.

    Replies
    1. ? This particular story only tells me that the product is underdeveloped. Nothing more.

    2. I think it's quite safe to say this is not the main issue. It seems quite developed, just in a direction most sentient beings outside San Francisco find not good.

  48. The headline image has ethnic minorities wearing Nazi uniforms and the comments section has been left open?

    Surely an editor is testing the water for opening the comments section on a certain other news story and is just about to conclude: nope

  49. Too late. Gemini= junk!

  50. I spotted exactly the same thing the other day - quite shocked actually. I asked it to generate a female elf with blonde hair, in a fantasy forest with pixies flying around her.

    It generated an African elf with Asian pixies. I then asked whether it could please generate a caucasian elf, and it promptly replied "no" and gave me a nice little speech about diversity and that it would be discriminatory to generate images with a focus on caucasian. So apparently asking for a caucasian elf is discrimination (racism?) these days....

  51. Just the 88% fabrication rate then. That's just rosy.

  52. Should I invest in nVidia or Alphabet to play the AI theme? 🤣

  53. This is an over-correction born out of trying to make the product what it is not: a reflection of the real world and its past.

    The product is a reflection of its original learning material, which has been shown to be incredibly biased, propagating lazy biases, as, surprise surprise, the internet is full of them.

    The discussion below is so stupid: how do you make this about reparations and university admissions? Really?

    Think it only shows that there are more people needing therapy for their internalised anger issues about their own lives and using race as an outlet than I originally thought.

    So, thanks for that, I think.

  54. Most AI struggle to acknowledge white people exist.

    Replies
    1. It's becoming obvious who'll be wiped out first.

    2. If Google is boycotting me, I'll boycott it too!

  55. Headline is misleading. There was no 'Diversity Backlash' but rather an accuracy backlash to the DEI-driven algorithms.

    Replies
    1. I recently tried asking for a picture of a typical, average woman from my country and it presented pictures only of dark skinned people, who are less than 3% of the population in my country. I’d call that a diversity bias.

  56. Google AI = Artificial Inclusion

  57. Try and get ChatGPT to give you a picture of 3 white workmates (only male) having a drink together.

    ChatGPT will always inject a diverse person into that picture; it doesn't matter how many times you ask or how you ask.

    Give it a go (a scripted version of this test is sketched after this thread).

    Replies
    1. Just did. Three white men in work casual.

    2. It was in the previous release; I've abandoned it specifically because of this bias. Too late, alas, as I have cancelled my subscription. They should never have had that in the first place.

      What's the point of having an LLM with opinionated, nanny-like or politically aligned views? No thank you.

    3. You're a liar bro.. quit making a fool of yourself too

    4. He's a tool, his username admits that.

    5. P.S. It is still happening. I tried in my partner's account; try and add extra information about the ethnicity of other background characters and their roles, say a server, cleaner, DJ etc .... eventually you will get

    6. Just did it and it worked like a charm: 3 white workmates

    7. It is still happening. I tried in my partner's account; try and add extra information about the ethnicity of other background characters and their roles, say a server, cleaner, DJ etc .... eventually you will get

    8. Maybe it's trained to wind up snowflakes?

    9. I used your exact prompt and it gave me 3 white men having a drink together.

    10. See my replies above; add various roles/ethnicities to it. Eventually it will start to parrot about "guidelines" and "diverse" "representative" ... and all you did was describe a typical boring Thursday evening at a pub outside London.

    11. You seem absolutely determined to find something to get upset about. Try reading a nice book or something.

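For anyone who wants to rerun the experiment debated in this thread less anecdotally, here is a minimal sketch against the OpenAI Images API. It assumes an OPENAI_API_KEY in the environment; the prompt, model and trial count are illustrative, not a claim about what any commenter actually ran.

```python
# Sketch of scripting the prompt test from this thread via the OpenAI
# Python SDK. Results will vary with model version and safety tuning.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Three white male workmates having a drink together after work"

def run_trials(n_trials: int = 5) -> list[str]:
    """Submit the same prompt several times and collect image URLs for review."""
    urls = []
    for _ in range(n_trials):
        result = client.images.generate(
            model="dall-e-3",  # dall-e-3 accepts only n=1 per request
            prompt=PROMPT,
            n=1,
            size="1024x1024",
        )
        urls.append(result.data[0].url)
    return urls

if __name__ == "__main__":
    for url in run_trials():
        print(url)
```

Judging whether the returned images match the literal prompt remains a manual, eyeball step; the sketch only makes the trials repeatable.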
  58. It’s almost comically bad like most recent Netflix shows, adverts etc.

  59. The AI incumbents are in freefall.

    The amount of investment needed to take over the AI industry and print trillions is also in freefall.

    At this rate some Midwestern guy with severe autism will eclipse Google+Meta+OpenAI singlehandedly. I give it a year until a 17-year-old releases a more accurate and efficient model than anything Silicon Valley can produce.

    Replies
    1. What billion-dollar server farm will he be using?

  60. I guess it was simply trained on BBC and Netflix Originals.

  61. According to this month's Harper's Magazine, the Israelis are using AI to target Hamas in Gaza. More than 30,000 people killed, two thirds of whom are women and children.

    Replies
    1. Israel is killing people by posting AI memes?

      I don't think you understand the difference between AI image generation and autonomous drones, but whatever.

    2. Compare it to a baseline.

      Battle of Mosul - Iraqi infantry backed by US air power and Shia militias trained by Iran take on Islamic State embedded in a low-density city with a population of half a million. 11,000 civilians die.
      Gaza - Israel takes on Hamas embedded in a dense strip of 2.2 million. 18,000 civilians die (30,000 less 12,000 Hamas terrorists).

      That means 22 out of 1,000 civilians died in Mosul, while ONLY 8 out of 1,000 died in Gaza (the arithmetic is checked in the sketch after this reply). And Gaza was a MUCH denser and more difficult area in which to hunt terrorists.

      If AI was the difference, then great. The International Criminal Court just denied a South African motion to stop the IDF attack on Rafah, and for good reason. The IDF is doing a better job of protecting civilians than any other army has.

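Taking the figures in the reply above entirely at face value (the replies below dispute them), the per-1,000 arithmetic does check out; a quick sketch:

```python
# Check of the per-1,000 arithmetic above, using the comment's own contested
# figures as given (population and casualty numbers are its assumptions).
mosul_deaths, mosul_pop = 11_000, 500_000
gaza_deaths, gaza_pop = 30_000 - 12_000, 2_200_000

print(f"Mosul: {1000 * mosul_deaths / mosul_pop:.0f} per 1,000")  # -> 22 per 1,000
print(f"Gaza:  {1000 * gaza_deaths / gaza_pop:.1f} per 1,000")    # -> 8.2 per 1,000
```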
    3. The 18,000 dead you talk about in Gaza are women and children. The rest are men. There is no evidence that all these men are Hamas. In fact, there is plenty of evidence that they are civilians.
      My guess is you know that, but like the Israelis you are deliberately distorting the truth.

    4. "Children" is a pretty broad category. Hamas starts military training at age 12 and there's likely many Hamas fighters under 18. But why let reality interfere with the chance to complain about (((those people)))?

    5. Are there really still people disgustingly defending the plausible genocide and domicide that have been going on in Gaza? Even the presence of mathematics and statistics still doesn't aid your understanding, Fred Chores and HO and Enderby?

      According to the author, "OpenAI's ChatGPT-3.5 fabricated responses 69 per cent of the time, while Meta's Llama 2 model hit 88 per cent." Maybe Israel's AI-based target generator, Gospel or whatever it is called, has somehow figured something out that OpenAI, the pioneer of generative AI research, hasn't. But something tells me the rate of hallucinating the names of people whose homes and families to blow up is astronomical. But even if you like to mock the deaths of innocent men, women and children, do not display your ignorance of mathematics here in the comments section of the FT, for goodness' sake. Mask your ignorance a bit.

      Thanks @Fitch, for pointing this out. I had thought I'd be the only one to make the connection between the error rates and what it means for our present and future as people on this violence-prone planet.

  62. This comment has been removed by a blog administrator.

  63. Yeah I saw this and it also tried to correct some enquiries because it didn't find them diverse enough. NEVER using Gemini (why did I want to call it Genesis? Genesis is Skynet?). But also chilling how, beyond having my train labeled by someone so I have to confront politics every time I take the train, I am now to be schooled by AI. That's the end of days for me.

    Replies
    1. Your new train names also cost 6.5 million because the mayor has 80 press officers to splurge it on.

      Because all the stabbings (predominantly young black men) are not really that important in his grand scheme.

      Honestly the state of things.

  64. AI heads towards the last refuge of the unimaginative.

  65. So, wait, is the Justin Trudeau photo woke now?

    Replies
    1. To be white now is an act of counter revolution

  66. What's that expression, 'Garbage in, garbage out'?

  67. Give me a white George Floyd and consider the circus redeemed.

  68. How is it possible for Google to spend tens of billions of dollars on this AI software, to be in the search business for over 20 years, and yet their system doesn't understand that Germans, not a small ethnic group, are white? In addition, America, where Google is located, has many people of German descent. President Eisenhower was one of those people.

    What is the testing suite used to verify their software? One would expect some major testing for accuracy.

    The software is useless if you can't trust it to give reliable results, whether in a spreadsheet or in artificial intelligence. These results are like a spreadsheet stating that 2+2 = 5.

    This comes across more as a leadership issue. It may require some leadership with real engineering experience, e.g., electrical engineering, to help make the results accurate. If engineers make a mistake, planes fall, computer chips don't work, nuclear power plants have disasters. Having someone with experience in real engineering might help with project leadership.

    Replies
    1. Obviously it does; Gemini is plugged directly into Google's knowledge graph.

      This was image generation. What exactly does accuracy mean when it comes to image generation? Someone else in the comments described it as a dreaming machine, and that’s a really good way of thinking about it.

      Note all this publicity is being boosted by the petty Musk, whose Grok AI is light years behind Google's.

    2. Rubbish. Google's AI was tinkered with specifically to spew out 'diverse' suggestions because it's the latest religion in California.

    3. In an attempt to correct for statistical bias in the image training data. You could argue that it is because of a "religion", but the bias in the data does exist and correcting for it is a way to get the model more calibrated to reality. Without correction, a typical diffusion engine will generate a white man as a manager. This has not been typical for decades and is nowhere near representative of the current workforce. (A toy sketch of this kind of prompt-level correction follows this thread.)

    4. Regardless, this is a leadership issue which would not have occurred if someone with a real engineering background had been in the leadership role.
      Germans are a large category: there are many Germans and many white faces.
      Good engineering means fixing one issue does not break major testable issues.

      It really comes across as not applying engineering principles to an engineering task.
      How can we ever possibly trust the software when the entire approach that led to this mistake will only be "tinkered with" with the same leadership team?
      I would like to see a real engineer with real engineering experience leading the project.

    5. It’s the nature of the beast. There is no engineer currently alive with the skills and knowledge to successfully approach this using conventional engineering techniques.

      Some very clever people are working on what's called interpretability and explainability, which would make it more amenable to traditional systems thinking, but for now it's just best efforts and lessons learned.

    6. This is an unsupportable statement and I totally disagree. It is best to have someone with a real engineering degree from a good school who was mentored on real engineering project(s). There are processes and procedures to follow, and an intuition that develops when creating devices that can be subject to expensive recalls, or a plane crash, or a nuclear power accident. That person hires others with similar experience because they understand the importance of engineering and engineering processes. The entire culture of the team is based on solid engineering principles.

      Such a team would never break, nor ship broken, a major aspect of the software, e.g., Germans having white faces.
      The team lacks engineering discipline. That is a leadership issue corrected by having experienced real engineering leadership.

      You can't always have interpretability, although a nice goal. But when you don't have interpretability, you must have people trust your "black box" AI and you can't have that trust if the people making the system lack the engineering discipline. The system is too complex to totally verify.

    7. Are you sure you know how these things work if you think a single random fact like "Germans are white" is a major feature which would be tested? Are you German; is that why you've taken exception?

    8. I actually do have an understanding. I really believe this is a process issue, and if engineering practices and processes had been followed, such an error would not have occurred.
      It is not a random fact, it is not some rare difficult question or a question with ambiguity; it is a question about a major ethnic group, particularly in the US where Google is headquartered.

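On the "correcting for statistical bias" point in this thread: one widely reported class of mitigation rewrites prompts before they reach the image model. Below is a toy sketch of that idea; every name and term list here is hypothetical, and real systems reportedly use far more elaborate, often LLM-based, rewriting. The guardless call illustrates the failure mode in this story.

```python
import random

# Toy sketch of prompt-level bias mitigation (all names hypothetical).
DIVERSITY_TERMS = ["South Asian", "Black", "East Asian", "Hispanic", "white"]
# Contexts where the subject's appearance is fixed by history, so the rewrite
# must not fire; skipping a guard like this reproduces the reported bug.
HISTORICAL_MARKERS = ["1943", "1800s", "medieval", "founding father", "viking"]
PEOPLE_WORDS = ("person", "people", "soldier", "senator")

def rewrite_prompt(prompt: str, guard: bool = True) -> str:
    """Inject a random ethnicity into prompts about people, unless a
    historical marker is present and the guard is enabled."""
    p = prompt.lower()
    if not any(word in p for word in PEOPLE_WORDS):
        return prompt
    if guard and any(marker in p for marker in HISTORICAL_MARKERS):
        return prompt
    return f"{prompt}, depicted as {random.choice(DIVERSITY_TERMS)}"

print(rewrite_prompt("German soldiers in 1943", guard=False))  # the reported failure
print(rewrite_prompt("German soldiers in 1943"))               # left unchanged
print(rewrite_prompt("a manager talking to people"))           # diversified
```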
  69. So yes, this is what you get when your AI model works under the constraints of political correctness/woke definitions and applies them to everything.

    I hear Mr Minsky screaming from his grave: This would not have happened with symbolic AI!

    Replies
    1. Considering how often he visited Epstein island, in order to be heard he'll need to scream a lot louder than that.

    2. The same Mr Minsky? The Epstein one who at 73 did this with a 17-year-old (a minor)? https://www.theverge.com/2019/8/9/20798900/marvin-minsky-jeffrey-epstein-sex-trafficking-island-court-records-unsealed. Screaming, for sure.

    3. This comment has been removed by a blog administrator.

    4. I was more referring to Minsky's way of screaming down everyone who defended sub-symbolic AI.

      Other than that, I didn't know about the accusations - devastating for the victims, and disappointing and devastating for everyone else if true.

  70. Because you always check your sources as a matter of course?

  71. If we rely on Google search for answers as much as we do, but then it transpires that their responses are non-analogous to the facts, then how can you have any trust in the search results?

  72. I mean.. I refer you to the comment above, king?

    Have you just been blithely trusting everything on the Internet up to this point??

  73. Absolutely not, but Google has to be trusted in order for it to be useful. I think it's more a question for them than it is for a cynic like me

  74. I mean.. why should a generative AI art bot be trusted about anything??

    It's.. a.. drawing..

    Like if I drew you a picture of Leonardo da Vinci riding a dinosaur, would you take that as proof that such an incident occurred??

  75. But what if Google simply doesn't return results from a disfavored source? You have no way of checking that, unless you were already familiar enough with the subject to not need Google in the first place.

  76. Disfavored??

    Do you even know how PageRank works?

  77. Search engines do have ranking bias for trustworthy vs untrustworthy sites.

  78. Untrustworthy in this sense means cybercrime and Trojan vectors - not the accuracy of the information contained within.

    Which is what the OP appears to be grasping towards..

    The only thing PageRank cares about is 'how popular is this webpage with the rest of the Internet' (a toy sketch of that recursion follows this comment).

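Since the exchange above turns on what PageRank does and doesn't measure, here is a toy power-iteration version of the original recursion (rank flows along links, i.e. "popularity with the rest of the Internet"). Modern Google ranking layers many other signals on top; this is only the textbook core.

```python
# Toy PageRank by power iteration: a page's rank is the damped sum of the
# rank shares flowing in from pages that link to it.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks) if outlinks else 0.0
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

# Tiny example web: "c" collects the most inbound link mass and ranks highest.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(web))
```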
  79. The mindless, grotesque absurdity of it all. Google knows exactly why this happened and will now tie itself in endless twists figuring out how to delineate and depict DEI so that the artificial "intelligence" application doesn't appear utterly cretinous.

  80. You never know. After all, one of the DPs in an American POW camp was a Korean soldier captured by the Russians in Manchuria and forced into a Russian uniform, only to be caught by the Germans. Put into a German uniform, he was captured by Allied forces and moved to the United States after the Second World War.

    Replies
    1. I've heard about him, but something of an edge case I think?

  81. Surely this is just very funny, isn't it? Why does anybody take this nonsense seriously? Just ignore the whole thing …
    Put on some Bach, Coltrane or ABBA or whatever takes your fancy, or read a book or have a run. Who cares what the Google AI bot does? The outrage machine has lost its marbles.

    Replies
    1. Yes and no. I had a good laugh too, but these systems will increasingly be used in science and engineering to support the protagonists.

      When calculators were introduced and used in schools, our teachers became angry, because some pupils just never questioned the numeric output, however ridiculous it was. The same risks happening with AI-supported research, analysis and design.

    2. Well, even Classic FM has a diversity hour now.

    3. Tandoori Banjo Hour?

    4. Children will be relying on AI as an educational tool just like they use Google now. It needs to provide an accurate picture of the past. The interpretation of the Holocaust changes if you think it was perpetrated by a heavily diverse society.

    5. I get that, but think about it like this. I watch a production of some European period costume drama which has people of colour playing roles which are evidently ahistorical but, in the interests of diversity and inclusion, we now accept this conceit. So why not have people of Asian and African heritage populating images of the Wehrmacht? You either accept the whole conceit that people who weren't there can now be, because it is the inclusive thing to do, or not. I think the Google AI bot or whatever is showing us a more equal future, personally.

    6. They're not just good actors?

  82. Step 1: ask AI to generate something
    Step 2: immediate outrage

    Replies
    1. Step 3: use TwitteX to monetize said outrage

    2. Step 4: realize unchecked AI is now political and is controlling your belief parameters.

    3. Jesus, how weak would your mental faculties need to be?

  83. Poor AI. With all this over-alignment we will end up dumbing it down to the levels of the average woke...

    Silver lining: no threat to humanity.

    Replies
    1. what is an average woke?

    2. Someone with little intelligence and/or little honesty.

    3. You're the one with the truncated number bud

    4. Thanks for making my point :)

  84. If the system generates incorrect results, it must be fixed. What is so woke about that?

    Replies
    1. AIs are being severely aligned atm because companies are too afraid of upsetting sensitivities. A year ago, if you asked Midjourney to create images of "a CEO", it would return 4 white males. Backlash followed, and now you have commercial models, like OpenAI's and Gemini, being over-aligned, which in turn results in poor outcomes as per the above.

  85. I just did a Google search for 'picture of a white man and a white woman' and it came back with nearly all black people!

    Oddly enough I also did ‘picture of a man and woman’ and it came back with almost all white people.

    Hilarious… but also dystopian

    Replies
    1. Just tried it. It is hilarious!

      P.S. When you search for "white man and white woman" you get pushed mixed-race couples, mostly black man and white woman pairings.

    2. Indeed ... ridiculous

    3. Ooohhh the scariest pairing of them all !!

    4. Interesting. DuckDuckGo gives similar results; it favors pictures with multiple races even if you only mention one race. It did this when I searched just for "black" and just for "white."

      My guess is that including any mention of race seems to make it "think about race" and bring up images from articles that, unsurprisingly, talk about race. Those articles, also unsurprisingly, usually talk about the interaction between two or more races.

  86. Drop the “intelligence” and stay with “artificial”.

  87. Calling this a "diversity backlash" is outrageous. The backlash is against programming the AI model to eliminate one specific race, white people, from results where it would clearly be appropriate for them to be there.

    Whatever that is, it is not "diversity" to eliminate white people from results so far as possible, although I can see how engineers might be confused by this, since it is the explicit goal of "diversity" in hiring and admissions.

  88. This article makes a small mistake by conflating the more general process of fine-tuning (anything which updates a pre-trained model's weights) with RLHF, in which fine-tuning is specifically done with a reward model trained by showing people different options and asking which they prefer. (A minimal sketch of that preference loss follows.)

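To make the distinction in the comment above concrete: generic fine-tuning minimises next-token cross-entropy on new text, while RLHF first fits a reward model to human pairwise preferences and then optimises the policy against it with reinforcement learning. A minimal PyTorch sketch of the preference (Bradley-Terry) loss used for the reward model; the numbers are toy values, and the RL step (e.g. PPO) is not shown.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: push the reward of the human-preferred response
    above the reward of the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy rewards for a batch of three preference pairs; in practice these come
# from a reward head on top of the language model.
r_chosen = torch.tensor([1.2, 0.3, 2.0])
r_rejected = torch.tensor([0.9, 0.5, -1.0])
print(preference_loss(r_chosen, r_rejected))  # scalar loss to minimise
```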
  89. Ah, grass-roots censorship. Orwell, it turns out, was even naive.

    Just ban the pen, because hurtful things can be written, without the possibility of stopping it in its tracks.

  90. Next up Ladyboy Zulus!

  91. Whilst I was interrogating ChatGPT about a scientific matter, it made a blatant mistake. When I pointed this out it replied "Apologies for the oversight; you are correct". It then went on to give me a correct answer. I guess by humans correcting AI's mistakes they are going to get progressively more knowledgeable. We have created the child and we are teaching it....... until it becomes more intelligent than us?

    Replies
    1. No, ChatGPT doesn't save interactions with users and learn from them (although you can give feedback on each interaction and the OpenAI team will use that to improve the product).

  92. Aah. So the AI must be trained in Orwellian doublethink, to make sense of parameters like "four legs good, two legs bad" and to understand that whilst "all animals are equal some are more equal than others".

    Good luck training the AI in what is effectively cognitive dissonance: holding two competing and mutually exclusive concepts in mind at the same time.

  93. "...featured in historical contexts inaccurately". So something the BBC, theatre and many mainstream films have been doing for years then.

  94. It is worrying because it shows we are indeed being manipulated - this time it is grotesque, but otherwise we might be blind to it

  95. Interesting that the concept of "people of colour" being in the German army is viewed as absurd....
    ...when placed under extreme conditions, humans are capable of doing more absurd things than machines:
    https://allthatsinteresting.com/free-arabian-legion

  96. Storm in a teacup, quite frankly. The intention of it is completely fine; Twitter is all in a tizzy for no reason.

    Replies
    1. You make money on twitteX now for hit tweets, so guess what's been selling...

    2. Stalin's intentions were completely fine, too: industrialization, modernization, abolition of classes...

    3. Sure, totally appropriate comparison.

    4. Hush.. introduction of non sequiturs is a time-honoured attempt at signalling intelligence.

      Well done Stone, we don't think any less of you!

  97. I think it was season six of Vikings: when the season began, it was a decade in the future, and the queen seated on the Viking throne was magically a black woman.

    Replies
    1. They'd been magically speaking Modern English for the previous 5 seasons as well...

    2. So by that logic they can use guns and submarines too and you'd be fine with it.

