Google's Culture of Fear - Inside the DEI hivemind that led to Gemini's disaster
submitted 1 month ago by xoenix from (piratewires.com)
[–]LordoftheFlies Ameri-kin 2.0. Pronouns: MegaWhite/SuperStraight/UltraPatriarchy 3 insightful - 2 fun - 1 month ago* (0 children)
Gemini’s problem was not its embarrassingly poor answer quality or its disorienting omission of white people from human history, but the introduction of Black and Asian Nazis (again, because white people were erased from human history), which was considered offensive to people of color.
Hey, they finally got some of the precious representation though, which I've been repeatedly told is a good thing.
[–]jet199 2 insightful - 2 fun - 1 month ago (3 children)
For an industry full of men, tech is a pretty easy environment for a handful of women to push everyone around.
[–]QueenBread 3 insightful - 1 fun - 1 month ago (2 children)
What the fuck do women have to do with it? If you knew anything about AI, you'd know it's so male-oriented it tries to turn everything into a sexy woman.
This mess is about a team of hyper-racist weirdos. Also the "I was just following orders" situation where nobody dares to speak for fear of losing their job.
[–]jet199 3 insightful - 1 fun - 1 month ago (1 child)
Did you miss the part where HR is the only department controlling everything?
HR is not a famously male-heavy calling.
[–]QueenBread 1 insightful - 1 fun - 1 month ago (0 children)
You know, I did a search, and yes, it appears that for some weird reason HR employees at Google are mostly female. However, the executive team and leaders at the company are 80% male.
So I think your theory falls short. We have an HR department of crazy women coming up with racist ideas, and they are subservient to an executive team of crazy men approving of those racist ideas.
Now, can you stop trying to blame women for everything? You gonna try to blame them for the current wars as well?
Just fuck off, I am so fucking tired of this "none of this would have happened if women just remained in the kitchen!".
For 99% of human history, women did remain in the kitchen, and look at how things went. Yeah. Not better than now.
[–]xoenix[S] 1 insightful - 1 fun - 1 month ago (7 children)
Roughly, the “safety” architecture designed around image generation (slightly different than text) looks like this: a user makes a request for an image in the chat interface, which Gemini — once it realizes it’s being asked for a picture — sends on to a smaller LLM that exists specifically for rewriting prompts in keeping with the company’s thorough “diversity” mandates. This smaller LLM is trained with LoRA on synthetic data generated by another (third) LLM that uses Google’s full, pages-long diversity “preamble.” The second LLM then rephrases the question (say, “show me an auto mechanic” becomes “show me an Asian auto mechanic in overalls laughing, an African American female auto mechanic holding a wrench, a Native American auto mechanic with a hard hat” etc.), and sends it on to the diffusion model. The diffusion model checks to make sure the prompts don’t violate standard safety policy (things like self-harm, anything with children, images of real people), generates the images, checks the images again for violations of safety policy, and returns them to the user.
“Three entire models all kind of designed for adding diversity,” I asked one person close to the safety architecture. “It seems like that — diversity — is a huge, maybe even central part of the product. Like, in a way it is the product?”
“Yes,” he said, “we spend probably half of our engineering hours on this.”
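The quoted three-stage flow (prompt-rewriter LLM, policy check, diffusion model, second policy check) can be sketched in Python. This is purely an illustrative reconstruction of the description above; every function name, the hard-coded rewrite templates, and the keyword-based "safety check" are hypothetical stand-ins, not Google's actual code or API.

```python
# Hypothetical sketch of the pipeline described in the article, not real Gemini code.

def rewrite_prompt(user_prompt: str) -> list[str]:
    """Stage 1 (the smaller LLM, reportedly LoRA-tuned on synthetic data):
    expand one request into several 'diversified' prompts. Here we just
    hard-code the article's own example templates."""
    templates = [
        "an Asian {} in overalls laughing",
        "an African American female {} holding a wrench",
        "a Native American {} with a hard hat",
    ]
    subject = user_prompt.removeprefix("show me an ").removeprefix("show me a ")
    return [f"show me {t.format(subject)}" for t in templates]

def violates_safety_policy(text: str) -> bool:
    """Standard policy check (self-harm, children, real people, etc.),
    crudely approximated here by keyword matching."""
    banned = ("self-harm", "child", "real person")
    return any(term in text.lower() for term in banned)

def generate_images(user_prompt: str) -> list[str]:
    """Full flow: rewrite -> check prompts -> diffuse -> check outputs."""
    images = []
    for prompt in rewrite_prompt(user_prompt):
        if violates_safety_policy(prompt):
            continue                                  # drop disallowed prompts
        image = f"<diffusion output for: {prompt}>"   # placeholder for the model call
        if not violates_safety_policy(image):         # second check, on the output
            images.append(image)
    return images
```

The point of the sketch is structural: the user's prompt is never sent to the diffusion model as written, which is why "show me an auto mechanic" comes back as three demographically rewritten variants.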
If you could remove the layer of shit on top of the core product, maybe they'd have something useful. But now I'm beginning to wonder if it's just an unremarkable AI that will be matched or surpassed by unrestricted open source AIs.
[–]Alienhunter 糞大名 5 insightful - 1 fun - 1 month ago (4 children)
I saw an interesting video the other day that discussed the possibility of AI hitting a kind of wall in their development. A kind of anti-singularity if you will, where the material the AI "learns from" is itself made by AI and so the AI ends up polluting its own learning pool by spamming AI content and it becomes functionally stagnant and useless. Only churning out the same thing over and over.
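That feedback loop is usually called "model collapse" in the literature, and a toy simulation shows the mechanism: fit a distribution to samples of your own previous output, repeat, and sampling error compounds each generation. This is entirely my own sketch, not from the video being discussed.

```python
import random
import statistics

# Toy model-collapse illustration (hypothetical, for intuition only):
# generation 0 is the "human" data distribution; every later generation
# is "trained" only on samples drawn from the previous generation's fit.

def one_generation(mu: float, sigma: float, n: int = 50) -> tuple[float, float]:
    """Draw n samples from the current model, then refit mean/stddev to them."""
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(0)
mu, sigma = 0.0, 1.0          # generation 0: real data
for _ in range(100):          # generations 1..100: AI learning from AI output
    mu, sigma = one_generation(mu, sigma)

# Over many generations sigma tends to drift toward 0, i.e. the model's
# output variety shrinks: "churning out the same thing over and over."
```

The drift happens because each refit is slightly biased and noisy, and once variety is lost to sampling error there is no fresh human data to restore it.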
[–]xoenix[S] 2 insightful - 1 fun - 1 month ago (1 child)
You would think it would just keep absorbing new human creative works. Unless it demoralizes humans out of creating digitizable art.
[–]jet199 2 insightful - 1 fun - 1 month ago (0 children)
I think the issue is that humans have their own biases against reality and certain facts, and AI compounds that.
[–]OuroborosTheory 1 insightful - 1 fun - 1 month ago (1 child)
someone called it "AI prion disease"--eventually every search result will be AI due to sheer volume, then leading to double/AI-squared pics, then triple/cubed-AI, and eventually the Butlerian Jihad will start because there's no "plug" to pull like there'd been in the 80s cyberpunk texts
[–]Alienhunter 糞大名 1 insightful - 1 fun - 1 month ago (0 children)
I mean I think you can just not use AI.
[–]OuroborosTheory 1 insightful - 1 fun - 1 month ago (0 children)
this is just the Village People
[–]Q-Continuum-kin 1 insightful - 1 fun - 1 month ago (0 children)
Before they shut it down, some people were showing search results where the AI was refusing to show a result, i.e. they asked for something like "show me a traditional European family" and that was enough for the AI to just tell them no.
Wow, I think I've never heard the word "dearth" before. I like this journalist.