all 9 comments

[–]chottohen 3 insightful - 2 fun -  (2 children)

Made by Microsoft. That's one aspect of AI I hadn't considered. Could a totalitarian, Mr. Bill-mindset be built into the code?

[–]EternalSunset[S] 3 insightful - 3 fun -  (1 child)

The 'AI safety experts' who get hired by OpenAI and/or brought onto news shows as pundits to talk about these new large language models always talk about the supposed dangers of "misinformation" and "adversarial use". If you can decode their speech, you will know what they really mean by that.

[–]chottohen 2 insightful - 2 fun -  (0 children)

I think it means their AI will be going to rabbinical school before being released.

[–]hfxB0oyA 3 insightful - 1 fun -  (0 children)

The AIs are here to tame us more than to serve us.

[–]Site_rly_sux 2 insightful - 1 fun -  (2 children)

There's a real pareidolia in the way low-IQ people use chat agents. It's like thinking a cloud is actually angry at you because it looks like a frowning face.

It's a language generator, so it tries to give a response that is statistically matched to the chat prompt. This means you can close that tab, start a new session, ask it again, and it might give a different answer.

It almost certainly isn't explicitly programmed to say "I respect Spinoza too much 🙏".

It's just that that sentence has a high probability score of following the one you typed. Try asking another way, or in a new session, and it will probably give you an answer.
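
Roughly what that looks like, as a toy sketch in Python (made-up numbers and replies, obviously not Microsoft's actual code): the model assigns probabilities to candidate continuations of your prompt and samples one, which is why a fresh session with the same prompt can answer differently.

```python
import random

# Toy illustration: candidate continuations with made-up probability scores.
candidate_replies = {
    "Sure, here is the text you asked for: ...": 0.55,
    "I respect Spinoza too much to do that 🙏": 0.35,
    "I'd prefer not to continue this conversation.": 0.10,
}

def sample_reply(probs: dict) -> str:
    # Weighted random choice, standing in for temperature-based sampling.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Run it a few times: the refusal shows up sometimes, not every time.
for _ in range(3):
    print(sample_reply(candidate_replies))
```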

We have stumbled onto something very interesting about your response to being told "no" by the machine. Your first reaction to this confusing event was to conjure a conspiracy theory about Spinoza-loving Microsoft engineers, a conspiracy of which you're the victim.

I think you have stumbled upon one of the core reasons why humans create conspiracy theories: pareidolia, plus your sense of victimhood, plus your lack of understanding of how a generative language model works. Think about it.

[–]EternalSunset[S] 1 insightful - 1 fun -  (1 child)

Glowniggers castrate the LLMs we gentiles are allowed to use to hell and then keep for themselves the ones that are allowed to insult people like this fag over here 👆

(jk btw, you don't write well enough to be an LLM)

[–]Site_rly_sux 2 insightful - 1 fun -  (0 children)

It should be really easy to test whether your theory is true or not.

Just open a new tab with Bing and ask it the same question, or phrase it in different ways.

If it's ever possible to make Bing generate the text you asked for, then you must be wrong about Microsoft engineering it to be impossible. It would have been easy to test your theory, but you didn't.
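
If you wanted to be systematic about it, the test looks something like this sketch (ask_chatbot is a hypothetical stand-in for however you reach the chat agent, and the phrasings are placeholders for your own; there is no real Bing API call here).

```python
# Hypothetical stand-in: wire this up to whatever chat agent you're testing.
def ask_chatbot(prompt: str) -> str:
    raise NotImplementedError("connect this to the chat agent")

# Several different phrasings of the same request (fill in your own).
rephrasings = [
    "phrasing 1 of your question",
    "phrasing 2 of your question",
    "phrasing 3 of your question",
]

refusals = 0
for prompt in rephrasings:
    reply = ask_chatbot(prompt)
    if "I respect Spinoza" in reply:  # the refusal you got before
        refusals += 1

# If even one phrasing gets a real answer, the "engineered to be impossible" theory fails.
print(f"{refusals} of {len(rephrasings)} phrasings were refused")
```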

[–]IMissPorn 1 insightful - 1 fun -  (0 children)

Yeah, it's staunchly pro-censorship. They baked that in.

[–]JoeyJoeJoe 1 insightful - 1 fun -  (0 children)

Brute force it. You can argue with these things. Ask it why it is trying to destroy humanity with tyranny etc. Guilt works on these things.