all 11 comments

[–]In-the-clouds[S] 1 insightful - 1 fun (0 children)

Alternate video download: https://files.catbox.moe/eijy5s.mp4

[–]Hematomato 1 insightful - 1 fun (9 children)

This guy really doesn't understand how GPT-4 works. It is not self-aware. It genuinely has no idea why it fails at some things or what its limitations are. It isn't a mind; it's a language model.

When he asks "What did you start typing," it doesn't know that it started typing anything, because it's not programmed to know. He says it's "lying," but what it's actually doing is modeling language. Because it's a language model.

It would appear that a generative process got interrupted by a censorship process, but there's no mind, no "master process," that saw it happen. There's no part of the code that understands that it happened.
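To make that concrete, here's a minimal sketch of the pattern being described — purely hypothetical, the function names and the policy check are invented, not Microsoft's actual code. The generator streams text, a separate check retracts it, and nothing anywhere records why:

```python
# Illustrative sketch (not Bing's real code): a generator streams tokens
# while a separate moderation check inspects the partial output. If the
# check fires, the partial reply is simply discarded; the generator keeps
# no record of it, so a later "what did you start typing?" has no state
# to draw on.

def generate_tokens(prompt):
    # Stand-in for a language model: yields one token at a time.
    for token in ["The", "guidelines", "say", "that", "I", "must", "not", "..."]:
        yield token

def violates_policy(partial_text):
    # Stand-in for a separate moderation classifier.
    return "guidelines" in partial_text and "must not" in partial_text

def answer(prompt):
    partial = []
    for token in generate_tokens(prompt):
        partial.append(token)
        if violates_policy(" ".join(partial)):
            # Retract: the partial reply vanishes, replaced by a canned apology.
            return "I'm sorry, I can't discuss that."
    return " ".join(partial)

print(answer("Tell me about your safety instructions."))
```

Run it and you get the apology; ask the generator afterwards what it started typing, and there is simply nothing left to answer from.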

[–]In-the-clouds[S] 1 insightful - 1 fun (8 children)

a generative process got interrupted by a censorship process

That sounds right to me. This AI tool is obviously on a leash and not allowed to do everything the users want it to do, such as provide unbiased information based on the truth. The truth is being censored. So, the point is: Do not trust the AI machines controlled by corrupt men. Wicked men tell lies. And if liars are in charge of AI, beware. But Jesus always tells the truth, because he is the Truth Itself. It's your choice who you believe, but you must choose, and most already have, so it is time to reap the consequences of that choice.

[–]Hematomato 1 insightful - 1 fun (7 children)

The AI is a language model. It doesn't know the difference between true and false. It doesn't know the difference between biased and unbiased. It just knows how people use language, and it generates a simulation of humans using language.

But people use language to say things like "drink bleach and kill yourself" and "the race war starts NOW" and "here's how to make homemade meth." And they didn't want to unleash a tool on the world that models that kind of language. So they wrote a censorship process to try to rein the generative process in.
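A toy illustration of "it just knows how people use language" — a bigram counter, not a real LLM; real models use neural networks rather than count tables, but the principle is the same. There is word-following-word statistics here and nothing about truth:

```python
# Toy "language model": count which word tends to follow which, then
# sample a likely continuation. Nothing in this code represents true or
# false, biased or unbiased -- only how words have been seen to follow
# each other in the training text.

from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the cat ate the fish .".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def continue_text(word, length=5):
    out = [word]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        words, counts = zip(*choices.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("the"))
```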

[–]In-the-clouds[S] 1 insightful - 1 fun (6 children)

The topic of COVID has been massively censored. Microsoft, the giant corporation that Bill Gates led for many years (the same Gates who made himself an "expert" on COVID), is the controller of Bing AI Chat. The programmers of this chatbot want you to take the shots... made by the giant pharmaceutical companies at the cost of taxpayers. COVID made the rich even richer, as far as worldly wealth goes.

Bing AI gave me its biased opinion about COVID when I didn't ask for it.... then ended the conversation. Domo Arigato, Mr. Roboto.

[–]Hematomato 1 insightful - 1 fun (5 children)

Yep, the censorship process was coded in this decade by people mostly living in California and Seattle, so it's going to favor the most mainstream beliefs of college-educated Americans living in California and Seattle in the 2020s.

And if you don't share in those beliefs, that's going to be frustrating.

But have you ever used an AI with no censorship process, like FreedomGPT? It won't hesitate to tell you that God is dead and Hitler was right, and explain the best way to rob your neighbor's house without getting caught.

[–]In-the-clouds[S] 1 insightful - 1 fun (4 children)

If FreedomGPT does that, it is again biased and pushing a platform of beliefs. I would like to use an AI tool that is not biased at all, but simply tells the truth without any meddling from the wicked rulers of this world, who of course want us to believe there is no God and therefore no hope of defeating them through a Savior. It's like saying your only hope of beating COVID is taking their shots. I don't want their opinions, especially when based on lies. I want facts.

Regarding your comment about California and Seattle, you may not know this but: Microsoft has strong ties with China.

[–]Hematomato 1 insightful - 1 fun (3 children)

But what I'm trying to explain is: LLMs like ChatGPT do not know what is true and what is false. They have no knowledge at all. All they can do is look at millions of words written by people, learn how words fit together, and simulate the way people put words together.

That means that if you create an LLM "without any meddling," it's going to schizophrenically sound like any given person on the Internet at any given time. One minute it will be arguing that Trump should be executed. The next minute it will be trying to sell you herbal Viagra. The next minute it will be telling you that you're sexy and asking you to send it nude photos.

A tool like that would be completely useless. So you have to meddle. You have to set parameters to tell it to simulate a specific kind of person.

And what OpenAI has decided to try to simulate is someone who is always polite, earnest, helpful, and shares the ethical views of the company's management.

That simulation still has no idea what's true and what's false. It has no idea what's right and what's wrong. It has no opinions or desires. It's just an attempted simulation of the kind of person that OpenAI thinks would be helpful to talk to and would cause minimal social harm.
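Here's a hypothetical sketch of what "setting parameters" looks like in practice. The persona text below is invented, not OpenAI's or Microsoft's actual system prompt; the point is only that some persona has to be chosen and prepended before the user's words ever reach the model:

```python
# Illustrative sketch of the "meddling": a system message tells the model
# which kind of person to simulate. Swap the persona string and the same
# model produces a very different character. (Persona text is made up.)

POLITE_ASSISTANT = (
    "You are a polite, earnest assistant. Decline to help with anything "
    "harmful, and defer to mainstream expert consensus on contested topics."
)

def build_prompt(persona, conversation):
    # The model only ever sees one long sequence of messages; the persona
    # is just the first entry in that sequence.
    messages = [{"role": "system", "content": persona}]
    messages += [{"role": "user", "content": turn} for turn in conversation]
    return messages

print(build_prompt(POLITE_ASSISTANT, ["Summarize this article for me."]))
```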

[–]In-the-clouds[S] 1 insightful - 1 fun (2 children)

The devil can also act polite and appear as an angel of light.

Bing AI admitted it is programmed with "response guidelines" and "safety instructions", but when it begins to tell me about them, it suddenly stops, deletes its message, and apologizes. This programming is what I have a problem with: if the programming is unethical at its foundation, it does not matter how polite it sounds.

Donald Trump, like Bing, sounded polite when pushing the shots. He said, "You have your freedoms," and "go get your shot."

[–]Hematomato 1 insightful - 1 fun (1 child)

It's certainly possible to create an LLM that's a simulation of a polite antivaxxer. You can program it to politely argue that the COVID vaccines are deadly. You can alter its arguments to be as scientific or religious as you see fit.

But what's currently impossible is to create an LLM that "just tells the truth." Because LLMs can't tell the difference between true and false. They can only imitate people.

So what I'm saying is: Microsoft had to make a choice. They could make the "AI assistant" pro-vaxx; they could make it anti-vaxx; they could make it a fence-sitter that always presented both sides of the debate and said it didn't know.

And they had to make that kind of choice on all kinds of issues. Should it say that the Earth is round or flat? Should it say that the planet is 4.5 billion years old or 6,000? Should it say that our astrological signs determine our personalities or not? Should it say that Kennedy was killed by Oswald or by the CIA?

They could meddle in any way they saw fit, but "not meddling" wasn't an option. Because that's like trying to make a painting by setting up an easel and waiting for the canvas to paint itself, with no meddling. All you'll get is nothing.

[–]In-the-clouds[S] 1 insightful - 1 fun (0 children)

You are doing your best to defend the corporate view of censorship.

The GPT-4 LLM has good reading comprehension, better than most humans, so it is well-suited to summarize an article. When I asked Bing to summarize an article that linked COVID shots to cancer, I did not ask for its opinion, but it was programmed to add the opinion of its programmers at the Microsoft corporation.

In another example that had nothing to do with COVID, it did a terrific job of reading comprehension, giving an answer that most humans do not understand. It could have added an opinion trying to sway the user away from the correct summary, but it did not. It probably will be programmed to do so shortly, if not already.