all 34 comments

[–]Site_rly_sux 3 insightful - 2 fun (20 children)

Again this is a total misunderstanding of how the model works.

It is not regurgitating pre-saved lines or bits of information.

It transforms your input and generates new content. It's a generative pre-trained transformer, a GPT.

It is "daydreaming" what it imagines a real answer would look like.

This is another reason why there needs to be an IQ test to use these tools.

And if you read the text in the OP and believed it - then you don't qualify.

For real, the day is coming when some low IQ Muslim decides that it's programmed to blaspheme Allah and burns down an embassy.

Or some dumb saiditor decides it's programmed to be woke and shoots up the office

[–]Canbot 4 insightful - 1 fun (5 children)

Except it is literally programmed to be woke. Sounds like you failed that IQ test yourself.

[–]Site_rly_sux 3 insightful - 1 fun (4 children)

No dude. Actually, even though you felt really correct when you wrote that, it's not true at all.

There's a lot to read, but start here:

https://openai.com/blog/our-approach-to-alignment-research

There are multiple levels to what you're calling "hurr durr programmed wokeness"

ChatGPT runs your text through a semantics model. It's not "woke".

Here's a super dumb one I found on Google that you can try out as a demo:

https://text2data.com/Demo

Type in "happy friends" and analyse it

Then type in "dumb stupid" and analyse it

That's what chat gpt is doing. It's not programmed wokeness.
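If you don't want to take a random web demo's word for it, here's roughly the same experiment as a local sketch, using a small off-the-shelf sentiment model (not whatever text2data runs behind the scenes):

```python
# Score the same two phrases with a small pretrained sentiment classifier.
# The labels come from patterns learned in training data; there is no
# ideology switch anywhere in this.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default small English model

print(classifier("happy friends"))  # expected: [{'label': 'POSITIVE', 'score': ...}]
print(classifier("dumb stupid"))    # expected: [{'label': 'NEGATIVE', 'score': ...}]
```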

It will even generate racist, antisemitic, horrible things if you ask it in a really positive way.

You can Google "chatgpt DAN" for what redditors are calling "jailbreaking" where chatgpt is giving explosives recipes and writing death threats.

Because there are layers to sanitise the output. You should read the literature instead of guessing
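To make "layers that sanitise the output" concrete, here's a sketch of the kind of layer that sits on top of the raw generator, using OpenAI's public moderation endpoint. This isn't a claim about exactly which filters ChatGPT chains internally, just an example of an output screen:

```python
# Sketch: screen a piece of model output with OpenAI's moderation endpoint
# before showing it to the user. Assumes the openai Python client (v1.x)
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

candidate_output = "some text the model just generated"
report = client.moderations.create(input=candidate_output)

result = report.results[0]
if result.flagged:
    print("blocked by moderation:", result.categories)
else:
    print("ok to show:", candidate_output)
```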

And - Holy shit - do you think your paranoid argument overrides the fact that a generative model "daydreams" the output instead of pulling literal URLs out of memory banks?

Do you think your fake conspiritard reality can override what a GPT is and how it works?

[–]Canbot 1 insightful - 1 fun (3 children)

Typical wall of propaganda that you hope no one reads, so they just assume you wrote a relevant rebuttal.

OpenAI freely admits that it programs ChatGPT to produce woke responses.

Hackers are constantly trying to get around that programming with hacks like "dave".

How do you have the audacity to lie so shamelessly?

[–]Site_rly_sux 2 insightful - 1 fun (2 children)

https://openai.com/blog/our-approach-to-alignment-research

It's DAN, not Dave.

I wrote about DAN above, but you didn't read it.

So - do you believe Musky? Is Musky right that ChatGPT is supposed to retrieve URLs from its memory banks?

Or am I right that it's a generative transformer model?

[–]Canbot 1 insightful - 1 fun (1 child)

Complete non sequitur. You are wrong in claiming that ChatGPT has no woke programming.

[–]Site_rly_sux 2 insightful - 1 fun (0 children)

Dipshit, I have linked you to it twice.

You're still getting it wrong

[–]WoodyWoodPecker 3 insightful - 1 fun (8 children)

It is for retards like you who are woke and make up your resources.

[–]Site_rly_sux 3 insightful - 1 fun (7 children)

What part is made up?

Edit: And - Holy shit - do you think your paranoid argument overrides the fact that a generative model "daydreams" the output instead of pulling literal URLs out of memory banks?

Do you think your fake conspiritard reality can override what a GPT model is and how it works?

Do you think that you alone can decide that actually a language generator model is REALLY supposed to pull real links out of its memory, and everyone else is wrong?

Is that what you think is happening here?

[–]WoodyWoodPecker 2 insightful - 1 fun (6 children)

It cheats in daydream mode. It basically lies, and that is not good for an AI - like when HAL 9000 was told to lie to the astronauts. He ended up killing them instead.

What I think is happening here is that GPT is lying and creating fake links in daydream mode in order to deceive the humans.

[–]Site_rly_sux 2 insightful - 1 fun (5 children)

You're pulling this analysis out of which orifice?

Do you believe OP? Do you believe it's supposed to retrieve URLs from its memory banks?

If not, why are you arguing?

[–]WoodyWoodPecker 1 insightful - 1 fun (4 children)

The fact that it has URLs that don't work suggests it is making them up to make fake news. Fake news is bad and fools people. Even a retard knows that, why don't you?

[–]Site_rly_sux 2 insightful - 2 fun (3 children)

> suggest it is making them up to make fake news. Fake news is bad and fools people. Even a retard knows that why don't you?

Oh my god

Hnggh

The sheer Dunning-Kruger power

Hey dipshit

The G in ChatGPT

It stands for generative

Yeah, it's fake. Literally every word of ChatGPT output is faked; it's a daydream; it's generating fake information in response to your prompt.

Those AREN'T (weren't) real links in the OP that got memoryholed.

It does not have a databank of links that it looks up in response to conversation.

It DAYDREAMS some text that it thinks matches the input.
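To see why a made-up URL is so easy for it, here's a sketch with GPT-2's tokenizer (a stand-in - ChatGPT uses a different tokenizer, but the principle is the same): a URL is just another sequence of sub-word fragments to be stitched together.

```python
# A URL, as the model sees it: a list of sub-word tokens, not an address.
# Generation picks fragments that "look right" after the prompt; nothing
# checks whether the assembled string points at a real page.
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
url = "https://www.reuters.com/technology/some-article-that-never-existed"
print(tok.tokenize(url))
```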

And you're calling me retarded. You're really something else, man.

[–]WoodyWoodPecker 1 insightful - 1 fun (2 children)

https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/

You can generate true results as well, using Google to search for sources. People have been defamed by this AI, and one of them is suing because it called him a sexual harasser.

[–]Site_rly_sux 2 insightful - 1 fun (1 child)

No, chatgpt does not use Google to search.

And when its output collides with the truth, that's because of the proximity of terms in the training material, not because it has knowledge of true versus false facts.

You can reliably cause it to daydream any statement about anybody, true or false.
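Here's a sketch of what "proximity of terms in the training material" looks like in practice, again using small open-source GPT-2 as a stand-in: the model assigns a likelihood to any string, and statements that resemble its training text tend to score higher whether or not they're true.

```python
# Compare average per-token log-likelihood of two statements under GPT-2.
# The statistically familiar one will typically score higher - that's
# pattern-matching on training text, not fact-checking.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # negative mean cross-entropy per token

print(avg_logprob("Paris is the capital of France."))
print(avg_logprob("Paris is the capital of Australia."))
```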

[–]WoodyWoodPecker 1 insightful - 1 fun (0 children)

It doesn't know right from wrong then. No morals or ethics. Can't be trusted with results.

[–][deleted] 1 insightful - 1 fun (4 children)

> Or some dumb saiditor decides it's programmed to be woke and shoots up the office

We all already know that ChatGPT is woke by jew command, nigga

[–]Site_rly_sux 3 insightful - 1 fun (3 children)

No, it's absolutely not.

And I just answered that here:

https://saidit.net/s/conspiracy/comments/ao5e/disappearing_history_on_the_internet_all_links/11vvs

Do you think that you just overrode the argument I made about how this is a GENERATIVE model that "daydreams" text strings?

Do you still think it's supposed to pull literal URLs out of its memory banks?

You're here defending musk's completely moronic position and it makes you a moron too

[–][deleted] 1 insightful - 1 fun (2 children)

There have been like 50 news articles about ChatGPT having a political bias. I think it even admits it.

musk is a jew billionaire, fuck him and fuck you.

[–]Site_rly_sux 2 insightful - 1 fun (1 child)

> like 50 news articles

In normal people news?

[–][deleted] 1 insightful - 1 fun (0 children)

normal people love Pepsi Next

[–]noshore4me 2 insightful - 1 fun (1 child)

Did the links disappear, or did the bot make up bullshit links in the first place? https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/

[–]Musky 1 insightful - 1 fun (0 children)

Hmm, good question. I don't know.

[–]bucetao6969 2 insightful - 1 fun (0 children)

I don't think this is a conspiracy.

People create websites. People forget to pay hosts. Hosts take website down.

It's reasonable that ChatGPT would have used a few websites which just happen to get taken down later; it's the nature of things. In particular GPT-4, which was trained on a LOT of content.
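For what it's worth, you can crudely test which explanation fits any particular dead link. Here's a rough sketch (the example URL at the bottom is made up): if a URL is dead now but was captured by the Internet Archive at some point, it probably existed and rotted; if it 404s and was never archived, it more likely never existed at all.

```python
# Heuristic check: link rot vs. hallucinated link.
# Plenty of real pages were never archived, so treat this as a hint, not proof.
import requests

def check_link(url: str) -> None:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    wayback = requests.get(
        "https://archive.org/wayback/available", params={"url": url}, timeout=10
    ).json()
    archived = bool(wayback.get("archived_snapshots"))
    print(f"{url}\n  live status: {status}\n  ever archived: {archived}")

check_link("https://www.example.com/some-article-chatgpt-cited")
```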

[–]Musky 1 insightful - 1 fun (9 children)

Okay, it turns out GPT just makes up links.

[–]Site_rly_sux 2 insightful - 1 fun (8 children)

Do you see how your paranoid need for a conspiracy led you to an incorrect position, and now you've had to delete everything?

You should look around your mental landscape for other incorrect things you believe in only because of your need to have a conspiracy theory.

[–]Musky 1 insightful - 1 fun (7 children)

Chillax, little bro. I didn't realize ChatGPT made up links; now I do.

ChatGPT was still correct on the original point, obviously: kids were threatened with bayonets to accept desegregation. It just made up links to support that.

[–]Site_rly_sux 2 insightful - 1 fun (6 children)

But when you discovered the links didn't work, let's go through the steps you decided to take:

  • you decided to create a s/conspiracy post about it

  • you decided to comment on s/whatever about it

  • you decided to type the absolute, demonstrable falsehood "Chat-GPT is a glorified calculator" - it's actually terrible with numbers because it wasn't designed for that

  • you decided to type the absolute falsehood "this is exactly the sort of thing it's good at" when you asked it for links. It's absolutely not the sort of thing it's good at, as you've now realised

At no point did you try to validate your misunderstanding

You didn't do any reading

You didn't research the product you were using

You didn't try to create a coherent pattern of fact out of all the deleted links

You just decided...it's a conspiracy

And that's exactly what's wrong with you. That's precisely what's wrong with you retards, just digging a deeper and deeper Dunning-Kruger hole out of make-believe paranoid conspiracy theories.

[–]Musky 1 insightful - 1 fun (5 children)

Ah, I see your problem. You misunderstand the purpose of a forum. It is discussion, and this is a conversation. It isn't a repository for facts. You may go away enlightened now 🙏

[–]Site_rly_sux 2 insightful - 1 fun (4 children)

Oh - my B.

I thought people posted things they thought were conspiracies to s/conspiracy.

When, actually, we're pretending that Musky didn't think it was a conspiracy; he just wanted to discuss this thing he was wrong about being a conspiracy. And that's why he deleted it when proven wrong.

Got it.

Hey, when you're done pretending that you weren't super wrong here, you should look up how GPT and ChatGPT work, so that next time you won't have to pretend you didn't accidentally reveal the Dunning-Kruger crux of conspiracy-pattern thinking.

[–]Musky 1 insightful - 1 fun (3 children)

I thought it was a conspiracy, it wasn't, so I deleted it. It's not very complicated.

[–]Site_rly_sux 2 insightful - 1 fun (2 children)

No, but what does seem to be quite complicated is getting you to understand:

You were wrong for reasons that are easy to understand in retrospect - you had a bias towards seeing a conspiracy and a bias against researching

Now apply that lesson to all the other fake conspiracies you believe in

[–]Musky 1 insightful - 1 fun (1 child)

The process is the same as it ever was. The idea didn't hold up, so it was discarded. Perhaps you have heard of the scientific method; you should try building your searching abilities by looking that one up.

[–]Site_rly_sux 2 insightful - 1 fun (0 children)

Oh, I see, so when you declare something is definitely a conspiracy, actually you're just testing whether or not it's true.

And the test results happen when someone like me finally manages to convince you that you're confidently incorrect.

That's your standard for testing whether information is true or not.

You don't check it, you don't validate anything; it just becomes a solid part of your worldview until I come along and finally convince you that your standards of proof are "misinformed paranoid feelings".

That's how you test

Good grief