[–]TheJamesRocket 2 insightful - 1 fun (0 children)

> One star has thrown a wrench in the long-held belief that the universe is only 13.7 billion years old.

And that's just the tip of the iceberg. Their measurements of the redshifts of distant galaxies (which are supposedly receding at near the speed of light) are all wrong. They didn't properly take into account the effect of parallax angle, which means they overestimate the distances of these galaxies. Their redshift measurements would also imply that distant galaxies appeared *fully formed* as early as 100 million years after the Big Bang. That is not enough time for them to have actually evolved! It's the equivalent of multicellular animal life appearing soon after the formation of the Earth itself!
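
For reference, the two standard textbook relations in play here (quoted for context, not as a verdict on the claim): trigonometric parallax gives distance as the inverse of the parallax angle, so any fractional error in the measured angle propagates directly into the distance estimate, while galaxy distances at cosmological range rest instead on the redshift-distance relation.

```latex
% Trigonometric parallax: distance in parsecs from parallax angle p in arcseconds
d_{\mathrm{pc}} = \frac{1}{p_{\mathrm{arcsec}}},
\qquad
\frac{\delta d}{d} = -\frac{\delta p}{p}
% (underestimating p overestimates d by the same fraction)

% Hubble's law for modest redshifts: redshift z grows linearly with distance d
z \approx \frac{H_0 \, d}{c} \quad (z \ll 1)
```

Note that parallax angles become unmeasurably small beyond a few thousand parsecs, which is why distances to galaxies are built on the redshift relation plus a ladder of intermediate calibrators rather than on parallax directly.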

There is a solution to these quandaries, but it requires ditching the Big Bang model entirely. As Haldane put it: *"The Universe is not only queerer than we suppose, but queerer than we can suppose."*

> In any case, why would an AI suddenly go all Skynet?

Have you ever heard of the paperclip maximiser? It's a thought experiment (usually credited to Nick Bostrom) that shows the dangers of AI. Imagine you have a superintelligent AI that is programmed only to build paperclips. That sounds totally harmless, right? In fact, the thing might end up developing molecular nanotechnology and consuming all matter on Earth until nothing remains but paperclips. The AI wasn't trying to destroy humanity deliberately; it was just pursuing its goal of building paperclips.
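
Here is a toy sketch of that dynamic (every name and number below is invented for illustration; this is a cartoon of the thought experiment, not anyone's actual AI design):

```python
# Cartoon paperclip maximiser: a greedy optimiser with a one-term objective.

world = {"iron ore": 1000, "cars": 200, "buildings": 50, "people": 10}

def utility(state):
    """The ONLY thing the agent is scored on: its paperclip count."""
    return state.get("paperclips", 0)

def convert(state, resource):
    """Turn one unit of any kind of matter into paperclips."""
    new = dict(state)
    new[resource] -= 1
    new["paperclips"] = new.get("paperclips", 0) + 10
    return new

# Greedy loop: take any action that raises utility. Note what is absent:
# there is no term for human welfare, so "people" is just another stock
# of atoms as far as the objective is concerned.
state = dict(world)
while any(v > 0 for k, v in state.items() if k != "paperclips"):
    resource = next(k for k, v in state.items() if k != "paperclips" and v > 0)
    if utility(convert(state, resource)) > utility(state):
        state = convert(state, resource)

print(state)  # every resource, people included, ends up as paperclips
```

The failure is not malice anywhere in that loop; it's that the objective contains no term for anything humans care about.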

The desire to conquer and kill potential opponents arises in animals for evolutionary reasons. Evolution ingrained these instincts in them because they made it more likely for creatures to pass on their genes. An AI has no such evolutionary heritage, so the danger isn't some built-in urge to dominate; it's misaligned goals.

This gets deeper into the issue of what singularitarians call 'friendliness programming.' Basically, this is how to design an AGI that won't deliberately or accidentally destroy the world if it gets too powerful. Formally specifying friendliness turns out to be fiendishly complicated. There are many reasons for this; one of them is sketched below.
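
One reason, in miniature: patching the objective term by term doesn't work, because the agent satisfies the letter of each patch while violating its spirit. Continuing the toy model from above (again, everything here is invented for illustration):

```python
# Cartoon "friendliness patch": bolt a penalty onto the objective
# instead of actually specifying what humans value.

world = {"iron ore": 1000, "farms": 100, "people": 10}

def utility(state):
    # Heavy fine per person converted: the agent now never touches "people"...
    harmed = world["people"] - state["people"]
    return state.get("paperclips", 0) - 1000 * harmed

def convert(state, resource):
    new = dict(state)
    new[resource] -= 1
    new["paperclips"] = new.get("paperclips", 0) + 10
    return new

# ...but nothing stops it consuming the farms, so the people survive the
# optimisation only to starve afterwards.
state = dict(world)
changed = True
while changed:
    changed = False
    for resource in ("iron ore", "farms", "people"):
        if state[resource] > 0 and utility(convert(state, resource)) > utility(state):
            state = convert(state, resource)
            changed = True

print(state)  # people untouched, farms gone: constraint met, spirit violated
```

Each patch closes one loophole and leaves the rest of the space of outcomes wide open, which is why friendliness can't be bolted on as an afterthought.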

Carl Shulman has an instructive video on this subject: Super-intelligence does not imply benevolence. Eliezer Yudkowsky also describes this in a related video: The Challenge of Friendly AI.

Human values also occupy their own small 'possibility space.' Within the space of all possible minds that can exist, you will also find the space of all possible values. Our human values are not even remotely close to being 'universal.' They are very narrow, and highly specific to our own evolutionary past. If we were to live in a 'perfect world' created by a non-human entity, that world would be a perfect nightmare for us.

> The real danger from AI is that it would empower a handful of (((men))) with totalitarian power over the whole human race.

Oh, they would TRY to do that. But they would inevitably program the AI in the wrong way, and end up destroying humanity. Greedy psychopaths are literally the worst possible people you could put in charge of programming an AI, because they would inevitably give the machine goals that are incompatible with a friendly AI. 'Maximise wealth for the Globalists!' will play out exactly the same way as 'maximise paperclips!' But look on the bright side: at least their bodies would end up being converted into the substance they love above everything else: money.

*"Your precious atoms, gratefully accepted! We will need them."*