all 8 comments

[–]Alduin 3 insightful - 1 fun - (3 children)

I've not yet heard a convincing argument that we shouldn't make genetically enhanced humans. It basically all comes down to "they'll be better than us and only the rich will be able to afford it". So it's not really different from any other technological or medical advancement.

[–]Vigte[S] 1 insightful - 1 fun - (2 children)

[–]Alduin 1 insightful - 1 fun - (0 children)

It's a decent argument. Here's my counter.

First, you presume that an AI will reach the same conclusion as you about what's best or most efficient. Not only do I think that's extremely unlikely, but if it were true then you might as well take AI out of the equation, because evidently you, a human, reached the conclusion on your own.

Second, I don't think the replacement of one species with a superior one is necessarily a bad thing.

Third, and this relates to the first point, I think you are vastly underestimating AI. By all indications I've seen, we're about 10 years from Artificial General Intelligence, and then a matter of hours or days from Artificial Super Intelligence (at that point you might as well say they're one and the same). An AGI is as smart as a human, with the ability to absorb information, learn, and make decisions based on available data. An ASI is not just smarter than the smartest human, but smarter than all of humanity itself, with capabilities we haven't even thought of. No amount of genetic editing can put humanity at that level for many, many generations.

The two are really separate topics, but AI is every bit as likely to supplant the human race on its own as it is to supplant genetically enhanced humans. Not that THAT is even necessarily a bad thing. It could be the best or worst thing ever to happen to humanity, and we have no idea which.

[–]happysmash27 1 insightful - 1 fun - (0 children)

Just because it's AI doesn't mean it's artificial general intelligence or, riskier still, artificial super intelligence. For the most part, AI is pretty harmless. In this instance, if the goal is simply to make a single genome good, without any focus on anything in the future, I don't see why an AI would come to the conclusion that it needs to self-preserve. For example, take the AI that recommends YouTube videos: it recommends me videos about how bad YouTube is, so I doubt it cares much about self-preservation…

The really problematic combination, in my opinion, is an entity making the genome edits that has a reason for self-preservation combined with the ability to hide potentially harmful edits. A prime candidate for this would be an AI with the goal of self-improvement, which, if written badly, wouldn't want to be shut down, since that would stop it from improving further by whatever its definition of improvement is.

[–][deleted] 2 insightful - 1 fun - (0 children)

The rich have been doing this for a while. Saying China might have done it one time is a way for the MSM to slow-roll this and get people to accept it.

[–]Vigte[S] 1 insightful - 1 fun - (0 children)

[–]happysmash27 1 insightful - 1 fun - (0 children)

Is enhancement not good for humanity? It may create more inequality, but at least more people will be smarter.

[–]Tom_Bombadil 1 insightful - 1 fun - (0 children)

This brain enhancement notion is a bit premature.

I'm still waiting for a reasonable explanation about how the brain actually works.

Shouldn't an understanding of actual brain function precede any proposed methods of improvement?

These children are likely doomed to a tortured mental condition. Then imagine the mental burden that any research on them will create.

At best, they will be normal children with the burden of scientific scrutiny. It only gets worse from there.