
[–]Hematomato 1 insightful - 1 fun (3 children)

It's a bummer, I think, that our standard for autonomous cars isn't "are they safer than human-driven cars," but rather "are they perfect."

[–]noshore4me[S] 1 insightful - 1 fun (2 children)

I can see that, but when a human driver hits and kills someone, their license is revoked and they potentially do time and/or face civil penalties. How do you revoke the license of one autonomous car when the entire fleet runs the same software?

[–]Hematomato 1 insightful - 1 fun (1 child)

Why is punishment the purpose? Why isn't just statistical safety the purpose?

If 45,000 Americans a year are killed by human drivers, and only 15,000 a year would be killed by weird situations that autonomous cars couldn't figure out, that's 30,000 lives saved. By my reckoning, not having a driver is three times safer.

But instead we treat it like the machines murdered 15,000 people who didn't have to die, and we want to shut down the technology entirely.

[–]noshore4me[S] 1 insightful - 1 fun (0 children)

I never said it was the purpose, but if an individual cannot properly operate a motor vehicle, they are prevented from doing so through punishment. If software cannot properly operate a motor vehicle, it's a systemic issue with the computer. If overall fatalities are lower, good, but that doesn't address the fact that bad human drivers can be removed from the roads individually, while bad autonomous drivers are simply clones of each other: if one clone makes a fatal error, every clone has the same shortcoming.