Why is everyone afraid of artificial intelligence? I'm more afraid of natural human stupidity (which is already infinite according to Einstein!). Superintelligence would just balance that out ;-)
But seriously, I think Nick Bostrom makes some flawed arguments in his talk:
1. He assumes that the work required to create an ever more intelligent being grows linearly (or even sublinearly) with its IQ. But maybe that is not the case; it could be a lot harder. What if it were exponentially more difficult to increase the IQ of an artificial intelligence? Then even if we managed to build an artificial intelligence half as smart as a human in, say, 30 years, getting it as smart as a human would take 60 years, making a machine twice as smart as a human might well take 120 years, and so on. Under that scaling, a superintelligence smarter than all humans together (if we add up their IQs, roughly 10 billion of them) would take on the order of 30 * 10 billion years, i.e., hundreds of billions of years (see the back-of-envelope sketch after this list). The universe will end before that happens.
2. He assumes a superintelligence would (despite its superintelligence) stick to its stupid initial optimization task (e.g. make all humans smile). For him, a superintelligence is just a very powerful optimization process that will do anything to achieve its goal. However, I believe a superintelligence would understand that its optimization task is in fact stupid and would do something more useful instead. Maybe it would simply refuse to work with stupid humans, or it would self-destruct because it understands that the universe is finite and life is without meaning. From human experience we know that genius and madness are often close to each other. So stabilizing a superintelligent being (preventing it from going crazy and self-destructing) might be a very difficult task.
3. He also assumes that we could not put a superintelligence into a safe box. But escaping such a box would only be possible if the superintelligence were capable of violating our known physical laws, e.g. those of electromagnetism. That would mean a superintelligence is automatically super powerful, which seems unlikely. Even though we humans are smarter than animals, we are not super powerful. Physical laws apply to us as well: if we make a mistake, some animal might simply eat us, despite its lower intelligence. Super-high intelligence does not automatically mean invulnerability and freedom from error.
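To make the arithmetic in point 1 concrete, here is a minimal back-of-envelope sketch of that doubling model. The 30-year starting point and the 10-billion-human figure are just the assumptions from the text above, not established numbers:

```python
# Back-of-envelope sketch of the doubling model from point 1:
# each doubling of machine IQ is assumed to double the time needed,
# so the required time grows linearly with the IQ multiple.

YEARS_TO_HALF_HUMAN = 30     # assumption from the text: half human-level in ~30 years
HUMANS = 10_000_000_000      # assumption from the text: ~10 billion humans

def years_needed(iq_multiple_of_human: float) -> float:
    """Years until the machine reaches the given multiple of one human's IQ,
    if the required time doubles with every doubling of the IQ multiple."""
    return YEARS_TO_HALF_HUMAN * (iq_multiple_of_human / 0.5)

print(years_needed(1))       # human level: 60 years
print(years_needed(2))       # twice human level: 120 years
print(years_needed(HUMANS))  # smarter than all humans combined: ~6e11 years,
                             # i.e. hundreds of billions of years, the same
                             # order of magnitude as the rough figure above
```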
But let's assume he is right:
- A superintelligence can be created in a reasonable amount of time (< 1000 years).
- A superintelligence can be stable and not immediately self-destruct.
- A superintelligence can violate our known physical laws, become super powerful, and be free of error.
So why has this not already happened somewhere in the universe? Following his arguments, I believe such a superintelligence should already exist somewhere in the universe. But what would we call such a superintelligence? We would simply call it God!
So all his reasoning seems to boil down to the assumption that a God (or Gods) exists in the universe. Some religions might agree with him and can surely give him good advice on how to deal with that situation.
In the meantime, others have come to the same conclusion:
“What is going to be created will effectively be a god,” Levandowski tells me in his modest mid-century home on the outskirts of Berkeley, California. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”
See https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/