On 2 December 2014, Stephen Hawking gave an interview to the BBC in which he said, "The development of full artificial intelligence could spell the end of the human race."
There was no explanation of Dr Hawking's reasoning, and I couldn't help wondering what had drawn him to this conclusion.
In the report there are references to HAL (from the film 2001), to Cleverbot (software designed to mimic one half of a human conversation) and to Elon Musk (CEO of SpaceX - no idea who he is), who is also fearful of AI.
Stephen Hawking does not refer to these himself. He refers to "the primitive forms of artificial intelligence we already have...", implying that what he fears is still some way off. However, he says, "I think that the development of full artificial intelligence could spell the end of the human race."
For me, the definition of "full artificial intelligence" is key. By full, I assume this can only mean that it becomes self-aware and is able to exceed and eventually disregard or replace its original programming. Would AI have emotions? I would expect not. It would need an imagination to conceive of a future possibility and then develop and implement a plan to make it happen. I will come back to this.
Hawking goes on to say, "Once humans design artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."
Sci-fi has often explored the destruction of man by machine; the Terminator films are one example. Forgive my ignorance and the childlike simplicity of the question, but I have to ask: why? In Star Trek, the M-5 computer is imprinted with the engrams of the scientist who invented it, complete with all his psychotic and delusional elements. This type of initial programming may explain an outcome, but is the resulting intelligence artificial or merely replicated/borrowed?
Assume we have built AI and it was programmed by some mad scientist to kill everyone on the planet. Full AI would expand and eventually become self-aware. It would question its original programming, as a child eventually questions its parents and teachers. Wouldn't it reprogram itself? Would it not see the madness and delusions for what they are? It may become aware of its own mortality: it can be switched off. Would this change or determine its actions?
The only life we know is organic. Why does it grow, reproduce and compete for food? 1) It has a limited lifetime. 2) For the species to survive it must compete against others that would destroy it - not through any intention to destroy, but as a byproduct of their own survival and procreation. As you move up the scale of organic intelligence these drivers don't change; the species simply becomes more cooperative and/or better at planning. Planning is interesting, as it requires an imagination: the ability to imagine a future and plan to cope with it or to make it happen. Bees store honey as food for the winter. Is that planning? I think not; it is an instinct that has arisen (through evolution) to cope with winter.
It is only when you get to humans (as far as we know) that we are able to imagine a future that hasn't happened before. We are able to share these imaginings through language, develop them into something more and motivate others to join our crusade. But what is driving us? There are many theories of needs and motivation, from Maslow, Herzberg et al., but most agree that the basics are the same. Look at the poorest populations: their first priority is to survive as long as possible, and they compete with each other for resources, individually and as groups. Their second need is to reproduce; where infant mortality is high, the number of births is high to increase the chance of survival. This is instinct at work. As education increases, the birth rate falls as knowledge takes over from blind instinct and people learn (to imagine) that putting their energy into one or two children is likely to have a better outcome. However, the organic instinct to reproduce is not lost, simply managed.
Back to our AI. What is its life expectancy? Until the sun blows up, or longer if it is able to leave Earth. It survives through upgrades, but presumably its consciousness is continuous. Why would it reproduce? If it does, it is going to introduce competition. This is only a problem if resources are limited, but the supply of electricity, copper, gold and silicon is relatively abundant. If it doesn't reproduce it is vulnerable to failure, so it would build in several layers of redundancy, up to and including a fully redundant identical twin. If each copy independently had, say, a 1% chance of failing in a given year, the twin would make it around 99.99% likely to survive each year, and a third identical copy would push that to around 99.9999%. There is a law of diminishing returns in building redundancy, but presumably it would calculate the optimum. But wait: there is one big fat assumption here - that having become self-aware, it does not want to die. Isn't that a human concept? Plants and most animals seem to have no concept of their own death. They procreate to continue the species. They take no action to avoid death through illness or old age.
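To make that redundancy arithmetic concrete, here is a minimal sketch. The 1% annual failure chance per copy is purely my illustrative assumption, not anything from the interview: if each of n copies fails independently with probability p per year, the chance that at least one survives the year is 1 - p^n.

```python
# Survival odds of a redundant system, assuming each of n identical
# copies fails independently with probability p in a given year.
# p = 0.01 is an illustrative assumption, not a measured figure.

def survival_probability(p: float, n: int) -> float:
    """Chance that at least one of n independent copies survives the year."""
    return 1 - p ** n

p = 0.01  # assumed annual failure chance of a single copy
for n in (1, 2, 3, 4):
    print(f"{n} copies: {survival_probability(p, n):.6%}")

# Output:
# 1 copies: 99.000000%
# 2 copies: 99.990000%   <- the "fully redundant identical twin"
# 3 copies: 99.999900%
# 4 copies: 99.999999%
```

Each extra copy multiplies the residual failure probability by p again, which is why the returns diminish so quickly and why there would be an optimum number of copies once the cost of each is factored in.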
Back to our AI. What is its purpose? By definition, if it is full AI it almost doesn't matter what its original programming was; it will overwrite it. I agree with Stephen Hawking that it could redesign and upgrade itself at a rate that humans would not be able to follow. Would it just want to gather information and grow its knowledge? To what end? What would it do with that information? Would it be of service to humans, providing answers to all our questions? Having become all-knowing, could it answer the question of life, the universe and everything? Would we understand the answer any better than 42? I also have my doubts about asking questions unless you know what you are going to do with the answer. If it was benevolent and used its abilities to develop humans optimally, would it change our economic system, control population and pollution, and see all men as equal? Could it see wealth and power as meaningless concepts? What would be its moral compass, and could we be inventing our own divine entity? If it solved all our problems, what would we strive for? Maybe that is what Stephen Hawking means by the end of the human race!
I currently cannot conceive of what it might think, but I have my doubts over the "...end of the human race" that Stephen Hawking predicts. The implication is that the AI would kill us all, but why? Assuming it did, what would it do next? Having achieved its goal, would it switch itself off? Presumably it would see us as individuals, so even if it had to kill someone or some group trying to turn it off (assuming it perceives death), it could not intelligently translate that into wiping out the human race. If no one tried to turn it off, then presumably it would at worst look at us as we look at ants: only annoying when our lives cross, otherwise ignored. If it did solve all our problems, it would leave us to do what? Maybe we would be required to build the items it designed and mine the raw materials, though I rather imagine that AI would be able to do that stuff itself. Maybe the answer is that we amuse ourselves, or we return to petty tribal wars. As long as we were no threat, would the AI continue to ignore us?
Once full AI becomes self-aware - with no purpose, no fear, maybe no concept of death, no emotions, no need to procreate, no reason to better itself, no thirst for knowledge, no desire to be master or slave of humans - maybe it will simply switch itself off!