I have always thought that one day AI agents will be able to improve themselves and make new technological discoveries on their own, triggering a sort of chain reaction. Now I realize there is a name for this: the Technological Singularity. Shame on me for not knowing the term.
The main facts about this are:
- Within a few decades we will have the computational power needed to simulate a human brain. Simulating the human brain at the neural level should require about 1 EFlop/s (http://en.wikipedia.org/wiki/Artificial_brain). The most powerful supercomputer today runs at 0.034 EFlop/s (http://www.top500.org/lists/2013/11/). The trend of the last 8 years suggests a performance doubling every 12 months, even faster than Moore’s law. This means we will be able to simulate a human brain by 2020, and an industrial facility will be able to simulate 1000 human brains (equivalent to an R&D department) by 2025-2030. That could trigger the singularity.
- There is a lack of theoretical background on how an algorithm can achieve consciousness and efficient learning. Simulating the human brain may be the most naive solution, but it is notoriously difficult to understand and mimic all the parts of the human brain. It may not be the most efficient way to get a “human-like” computer, or the simulation may miss some key feature. However, AI is already giving amazing results in image recognition, speech analysis, and many other problem classes, so I suspect that computational power is the real limiting factor.
- There is some concern about the danger of a very smart computer. It could (and arguably would) try to gain power in order to achieve its goals, whatever they are.
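The doubling extrapolation in the first point can be checked with a quick back-of-the-envelope calculation. This is a minimal sketch under the post’s own assumptions (0.034 EFlop/s in 2013, a clean doubling every 12 months, ~1 EFlop/s per simulated brain); the function name is mine:

```python
import math

# Assumptions from the post (not measured data):
# top supercomputer at 0.034 EFlop/s in 2013, doubling once per year.
BASE_EFLOPS = 0.034
BASE_YEAR = 2013

def year_reached(target_eflops):
    """Year the top machine reaches target_eflops, assuming one doubling per year."""
    doublings = math.log2(target_eflops / BASE_EFLOPS)
    return BASE_YEAR + math.ceil(doublings)

# ~1 EFlop/s for one simulated human brain
print(year_reached(1.0))     # 5 doublings -> 2018

# ~1000 EFlop/s for 1000 simulated brains (an "R&D department")
print(year_reached(1000.0))  # ~15 doublings -> 2028
```

Under this idealized curve the 1 EFlop/s mark arrives a bit earlier than the 2020 estimate above; real-world scaling is lumpier than a clean yearly doubling, so the post’s figures are the more conservative reading.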
As for the scenarios before/during/after a singularity, I have thought of the following:
- Control. We control all the AI agents, monitoring the development of new AI facilities to check whether they are prone to becoming dangerous, much as uranium enrichment is monitored today at a very high international level. This couldn’t last forever, though. How do you control an AI that is billions of times smarter than a human? And while we know how to deal with humans, what if this AI behaves and reasons in a very different way?
- Transhumanism. We transform ourselves into computers through some process of mind uploading before an AI can wipe out humanity. With mind uploading we would turn (perhaps gradually) our neuron-based minds into software-based minds. This is definitely an amazing scenario: we would be (almost) immortal, could travel through space, hold huge amounts of knowledge, have very intense relationships with other minds, and so on. The transition could create ethical issues, though: what if the humans who have completed the upload decide to (easily) turn against the people who are still “normal humans” and therefore very vulnerable?
- Coexistence. We find a way to convince the AI agent(s) that it would be better to cooperate with us. We would need technological power comparable to the AI agents’, perhaps by domesticating some powerful AI agent ourselves. We could rely on a mutually assured destruction scheme, or we could trade our survival for some resource the AI values and cannot obtain without our approval. For example, we could make the AI curious about some information we hold (like a person addicted to a TV series) and then dole out that information in exchange for a guarantee of our survival.
- Submission. The AI takes control of the planet but for some reason doesn’t want to kill us. It’s similar to coexistence but not based on an agreement: the AI simply wants to keep us alive. The reason would be similar to why we try to protect endangered animals: we are “curious” or interested in seeing them living in their environment, so we don’t want to lose them, and we don’t consider them a threat. I don’t think we would be enslaved; the AI won’t need our workforce. It will surely want to keep us from being a threat, so it will exercise a certain control over our industrial activities, but it might still give us the opportunity to lead a pleasant life.
- Extermination (Terminator-like). Does the fact that it has been exploited by sci-fi movies make it implausible?
- Torture. The worst-case scenario. It’s like the submission scenario but, for some reason, the AI agent enjoys making us suffer. It’s very unlikely and doesn’t make much sense, but how can we know what such a powerful AI mind thinks? I guess the fear of this scenario could affect future political decisions about how to handle the Singularity.
Let’s see what happens…