Ultimate fate of humanity: technological singularity, singleton and space colonization

Sometimes I read funny and very unlikely predictions about the far future of humanity. Here is a prediction that in 100,000 years there will probably be two human species due to sexual selection. humanknowledge.net predicts instead that in 1,000 years we will still be struggling with DNA engineering, that in 8,000 years we will start to terraform Mars or Jupiter, and that only after 100,000 years will more than half of humanity live outside Earth. Those examples look, no offense, a bit naive to me, but they reflect a common way of thinking about humanity. There is this strange belief that human beings will be around forever (millions of years), and that we will be struggling with more or less the same issues we have today: death, illness, individualism, lack of resources (mining on other planets is a laughable scenario), just with better technology. On the other hand, there is a belief that humanity will destroy itself through its own greed. Frank Fenner, a scientist who played a major role in the eradication of smallpox, predicts human extinction in just 100 years due to overpopulation and lack of resources. Kudos to Frank Fenner, but I think those scenarios are shaped by a much shorter-term political view.

I think that the technological singularity will create a totally different scenario, one that leads to a single emerging artificial intelligence, a singleton, rather than many individual human/artificial people. The next figure shows my personal view of the timeline of our civilization (theoretically I could define other ages before prehistory, but I am not much interested in the past).

[Figure: civilization ages timeline]

Prehistory. This part is the slowest. The “technology” here is represented by unwritten knowledge. The structure of society, farming and breeding methods, the division of labour, the control of fire, and manufacturing methods are all forms of unwritten knowledge. They can evolve, but it is a very slow process that also involves many steps backward.

History. With writing systems, technological development can be faster and more monotonic. Technological regressions are more difficult, even if still possible, as in the Middle Ages. Human knowledge accumulates, and this can already be considered a chain reaction, the beginning of the technological singularity. With books, humans can use the work and discoveries of their predecessors, becoming smarter and more powerful.

Computer age. With computers we are not plain humans anymore, we are computer-assisted humans! Our intelligence is boosted by the huge computational power of our machines. Theorem proving, engineering design, optimization strategies… all these things are usually computationally intensive, so we can obtain results that humans without a computer would never be able to achieve. The speed and the quantity of information available to us are also much greater and more immediate. This is another level of chain reaction: computer-assisted humans get smarter, so they can design better technologies, and so they get smarter again.

(and now the future…)

Transhumanism is another level of the chain reaction that will transform our civilization, and one of the outcomes of the technological singularity that I consider very likely to happen. Our society is made of many individuals weakly connected to each other. What do I mean by “weakly connected”? It’s not a rant about consumerism, selfishness and capitalism; it’s about how our bodies work. The quantity of information exchanged between two individuals will always be much, much smaller than the quantity of information exchanged inside the brain of one individual. Inside our brain, billions of neurons fire every few milliseconds; different parts of the brain are interconnected through trillions of synapses, resulting in an information bandwidth of petabytes per second. Communication between two people is verbal, written and made of body language. If you think about the quantity of information that your brain is actually able to receive from another person, you won’t get more than about 100 bits per second.
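To make this gap concrete, here is a back-of-the-envelope calculation in Python. The synapse count, signalling rate and bits-per-event figures are rough order-of-magnitude assumptions, not measurements, and the speech bandwidth is the upper bound quoted above:

```python
# Rough comparison of intra-brain vs. person-to-person bandwidth.
# All constants are order-of-magnitude assumptions.

SYNAPSES = 1e15        # assumed synapse count (commonly quoted as 10^14-10^15)
FIRE_RATE_HZ = 10      # assumed average signalling rate per synapse
BITS_PER_EVENT = 1     # assume one bit conveyed per synaptic event

intra_brain_bps = SYNAPSES * FIRE_RATE_HZ * BITS_PER_EVENT  # bits per second
speech_bps = 100       # generous estimate for speech plus body language

print(f"intra-brain bandwidth : {intra_brain_bps:.1e} bits/s "
      f"(~{intra_brain_bps / 8e15:.1f} petabytes/s)")
print(f"speech bandwidth      : {speech_bps:.1e} bits/s")
print(f"ratio                 : {intra_brain_bps / speech_bps:.0e}")
```

With these assumptions the internal bandwidth comes out around a petabyte per second, roughly fourteen orders of magnitude above what two people can exchange by talking.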

What happens with transhumanism instead? These are the key points:

  • Computers will become more intelligent than humans, and they will in turn design even smarter computers. This is the intelligence explosion (a toy numerical sketch of this feedback loop follows the list).
  • Humans will be able to increase their intelligence inside a biological, carbon-based framework, for example through some kind of DNA engineering, or through merging with artificial devices, let’s say silicon-based. Mind uploading will make humans immortal.
  • Artificial brains will be scalable. Brain merging will make it possible to create a single, deeply interconnected brain network. Once their brains are merged, two people will no longer exchange a few bits per second but petabytes per second.
  • During this phase, normal human beings will coexist with artificial brains and mixed human/artificial brains. Some people will refuse to merge their brains into a thinking network. It is a fact, however, that the biggest and most powerful thinking network will lead technological development and will be the one in charge of political decisions. I don’t see this scenario as a conflict between super-intelligent networks and smaller networks. We are talking about agents much smarter than us, with huge problem-solving capability. Hunger, political ideology and labour exploitation won’t be issues. Thinking networks will cooperate and will be free to merge with each other. In the end a single massive thinking network, a singleton, will lead technological development. Again, normal humans and isolated AI agents will probably still exist, and they will live a pleasant life (hopefully), but they won’t be the dominant form of life.
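Here is the toy sketch of the intelligence-explosion feedback loop promised above. The update rule and the gain constant are purely illustrative assumptions; the only point is that multiplicative feedback produces exponential growth:

```python
# Toy model of recursive self-improvement: each generation designs its
# successor, and design capability scales with current intelligence.
# The 50% gain per cycle is an arbitrary illustrative assumption.

intelligence = 1.0  # arbitrary units; 1.0 = baseline human level
GAIN = 0.5          # assumed fraction of capability turned into improvement

for generation in range(1, 11):
    intelligence += GAIN * intelligence  # smarter designers improve faster
    print(f"generation {generation:2d}: intelligence = {intelligence:8.2f}")

# Multiplicative feedback gives exponential growth (1.5**n here): the
# "chain reaction" behind the intelligence explosion.
```

Any model where the rate of improvement grows with the current level of intelligence shows the same qualitative behaviour, whatever the exact constants.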

Singleton age. A single dominant thinking network will explore space, develop its technology and improve itself. I don’t think of this singleton as a sad, lonely conscious mind in the middle of nowhere in the universe; instead I imagine it as a chaotic sequence of thoughts, calculations, proposals, ideas, decisions and so on. It will be like a room with 10^18 people talking simultaneously, where each person can fully understand what every other person is saying.
What happens next? I have no idea about future technologies; I can only say that at a certain point there will be a technological saturation. Technological improvements won’t be possible anymore: the laws of physics will be fully understood and their applications fully explored. The only scarce resource will probably be mass/energy (or something else?). After technological saturation, near-light-speed space exploration and colonization will become possible, so the singleton will be able to expand across the universe, transforming the matter it encounters in useful ways. That is the cosmological age.
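To get a feeling for the timescales of this expansion, here is a quick calculation assuming an average expansion speed of 0.99c (an arbitrary stand-in for “near-light speed”) and standard round-number astronomical distances, as measured in the galaxy’s rest frame:

```python
# Expansion timescales at near-light speed, in the galaxy's rest frame.
# Distances are standard round figures; the 0.99c speed is an assumption.

SPEED_C = 0.99  # assumed average expansion speed as a fraction of c

targets_ly = {
    "across the Milky Way":          1.0e5,  # ~100,000 light-years
    "to Andromeda (M31)":            2.5e6,  # ~2.5 million light-years
    "across the Local Group":        1.0e7,  # ~10 million light-years
    "across the Virgo Supercluster": 1.1e8,  # ~110 million light-years
}

for name, distance_ly in targets_ly.items():
    years = distance_ly / SPEED_C  # years = light-years / (speed in units of c)
    print(f"{name:31s}: ~{years:.1e} years")
```

Even at near-light speed, swallowing the Local Group takes on the order of ten million years, so the cosmological age would unfold over timescales that dwarf all of human history.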

Cosmological age. A single AI network eating the Milky Way, then the Andromeda galaxy, then the whole Local Group, then the Virgo Supercluster, and so on. What will this singleton do? There are several options:

  • It (or should I say “we”?) will explore space looking for singularities, strange things, everything that is uncommon. The search for God could fall into this category: looking on every single planet for a “miracle” or an anomaly. This hyper-smart AI won’t get much additional information by studying the universe, as it will already know how stars form, how biogenesis works and how dark matter behaves. It will only be interested in anomalies, in anything that can still increase its knowledge.
  • It will start proving theorems, capturing as much mass as possible (stars, neutrinos, dark matter, etc.) to convert into energy for its calculations. Mathematics is a field that will never be fully explored. Maybe this AI will burn billions of solar masses to prove Goldbach’s conjecture, to decide whether P = NP, or to settle some other hypothesis (a toy example of this kind of search follows the list). So it won’t be interested in the universe anymore, as every physical phenomenon will be predictable and explainable (and therefore boring). It will just burn entire galaxies out of curiosity.
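As the toy example promised in the last bullet, here is a brute-force Goldbach checker in Python. It verifies the conjecture for small even numbers; the point is that exhaustive verification consumes ever more compute as n grows and, covering only finitely many cases, can never prove the general statement:

```python
# Brute-force check of Goldbach's conjecture: every even n > 2 is the
# sum of two primes. Exhaustive checking illustrates how such searches
# devour compute without ever settling the conjecture.

def is_prime(n: int) -> bool:
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if none exist."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 51, 2):
    p, q = goldbach_pair(n)
    print(f"{n} = {p} + {q}")
```

A single counterexample would disprove the conjecture, but no amount of finite checking proves it; that asymmetry is exactly why a curious singleton could sink unbounded energy into problems like this.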

A couple of issues about this cosmological age still need to be analyzed, but I will do that in another post:

  • How will this singleton behave when it encounters a civilization with a much lower technological level?
  • Will the singleton be afraid of other alien singletons? This question depends on the answer to the previous one, and it is quite interesting because it could deeply affect the scenario I imagined. If the singleton is afraid of alien singletons, it could conclude that the best strategy is to operate in “stealth mode”. I mean, if the singleton starts destroying stars and planets in order to convert their mass into energy, an alien singleton could notice the change in brightness and spectral emission and take hostile action. If so, a singleton will try not to change the emission spectrum of the volume it controls, avoiding Dyson spheres and leaving stars intact. This could be a plausible explanation of the Fermi paradox: the universe is actually full of alien civilizations, some of them maybe even close (i.e. in the Milky Way), but they are afraid of getting caught, so they stay hidden (a minimal numerical sketch of this trade-off follows).
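Here is that minimal expected-value sketch of the stealth argument, with made-up probabilities and payoffs chosen only to show the shape of the trade-off:

```python
# Toy expected-utility comparison of "loud" vs. "stealth" expansion.
# Every number here is an illustrative assumption, not an estimate.

P_DETECTED_LOUD = 0.10      # assumed chance a hostile singleton notices
P_DETECTED_STEALTH = 0.001  # assumed chance when hiding emissions
PAYOFF_LOUD = 100.0         # value of fast, visible expansion (Dyson spheres)
PAYOFF_STEALTH = 60.0       # value of slower, spectrally quiet growth
PAYOFF_DESTROYED = -1e6     # catastrophic loss if found by a hostile peer

def expected_value(p_detected: float, payoff: float) -> float:
    """Expected payoff given a detection probability."""
    return (1 - p_detected) * payoff + p_detected * PAYOFF_DESTROYED

print(f"loud expansion   : {expected_value(P_DETECTED_LOUD, PAYOFF_LOUD):12.1f}")
print(f"stealth expansion: {expected_value(P_DETECTED_STEALTH, PAYOFF_STEALTH):12.1f}")
```

As long as being found is catastrophic and stealth cuts the detection probability enough, the quiet strategy wins even though it grows more slowly, which is the intuition behind this “stealth mode” reading of the Fermi paradox.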