Technological Singularity

(Figure: Accelerating Change I)

In my English course I had to prepare a presentation on a topic related to new technologies. Having recently read the novel “Accelerando” by Charles Stross (available online for free), I had a suitable subject ready: the technological singularity. You may have heard of other types of singularities in mathematics or cosmology, but this one is quite different. I had better start with a definition:

The technological singularity refers to a point in the future at which artificial intelligence (AI) surpasses the human level.

So basically this is a future scenario of what could happen once we succeed in constructing a super-intelligent AI. Before I point out the implications of such an event, I present four scenarios of how it could come about:

Scenarios

Strong artificial intelligence

Today, artificial intelligences are mainly used in expert systems to analyse vast amounts of data. In certain disciplines such as chess (Deep Blue), computers have already managed to outperform the human brain.

The long-term vision of AI research is to construct the so-called Seed AI. It should be capable of understanding and manipulating its own programming, based on its own experience, in order to optimise its performance. If this mechanism were powerful (i.e. general) enough, the AI could learn like a growing child, only much faster. Consequently it could reach a human level of intelligence, but there is no reason why the process should stop there: in each further learning iteration it would become even more intelligent, with no limit in sight. This scenario of course requires a computer powerful enough to execute such a complex program.
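
To make this improvement loop a little more concrete, here is a deliberately tiny sketch (my own toy illustration, not an actual Seed AI design): an agent repeatedly proposes random changes to its own “strategy”, measures itself on a toy benchmark, and keeps only the changes that improve its score.

```python
import random

def performance(strategy):
    """Toy benchmark: how closely the strategy matches a hidden target.
    Stands in for whatever task the AI uses to measure itself."""
    target = [0.3, -1.2, 2.5, 0.7]
    return -sum((s - t) ** 2 for s, t in zip(strategy, target))

def self_improve(strategy, rounds=1000):
    """Crude stand-in for the Seed AI's loop: propose a change to the own
    'program', keep it only if the measured performance goes up, repeat."""
    best = performance(strategy)
    for _ in range(rounds):
        candidate = [s + random.gauss(0, 0.1) for s in strategy]
        score = performance(candidate)
        if score > best:  # keep only changes that actually help
            strategy, best = candidate, score
    return strategy, best

improved, score = self_improve([0.0, 0.0, 0.0, 0.0])
print("score after self-improvement:", round(score, 4))  # approaches 0, the optimum
```

The real scenario of course assumes that the thing being rewritten is the learning mechanism itself, which is exactly what would make the feedback loop explosive; a toy hill climber like this only hints at the shape of the process.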

Swarm intelligence

(Figure: Accelerating Change II)

Complex biological systems often consist of a linked combination of many simpler entities. Examples are colony-forming insects (ants, termites), every multi-cellular organism and even each cell itself. Using a big network of relatively simple (compared to a single strong AI) artificial intelligences, it should be possible to create the same kind of emergent behaviour. The crux of this approach is to find a powerful interaction mechanism, because simple “hello world” messages obviously would not boost the network’s observed intelligence.
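
As a toy illustration of emergence from simple parts (again my own sketch, not taken from any concrete swarm-AI proposal): each agent below answers a yes/no question correctly only 60 % of the time, yet the collective answer of a large group is right almost always.

```python
import random

def weak_agent(truth, accuracy=0.6):
    """A deliberately simple entity: correct with probability `accuracy`."""
    return truth if random.random() < accuracy else not truth

def swarm_answer(truth, n_agents):
    """The whole 'interaction mechanism' here is just a majority vote."""
    votes = sum(weak_agent(truth) for _ in range(n_agents))
    return votes > n_agents / 2

def hit_rate(n_agents, trials=5000):
    """Fraction of trials in which the swarm gives the right answer."""
    return sum(swarm_answer(True, n_agents) for _ in range(trials)) / trials

for n in (1, 11, 101, 1001):
    print(f"{n:5d} agents -> correct in {hit_rate(n):.1%} of trials")
```

A majority vote is of course a trivial interaction mechanism, which illustrates the point above: accuracy improves, but no qualitatively new behaviour appears. Finding an interaction scheme that produces genuinely new behaviour is the hard part.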

Brain-computer interface

This scenario does not rely on the creation of a purely artificial super-intelligence. Instead, the already ubiquitous use of computers and mobile phones intensifies over the years. The next step will probably be small head-mounted displays delivering information directly to the retina. Advances in neuroscience might then make it possible to construct brain implants that enhance our memory or provide new skills. In the end a complete brain-computer symbiosis could be reached, boosting our own capabilities beyond today's limits.

Mind uploading

Here the idea is to simulate an existing brain in a computer. Today it is possible to realistically calculate the behaviour of 2·10⁵ neurons linked by 5·10⁷ synapses. This is still far away from the number of cells in our brain (10¹¹), but if Moore’s Law keeps delivering its doublings for a few more decades, our computers will be able to deal with this complexity. Another approach would be to construct a brain simulator with specially adapted hardware. In any case, the scenario then suggests uploading the state of a real brain to the simulator, thus in a sense duplicating the original mind.
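
A quick back-of-the-envelope calculation with the numbers from above (my own sketch; I simply assume the simulable network size doubles every 18 to 24 months, one common reading of Moore’s Law):

```python
import math

simulated_neurons = 2e5   # neurons we can simulate realistically today (see text)
brain_neurons     = 1e11  # order of magnitude of cells in the human brain

# Number of capacity doublings needed to close the gap:
doublings = math.log2(brain_neurons / simulated_neurons)   # roughly 19

for months_per_doubling in (18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{doublings:.0f} doublings at {months_per_doubling} months each "
          f"~ {years:.0f} years")
```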

Consequences

(Figure: Accelerating Change III)

The existence of a super-intelligence would have a broad spectrum of consequences: most probably, scientific progress would accelerate dramatically. Humans would no longer be able to follow the new findings unless they had their brains “upgraded” in some way. Admittedly, most people don’t understand current science even today, but in a singularity future, humanity would have lost its place as the most advanced species on this planet.

The article “Economics Of The Singularity” by Robin Hanson deals – as the title suggests – mainly with the economic outcome of a technological singularity. An (admittedly oversimplifying) quotation:

The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month.

The article also points out parallels between a possible technological singularity and the industrial revolution, which had a comparable impact on the world economy.
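
To get a feel for what these doubling times mean, here is a small conversion I added (not from Hanson’s article): a doubling time translates into an annual growth factor of 2^(1/doubling time in years).

```python
# After one doubling time the economy has grown by a factor of 2,
# so the growth factor per year is 2 ** (1 / doubling_time_in_years).
doubling_times_years = {
    "15 years (today)": 15,
    "one month":        1 / 12,
    "one week":         1 / 52,
}

for label, t in doubling_times_years.items():
    factor = 2 ** (1 / t)
    print(f"doubling every {label}: growth factor {factor:.3g} per year")
```

A 15-year doubling time corresponds to roughly 5 % growth per year, while a weekly doubling would multiply the economy by a factor of about 10¹⁵ every year – which gives an idea of how drastic the claim is.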

But back to the future: a singularity’s impact on society would also be immense. Ethical questions like “Should simulated brains and artificial intelligences have civil rights?” would require answers, not to mention religious complications. A philosophical question that also arises is whether such an event would mark the next step of evolution.

Criticism

(Figure: Accelerating Change IV)

Speculations about an imminent singularity have been criticised. The biggest argument targets the lack of progress in research on strong AI (general problem solvers) during the last 30 years. Another point can be made against the extrapolation of currently accelerating progress (Moore’s Law): it is speculative whether the growth in processing power will continue unabated. A more general line of reasoning points out that, despite advances in neuroscience, the phenomenon of intelligence is not entirely understood. Even if we could build a computer capable of running a brain simulation, an understanding of how the interaction of neurons and synapses creates the observed macroscopic behaviour would be required to create the “simulation framework” that runs on such a hypothetical machine.

A completely different argument is that intelligence is not necessarily tied to “humanity”: a super-intelligence does not need to be friendly – just imagine a SkyNet-like super-intelligence as known from the Terminator films.

Summary

Although we surely won’t face a technological singularity within the next decade, several futurologists – the most prominent example being Raymond Kurzweil (online resource) – claim that this event will take place within the next 50 years. I personally think this forecast is way too “optimistic”. While I embrace the idea in general, the potential dangers of a singularity have to be taken seriously.

