Definition of Superintelligence
Artificial superintelligence (ASI) refers to AI that has far surpassed human intelligence: a system smarter than the smartest humans across virtually all tasks and domains of life.
Such machines would be able to synthesize (combine) information. Hypothetically, they could connect all the data in the world in real time while reprogramming themselves, without human intervention, to keep getting better.
Superintelligence is generally thought to be reached once a so-called intelligence explosion is triggered by AI systems with human-like capabilities (general AI). This intelligence explosion would set a string of self-improvement cycles in motion that could escalate existing AI capabilities towards superintelligence.
The limits of superintelligence are unknown. Many AI models can already go through a lifetime of human learning in a day – and this is still only very narrow artificial intelligence.
Elon Musk is a very vocal proponent of strict, proactive AI regulation (while discouraging regulation of many of his own disturbing ventures, such as Neuralink brain implants).
Schools of thought
There are various schools of thought on how to proceed with this level of uncertainty. Some fear a full-on battle between the species. Others foresee or hope for a merger between man and machine.
One school says we should proceed with extreme care, as these increasingly autonomous machines may pose an existential risk to humankind once they are physically stronger and can outsmart us by any standard. Basically, they would be better at almost everything.
This group of experts believes these machines may conclude that there is not much use for these troublesome humans.
Others say this will never happen: they dismiss it as a scare tactic and a big conspiracy theory.
This kind of denial is arguably unhelpful, because today's still very basic artificial intelligence already outperforms human brainpower on many fronts. Google's AlphaGo algorithm, for instance, beat the world champion of the Chinese board game Go and displayed unexpected creativity.
A third school assumes there will be a symbiosis between humans and machines, whereby artificial intelligence will augment us on all fronts. We, ourselves, will become these superintelligent super-beings.
This process is already ongoing. Most people rely on calculators rather than mental arithmetic, and few people travel without navigation tools.
Road map to superintelligence
- Current narrow AI systems are improved.
- Some systems reach human-like capabilities (general AI).
- An intelligence explosion is triggered.
- General AI escalates into superintelligence.
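The compounding dynamic behind the road map above can be sketched with a toy calculation. The 5% gain per cycle, the human-level baseline of 1.0, and the cycle count are purely illustrative assumptions, not predictions.

```python
# Toy model of recursive self-improvement: each cycle raises
# capability by a fixed fraction of the system's *current* capability,
# so the gains compound instead of adding up linearly.
def self_improvement(capability=1.0, gain=0.05, cycles=100):
    """Return capability after repeated self-improvement cycles."""
    for _ in range(cycles):
        capability += capability * gain  # each cycle builds on the last
    return capability

print(round(self_improvement(), 1))  # 1.05**100, roughly 131.5
```

Even a modest 5% per-cycle gain compounds into a more-than-hundredfold jump after 100 cycles, whereas adding a flat 0.05 each step would only reach 6.0. This compounding is the intuition behind the "explosion" label.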
Somewhere along the way, technological growth becomes uncontrollable and irreversible. This hypothetical point in time is known as the technological singularity, usually shortened to 'the singularity'.
It remains hypothetical because there are still so many unknowns, and not all scientists agree it will ever happen.
However, hardly anyone disputes that superintelligence, if achieved, would be the most disruptive force humankind has ever released.
In one expert survey, respondents put the probability at 31% that this development turns out 'bad' or 'extremely bad' for humanity.
Developments to watch
- OpenAI: its impressive text-generating artificial intelligence, GPT-3, is now covered by an exclusive deal with Microsoft.
- Google/Alphabet's DeepMind: focuses on creating superintelligence. It released AI Safety Gridworlds, raising the question of whether we should let this sector be self-regulated by the major US tech companies.
- Meanwhile, Facebook's Pluribus AI has defeated top poker professionals.
What is ‘exponential growth’?
Multidimensional, exponential growth is a difficult concept to grasp. We tend to think linearly and assess developments by comparing them with things we already know.
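The gap between linear intuition and exponential reality can be made concrete with a short comparison. The starting value of 1 and the 30 steps are arbitrary choices for illustration: a linear process adds a constant amount each step, while an exponential one multiplies by a constant factor.

```python
# Linear growth adds a fixed amount per step; exponential growth
# multiplies by a fixed factor (here: doubling) per step.
linear, exponential = 1, 1
for _ in range(30):
    linear += 1        # constant increment
    exponential *= 2   # doubling

print(linear)       # 31
print(exponential)  # 1073741824, i.e. 2**30
```

After only 30 steps, the doubling process is more than a billion, while the linear one has reached 31: this is why extrapolating exponential trends "by feel" tends to fail.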