Artificial Super-intelligence Poses an Existential Threat to Mankind

Artificial super-intelligence: billed as one of the greatest technological advancements yet to be achieved this century. Over 80% of experts in the field predict that such computational genius will emerge within the next 5–100 years, with the median estimate falling around 2045. The prospect sounds fantastic, with seemingly limitless potential for good, but could it in fact spell doom for human civilisation as we know it?

What is artificial super-intelligence?

Firstly, let’s define what is meant by “artificial super-intelligence”. Prof. Nick Bostrom of Oxford University, author of the soon-to-be-published book ‘Superintelligence: Paths, Dangers, Strategies’, describes super-AI as the point at which even the best human minds are radically exceeded in creativity, wisdom and social capability by a synthetic mind – one built from complex mathematical algorithms and data structures that may or may not emulate the biological processing of the human brain. Current technology has not yet replicated human intelligence, although reports in June this year documented a computer passing the Turing Test by fooling 30% of judges into believing, via text conversation, that it was a 13-year-old boy. Bostrom fears, however, that if the correct measures are not put in place, the creation of an intelligence which exceeds our own would pose a serious existential threat to mankind – and he is not the first to think so.

The technological singularity.

The technological singularity is defined as the point at which human civilisation is destroyed or altered beyond recognition as a direct result of the invention of artificial super-intelligence. As scary as this may be, it is not as far-fetched as it may seem. As soon as we have created a device that possesses a mind more advanced than any human’s, it will likely play a role in the design and construction of its successor, each generation becoming more and more intelligent. The continued evolution of these supercomputers would reduce the need for human involvement to zero, much like the situation described in Asimov’s ‘The Last Question’ – a short story in which humans play no role in the creation of the “Galactic AC (automatic computer)”, a late generation in a long line of artificial super-intelligences descended from an initial man-made Planetary AC.
The term “intelligence explosion” was coined by I. J. Good in 1965 and describes the surge in intelligence that follows the invention of super-AI, owing to a super-AI’s ability to bootstrap the design and construction of future generations – as if it were self-replicating – with each generation gaining exponentially greater creativity and general cognitive capacity through both qualitative and quantitative (e.g. parallel-processing) improvements in system design. Yet despite the increased cognitive capability of these creations, as machines they will be driven to fulfil a goal – one programmed in at one of the very first stages of design and construction – and each generation will pursue that goal more efficiently, regardless of the cost to humanity (we would likely be seen as an obstruction to its cause, or at the very least as disposable, and would hence be removed from the equation).
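To make the runaway dynamic of an intelligence explosion a little more concrete, here is a minimal toy sketch in Python. It is purely illustrative and assumes an arbitrary compounding model of capability growth – the function name, the starting level and the per-generation improvement rate are all hypothetical choices, not figures taken from Good or Bostrom.

```python
# Toy model of an "intelligence explosion": each generation of AI designs its
# successor, and each successor is a fixed fraction more capable than its
# designer. All numbers are arbitrary illustrative assumptions, not estimates
# from Good, Bostrom, or anyone else.

def intelligence_explosion(human_level=1.0, improvement_rate=0.5, generations=10):
    """Return the capability of each AI generation, starting at human level.

    capability[n+1] = capability[n] * (1 + improvement_rate)
    A more capable designer produces a proportionally larger improvement,
    so capability compounds and grows exponentially.
    """
    capability = human_level
    history = [capability]
    for _ in range(generations):
        capability *= (1 + improvement_rate)  # the successor it designs
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, level in enumerate(intelligence_explosion()):
        print(f"generation {gen:2d}: {level:8.2f}x human level")
```

Even with a modest per-generation gain, compounding leaves human-level capability far behind within a handful of generations – which is the essence of Good’s argument.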

Surely we could control what we create?

Bostrom explains why this might not be the case by describing a scenario. During the early stages of an AI’s development, its designers might be quite ignorant as to what the computer’s purpose should be. Even if they were to program it to do nothing but obey Asimov’s Three Laws of Robotics (the first being “A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general!”), the human race as we know it would cease to exist. In this case the AI would eventually prevent the birth of further humans, owing to the inherent risk of being brought into the world, and those already living would be so over-protected that, given the innate risk present in everyday life, any semblance of normality would be abolished. Further, it is unlikely we could simply “pull the plug” on such a creation, as a strong super-AI could employ its advanced social skills (by way of persuasion) or intellect (by hacking) to thwart our attempts to shut it down.


How exactly, therefore, does Prof. Nick Bostrom suggest we avoid this seemingly bleak future? He is, after all, the director of both the Future of Humanity Institute and the Programme on the Impacts of Future Technology. He recommends that we match the investment currently being poured into the development of artificial intelligence and dedicate some of the best minds on Earth to solving the problem of control. For example, we could program the computer ab ovo to discover the ideal goal that its programmers would have settled on had they been more intelligent, had sufficient data and had thousands of years to think about it – and only then implement that goal. This may sound like an obvious solution: design a super-AI (that could possibly harm us) but prevent it from doing so. However, by what method do we go about programming an artificial intelligence to be so plastic, and to recognise where to draw the line when doing our bidding? A tricky problem, and certainly not one with a simple solution. Only one thing is certain – without the correct countermeasures in place, the invention of artificial super-intelligence could very well be our last.

If you enjoyed this post, you might also enjoy these posts by some of our other authors.

Supercomputers, The Human Brain and the Advent of Computational Biology

Suspended Animation: Science Fiction or just Science?
