Technology & Innovation

Busting the artificial intelligence myths

November 05, 2015

Global

David Harding

CEO

David Harding is the Founder and CEO of Winton Capital. He is an Executive Director of Winton Capital Group Limited and Chairs the Executive Committee. Since graduating from Cambridge University with a degree in Natural Sciences, he has started two of the world's leading quantitative investment companies: AHL and Winton. David’s philanthropy focuses on promoting scientific research and science education. He is an honorary fellow of the Science Museum and St Catherine’s College in Cambridge.

Ignore the hype and the fear, writes David Harding, CEO of Winton Capital Management. Recent developments in AI are not the beginning of the end of the human race, but are simply the latest step in the gradual evolution of computing

The robots are coming - or at least that is the increasing fear. Elon Musk describes the development of artificial intelligence as “summoning the demon”. Stephen Hawking has warned it could “spell the end of the human race”. The Pope’s recent encyclical warned of a technological paradigm whose onward march threatened our very humanity. When the elder statesmen of science and faith unite in declarations of doom, should we be very afraid?

This blog post is an attempt to disentangle the hyperbole from the hard facts. There are a few recurrent tropes in the debate on AI: that artificial intelligence is ‘coming’, that it is to be feared, and that it will devalue humanity. I would argue – from the perspective of thirty-odd years working in the field of technology, computing and data – that all three of these are myths.

First, let’s look at the idea that ‘robots are coming’. Much of the current noise suggests that some great step-change in AI is imminent or happening at this very moment. But what we are witnessing is not a spike on the graph of technological progress. It is just the latest incremental step on a long and gradual journey.

Artificial intelligence stretches back to the analogue age. It is fifty years since computer programmes could find the prime factors of large numbers in a second, and almost twenty since Deep Blue beat the reigning world chess champion, Garry Kasparov. AI is so well established that it lies behind the most trivial diversions of our lives, like Angry Birds, which recreates the laws of physics on the touch screen of a tablet computer.

Proper humanoid robots are also here. Innovators like Hanson Robotics have devised eerily lifelike machines that can already take part in rudimentary conversation and will soon be able to recognise the facial expressions of the people they interact with.

This, again, is not a bolt-from-the-blue innovation. Inferring emotions from a human face is simply a matter of recognising a pattern of lines and curves in its features. Pattern recognition is the most basic unit of intelligence, and computers have been doing it for years. It is somewhat puzzling to see the hype of ‘newness’ that surrounds AI when in fact we have been quietly getting on with these things for decades.
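
To make that concrete, here is a minimal sketch of pattern recognition as nearest-centroid classification, written in Python with invented feature values rather than anything taken from a real system: a new face is labelled by comparing its measurements with the average of known examples.

import math

# Hypothetical training data: (mouth_curvature, eyebrow_angle)
# measurements for faces we have already labelled.
EXAMPLES = {
    "smile": [(0.8, 0.1), (0.7, 0.2), (0.9, 0.0)],
    "frown": [(-0.6, -0.3), (-0.8, -0.2), (-0.7, -0.4)],
}

def centroid(points):
    """Average each coordinate to get a class's typical pattern."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(features):
    """Label a new face by its distance to each class centroid."""
    return min(
        EXAMPLES,
        key=lambda label: math.dist(features, centroid(EXAMPLES[label])),
    )

print(classify((0.75, 0.05)))    # -> smile
print(classify((-0.65, -0.25)))  # -> frown

Nothing here is new: nearest-neighbour methods of this kind date back to the 1950s, which is rather the point.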

So the robots are not coming; they are already here. That means the terms of debate are wrong: when Musk, Hawking et al sound warnings about artificial intelligence, they are in fact talking about emergent intelligence – the prospect of computers doing things that no human has required them to do, an intentionality that comes from within, not without.

Will we ever see the Frankenstein moment when the machines take on life and intentionality of their own? I think not. Computers are thrilling because they provide increasingly fine metaphors for the human brain, but a metaphor will always remain a metaphor, an imitation. It is just about possible to imagine some freak moment of sci-fi transformation, when the mutation of atoms might produce some innate intentionality, but it seems vanishingly unlikely. My belief is that we will never simulate the experience of being human – or the ego and greed that come with being human – and so the fear of some breed of robots bent on conquering the earth remains fanciful.

Artificial intelligence and employment

There is, of course, the more prosaic fear about the effects of automation on employment: that robots will take over our jobs, leading to epic idleness and unhappiness. Supermarkets now rely heavily on automatic self-service scanners. Personal injury lawyers already fear that Google’s driverless cars will put an end to their ambulance-chasing careers. In white-collar professions, the computers are advancing too: replacing paralegals, financial advisers, even doctors.

But the notion that automation will displace the need for human labour is just the latest iteration of the ‘lump of labour’ fallacy – the idea that the amount of work available in a given economy is fixed and finite – which has been proved wrong time and again, most notably by the Industrial Revolution. Industrialisation displaced some jobs and created many more. Once robots free up human hands and heads, those hands and heads will be put to use creating new services and entertainments that make life more enjoyable. Computers will release a lot of people from the drudgery that currently enslaves them.

This brings us to the final AI myth: that the growing intelligence of machines will somehow devalue humanity and make life worse. I passionately disagree. Computers are now cheap, powerful and ubiquitous enough to do extraordinary things with vast quantities of data – as in the rapidly moving field of medical diagnosis. Computers are simply much better than any human being at analysing large data sets, which will allow them to diagnose the rarest and most complex of diseases, so alleviating much human suffering.
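
As a toy illustration of what that analysis amounts to – the conditions and symptom profiles below are invented, not real medical data – consider ranking a catalogue of conditions by how well each matches a patient’s symptoms. A computer runs this kind of exhaustive comparison across millions of records in moments:

# A toy sketch of diagnosis as pattern matching over a data set.
# The catalogue is invented for illustration; the point is that a
# machine can score every entry exhaustively, however many there are.

CATALOGUE = {
    "condition_a": {"fever", "rash", "joint pain"},
    "condition_b": {"fatigue", "rash", "night sweats"},
    "condition_c": {"fever", "cough", "fatigue"},
}

def rank_matches(symptoms):
    """Score every condition by symptom overlap, best match first."""
    scores = {
        name: len(symptoms & profile) / len(profile)
        for name, profile in CATALOGUE.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

patient = {"fever", "rash", "fatigue"}
for name, score in rank_matches(patient):
    print(f"{name}: {score:.0%} of profile matched")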

Science will continue to open up enormous social, cultural and economic possibilities for billions of people. But here is the thing: these great leaps forward will not be achieved by the long-hyped fads and trends under the AI umbrella. Real scientific progress will be made the way it always has been: through the long slog of trial and error, each generation standing on the shoulders of those who have gone before. Think of the vast resources and incredible amount of time that have been poured into the hunt for the Higgs boson – thousands of scientists from dozens of countries taking over 50 years to look, essentially, for patterns in data.

This is the true business of science, not the hype of passing fads. There is no great step change just around the corner for artificial intelligence. There is no fundamental discontinuity between current data analysis techniques and the ‘deep learning’ start-ups that are teaching computers to respond to the patterns in human conversation or, as Google Photos now does, to recognise the unique patterns in millions of individual faces. What we are witnessing is not a sudden shift – a rupture – but a continuation of the information age. And long may the revolution continue.

 

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of The Economist Intelligence Unit Limited (EIU) or any other member of The Economist Group. The Economist Group (including the EIU) cannot accept any responsibility or liability for reliance by any person on this article or any of the information, opinions or conclusions set out in the article.
