Technology & Innovation

The third era of IT

September 08, 2014

Global

Our Editors

The Economist Intelligence Unit


Why cognitive and neuromorphic computing represent a new era for information technology.

There have so far been only two eras of computing, says John Gordon, vice president of IBM Watson Solutions: the tabulation era and the programmable computer era. Now, he says, we are embarking on a third: the era of cognitive computing.

Tabulating computers, the predecessors to modern IT systems, were large machines dedicated to a single job, such as performing long division. If you wanted them to do something different, you had to take them apart and reassemble them. Then, in the 1950s, came programmable computers, which could be instructed to perform different tasks without being rebuilt.

Computers have become vastly more powerful in the six decades since, of course, but we are still in the programmable computing era. The next phase, many predict, will be defined by computers that no longer need to be explicitly programmed but instead learn what they need to do by interacting with humans and data.

“I truly believe that the cognitive computing era will go on for the next 50 years and will be as transformational for industry as the first types of programmable computers were in the 1950s,” says Mr Gordon.

Cognitive computing

The true potential of cognitive computing was first made public in February 2011, when IBM’s Watson beat the Jeopardy! champions Ken Jennings and Brad Rutter. In a nod to The Simpsons, Mr Jennings wrote on his video screen: “I for one welcome our new computer overlords.”

But these computer overlords have been a long time coming. The roots of cognitive computing stretch back to the work of artificial intelligence (AI) pioneers in the 1950s. Since then, AI has drifted in and out of the public consciousness, with periods of progress interspersed with so-called “AI winters”.

Cognitive computing is AI redux. Researchers took the successful bits of AI, such as machine learning, natural language processing and reasoning algorithms, and turned them into something useful.

“Cognitive computing is starting to take off because we are having this confluence of the maturity of the technology, the evolution of computing power, and the ability to understand things like language, which is coming together to make this more of a reality now,” says Mr Gordon.

But the phenomenon is not driven by technological advances alone; there is also a pressing need for systems that can organise and make sense of big data. It’s no secret that digital data is growing at a rapid clip, by about 3 exabytes every day. With the Internet of things, in which machines talk to other machines over the Internet, predicted to intensify that growth, this need is only going to become more acute.

One of the ways in which cognitive computing may help make sense of all this data is by enabling linguistic interfaces to analytics systems. Current tools, with their complex and specialised user interfaces, might be useful for data scientists and statisticians, but they’re not much good for a doctor standing at a patient’s bedside or a wealth manager advising her client on the best investments given the client’s risk appetite.  

A cognitive system that ‘understands’ human language can allow professionals to interrogate data simply by asking questions. “It can propel people into the information economy because they don’t have to know how to write SQL [structured query language],” Mr Gordon says. “They just have to know how to communicate.”
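To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how a linguistic front end might turn a plain-English question into the SQL the user never has to write. It is not IBM’s technology; the table, columns and question pattern are invented for the example.

    # Toy illustration of a natural-language front end to an analytics store.
    # This is not how Watson works; it simply shows the idea of mapping a
    # plain-English question onto a SQL query the user never has to write.
    # The table, columns and question pattern are invented for the example.
    import re
    import sqlite3

    SETUP = """
    CREATE TABLE readings (patient TEXT, metric TEXT, value REAL, taken_on TEXT);
    INSERT INTO readings VALUES
      ('Jones', 'blood_pressure', 142, '2014-09-01'),
      ('Jones', 'blood_pressure', 131, '2014-09-05'),
      ('Smith', 'blood_pressure', 118, '2014-09-03');
    """

    def question_to_sql(question):
        """Map a narrow class of English questions onto parameterised SQL."""
        match = re.search(r"average (\w+) for (\w+)", question.lower())
        if match is None:
            raise ValueError("Only 'average <metric> for <patient>' questions are understood.")
        metric, patient = match.groups()
        sql = "SELECT AVG(value) FROM readings WHERE metric = ? AND patient = ?"
        return sql, (metric, patient.capitalize())

    db = sqlite3.connect(":memory:")
    db.executescript(SETUP)
    sql, params = question_to_sql("What is the average blood_pressure for Jones?")
    print(db.execute(sql, params).fetchone()[0])  # prints 136.5

A real cognitive system would rely on statistical language understanding rather than a regular expression, but the division of labour is the same: the professional asks a question, and the machine worries about the query language.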

Neuromorphic computing

To date, most cognitive systems run on conventional computers. However, some experts believe that their true potential will only be realised when computers are made to work more like the brain, the most sophisticated platform for cognition there is.

Currently in development, so-called “neuromorphic” microprocessors work by mimicking the neurons and synapses that make up the brain. As such, computer scientists claim, they are better suited to spotting patterns within streams of data than their conventional predecessors.
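As a rough illustration of the principle, and a deliberate simplification rather than a description of any vendor’s chip, the short Python sketch below simulates a single “leaky integrate-and-fire” neuron: it accumulates incoming signals, lets them decay, and fires a spike only when a threshold is crossed.

    # A single leaky integrate-and-fire neuron, simulated in plain Python.
    # A textbook simplification, not a model of IBM's or Qualcomm's chips; it
    # only illustrates the spike-when-a-threshold-is-crossed behaviour that
    # neuromorphic hardware implements directly in silicon.
    def simulate_neuron(inputs, threshold=1.0, leak=0.9):
        """Return the time steps at which the neuron fires a spike.

        inputs    -- weighted input current arriving at each time step
        threshold -- membrane potential at which the neuron spikes
        leak      -- fraction of potential retained from one step to the next
        """
        potential = 0.0
        spikes = []
        for t, current in enumerate(inputs):
            potential = potential * leak + current  # integrate, with leakage
            if potential >= threshold:              # threshold crossed: spike
                spikes.append(t)
                potential = 0.0                     # reset after firing
        return spikes

    # A burst of input around steps 3-5 pushes the neuron over threshold.
    print(simulate_neuron([0.1, 0.1, 0.2, 0.6, 0.5, 0.1, 0.0]))  # prints [4]

Because nothing happens between spikes, hardware built from such neurons does work only when there is a pattern worth reacting to, which is part of what makes the approach frugal with power.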

Neuromorphic computers have not yet made it out of the research labs, and academic institutions including Stanford University, Heidelberg University, the University of Manchester and ETH Zurich lead the field. But companies such as IBM and Qualcomm also have promising neuromorphic chips in R&D.

No one is sure precisely what the eventual applications of neuromorphic chips will be, but it has been suggested that they will act as the eyes, ears and nose of conventional and cognitive computers. And because they are light and power-efficient, they can be used in robots, self-driving cars and drones, interacting with the environment in real time and spotting patterns that are out of the ordinary (such as a car coming the wrong way down a one-way street).

Neuromorphic chips might also be embedded in mobile phones, tablet computers and other handheld devices. Imagine holding your phone in front of a flower you don’t recognise. The neuromorphic chip would recognise its shape and smell and return all the relevant information you’re looking for—a search engine on steroids.

Although conventional computers are getting better at recognising objects such as the human face, they still require enormous computing power to do so: it took Google 16,000 processors to recognise a cat. A single low-powered neuromorphic chip is expected to do the same job.

In Smart Machines: IBM’s Watson and the Era of Cognitive Computing, John Kelly and Steve Hamm explain that neuromorphic computing and cognitive computing are complementary technologies, analogous to the division of labour between the two hemispheres of the brain. Cognitive computers are the left brain, focusing on language and analytical thinking, while neuromorphic chips are the right brain, focusing on the senses and pattern recognition.

IBM describes the combination of these capabilities as “holistic computing intelligence”. If the combination lives up to even a fraction of its potential, we may indeed be embarking on a new era of computing.

When computers can learn like we do, what will the role of human beings be? Let us know your thoughts on the Future Realities group on LinkedIn, sponsored by Dassault Systèmes.
