There has been remarkably little change in the core design of computers since the 1940s. Today’s systems still contain the basic components identified by John von Neumann back in 1945: a central processor, memory, and input and output devices. And little wonder: it’s an architecture that works—so far at least.
But the constant reduction in the size of these components, as predicted by Intel co-founder Gordon Moore in 1965, means they are beginning to bump up against the limits of physics. Specifically, as transistors get smaller, the insulating layers that are meant to confine electrons get thinner, so more current leaks through and more energy is wasted as heat.
Multi-core processors have provided a temporary reprieve by packing several processor cores onto a single chip, but with computing predicted to account for as much as 14% of global energy consumption by 2020, a longer-term solution is needed.
For this, chip design and use will have to change more substantially. And the need for more energy-efficient computing may spell the end for the Von Neumann architecture altogether.
Computing infrastructure is already becoming more diverse, as can be seen in the rise of graphics processing units (GPUs). These chips were originally developed to handle computer graphics, but it turns out that for certain workloads, such as crunching huge data sets, they can be more energy-efficient than conventional chips, and as such are being used in a variety of ways. Remote, GPU-based computers provide the brains for driverless cars, for example, and some recently developed drones have on-board GPU-based systems to power their computer vision.
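To make the point concrete, here is a minimal Python sketch of the kind of data-parallel work GPUs suit: the same arithmetic applied independently to millions of array elements. The "sensor frame", its dimensions and the threshold are arbitrary illustrations, and the snippet uses plain NumPy on the CPU; on GPU hardware, drop-in array libraries such as CuPy expose the same interface and spread the identical expressions across thousands of cores.

```python
import numpy as np

# Hypothetical "sensor frame" -- e.g. pixel intensities from a camera.
frame = np.random.rand(1920, 1080).astype(np.float32)

# Element-wise normalisation and thresholding: every pixel is processed
# independently, so the work maps naturally onto thousands of GPU cores.
normalised = (frame - frame.mean()) / frame.std()
outliers = np.abs(normalised) > 2.0

# Swapping NumPy for a GPU array library such as CuPy runs the same
# expressions on the GPU with no change to the logic above.
print(f"{outliers.sum()} of {outliers.size} pixels flagged")
```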
Similarly, UK chip designer ARM is bringing its processors, designed for mobile devices that demand high energy efficiency, to the server world. Its server-grade processors offer much the same performance and capabilities as traditional server chips but need only a fraction of the power.
ARM still uses a Von Neumann architecture, however. For truly breakthrough power savings, we may need more ambitious architectures.
US start-up Rex Computing, founded by 19-year-old Thomas Sohmers, a fellow of the Thiel Foundation, has developed a new chip architecture that moves memory into the processor core and uses software-based memory management to reduce the amount of hardware needed. The company claims that this can deliver a 10- to 25-fold increase in efficiency.
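The details of Rex's toolchain are not public here, but the general idea of software-managed memory can be sketched: rather than a hardware cache deciding what data sits close to the core, the program (or its compiler) explicitly stages data through a small, fast local buffer. The scratchpad size and the sum-of-squares workload below are purely illustrative assumptions, not Rex's design.

```python
import numpy as np

SCRATCHPAD_WORDS = 32 * 1024  # assumed size of a per-core scratchpad

def sum_of_squares(big_array: np.ndarray) -> float:
    """Stream a large array through a small local buffer, tile by tile."""
    total = 0.0
    for start in range(0, big_array.size, SCRATCHPAD_WORDS):
        # Explicit "copy in" to local memory -- the decision a hardware
        # cache would otherwise make invisibly (and sometimes wastefully).
        tile = np.array(big_array[start:start + SCRATCHPAD_WORDS])
        total += float(np.square(tile).sum())  # compute on local data only
    return total

data = np.random.rand(1_000_000).astype(np.float32)
print(sum_of_squares(data))
```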
IBM, meanwhile, is drawing on insights from neuroscience to improve computing efficiency. After all, the human brain can process far more information than a computer while using a fraction of the energy. The IT giant’s TrueNorth chip architecture uses an array of 4,096 small cores to simulate a million neurons and 256m synapses. Working with TrueNorth requires learning new ways of programming, but it offers a platform for machine learning that is substantially more efficient than conventional processors.
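TrueNorth's actual programming model is not shown here; as a rough illustration of the spiking, event-driven style of computation involved, the toy leaky integrate-and-fire network below does work only where spikes arrive, which is where much of the efficiency of neuromorphic hardware comes from. The network size, random weights, leak and threshold values are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                  # toy network size (assumption)
weights = rng.normal(0.0, 0.5, (N, N))   # random synaptic weights
potential = np.zeros(N)                  # membrane potential per neuron
LEAK, THRESHOLD = 0.9, 1.0

spikes = rng.random(N) < 0.05            # a few neurons fire initially
for step in range(10):
    # Computation is driven by spikes; quiet neurons cost (almost)
    # nothing, which is the intuition behind neuromorphic efficiency.
    potential = LEAK * potential + weights @ spikes
    spikes = potential > THRESHOLD
    potential[spikes] = 0.0              # neurons that fired are reset
    print(f"step {step}: {int(spikes.sum())} neurons spiked")
```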
By making computers more energy-efficient, innovations such as these will have a number of benefits. For businesses, the most immediate will be greater data-centre density. Data centres are constrained by the power they draw and the heat they must extract, but hyperefficient hardware will enable much denser computing clusters. By cutting overall power demand, it will also make it easier for businesses to power their server farms with renewable energy sources such as wind and solar.
As the example of GPUs demonstrates, innovations in computing often have unintended but positive consequences. The hunt for more efficient computing is being driven by necessity, but it may well lead to new possibilities in computing—something that the Von Neumann model, dominant for seven decades, would never have delivered.
What innovations might hyperefficient computing unlock? Share your thoughts on the Future Realities LinkedIn group, sponsored by Dassault Systemes.