Technology & Innovation

Fostering tomorrow’s Internet: The shortcomings of net neutrality

November 05, 2014

Global

Christopher Yoo

John H. Chestnut Professor of Law, Communication, and Computer & Information Science

Christopher S. Yoo is the John H. Chestnut Professor of Law, Communication, and Computer & Information Science and the Founding Director of the Center for Technology, Innovation and Competition at the University of Pennsylvania. His research focuses on how the principles of network engineering and imperfect competition can provide new insights into the regulation of the Internet, copyright, and patent law. He is also pioneering an integrated interdisciplinary program designed to produce a new generation of professionals with advanced training in both law and engineering.

To ensure the Internet of the future is fit for purpose, we must move away from the one-size-fits-all approach, says Dr. Christopher Yoo, John H. Chestnut Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania Law School.

Network neutrality, the idea that all Internet traffic should be treated equally across networks, has emerged as perhaps the key policy issue confronting today's information society. What was once dismissed as a uniquely American concern, one that flared up between 1999 and 2001 amid a series of mergers between cable providers, is now a major focus in the EU as well as at the UN-sponsored Internet Governance Forum held in Istanbul in September 2014.

Unfortunately, participants in the debate are struggling to agree on what network neutrality means, and the result is a fragmented discussion. Worse still, the debate is often backward-looking. Many proponents of net neutrality cite the Internet's past success as a reason to stick with the status quo, yet the speed with which this field changes makes the future particularly hard to predict.

Consider how much has changed in just the past five years. Social networking has gone from the next big thing to the mainstream. People who once carried only a laptop now travel with smartphones and tablets, and live in an ecosystem dominated by app stores in which users often pay for apps, in stark contrast to the fixed-line past, when almost all content was available for free. In the space of two years, Netflix, a video-streaming service, has transformed itself into a US$29bn giant that is revolutionising the way we watch television. And much of the data that used to be stored and processed on end-users' desktops now resides in "clouds".

In short, the applications and technologies making up the Internet have never been more diverse and are only becoming more so. Accordingly, as the demands being placed on the network have become more varied and data-intensive, it is only natural for network providers to respond by diversifying their services. Those who wish to preserve the network’s existing architecture overlook that no single network architecture can meet every need and that the network must evolve to match changing demand. 

The Internet was well designed for the tasks that dominated when it first emerged as a mass-market phenomenon in the mid-1990s, transmitting email and browsing webpages, but computer engineers have long identified a list of functions that the current network design does not perform well. These include security, mobility, mass-media distribution and maintaining connections with multiple networks (known as multihoming), to name just a few. Once of secondary importance, these functions are critical today. As a result, both the US and EU governments are sponsoring research projects that explore new approaches to networking in order to support these functions better. Many involve practices, such as prioritising certain Internet traffic, that sit in uneasy tension with many visions of network neutrality.

Network operators facing congestion have two possible responses: add more bandwidth, or manage traffic by prioritising time-sensitive applications over those that are more tolerant of delay. If regulation were to rule out the latter, the only remaining option would be greater capital expenditure to increase bandwidth. This would make providing service more expensive and future geographic expansion less feasible, exacerbating the digital divide both in developing countries and in low-income and rural areas everywhere. This is why developing nations are increasingly looking to network management as a way of making Internet coverage economically viable in areas that currently lack service.
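To make that trade-off concrete, the sketch below shows, in Python, the simplest form of traffic management: a strict-priority scheduler that always transmits latency-sensitive packets (say, voice-call frames) ahead of delay-tolerant ones (say, file downloads). The two traffic classes, the `Packet` structure and the `PriorityScheduler` name are illustrative assumptions for this sketch, not a description of any real operator's equipment.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Illustrative traffic classes (assumption): lower number = higher priority.
LATENCY_SENSITIVE = 0   # e.g. voice or video calls
DELAY_TOLERANT = 1      # e.g. file downloads, software updates

@dataclass(order=True)
class Packet:
    priority: int
    seq: int                            # arrival order; breaks ties fairly
    payload: str = field(compare=False) # contents never affect ordering

class PriorityScheduler:
    """A strict-priority queue: time-sensitive packets always leave first."""

    def __init__(self):
        self._queue = []
        self._counter = count()  # monotonically increasing arrival numbers

    def enqueue(self, payload: str, priority: int) -> None:
        heapq.heappush(self._queue, Packet(priority, next(self._counter), payload))

    def dequeue(self):
        return heapq.heappop(self._queue).payload if self._queue else None

scheduler = PriorityScheduler()
scheduler.enqueue("chunk of a large download", DELAY_TOLERANT)
scheduler.enqueue("voice-call frame", LATENCY_SENSITIVE)
scheduler.enqueue("software update", DELAY_TOLERANT)

# The voice frame jumps the queue even though it arrived second.
while (pkt := scheduler.dequeue()) is not None:
    print(pkt)
```

Real networks use more elaborate schemes than this, such as weighted fair queuing or DiffServ traffic classes, but the principle is the same: when capacity is fixed, prioritisation decides whose packets wait. Forbid it, and the only remaining lever is capacity itself, the capital-expenditure path described above.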

These developments in how the Internet is used mean that regulation should not seek to perpetuate any one person's vision of the best network architecture. The better course for the future of the Internet would be to structure regulation to give innovators the breathing room they need to experiment with new solutions that we cannot yet even imagine. In a world that is becoming ever more dynamic, a static, one-size-fits-all approach, in which innovators must seek regulatory permission before deviating from the status quo, would be a mistake.

This blog is part of a series managed by The Economist Intelligence Unit for HSBC Commercial Banking. Visit HSBC Global Connections for more insight on international business. 

 

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of The Economist Intelligence Unit Limited (EIU) or any other member of The Economist Group. The Economist Group (including the EIU) cannot accept any responsibility or liability for reliance by any person on this article or any of the information, opinions or conclusions set out in the article.
