
Genesis of the data-driven bug

September 26, 2016

North America

Suman Deb Roy

Lead Data Scientist

Dr. Suman Deb Roy is the Lead Data Scientist at betaworks, a technology company that operates as a studio, building new products, growing companies and making seed investments. Previously, he worked with Microsoft Research and as a Fellow at the Missouri School of Journalism. Suman is the author of 'Social Multimedia Signals: A Signal Processing Approach to Social Network Phenomena', a book that describes theories and algorithms for learning intelligently from web media. His work in transfer (machine) learning won the IEEE Communications Society MMTC Best Journal Paper Award in 2015.

In May of this year, the non-profit news organisation ProPublica published a critical story about the use of algorithms in the criminal justice system. Using rigorous statistical analysis, it showed that an algorithm tasked with predicting future criminals was racially biased against African Americans. Relying on the output of algorithms without sufficient oversight was producing the opposite of objectivity: instead of mitigating bias, it was introducing it.

The discreet and omnipresent nature of computing can trick us into believing that all algorithmic systems are objective, operating under strict, pre-programmed guidelines. But software is gradually drifting away from that reality, especially within the currently popular domain of computing called machine learning.

Machine learning, a collective term for a set of algorithms that learn from data and perform predictive tasks, has revolutionised our concept of what computers can do when fed vast amounts of past information. Various forms of machine learning have enabled self-driving cars, allowed voice assistants (eg Siri or Alexa) to recognise your voice accurately, and powered an entire click-based ad economy. While machine learning has deeply affected a number of industries and scientific fields in the last decade, its influence on society and our everyday behaviour is only now becoming apparent.

Unanticipated consequences arise from every new technology. There was a time when software was calibrated before launch, with explicit error-catching mechanisms to handle any deviation from the norm. But machine learning marks a distinct departure in how modern software operates. Today, machine learning software must continuously adapt itself to match the patterns in historical data. In statistics, this is traditionally called "fitting the model". This attribute is why your social newsfeed algorithm tunes itself to show you more of the same stuff you have clicked on before, or why your insurance rate is based on where you went to school and with whom. It is why that product advertisement chases you wherever you go on the web. And it is why trading was halted some 1,200 times on August 24th 2015, when price-insensitive, high-frequency trading programs caused repeated selloffs.
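To make the idea concrete, here is a minimal, hypothetical sketch of what "fitting the model" looks like in code. The data and feature names are invented for illustration; the point is that the program encodes no rules of its own, it only tunes parameters until its predictions mirror past behaviour.

```python
# Minimal sketch of "fitting the model": the program has no explicit rules;
# it tunes its parameters until its output matches patterns in past data.
# Illustrative only -- the features and data below are invented.

from sklearn.linear_model import LogisticRegression

# Hypothetical click history: [hour_of_day, seen_similar_before] -> clicked?
X_history = [[9, 0], [12, 1], [18, 1], [23, 0], [20, 1], [8, 0]]
y_history = [0, 1, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_history, y_history)     # adapt parameters to past behaviour

# The "prediction" is an extrapolation of whatever the history contained.
print(model.predict([[19, 1]]))     # likely 1: similar items were clicked before
```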

Where do biases in software come from?

Non-stop learning and adapting to data mean that comprehensive error-catching provisions are difficult to pre-code, simply because the failure modes are unknowable except in hindsight. Thus many algorithms we use every day possess intrinsic biases. And when operating without sufficient oversight, biased algorithms can produce unforeseen and unwanted results, ranging from prejudiced predictions to disastrous software failures to certain population groups being more adversely affected by the software than others.

Both the designer and the operating environment can introduce biases into software. Traditionally, an error made by the developer in code is called a bug. But a bug is simply undesired output produced by software in response to a stream of data (that data could be a sequence of taps on a phone, clicks on a webpage or the control flow of an operation). In the case of machine learning software, bugs are nearly always introduced through the input data on which a model is trained. When software feeds on biased data, unfair results are the outcome. Bias in algorithms is just another kind of bug.
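As a hedged illustration of how such a bug enters through data rather than code, consider the sketch below. The dataset is fabricated and deliberately skewed; a standard learning algorithm trained on it reproduces the skew even though no single line of code is "faulty".

```python
# Sketch: a bias "bug" introduced by training data, not by the code itself.
# The data are fabricated and deliberately skewed -- past decisions favoured
# group A -- so the fitted model simply reproduces that skew.

from sklearn.tree import DecisionTreeClassifier

# Features: [qualification_score, group]  where group 0 = A, 1 = B
X_train = [[80, 0], [75, 0], [70, 0], [80, 1], [75, 1], [70, 1]]
y_train = [1, 1, 1, 0, 0, 1]   # historical approvals skewed against group B

model = DecisionTreeClassifier().fit(X_train, y_train)

# Two identical candidates, differing only in group membership:
print(model.predict([[80, 0], [80, 1]]))   # the learned rule can split on group
```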

As more software mines the social web and economic data to simulate "intelligence", biased outputs will only become more pronounced. The history of humanity is not perfect. It is riddled with phases of systematic prejudice against classes of citizens, consumers or even rival competitors. This is, of course, something a civilised society denounces. But because machine learning uses historical data, patterns of uneven social or economic distribution will inadvertently be picked up. And these patterns become biases in future generations of data.

Slow to act

As software permeates critical aspects of our everyday lives and businesses, unpredicted and sometimes undesired outcomes will come to the fore. For example, when Facebook Live launched in April 2016, the predicted use case was that users would stream anything cheery, from baby bald eagles to downhill skiing. The real challenge for Facebook Live emerged when a racially charged police shooting was live-streamed; the video attracted 5 million views before being temporarily taken down by Facebook, which cited "a technical glitch". Like algorithmic bias, this was not an incorrect use of the software per se, but one with unintended consequences.

These issues are only starting to attract attention, yet they are critical and demand action. When it comes to algorithmic biases producing unsatisfactory output, tech companies seem hesitant and slow to act, debating whether the problem should be addressed now or in the next product cycle. Policymakers and journalists are beginning to interpret this behaviour as a lack of concern on the part of tech companies, which could lead to stringent regulation.

Bug whisperers

The question of what or who should be regulated is not straightforward. Does the regulatory responsibility fall on the company adopting biased software, or should the company designing the software, or holding the data, be held responsible? Should there be different rules for private companies and for public companies that serve the interests of shareholders? Consider personalisation: a personalised newsfeed can generate more clicks per user. From an engagement point of view this is a positive financial incentive, because engagement converts directly into ad revenue. However, personalised feeds also create a biased filter bubble for the user. Facebook, for instance, recently announced that it intends to change its newsfeed algorithm to prioritise news from friends and family. But when a coup is under way in Turkey, is the priority to show baby photos from your friends or a major world event unfolding?

Modern science has never had to deal with debugging algorithmic bias. There is no automatic bug test for data with hidden patterns that are capable of treating swathes of people or consumers unfairly. But some heuristic remedies exist. First, it helps to have a human in the loop who performs specific audits during the production and testing phases. Second, software neutrality could be a criterion from the start: when software is being designed, there should be an explicit evaluation of potential second- or third-order scenarios that could lead to bias. Sometimes, biased data are generated expressly to manipulate an algorithm; this kind of manipulation has been attempted against search engines and is known as "Google bombing". A bias-aware algorithm could penalise manipulative data sent to it or employ protections that discourage such manipulation. Finally, from a mathematical standpoint, a master algorithm could be formulated that acts on a majority vote from several underlying models instead of just one, as sketched below. Each model could be programmed with contrasting incentives, for instance one optimising for financial returns while another optimises for fiduciary responsibilities.
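A rough sketch of that last idea, using an off-the-shelf voting ensemble, might look like the following. The models and data are placeholders; in a real system each underlying model would be trained and audited against a different incentive.

```python
# Hedged sketch of the "master algorithm" idea: several models vote, and the
# final decision follows the majority rather than any single model. The data
# are placeholders; in practice each model would embody a different incentive.

from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X_train = [[0.2, 1], [0.4, 0], [0.6, 1], [0.8, 0], [0.9, 1], [0.1, 0]]
y_train = [0, 0, 1, 1, 1, 0]

master = VotingClassifier(
    estimators=[
        ("revenue_model", LogisticRegression()),      # hypothetical incentive A
        ("fiduciary_model", DecisionTreeClassifier()), # hypothetical incentive B
        ("neutral_model", GaussianNB()),               # hypothetical incentive C
    ],
    voting="hard",                       # act on the majority vote
)
master.fit(X_train, y_train)
print(master.predict([[0.7, 1]]))        # decision backed by at least two models
```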

Algorithmic systems are not a settled science, and fitting them blindly to human bias can leave inequality unchallenged and unexposed. Machines cannot avoid using data. But we cannot allow them to discriminate against consumers and citizens. We have to find a path where software biases and their unfair impact are understood, and not just in hindsight. This is a new kind of bug. And this time, dismissing it as 'an undocumented feature' could ruin everything.


The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of The Economist Intelligence Unit Limited (EIU) or any other member of The Economist Group. The Economist Group (including the EIU) cannot accept any responsibility or liability for reliance by any person on this article or any of the information, opinions or conclusions set out in the article.
