Technology & Innovation

“Unsupervised learning” and the future of analytics

February 24, 2014

Global

An interview with analyst Seth Grimes on the growing use of machine learning in business intelligence

It is nearly 60 years since the term “artificial intelligence” was first coined but there’s every sign it will be one of this year’s buzziest technology topics. Web giants such as Google and Facebook are buying up AI start-ups while classic questions like “Can machines think?” and “Will we all be replaced by robots?” have new currency in the media. 

One reason why some experts believe AI is beginning to achieve its long-imagined potential is the explosion of data on the web. Data fuels evolution in AI and today’s Internet provides researchers with the raw material to test and hone their algorithms.

A recent major breakthrough came in July 2012, when researchers used a technique known as ‘deep learning’ to automatically identify cats in images taken from the web. They showed that an algorithm could successfully identify feline pets even though it had never been “taught” the characteristics of a cat.

“The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,” Andrew Ng, a computer scientist at Stanford University, told The New York Times.

This is an example of unsupervised learning – a style of AI that can identify linkages, patterns and other signals in data sets without being told what to look for.

The power of unsupervised machine learning, as it is properly known, is that it can spot salient correlations and connections between data points that no human would have thought to look for. One drawback is that it does not necessarily provide any insight into what those correlations and connections mean. This could, in theory, lead to a bizarre circumstance in which a machine learning system discovers a highly successful pricing or product recommendation strategy, for example, which no human employee understands.

Unsupervised machine learning is already in use in business today and, though I wouldn’t care to speculate about the timeline, it’s safe to assume its use will grow in future, given the continued growth of data and the desire among businesses to make sense of it all. I recently caught up with Seth Grimes, an industry analyst and consultant in data-analysis technologies, to discuss the implications.

What are supervised and unsupervised methods in machine learning?

Seth Grimes: Much, even most, of what we do in analytics involves categorization and classification: The process of saying “this is a that” (classification) and determining what set of “thats” helps you resolve a business problem (categorization).

In machine learning, the software is wired to learn to handle general cases by working over sets of training cases. Supervised methods rely on training examples coded by an analyst with analyst-established categories. By contrast, unsupervised methods eschew pre-established categories. The software discovers clusters whose member-cases are close to one another according to some set of criteria and distant from the member-cases of other clusters.
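
To make the contrast concrete, here is a minimal sketch in Python, assuming NumPy and scikit-learn are available; the data and labels are invented for illustration, not drawn from any real system:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))      # 100 cases, 2 features
y = (X[:, 0] > 0).astype(int)      # analyst-coded labels for training

# Supervised: learn to reproduce the analyst's pre-established categories.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: no labels supplied; discover clusters whose members are
# close to one another and distant from members of other clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
print(km.fit_predict(X)[:5])
```

The supervised model can only reproduce the analyst’s categories; the clustering step invents its own groupings, which may or may not align with anything an analyst would have coded.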

When is it appropriate to use unsupervised machine learning in particular?

There are a few qualifiers. You need a large amount of data to train your models, the willingness and ability to experiment and explore, and of course a challenge that isn’t well solved via more-established methods.

What are some of those challenges? Language understanding and image identification, really any application that involves detecting patterns and extracting salient information from noisy environments, but where other techniques have fallen short, whether in accuracy, scalability or effectiveness.

In language understanding, for example, if you're working with text that is relatively free of grammar and spelling errors, in a limited and well-defined business domain, you can probably get decent results by writing a set of rules that classify the meaning of most of the expressions that appear in the text you're analyzing.
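
As a rough illustration of that rules-based approach, here is a toy classifier that routes short customer messages into hand-defined categories; the rules and category names are hypothetical, invented purely for the example:

```python
import re

# Hand-written rules for a narrow, well-defined business domain.
RULES = [
    (re.compile(r"\b(refund|money back)\b", re.I), "billing"),
    (re.compile(r"\b(crash|error|bug)\b", re.I), "technical"),
    (re.compile(r"\b(cancel|unsubscribe)\b", re.I), "retention"),
]

def classify(text):
    for pattern, category in RULES:
        if pattern.search(text):
            return category
    return "other"

print(classify("The app keeps showing an error on startup"))  # technical
```

In a constrained domain with clean text, a rule set like this can get you decent coverage; it is exactly this approach that breaks down in the noisy settings described next.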

But sometimes a rules-based approach just isn't the best way to go, for instance in trying to automate the process of making sense of social postings, where there’s no boundary on discussion, volumes are huge, participants are hugely diverse, many languages, slang terms, irregular usages and cultural references are in play, and new topics arise constantly. For sources with these sorts of characteristics, statistics and machine learning may outperform both established automated methods and human analyses.

Why are unsupervised machine learning methods becoming viable now?

The algorithms are newly effective and efficient, the required computing hardware is powerful and cheap, and data is abundant and available. Those elements explain viability, but viability isn't enough. Unsupervised methods now provide a sometimes-superior, and broadly usable, alternative to established methods, especially for problems that haven't been solved neatly by supervised machine learning or non-machine learning approaches.

Where are you seeing businesses put unsupervised methods to use?

I mentioned language understanding. Let’s take it a step further. Consider machine translation, a natural language processing (NLP) challenge that involves both language understanding and language generation.

A machine translation system has to have a grasp of context and idiom. Take the phrase "This song is sick". Obviously, it means something very different from "My son is sick", so it’s not enough to simply translate the words. In French, for example, “malade” is fine for a person who’s ailing, but "cette chanson est malade" is nonsensical. "Fou" would work, however; it captures a correct sense of the idiom “sick.” The way you get to correct, idiomatic sense is via phrase-based translations, working from likelihoods suggested by statistical analysis of large datasets. When you have enough cases, the long-tail cases -- the unusual ones -- occur frequently enough for the machine system to grasp hold of them.
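
A toy sketch of the phrase-based idea, not a real translation system: the tiny “phrase table” and probabilities below are invented for illustration, whereas a real system would estimate them from statistical analysis of large parallel corpora. The point is that multi-word phrases carry the idiomatic sense that word-by-word lookup misses:

```python
# Invented phrase table: phrase -> candidate translations with likelihoods.
PHRASE_TABLE = {
    ("this", "song", "is", "sick"): [("cette chanson est folle", 0.7),
                                     ("cette chanson est malade", 0.1)],
    ("my", "son", "is", "sick"):    [("mon fils est malade", 0.9)],
    ("is", "sick"):                 [("est malade", 0.6)],
}

def translate(sentence):
    words, out, i = sentence.lower().split(), [], 0
    while i < len(words):
        # Prefer the longest matching phrase, then its likeliest translation.
        for n in range(len(words) - i, 0, -1):
            key = tuple(words[i:i + n])
            if key in PHRASE_TABLE:
                best = max(PHRASE_TABLE[key], key=lambda t: t[1])
                out.append(best[0])
                i += n
                break
        else:
            out.append(words[i])  # pass unknown words through untranslated
            i += 1
    return " ".join(out)

print(translate("This song is sick"))  # cette chanson est folle
print(translate("My son is sick"))     # mon fils est malade
```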

The machines are far from perfect. How annoying is it when text messages are autocorrected to something silly? That miscorrection is based on probabilistic misassessment. We're annoyed and we reject the autocorrection and, if the system is doing “active learning” -- the web search engines are -- it does better the next time.
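
As a cartoon of that feedback loop -- the scoring scheme is invented for illustration and is not how any real autocorrect system works -- rejected corrections get downweighted, so the next suggestion improves:

```python
from collections import defaultdict

scores = defaultdict(lambda: 1.0)   # confidence per (typo, fix) pair
scores[("sik", "silk")] = 3.0
scores[("sik", "sick")] = 2.0

def suggest(typo):
    candidates = [(fix, s) for (t, fix), s in scores.items() if t == typo]
    return max(candidates, key=lambda c: c[1])[0] if candidates else typo

def reject(typo, fix):
    # User feedback: that correction was wrong, so downweight it.
    scores[(typo, fix)] *= 0.5

print(suggest("sik"))   # silk (a probabilistic misassessment)
reject("sik", "silk")
reject("sik", "silk")
print(suggest("sik"))   # sick (the system learned from the rejections)
```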

What are the potential pitfalls of using unsupervised methods, and how can organisations protect against them?

The first potential pitfall, of course, is that your models simply won’t work. Maybe they focus on features that just aren't salient, or maybe you didn’t choose the best modelling algorithm or parameters, for instance, the best number of clusters.
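
One common guard against the parameter pitfall, sketched here with scikit-learn on synthetic data: sweep candidate cluster counts and compare silhouette scores, a standard internal validity measure. The data generation is illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Synthetic data with three true clusters.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
               for c in ((0, 0), (4, 4), (0, 4))])

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))  # peaks near k=3
```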

Also, a model built with unsupervised methods may lack explanatory power: We'll find the machine's choices inscrutable.

A last hazard, albeit one that applies to any method, is that the model will be brittle, incapable of handling unexpected cases.

A way to protect against these is human guidance, the “active learning” that I mentioned a moment ago. In other situations, you just have to judge whether the results make sense. But even before that point, you have to be discerning, selective, in your application of unsupervised methods.

Do you think machine learning techniques are necessary to extract value from ‘big data’?

Definitely. That's precisely the point of the 2009 article "The Unreasonable Effectiveness of Data" by Googlers Alon Halevy, Peter Norvig, and Fernando Pereira. That article focused on natural-language understanding, but the notion applies in mining other big data sources.  

The caveat is that these methods don't yet really confront the synthesis and sense-making challenges, which involve data of diverse types drawn from disparate sources.

Will unsupervised learning replace some of the demand for data scientists?

An analogy: A data scientist is like an industrial designer or engineer who figures out the materials and tools and manufacturing steps that go into creating a product. There's always a role for that person in refining the processes and improving the product and creating whatever's next.

It is the assembly-line worker who is rendered redundant by automation, and it is the middle-tier knowledge workers who will be replaced when machines can deliver sufficiently good, faster, cheaper, automated decisioning.

This is an observation, not a recommendation.

Do you think there will be cultural resistance to the use of unsupervised methods?

There's always cultural resistance to innovation, but innovation happens all the same, taking hold when the new alternatives are better, cheaper, or create new capabilities.

In machine learning's case, some of the resistance surely stems from the abstruseness of the methods and the opacity of the models generated. Yet the benefits are great. Designing and flying an airplane are beyond most of us, but the technology takes us where we want to go at an affordable cost, so we gladly hop on for the ride.

Seth Grimes consults via Washington DC-based Alta Plana Corporation, which he founded in 1997. He is the organiser of the Sentiment Analysis Symposium, which takes place on March 5-6 in New York. Follow him on Twitter: @SethGrimes
