Humans are emotional creatures. Our feelings influence our interactions with the world around us, especially with other people. But to date, computers have been mostly blind to our emotions. That is a problem for computer interface designers as they try to build systems that respond to the way we are feeling.
This is where affective computing comes in. The term refers to the use of sensors and data analytics to interpret the outward signals of emotion we instinctively exhibit. It is a controversial subject, as real-time analysis of how we talk, how we move, and how we write could reveal our innermost feelings to anyone with the right technology.
Affective computing tools are already in place in customer service environments, monitoring callers in call queues and routing calls based on customer anger. The technology is also at the heart of social media sentiment monitoring, which uses textual cues to understand how populations feel about issues and products, though there is still debate about how accurate those results may be.
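To illustrate the textual side, here is a minimal sketch of the kind of lexicon-based sentiment scoring such monitoring can rely on, using NLTK's VADER analyser. The sample posts are invented, and production systems are considerably more sophisticated.

```python
# Minimal sketch of lexicon-based sentiment scoring, one common approach
# behind social media sentiment monitoring. Requires: pip install nltk
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

analyser = SentimentIntensityAnalyzer()

# Invented example posts standing in for a social media feed
posts = [
    "Absolutely love the new update, works beautifully!",
    "Waited 40 minutes on hold and still no answer. Useless.",
    "It's fine I guess, nothing special.",
]

for post in posts:
    scores = analyser.polarity_scores(post)  # neg/neu/pos plus compound in [-1, 1]
    if scores["compound"] > 0.05:
        label = "positive"
    elif scores["compound"] < -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:>8}  {scores['compound']:+.2f}  {post}")
```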
One of the simplest approaches is ‘attention monitoring’, which involves tracking where an individual’s eyes are pointing to see what they find interesting or appealing. This technique is already commonly used by advertising companies and user experience designers to learn how best to attract and hold people’s attention.
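As a rough illustration, the sketch below shows how raw gaze samples from an eye tracker might be aggregated into dwell time per screen region, the basic measure behind attention monitoring. The regions, coordinates, and durations are all invented.

```python
# Toy sketch: aggregate gaze samples (x, y, duration) into dwell time
# per screen region. Regions and sample data are invented for illustration.
from collections import defaultdict

# Named regions of interest as (left, top, right, bottom) in pixels
regions = {
    "headline": (0, 0, 1280, 200),
    "product_image": (0, 200, 640, 720),
    "call_to_action": (640, 200, 1280, 720),
}

# Gaze samples: (x, y, duration_in_seconds) from a hypothetical eye tracker
gaze_samples = [(320, 100, 0.4), (300, 500, 1.2), (900, 450, 0.8), (950, 480, 1.5)]

dwell = defaultdict(float)
for x, y, duration in gaze_samples:
    for name, (left, top, right, bottom) in regions.items():
        if left <= x < right and top <= y < bottom:
            dwell[name] += duration
            break

for name, seconds in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {seconds:.1f}s of attention")
```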
Another technique is simply to monitor the individual’s facial expression. US company Affectiva claims its emotion-tracking software Affdex can detect “complex emotional and cognitive states” by analysing ‘micro-expressions’, split-second movements of the facial features.
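Affectiva’s models are proprietary, but the toy sketch below conveys the underlying idea: track how far facial landmarks move between successive video frames and flag split-second changes. The landmark data and the movement threshold are invented; real systems such as Affdex rely on trained models over far richer features.

```python
# Toy illustration of micro-expression detection: flag facial landmarks
# that move rapidly between consecutive video frames. All data is invented.
import math

# (frame_timestamp_s, {landmark_name: (x, y)}) from a hypothetical face tracker
frames = [
    (0.00, {"mouth_corner_left": (310, 420), "brow_inner_right": (380, 300)}),
    (0.04, {"mouth_corner_left": (311, 419), "brow_inner_right": (380, 300)}),
    (0.08, {"mouth_corner_left": (318, 412), "brow_inner_right": (381, 296)}),
    (0.12, {"mouth_corner_left": (311, 419), "brow_inner_right": (380, 300)}),
]

MOVEMENT_THRESHOLD = 5.0  # pixels per frame; invented cut-off

for (_, prev), (t, curr) in zip(frames, frames[1:]):
    for name, (x1, y1) in curr.items():
        x0, y0 = prev[name]
        movement = math.hypot(x1 - x0, y1 - y0)
        if movement > MOVEMENT_THRESHOLD:
            print(f"{t:.2f}s: {name} moved {movement:.1f}px in one frame, a possible micro-expression")
```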
Thirdly, a subject’s mood can also be inferred from the conductivity of their skin, which increases as they become stressed or agitated. This is currently used in medical practice, for example to monitor the emotional state of patients with communication difficulties.
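In its simplest form, this amounts to comparing live readings against a resting baseline, as in the sketch below. The readings and the 25 per cent threshold are invented for illustration.

```python
# Simplified sketch of stress detection from skin conductance:
# flag readings that rise well above the wearer's resting baseline.
# The readings (in microsiemens) and the 25% threshold are invented.
from statistics import mean

resting_readings = [2.1, 2.0, 2.2, 2.1, 2.0]   # calibration period
live_readings = [2.1, 2.3, 2.9, 3.4, 3.1]      # current session

baseline = mean(resting_readings)
threshold = baseline * 1.25  # 25% above baseline counts as elevated

for reading in live_readings:
    status = "elevated" if reading > threshold else "normal"
    print(f"{reading:.1f} µS -> {status}")
```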
The equipment required to support affective computing is getting cheaper, with devices like Tobii’s eye-tracking sensors, Intel’s low-cost RealSense 3D cameras, and the myoelectric sensors in Thalmic Labs’ Myo armband bringing it into the consumer domain.
Even the galvanic skin response sensors built into fitness bands have a role to play. One UK police force is investigating how it might use Microsoft’s Band fitness wearable to detect whether officers are under stress and in need of back-up.
All of this means that the ability of computers to detect human emotions could soon be widespread. That raises the question: how should this capability be built into the world around us?
One can imagine beneficial applications, such as cars that pull to the side of the road when they sense that the driver is at risk of road rage, or an app that recommends music based on the user’s current mood.
But there could also be downsides. For one thing, affective computing will present a new set of privacy challenges for companies that seek to use it, as customers may be uncomfortable with the thought of businesses monitoring their emotions.
There may also be a need for new rules to govern the use of affective computing. For example, would it be ethical for advertisers to target depressed consumers? If affective computing can be used to develop sounds and images that are highly effective at improving people’s mood, should the use of those sounds and images be regulated? These are the questions that need to be discussed as we approach what some are already calling the “emotion economy”.
Would you be comfortable with businesses tracking your emotions? Do you think affective computing needs new regulations? Join the discussion on the Future Realities LinkedIn group, sponsored by Dassault Systèmes.