Tech News: Computers that understand your emotions

The past few months with the Covid-19 pandemic and the concomitant strict lockdown restrictions have isolated us from people, says Professor Louis Fourie.

Published Sep 4, 2020

By Louis Fourie

JOHANNESBURG - The past few months with the Covid-19 pandemic and the concomitant strict lockdown restrictions have isolated us from people.

In many cases they have reduced our social lives to computer screens and virtual meetings.

And despite the benefit of visuals, online meetings are not as easy as face-to-face meetings since it is rather difficult to read a person’s face and emotions.

Unfortunately, most of our current “smart” technology and devices are not able to adequately assist us with this problem.

They are still largely emotion blind. Even the Artificial Intelligence (AI) that is becoming so ubiquitous in almost all devices has high cognitive intelligence (IQ), but mostly no emotional intelligence (EQ).

During face-to-face meetings, people share so much emotion and meaning beyond their words. They use many non-verbal communication signs such as facial expressions, eye and mouth movements, posture, body language, gestures, and vocal intonations.

But in online communication and virtual meetings we are mostly consigned to text, emojis, or audio. Usually when people lead meetings or present a keynote address at a conference, it is possible to gauge the energy of the audience through their immersion and non-verbal communication. People’s faces normally light up with excitement at innovative ideas, or betray their boredom or tiredness.

This is unfortunately not possible with current video-conferencing technology. The presenter of a webinar or chairperson of a meeting operates in a vacuum and has no good sense of how people are reacting to what is being said. Quite often meetings will only start and end with video to give them some semblance of normality. Since the richness of communication diminishes in cyberspace, it often leads to one-dimensional expressions of how we feel, making it much more difficult to connect emotively and empathetically with one another.

Certainly, Covid-19 and social distancing have exacerbated the problem. More than ever we are dependent on technology to stay in touch with loved ones and to work or study remotely. Although collaboration and video-conferencing tools have improved immensely, they are just not the same as face-to-face meetings.

We may have thought we were connected during the Covid-19 pandemic, but it was not a substantive connection. It was at best an illusion of a connection. This is one of the reasons why researchers have increasingly been asking what we could do to preserve our humanity and our emotions in a digital world.

Computers reading our emotions

Would it not be wonderful if computers could read and respond to our emotions and could help in interpreting the emotions of people in a virtual meeting? If online video platforms had built-in Emotion AI, it could restore some of the energy and emotion that is lost in the virtual world and make virtual conferences, webinars and meetings much more engaging.

The Egyptian computer scientist, tech entrepreneur and chief executive of the company Affectiva, Rana el Kaliouby, believes that we can indeed harness the power of AI to develop technology that accentuates the emotional elements that make us human. She believes that presenters and leaders of virtual meetings will in the near future be able to receive an emotion newsfeed or graph that changes according to the mood of the audience or meeting: it rises when they are excited, smiling or laughing and declines when they are bored or disengaged.
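To make the idea concrete, here is a minimal, purely illustrative sketch in Python (not Affectiva’s product) of how such an emotion newsfeed could be computed, assuming a hypothetical emotion model already produces a per-participant engagement score for every video frame:

```python
from collections import deque
from statistics import mean

# Hypothetical per-participant scores in [0, 1] produced by an emotion
# model for a single video frame: higher means more engaged/positive.
FrameScores = dict[str, float]

class EngagementFeed:
    """Rolling 'emotion newsfeed' for a virtual meeting (illustrative only)."""

    def __init__(self, window_frames: int = 150):
        # Keep roughly the last N frames (e.g. about 5 seconds at 30 fps).
        self.history: deque[float] = deque(maxlen=window_frames)

    def update(self, scores: FrameScores) -> float:
        """Add one frame of per-participant scores and return the smoothed index."""
        if scores:
            self.history.append(mean(scores.values()))
        return mean(self.history) if self.history else 0.0

# Example: the index rises as more faces register smiles or interest,
# and declines as the audience disengages.
feed = EngagementFeed()
print(feed.update({"alice": 0.8, "bob": 0.6}))   # audience fairly engaged
print(feed.update({"alice": 0.2, "bob": 0.1}))   # engagement dips
```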

Over the past few years significant progress has been made in AI with emotional intelligence. AI software has been developed that can detect nuanced human emotions and complex cognitive states from people’s facial and verbal expressions and then aggregate that data in real time. The basic approach is to use machine or deep learning together with computer vision and speech analytics. Hundreds of thousands of examples of people smiling, smirking, frowning or crying are fed to the algorithm until it can recognise numerous facial expressions. Linking a facial expression to the underlying emotional state is, however, more complex. For this the AI must consider the context, know a little bit about the person and what they are doing, track how the expression develops over time, and weigh other gestures and vocal intonation.
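As a rough illustration of this supervised approach (and not any vendor’s actual system), the sketch below trains a tiny facial-expression classifier on stand-in data; in a real pipeline the images would be face crops extracted by computer vision and labelled by human annotators:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Labelled examples of expressions are fed to the model until it can
# classify them. The tensors here are random stand-ins for face crops.
EXPRESSIONS = ["neutral", "smile", "smirk", "frown", "cry"]

images = torch.randn(512, 1, 48, 48)                # fake 48x48 greyscale face crops
labels = torch.randint(0, len(EXPRESSIONS), (512,)) # fake expression labels
loader = DataLoader(TensorDataset(images, labels), batch_size=64, shuffle=True)

model = nn.Sequential(                              # tiny CNN classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 12 * 12, len(EXPRESSIONS)),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                              # a few passes over the data
    for x, y in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Recognising the expression is only the first step; as noted above, mapping it to an underlying emotional state still requires context, personal history and other signals.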

The automotive industry, for instance, is developing in-cabin sensing systems that analyse the cognitive state of the driver. These systems use Emotion AI to improve road safety by detecting signs of distracted or drowsy driving and sensing driver stress. A “smart car” would be able to warn the driver, play calming music when the driver is stressed or shows road rage, play cheerful music to keep a tired and drowsy driver awake (signalled, for example, by a certain blink rate or head bobbing), or even pull over if the behaviour is becoming dangerous.
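The decision logic of such a system can be illustrated with a simple, hypothetical rule-based sketch; the signal names and thresholds below are invented for illustration only and are far cruder than a production in-cabin system:

```python
from dataclasses import dataclass

# Hypothetical per-minute driver signals that an in-cabin camera and
# Emotion AI model might report (illustrative only).
@dataclass
class DriverState:
    blinks_per_minute: float     # elevated blink rate suggests drowsiness
    head_bobs_per_minute: float  # head bobbing suggests falling asleep
    stress_score: float          # 0 (calm) .. 1 (road rage)

def cabin_response(state: DriverState) -> str:
    """Map the estimated driver state to a simple in-cabin intervention."""
    drowsy = state.blinks_per_minute > 25 or state.head_bobs_per_minute > 3
    if drowsy and state.head_bobs_per_minute > 8:
        return "pull over safely"                    # behaviour becoming dangerous
    if drowsy:
        return "play cheerful music and suggest a break"
    if state.stress_score > 0.7:
        return "play calming music"
    return "no intervention"

print(cabin_response(DriverState(blinks_per_minute=30,
                                 head_bobs_per_minute=2,
                                 stress_score=0.2)))
```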

But emotionally intelligent AI is not limited to people’s work or the cars they drive. It has the potential to augment interpersonal communication and make the interaction between human beings more meaningful, as in the case of people with autism. People with autism spectrum disorder (ASD) often experience difficulty with recognising and responding to non-verbal communication signals and emotions. Emotion AI could help people with autism to navigate challenging social and emotional situations. The company Brain Power is currently developing the world’s first augmented smart glasses system for children and adults on the autism spectrum. The smart glasses system powered by Emotion AI is based on neuroscience from the Massachusetts Institute of Technology and Harvard University and gives the wearer insight into the emotions of the people they are interacting with.

So far the results have been impressive. Parents were for the first time able to connect properly with their children on an emotional level that was previously inconceivable. The smart glasses are also used for children with other social and emotional learning (SEL) challenges and are a game-changer not only for autism, but also for children with Attention Deficit Hyperactivity Disorder (ADHD). The “Empowered Brain” glasses include a whole suite of applications using augmented reality and AI to build skills relevant to self-sufficiency in people with brain-related challenges. The suite includes Transition Master, which helps autistic children overcome the stress of transitioning to a new location or routine by providing an immersive 360-degree experience that familiarises users with new places before they visit them in real life.

Face2Face teaches people to pay attention to faces by practising eye contact, while Emotion Charades helps users strengthen their emotional literacy, deepen levels of emotional processing, and foster a connection with other people.

In mental health, human perception AI recognises facial and vocal biomarkers of conditions like stress, anxiety, depression, suicidal intent, and Parkinson’s disease. If the technology is used to learn a person’s baseline, it can notify a loved one or clinician when they deviate from it.
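The baseline idea can be illustrated with a simple statistical sketch: compare today’s (hypothetical) biomarker score against a person’s own recent history and flag unusually large deviations. Real systems would use far richer models; the threshold and scores below are made up for illustration:

```python
from statistics import mean, stdev

def deviates_from_baseline(history: list[float], today: float,
                           threshold: float = 2.0) -> bool:
    """Return True if today's score is more than `threshold` standard
    deviations away from the person's own historical baseline."""
    if len(history) < 2:
        return False             # not enough history to establish a baseline
    baseline, spread = mean(history), stdev(history)
    return spread > 0 and abs(today - baseline) > threshold * spread

week = [0.31, 0.28, 0.35, 0.30, 0.29, 0.33, 0.32]   # typical daily stress scores
print(deviates_from_baseline(week, today=0.80))      # True: worth notifying a clinician
```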

Affectiva’s first AI product on the market was in the media analytics space: understanding how people respond to online video content, such as an online video ad, a movie trailer, or a TV show. The data is aggregated to determine whether people are engaged, offended or laughing. Currently 25 percent of Fortune 500 companies are using this technology to determine the emotional engagement of their users and consumers with their content.

Video-conferencing platforms like Zoom were great during the period of isolation because they connected us, but they have serious shortcomings.

The way humans communicate their emotional and mental states is only 10 percent choice of words and 90 percent nonverbal signals, distributed equally between facial expressions, gestures, vocal intonations, speed of speaking, and level of energy in the voice.

Therefore, coming out of the Covid-19 pandemic, we will see many technological innovations that take computers and video-conferencing platforms to the next level of shared experiences based on a multi-modal combination of signals.

AI is becoming mainstream and it is increasingly fulfilling roles that used to be performed by humans, such as assisting with healthcare and productivity, driving the car, or interviewing and hiring employees. Previously there was a huge focus on automation, effectiveness and efficiency. Developers of AI have now realised that it must be human-centric. It has to understand people, and for that it needs not only IQ but also EQ: the ability to understand its own and other people’s emotional and nonverbal signals and to adapt to this information in real time.

Affective computing, a term coined by Professor Rosalind Picard of MIT in 1995, is much closer than we may think. Empathetic computers will soon be able to recognise not only your facial emotions and physiological signals, such as your heart rate or stress levels, but also your vocal intonations and your gestures. A few years from now every device and computer will have an AI emotion chip to track our moods and to act accordingly. AI with emotional intelligence will become ingrained in the fabric of everyday devices ranging from our phones to our cars, our smart fridges to our smart speakers, making our lives safer, healthier and more productive, while making our interactions with others more meaningful and empathetic.

Professor Louis C H Fourie is a futurist and technology strategist.

BUSINESS REPORT
