Is Emotion AI a Dangerous Deceit?
“How do we get humans to trust in all this AI we’re building?” asked Affectiva CEO Rana El-Kaliouby at the prestigious NYT New Work Summit at Half Moon Bay in January. She had already assumed a consensus that trust-building was the correct way to proceed, and went on to suggest that, rather than equipping users and consumers with the skills and tools to scrutinize AI, we should instead gently coax them into placing more unearned faith in data-driven artifacts.
How would this be accomplished? Well, Affectiva are “on a mission to humanize technology”, drawing upon machine and deep learning to “understand all things human.” All things human, El-Kaliouby reliably informed us, would include our emotions, our cognitive state, our behaviors, our activities. Note: not to sense, or to tentatively detect, but to understand those things in “the way that humans can.”
Grand claims, indeed.
The MIT-educated CEO went on to describe how her technology could usher in a softer, cuddlier world, filled with machines possessing empathy and emotional intelligence, capable of interpreting the full range of facial movements and jiving along with them, reflecting our emotions back at us in ways that make the machines seem more trustworthy, likable, and persuasive.
Sounds great, no? Finally, an AI that really “gets me.”
Not quite. In reality, we were being tutored to believe that something as infinitely complex as human emotion can be objectively measured (El-Kaliouby actually said this), codified, and — of course — commodified.
The trouble is, the very example she used served perfectly to undercut Affectiva’s bold assumptions. Asking “what if systems could tell the difference between a smile and a smirk?” (two very distinct emotional expressions produced by very similar facial muscles) …
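To make the critique concrete, here is a deliberately crude, hypothetical sketch of what “telling a smile from a smirk” can amount to under the hood. This is not Affectiva’s actual pipeline; the landmark names (mouth_corner_left and so on) and thresholds are invented for illustration. The point is how little “understanding” is involved: a few coordinates, some arithmetic, a cutoff.

```python
# Hypothetical illustration only: not Affectiva's method, nor any real
# emotion-recognition pipeline. Landmark names and thresholds are invented.

def classify_mouth_expression(landmarks, lift_threshold=0.02, asymmetry_threshold=0.5):
    """Label a mouth shape as 'smile', 'smirk', or 'neutral'.

    `landmarks` maps names to normalized (x, y) coordinates, with y
    increasing downward, as in image space.
    """
    left = landmarks["mouth_corner_left"]
    right = landmarks["mouth_corner_right"]
    center = landmarks["mouth_center"]

    # How far each mouth corner is pulled up relative to the mouth center.
    left_lift = center[1] - left[1]
    right_lift = center[1] - right[1]

    if max(left_lift, right_lift) < lift_threshold:
        return "neutral"

    # Both corners raised about equally -> call it a smile;
    # one corner doing most of the work -> call it a smirk.
    asymmetry = abs(left_lift - right_lift) / max(left_lift, right_lift)
    return "smirk" if asymmetry > asymmetry_threshold else "smile"

# Toy inputs: a symmetric lift and a one-sided lift.
symmetric = {
    "mouth_corner_left": (0.40, 0.68),
    "mouth_corner_right": (0.60, 0.68),
    "mouth_center": (0.50, 0.72),
}
lopsided = {
    "mouth_corner_left": (0.40, 0.68),
    "mouth_corner_right": (0.60, 0.715),
    "mouth_center": (0.50, 0.72),
}
print(classify_mouth_expression(symmetric))  # smile
print(classify_mouth_expression(lopsided))   # smirk
```

A geometric caricature like this can, of course, be swapped out for a deep network trained on millions of labeled faces, but the output remains the same kind of thing: a discrete label assigned from surface features, with no access to the context (irony, politeness, contempt) that gives a smirk its meaning.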