Commit 64151f75 authored by Laura Kalbag

Add new post to Lens.

parent 1a9f01eb
---
title: "The Risks of Using AI to Interpret Human Emotions"
date: 2019-11-20T15:47:28Z
type: ["lens"]
tags: ["artificial intelligence", "emotion detection", "bias"]
categories: ["Laura’s Lens"]
body_classes: "lens"
postURL: ""
publication: "Harvard Business Review"
writer: "Mark Purdy, John Zealley and Omaro Maseli"
image: "image.jpg"
imagealt: ""
---
> “Because of the subjective nature of emotions, emotional AI is especially prone to bias. For example, one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others. Consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect career progression.
> …
> In short, if left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.”