Commit d1c993cb authored by Laura Kalbag

Add new post to Lens.

parent 7c199021
title: "AI thinks like a corporation—and that’s worrying"
date: 2019-11-28T18:15:52Z
type: ["lens"]
tags: ["artificial intelligence", "corporation", "discrimination"]
categories: ["Laura’s Lens"]
body_classes: "lens"
postURL: ""
publication: "The Economist"
writer: "Jonnie Penn"
image: "image.jpg"
> “After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.
> Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc.
> A central promise of AI is that it enables large-scale automated categorisation… This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority.”