|title: "AI thinks like a corporation—and that’s worrying"|
|tags: ["artificial intelligence", "corporation", "discrimination"]|
|categories: ["Laura’s Lens"]|
|publication: "The Economist"|
|writer: "Jonnie Penn"|
|> “After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.|
|> “Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc.|
|> “A central promise of AI is that it enables large-scale automated categorisation… This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority.”|
Like this? Fund us! Your patronage keeps us independent and running.