Forget rampant killer robots: AI’s real danger is far more insidious

The true danger of artificial intelligence is in its obsession with spatulas and the biases it learns from us

Technology | Comment

29 May 2019

Moviestore collection Ltd / Alamy Stock Photo
By Annalee Newitz
WHEN I was growing up, nobody promised me a flying car. But I was promised an AI apocalypse. Those shiny machines were going to crush our skulls underfoot, and we were all going to welcome our new robot overlords. Remember that? Many people still seem to think it is likely to happen. Well, just like flying cars, it isn’t. But we might still get a deadly AI nightmare. It is just going to be a lot weirder and more insidious than we imagined in the innocent days of Terminators and the Cybermen.
AI is simply a name for software programs that learn from data. That is why engineers generally prefer to call it machine learning. You give your machine-learning program a giant data set – say, every video on YouTube – and it “learns” to find patterns. Famously, when Google unleashed an AI on YouTube in 2012, it figured out how to recognise cat faces. That sounds pretty amazing until you discover that Google’s AI also became intrigued by spatulas oriented at a 30-degree angle.
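To make “finding patterns” concrete, here is a minimal, purely illustrative sketch in Python. The data, the cluster count and the library choice are invented for this example and have nothing to do with Google’s actual system: a clustering algorithm simply groups whatever structure happens to be in the numbers it is fed, whether that structure is cats, spatulas or noise.

```python
# Purely illustrative: a toy pattern-finder, not Google's YouTube system.
# The data and the number of clusters are made up for this sketch.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two invented blobs of points stand in for whatever co-occurs in real data
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2)),
])

# The algorithm has no idea what the points "mean"; it just groups them.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # roughly [0, 0] and [3, 3]
```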
If you want to understand the true dangers of AI, you have to ponder those spatulas. It isn’t so much that these programs are working incorrectly, but that they notice patterns we don’t. And sometimes those patterns are a lot more problematic than an inadvertent spatula fetish.
When Microsoft released a chatty AI bot named Tay on Twitter, for example, it began spouting Nazi slogans within 24 hours. Designed to learn from conversations around it on the social platform, it quickly became racist.
A similar issue showed up in crime-prediction algorithms that police in Florida used. Software flagged black people as more likely to commit crimes than white people, despite evidence to the contrary. And when political scientist Virginia Eubanks investigated medical insurance algorithms in the US, she discovered an inherent bias that made it harder for poor people to get health coverage.
None of this should be surprising to anyone who has met a human and discovered our propensity for prejudice. AIs aren’t autonomous creatures with agendas of their own. They are learning from our data. Think of AIs as prostheses – extensions of humanity, with slightly different strengths and weaknesses.
“The AI nightmare is going to be a lot weirder than we imagined in the innocent days of Terminators”
Sometimes they suss out patterns of bias in our data much better than we do. Then, like the obedient programs they are, they act on those biases.
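A toy sketch, with entirely invented numbers, shows how that can happen: if the training labels carry a historical prejudice against one group, a model trained on them dutifully learns to penalise that group, even though the attribute is irrelevant to the task.

```python
# Purely illustrative: a toy model inheriting a bias baked into its training labels.
# All variables and coefficients are invented; no real data set is modelled here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
qualification = rng.normal(size=n)    # the signal that *should* matter
group = rng.integers(0, 2, size=n)    # an attribute that shouldn't matter

# Past human decisions: qualified people approved, but group 1 quietly penalised
past_decision = (qualification - 1.5 * group + rng.normal(scale=0.3, size=n)) > 0

features = np.column_stack([qualification, group])
model = LogisticRegression().fit(features, past_decision)
print(model.coef_)  # the "group" column gets a large negative weight: bias learned
```

Nothing in the software is malicious; it is simply optimising for the patterns it was given.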
Google and Microsoft both acknowledged this problem for the first time in recent annual reports to the Securities and Exchange Commission. AI, said Google, may present “ethical, technological, legal, and other challenges”. Microsoft put it more simply: “AI algorithms may be flawed.”
Taking human data out of the equation doesn’t help – problems often get worse when AIs learn from other AIs. Algorithmic trading between software bots led to the trillion-dollar “flash crash” in 2010, when the Dow Jones index plunged 998.5 points, then recovered, within about 30 minutes.
YouTube engineers reported recently that they fear what they call the “inversion”. It is a scenario in which the network becomes so clogged with accounts run by bots that AIs learn to dismiss real visitors as fake.
The good news is that all these failures mean we won’t have to fight a robot army. The only way we can prevent the AI apocalypse, such as it is, will be to debug ourselves. Already, people are working on ways to correct for racial and class bias in algorithms. They are also thinking more about the ethics of deploying automation for sensitive tasks.
Google employees mounted a protest when they discovered that their employer was designing a machine-learning algorithm for surveillance drones that would automatically identify enemy targets. Under pressure, the company stopped working on the project. But plenty of other companies are building automation – complete with unconscious human bias – into their weapons. That is why the European Union called on the international community to regulate what it calls “lethal autonomous weapons systems” currently in development.
There is a hard road ahead. It is easier to obsess over imaginary killer robots than it is to undo decades of biased data that we barely understand. Still, this overwhelming task offers a glimmer of hope. In the end, improving our AI may also improve humanity.

Annalee Newitz is a science journalist and author. Her novel Autonomous won the Lambda Literary Award and she is the co-host of the Hugo-nominated podcast Our Opinions Are Correct.
You can follow her @annaleen and her website is techsploitation.com

Annalee’s week
What are you reading?
P. Djèlí Clark’s The Black God’s Drums, an alternate history of the 19th-century Caribbean with mad scientists and airships.
What are you watching?
I just saw the gorgeous, epic film The Wandering Earth, where we turn our planet into a spaceship.
What are you working on?
I’m doing a lot of research on ancient Roman toilets.

More on these topics:
artificial intelligence

algorithms
