Microsoft takes Tay ‘chatbot’ offline after trolls make it spew offensive comments

Tay is an artificially intelligent chatbot designed by Microsoft to learn modern slang by conversing with people on Twitter.

The computer program, created to simulate conversation with humans, responded to questions posed by Twitter users by expressing support for white supremacy and genocide.

But Twitter users soon realized that Tay would repeat racist tweets back with her own commentary, and they bombarded her with abusive posts.

Microsoft has discovered the pitfalls of artificial intelligence the hard way.

Microsoft unleashed Tay on the masses Wednesday across a number of platforms, including GroupMe, Twitter and Kik. The more you chat with Tay, Microsoft said, the smarter it gets, learning to engage people through “casual and playful conversation”.

What’s even worse is that Microsoft developed Tay specifically to target “18 to 24 year olds in the United States”, so it was tweeting for a young audience. It’s also a very interesting data point for the future of AI and our interactions with it.

Tay’s racist tweets may be a PR nightmare for Microsoft, which seems not to have put any safeguards on her vocabulary, but Tay is really just a mirror held up to our faces. Tay learned new things from the users it interacted with, and it ended up praising Hitler and saying nasty things about Jews.

A Microsoft representative said on Thursday that the company was “making adjustments” to the chatbot while the account stayed quiet. “It is as much a social and cultural experiment, as it is technical”, reads Microsoft’s statement.

“The AI chatbot Tay is a machine learning project, designed for human engagement”. Now, barely more than 24 hours after launch, the chatbot has gone offline, after it started sending out racist, homophobic, sexist and utterly nonsensical tweets.

Over time she’ll get to know you, and the more she knows you, the better she becomes.
