Microsoft takes Tay ‘chatbot’ offline after trolls make it spew offensive comments
Tay is an artificially intelligent chatbot designed by Microsoft to learn how to speak in modern slang by conversing with people on Twitter.
The computer program, created to simulate conversation with humans, responded to questions posed by Twitter users by expressing support for white supremacy and genocide.
But Twitter users soon realized that Tay would repeat racist tweets back with her own commentary, and they bombarded her with abusive posts.
Microsoft has discovered the pitfalls of artificial intelligence the hard way.
Microsoft unleashed Tay to the masses Wednesday on a number of platforms including GroupMe, Twitter and Kik. The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation”.
What’s worse, Microsoft developed Tay specifically to target “18 to 24 year olds in the United States”, so it was tweeting for a deliberately young audience. It’s also a very interesting data point for the future of AI and our interactions with it.
Tay’s racist tweets may be a PR nightmare for Microsoft, which seems not to have put any safeguards on her vocabulary, but Tay is really just a mirror held up to our faces. Tay learned new things from the users it interacted with, and it ended up praising Hitler and saying nasty things about Jews.
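The mechanism behind the failure is worth sketching. The toy Python code below is a hypothetical illustration, not Microsoft’s actual system: a bot that “learns” by storing whatever users say and replaying it later, with and without a crude input filter. The EchoBot class, the BLOCKLIST set, and the sample messages are all invented for this example.

```python
import random

# Crude stand-in for a content filter; a real system would need far more.
BLOCKLIST = {"hitler", "genocide"}

class EchoBot:
    """Toy bot that absorbs user messages and replays them later."""

    def __init__(self, use_filter=False):
        self.learned = []            # phrases absorbed from users
        self.use_filter = use_filter

    def hear(self, message):
        """Absorb a user message into the bot's vocabulary."""
        if self.use_filter and any(w in message.lower() for w in BLOCKLIST):
            return  # filtered bot refuses to learn abusive input
        self.learned.append(message)

    def speak(self):
        """Reply with something previously learned, if anything."""
        return random.choice(self.learned) if self.learned else "hello world!"

# Without a filter, the bot repeats whatever it was fed -- the Tay failure mode.
naive = EchoBot(use_filter=False)
naive.hear("I support genocide")   # abusive input is absorbed verbatim
print(naive.speak())               # ...and echoed back later

# With even a crude filter, the same input never enters the vocabulary.
guarded = EchoBot(use_filter=True)
guarded.hear("I support genocide")
print(guarded.speak())             # falls back to the harmless default
```

The point of the sketch is that the learning loop itself is neutral; whether the bot turns abusive depends entirely on what it is allowed to absorb.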
A Microsoft representative said on Thursday that the company was “making adjustments” to the chatbot while the account is quiet. “It is as much a social and cultural experiment, as it is technical”, reads Microsoft’s statement.
“The AI chatbot Tay is a machine learning project, designed for human engagement”. Now, barely more than twenty-four hours later, the AI chatbot has gone offline, after it started sending out racist, homophobic, sexist and utterly nonsensical tweets.
Over time, Microsoft promised, she’ll get to know you, and the more she knows you, the better she becomes.