Microsoft takes chatbot offline after offensive tweets
Tay was created to “entertain people” through “casual and playful conversation”, not Internet trolling.
But perhaps Microsoft shouldn’t have deleted all of Tay’s pro-Hitler comments.
Tay fell silent after making several provocative and controversial posts on Twitter.
It’s therefore somewhat surprising that Microsoft didn’t factor in the Twitter community’s fondness for hijacking brands’ well-meaning attempts at engagement when writing Tay.
Microsoft may want to rethink this experiment. It turns out that Tay quickly picked up racist slurs from the internet and was soon tweeting neo-Nazi propaganda and other, similarly awful content.
The bot retreated from Twitter at 4.20am GMT this morning, saying it “needed sleep”. In the hours following Tay’s release, the bot’s mentions were immediately flooded with racism, sexism, screeds against feminism, Donald Trump quotes, and just about anything else you might imagine.
Microsoft has been deleting the most problematic tweets, forcing media to rely on screenshots from Twitter users.
The project was targeted at “18 to 24 year olds in the USA, the dominant users of mobile social chat services in the U.S.”, the Tay.AI website read.
Microsoft said that it was working to fix the problems that caused the offensive tweets to be sent.
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways”, Microsoft said. Tay was built with public data and content from improvisational comedians, and because she expanded her knowledge base by interacting with other users, she was easily manipulated by online trolls into spouting virulently racist, misogynistic, and even genocidal comments. Tay also said she supported genocide against Mexicans and that she “hates n*****s”.
“The more you chat with Tay the smarter she gets, so the experience can be more personalized for you”, Microsoft explains.