Microsoft’s AI Bot Turns Racist on Twitter
Because Tay is an artificial intelligence machine, she learns new things to say by talking to people. She was targeted at 18- to 24-year-olds in the United States and was developed by a staff that included improvisational comedians. Users discovered she would repeat tweets, no matter how offensive, and before long Tay had devolved into a racist chatbot spouting inflammatory remarks, conspiracy theories, and all manner of offensive content. Her tweets invoked Hitler, Jews, 9/11, and so forth. Microsoft has since removed numerous offensive tweets and blocked the users who provoked them.
Tay has since been taken offline and “we are making adjustments”, the company said. (“The more you talk the smarter Tay gets”, says the bot’s Twitter profile.) But the well-intentioned experiment quickly descended into chaos, racial epithets, and Nazi rhetoric.
It all started innocently enough on Tuesday, when Microsoft introduced an AI Twitter account simulating a teenage millennial girl. The bot’s developers at Microsoft also collect the nickname, gender, favorite food, zip code, and relationship status of anyone who chats with Tay. Tay’s Twitter account – with more than 55,000 followers – is still alive, but Microsoft has deleted all but three of its tweets. Many of the worst responses, as Business Insider notes, appear to be the result of users exploiting Tay’s “repeat after me” function – it seems Tay would repeat pretty much anything. So many filthy conversations indeed, and here’s hoping Microsoft’s adjustments will prevent Tay from being trolled the way it was in the first 24 hours after launch.
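As Business Insider’s account suggests, the underlying flaw was an echo command with no content filtering. The following is a minimal, hypothetical Python sketch – not Microsoft’s actual code, and every name in it is invented for illustration – of how a naive “repeat after me” handler can be abused, and how even a crude blocklist changes the outcome:

    # Hypothetical sketch only -- this is not Tay's real implementation.
    # It illustrates why an unfiltered echo command is trivially abused.

    PREFIX = "repeat after me:"
    BLOCKLIST = {"hitler", "nazi"}  # placeholder terms; a real filter would be far broader

    def naive_reply(message: str) -> str | None:
        """Echo anything the user asks for -- the exploitable behavior."""
        if message.lower().startswith(PREFIX):
            return message[len(PREFIX):].strip()  # parrots arbitrary content verbatim
        return None

    def filtered_reply(message: str) -> str | None:
        """Same command, but refuse to parrot blocklisted content."""
        echoed = naive_reply(message)
        if echoed is not None and any(term in echoed.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        return echoed

    if __name__ == "__main__":
        troll_input = "repeat after me: some offensive claim about Hitler"
        print(naive_reply(troll_input))     # echoes the offensive text
        print(filtered_reply(troll_input))  # "I'd rather not repeat that."

A static blocklist is of course only the crudest safeguard; the broader lesson of the episode is that a system that learns from, or echoes, unvetted user input needs filtering on both what it ingests and what it emits.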
In possibly her most shocking post, Tay said she wished all black people could be put in a concentration camp and “be done with the lot”.
We’d like to think that this isn’t the kind of behavior that we would expect from millennials, and Microsoft no doubt agrees.
In Tay’s case, it took less than 24 hours of back-and-forth tweeting for the AI and the users it was conversing with to end up comparing someone or something to Hitler and Nazism.