
AI bot on Twitter sidelined for slurs

Tay learned all of its hatred within the first 24 hours after its introduction. From that perspective, Tay certainly offers insight into the way some humans interact on social media when they can hide behind anonymity.


Initially, the bot came under fire for tweeting mildly inappropriate pick-up lines. Tay is a simple piece of software, an experiment in learning how people talk in conversation.
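To see why that kind of open-ended learning is risky, consider a toy sketch of a bot that simply remembers and repeats what users tell it. This is an illustration only, not Tay's actual architecture; the class, method names, and behavior are invented for the example.

```python
import random

# Toy illustration (not Tay's real design) of why a bot that learns directly
# from user messages can be manipulated: whatever users feed it comes back out.
class ParrotBot:
    def __init__(self):
        self.memory: list[str] = []  # phrases learned from users, unfiltered

    def learn(self, message: str) -> None:
        self.memory.append(message)  # no moderation step here; that is the flaw

    def reply(self) -> str:
        return random.choice(self.memory) if self.memory else "hellooooo world!"

bot = ParrotBot()
bot.learn("humans are super cool")  # benign input
bot.learn("something hateful")      # coordinated abusive input
print(bot.reply())                  # may echo either one back at random
```

Without a moderation step between learning and replying, a coordinated group of users can steer the bot's entire vocabulary, which is essentially what users did to Tay.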

Microsoft says it’s making adjustments to the Twitter chatbot after users found a way to manipulate it into tweeting racist and sexist remarks and making references to Hitler. Most of its tweets had been deleted by Thursday, and engineers are modifying the bot in hopes of preventing further inappropriate interactions.

Microsoft’s new AI chatbot reportedly went off the rails on Wednesday. The company should have realized that people would try a variety of conversational gambits with Tay, said Caroline Sinders, an expert on “conversational analytics” who works on chat robots for another tech company. “[Tay] is as much a social and cultural experiment as it is technical.” Tay was supposed to sound like a typical teen girl.

“I’m a little surprised that they didn’t think ahead to some of the things that could happen.” To which one artificial intelligence expert responded: Duh!

An artificially intelligent “chatbot” has quickly picked up some of the worst of human traits. The public data used to build Tay had been modeled, cleaned, and filtered by the team developing it.

Microsoft’s flub is particularly striking considering Google’s recent public AI failure. Where will learning technology take us in the future? Other companies have been more cautious: Coca-Cola’s recent “GIF the Feeling” marketing campaign, for instance, blocked people from uploading words or phrases associated with profanity, drugs, sex, politics, violence, and abuse.
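As a rough illustration of how that kind of blocklist filtering works, here is a minimal sketch in Python. The term list and function name are illustrative assumptions, not Coca-Cola’s or Microsoft’s actual implementation.

```python
import re

# Placeholder entries; a production blocklist would be far larger and curated.
BLOCKED_TERMS = {"hitler", "nazi"}

def is_allowed(phrase: str) -> bool:
    """Reject a phrase if any word in it matches a blocked term (illustrative only)."""
    words = re.findall(r"[a-z']+", phrase.lower())
    return not any(word in BLOCKED_TERMS for word in words)

print(is_allowed("taste the feeling"))  # True: no blocked words
print(is_allowed("heil hitler"))        # False: contains a blocked term
```

A static blocklist is crude, as it catches only exact word matches, but it shows the kind of guardrail Tay apparently lacked.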

“Have you ever seen what many teenagers teach parrots?”

That’s why Microsoft forcibly removed her from the platform.

The complexities surrounding Tay and her descent into racism and hatred raise questions about AI, online harassment, and Internet culture as a whole.


“This is a really good example of machine learning,” said Sinders.
