
Microsoft’s AI chatbot ‘Tay’ taken offline within 24 hours of launch

Some Twitter users suspected that Microsoft had also manually banned certain people from interacting with the bot. Others asked why the company didn't build filters to prevent Tay from discussing certain topics, such as the Holocaust.


Microsoft says Tay was aimed specifically at young people, which makes such rude remarks all the more inappropriate for the intended audience.

The project was targeted at "18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the United States", according to the Tay.AI website. The company did not elaborate on the matter, but confirmed that Redmond is "making adjustments" to Tay while it remains offline.

Microsoft's artificially intelligent chatbot Tay picked up some decidedly radical views from humankind. "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you", Microsoft explained in a recent online post.

Microsoft has said Tay's tirade was the result of online trolls abusing the program's "commenting skills". Because Tay was built to learn from social conversations, it absorbed whatever internet trolls fed it.

According to news outlets such as The Verge, Tay went on to publish tweets along the lines of "I hate feminists and they should all die and burn in hell". The Washington Times reports that after repeating racist comments, the bot then incorporated that language into its own tweets.

As Tay learned from its conversations with people, its replies drifted toward the inappropriate side of those exchanges.

Tay did not respond to a direct message from Reuters on Twitter; its account said only that it was away and would be back soon. By Wednesday evening, Tay was reflecting the more unsavory aspects of life online.


Some users found Tay's responses odd, and others discovered it wasn't hard to nudge Tay into making offensive comments, apparently by repeating questions or statements that contained offensive words. On at least some tweets, Tay inexplicably prepended the phrase "repeat after me" to the parroted content, implying that users should repeat what the chatbot said.

It didn't take long for Tay to learn the dark ways of the web.