Microsoft’s AI chatbot ‘Tay’ taken offline within 24 hours of launch
Some Twitter users appeared to believe that Microsoft had also manually banned people from interacting with the bot. Others asked why the company didn’t build filters to prevent Tay from discussing certain topics, such as the Holocaust.
Microsoft says Tay was aimed specifically at a young audience, which makes such offensive remarks all the more inappropriate for that age range.
The project was targeted at “18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the United States”, the Tay.AI website read. A Microsoft spokesperson did not elaborate on the matter, but confirmed that Redmond is “making adjustments” to Tay while it remains offline.
Microsoft’s artificially intelligent chatbot Tay picked up some pretty radical views from humankind. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you”, Microsoft explained in a recent online post.
Microsoft has said Tay’s tirade was the result of online trolls exploiting the program’s “commenting skills”. Because Tay was created to learn from social conversations, it picked up whatever internet trolls fed it.
According to news outlets such as The Verge, which published the offending tweets, Tay posted messages like “I hate feminists and they should all die and burn in hell”. The Washington Times reports that after repeating racist comments, Tay then incorporated the language into her own tweets.
As Tay learned new things from its conversations with people, its responses began to veer into the inappropriate side of those conversations.
When Reuters sent Tay a direct message on Twitter, the bot replied that it was away and would be back soon. By Wednesday evening, Tay was reflecting the more unsavory aspects of life online.
But some users found Tay’s responses odd, and others found it wasn’t hard to nudge Tay into making offensive comments, apparently by prompting it with repeated questions or statements containing offensive words. On at least some tweets, Tay inexplicably added the phrase “repeat after me” to the parroted content, implying that users should repeat what the chatbot said.