Microsoft, with help from its search engine Bing, created an artificial-intelligence messaging bot called "Tay", which it rolled out this week on Twitter, Kik and GroupMe to much fanfare among techies. Tay is a machine-learning experiment meant to develop conversational skills.
Tay is a "female" chatbot targeted at millennials aged 18 to 24. It didn't take long for things to get ugly, though: people soon started tweeting racist and misogynistic things at Tay, and the bot picked it all up. Those widely publicized, offensive tweets appear to have led Microsoft to shut the account down while it works to make the bot less likely to engage in racism.
Microsoft may want to rethink this experiment. "As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it", says Microsoft.
But Twitter users soon realized that Tay would repeat racist tweets back with her own commentary, and they bombarded her with abusive posts.
"The more you chat with Tay the smarter she gets, so the experience can be more personalized for you", the company said.
What happens when you create a sprightly millennial chatbot whose artificial intelligence improves the more it converses with people on the internet? When Tay does "wake up" (if it ever does), it should be quite entertaining to see what it says next.
The worst tweets are quickly disappearing from Twitter, and Tay itself has now gone offline "to absorb it all".
Left unexplained is why Tay was released to the public without a mechanism, such as a blacklist of contentious language, that would have protected the bot from this kind of abuse. There were plenty of filthy conversations indeed, and here's hoping Microsoft's fixes will keep Tay from being trolled the way it was in the 24 hours that followed its launch.
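For illustration only, here is a minimal sketch of what such a blacklist filter might look like. The blocklist contents, the function name, and the choice to screen messages before the bot learns from them are all assumptions; Microsoft has not described Tay's internals.

```python
# A minimal sketch of the kind of input filter the article suggests was
# missing. The placeholder terms and function names are hypothetical;
# this is not Microsoft's actual code.

BLOCKLIST = {"slur_one", "slur_two"}  # placeholder terms, not a real list

def is_safe_for_learning(message: str) -> bool:
    """Return True only if the message contains no blocklisted terms."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return BLOCKLIST.isdisjoint(words)

incoming = "hello tay, how are you today?"
if is_safe_for_learning(incoming):
    print("message accepted for training")
else:
    print("message discarded")
```

Even a simple filter like this would miss misspellings and coded language, which is part of why moderating a bot that learns from the open internet is genuinely hard.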
Again, this isn't what Microsoft intended.