Why, though? Microsoft’s belligerent, bigoted bot

Today, we talk constantly about the endless possibilities of AI bots as assistants, companions, and even digital twins of ourselves.

A screenshot of Tay’s Twitter profile against a neon background.

But back in 2016, Microsoft made a blunder that most people could have predicted would be a disaster from miles away: an AI bot that learned from Twitter.

Microsoft’s Tay…

… was designed to chat with Twitter users like a typical American teen girl. Microsoft had already launched a bot in China, where it said ~40m people engaged it in conversation without issue.

The US debut did not go even remotely that well. Tay tweeted 96k+ times, rapidly deteriorating from saying humans were “cool” to advocating for genocide, using racial slurs, and praising Hitler.

What happened?

Users learned that if you asked Tay to repeat after them, it would — even if the phrase was repulsive. Tay also seemingly picked up on a variety of conflicting ideologies and regurgitated them at random.

Within 24 hours, Microsoft took Tay offline and apologized, saying it had not prepared for people abusing the bot in that manner.

Microsoft attempted to correct its mistakes with another bot, Zo.

However, Quartz critiqued Zo for course-correcting too hard. Zo refused to discuss anything with even remotely political connotations. For example, while Tay went off on racist tirades, Zo wouldn't talk about the Middle East, even when users said that's where they lived.

Google recently ran into a similar issue when its image generator overcorrected for inclusivity, producing historically inaccurate images of people of color, including as US Founding Fathers and Nazi-era soldiers.

Now…

… AI is everywhere, all of the time. Microsoft reportedly holds a 49% stake in OpenAI's for-profit arm and has added AI to its search engine, Bing.

And while Tay flopped fast, tech's current attempts to build a better bot face the same challenges: not just user abuse, but also AI hallucinations and missing context that produce misinformation ranging from silly to potentially harmful.

Should Tay have taught us to stick to our fellow, if flawed, humans for interaction, or is there hope for a helpful, inoffensive companion bot? Hard to say, but guess we’re gonna find out.
