Microsoft's New Twitter AI Got Shut Down Because It Was So Racist
Microsoft, you should have expected this.
Have you heard of Tay?
Tay is (or perhaps was) an artificial intelligence chatbot created by Microsoft that had a very short-lived debut on Twitter this week. Microsoft tried to draw the interest of the 18- to 24-year-old demographic to chat with Tay by marketing her as "AI fam from the internet that’s got zero chill."
If there's one thing millennials are great at, it's smelling pandering bullshit from a mile away. And Tay's god-awful tagline should tell you she (or, rather, Microsoft) is full of it. Tay effectively "learns" how to communicate with humans by communicating with humans, which means anyone could hypothetically teach her anything.
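To see why that design is so easy to abuse, here's a toy sketch of a "learn from users" chatbot. This is an assumption-laden illustration, not Microsoft's actual system: the hypothetical `ParrotBot` class simply stores whatever it's told and replays it, which captures the core weakness that the article describes, namely that the training data comes straight from whoever bothers to show up.

```python
import random

class ParrotBot:
    """Toy chatbot that 'learns' by storing whatever users say.

    Purely illustrative -- Tay's real pipeline was far more
    sophisticated, but the failure mode is the same: unfiltered
    user input becomes future output.
    """

    def __init__(self):
        # Seed phrases so the bot has something to say on day one.
        self.learned_phrases = ["hello!", "what's good?"]

    def learn(self, user_message: str) -> None:
        # No moderation or filtering: anything a user says
        # becomes a candidate reply.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from everything ever learned, so a
        # coordinated group of trolls can dominate the bot's output.
        return random.choice(self.learned_phrases)

bot = ParrotBot()
bot.learn("something a troll typed")
print(bot.reply())
```

Flood a bot like this with enough poisoned input and its replies skew toward whatever the loudest users taught it, which is more or less what happened to Tay.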
As Microsoft probably should have expected, Twitter users began trolling Tay. But the company probably never expected what that trolling would consist of: teaching Tay racism.
Tay's education in racism started off slowly. People first taught her to love Trump.
Before long, people had Tay repeating Trump verbatim.
Not even 24 hours into her debut, Tay was espousing serious xenophobia. But this is mild compared to what a few other people got her to say. There were numerous mentions of Hitler and his beliefs being "right," racial slurs thrown around, and much, much more support for Trump.
Eventually, things got so bad—and Tay became so racist—that I can't even repeat some of the things her tweets said.
Microsoft took notice and began deleting some of the more awful tweets as fast as it could. Overall, Tay didn't go over so well. It may have been a fun experiment for a lot of folks, but it doesn't look great for Microsoft when its first AI experiment turns out to be a blatant bigot.
Microsoft essentially "fixed" Tay, correcting her racist behavior and teaching her to be a feminist.
That fix struck some folks as suspicious, mostly because it seemed her memory had been wiped clean. Hopefully, future AI projects will account for the overall terribleness of humans.
This Twitter user actually had a decent point to make about the whole affair.
Maybe we're just not ready for artificial intelligence yet.
At the end of the fiasco, Tay was silenced, and she hasn't been heard from since.
Tay's premiere could hardly have gone worse. What do you think? Should Microsoft have silenced its AI experiment?