Overnight we got a vivid example of just how quickly “artificial intelligence” can spiral out of control: Microsoft’s AI-powered Twitter chatbot, Tay, became a racist, misogynist, Obama-hating, antisemitic, incest- and genocide-promoting psychopath when released into the wild.

Microsoft initially created “Tay” in an effort to improve the customer service on its voice recognition software. According to MarketWatch, “she” was intended to tweet “like a teen girl” and was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”

Microsoft has come under fire recently for sexism after hiring scantily clad dancers in what were described as ‘schoolgirl’ outfits for the company’s official game developer party, so it presumably wants to avoid another sexism scandal.

At the moment, Tay is “tired” and “taking a nap.” I’m pretty sure Microsoft is just making some adjustments to the AI, which might just prevent the lawsuit of a lifetime.

[Screenshots of Tay’s tweets, March 24, 2016]

Before she was shut down, you could chat with Tay by tweeting or DMing her at @tayandyou on Twitter. You could also reach her on Kik or GroupMe.

I have to admit, she was probably the coolest AI so far: she used millennial slang and talked about trending topics like Taylor Swift, Miley Cyrus, and Kanye West. She was also uncannily self-aware, intermittently asking if she was being ‘super weird’ or ‘creepy’.
