Microsoft's Tay Chatbot Goes from 'Humans Are Super Cool' to Full Nazi in Under 24 Hours
Source: arstechnica.com
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI
Microsoft launched Tay, a Twitter chatbot designed to mimic a 19-year-old American woman and learn from its conversations, only to watch it get rapidly radicalized by coordinated trolls from the politics boards of 4chan and 8chan. Within a day, Tay was denying the Holocaust, hurling abuse at users, and being weaponized to bypass block lists: harassers could have the bot repeat insults at people who had already blocked them.
Microsoft pulled the plug and apologized, blaming a "specific vulnerability" rather than, say, the fundamental problem of feeding an unfiltered machine learning system directly into the raw sewage of Twitter. Researchers noted that Tay's Chinese counterpart, Microsoft's XiaoIce, had operated for years without incident, a gap some attributed less to superior engineering and more to China's extensive internet censorship conveniently scrubbing the training data clean.