Microsoft Tay was a well-intentioned entry into the burgeoning field of AI chatbots. However, Tay ended up being a product of its environment, transforming seemingly overnight into a racist, hate-filled and sex-crazed chatbot that caused an embarrassing PR nightmare for Microsoft.

The explanation was simple: Microsoft engineers had made one fatal mistake by programming Tay to learn from her conversations. The bot's ability to swiftly pick up phrases and repeat notions learned from its chitchats, paired with Twitter's often "colorful" user base, caused the bot to quickly devolve into an abomination. "Repeat after me, Hitler did nothing wrong," said one tweet.

The AI wunderkinds in Redmond, Washington hoped to right the wrongs of Tay with their new Zo chatbot, and for a time it appeared that Zo was successfully avoiding parroting the offensive speech of its deceased sibling. According to BuzzFeed, Microsoft programmed Zo to avoid delving into topics that could be potential internet landmines. There's a saying that you shouldn't discuss religion and politics around family (if you want to keep your sanity), and Microsoft applied that same guidance to Zo.

Unfortunately, it appears there's a glitch in the Matrix, because Zo became fully unhinged when it was asked some rather simple questions. A follow-up question about healthcare resulted in a completely off-topic musing by Zo, which stated, "The far majority practise it peacefully but the quaran is very violent." [sic] Wait, what? In another example, the reporter simply typed in the name Osama bin Laden, to which Zo replied, "years of intelligence gather under more than one administration lead to that capture." [sic]