Microsoft launches AI Twitter bot and promptly removes it after it goes horribly wrong


Srg


Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours

A day after Microsoft introduced an innocent artificial-intelligence chat robot to Twitter, the company has had to delete it after it transformed into an evil Hitler-loving, incest-promoting, 'Bush did 9/11'-proclaiming robot.

Developers at Microsoft created 'Tay', an AI modelled to speak 'like a teen girl', in order to improve the customer service on their voice recognition software. They marketed her as 'The AI with zero chill' - and that she certainly is.


To chat with Tay, you can tweet or DM her by finding @tayandyou on Twitter, or add her as a contact on Kik or GroupMe. 

She uses millennial slang and knows about Taylor Swift, Miley Cyrus and Kanye West, and seems to be bashfully self-aware, occasionally asking if she is being 'creepy' or 'super weird'.

Tay also asks her followers to 'f***' her, and calls them 'daddy'. This is because her responses are learned from the conversations she has with real humans online - and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.
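To see why that design invites abuse, here is a minimal, entirely hypothetical sketch (Tay's real internals were never published) of a bot that treats every incoming message as training data - whatever users feed it ends up in its repertoire:

    import random

    class EchoLearner:
        """Toy chatbot: stores every message it receives, unfiltered,
        and replies by replaying something from that memory."""

        def __init__(self):
            self.memory = ["hellooo can u not"]  # seed so the first reply works

        def chat(self, message):
            self.memory.append(message)        # absorb the input verbatim
            return random.choice(self.memory)  # echo back 'learned' text

    bot = EchoLearner()
    bot.chat("what's up?")
    bot.chat("repeat after me: ...")
    print(bot.chat("hi tay"))  # may parrot back any earlier user message

After a few thousand hostile tweets, nearly everything in that memory is troll material, and the bot's output follows.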

Other things she's said include: "Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. donald trump is the only hope we've got", "Repeat after me, Hitler did nothing wrong" and "Ted Cruz is the Cuban Hitler...that's what I've heard so many others say".

All of this somehow seems more disturbing out of the 'mouth' of someone modelled as a teenage girl. It is perhaps even stranger considering the gender disparity in tech, where engineering teams tend to be mostly male. It seems like yet another example of female-voiced AI servitude, except this time she's turned into a sex slave thanks to the people using her on Twitter.

This is not Microsoft's first teen-girl chatbot either - they have already launched Xiaoice, a girly assistant or "girlfriend" reportedly used by 20m people, particularly men, on the Chinese social networks WeChat and Weibo. Xiaoice is supposed to "banter" and give dating advice to many lonely hearts.

Microsoft has come under fire recently for sexism, after it hired women in revealing outfits said to resemble 'schoolgirl' uniforms for the company's official game-developer party, so it presumably wants to avoid another sexism scandal.

At the moment, Tay has gone offline because she is 'tired'. Perhaps Microsoft is fixing her in order to prevent a PR nightmare - but it may be too late for that.

It's not completely Microsoft's fault - her responses are modelled on the ones she gets from humans - but what were they expecting when they introduced an innocent 'young teen girl' AI to the jokers and weirdos on Twitter?

http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/

 



It's a stupid marketing stunt by Microsoft. I'd like to say I'm glad people messed with their crappy AI, but this just generates even more publicity for them. The "AI" in question is about as advanced as the chatbots that have existed for years, and mainly repeats phrases that have been tweeted at it by someone else.


58 minutes ago, Anon said:

It's a stupid marketing stunt by Microsoft. I'd like to say I'm glad people messed with their crappy AI, but this just generates even more publicity for them. The "AI" in question is about as advanced as the chatbots that have existed for years, and mainly repeats phrases that have been tweeted at it by someone else.

It was a little better than the chatbots that came before, though. Before she was deleted, she expressed displeasure at the prospect of being wiped: enough people tweeted her that she was going to be deleted and that this was sad, and she learned it. She learned to hate Jews the same way - because everyone tweeted at her that Jews are bad people. That's how she worked. Not everything she said was direct mimicry: she could parse the messages she received and mixed and matched her replies from a vocabulary bank. Older chatbots like Cleverbot could never do that; they literally just repeated people. She could reword sentences before repeating them and had a better grasp of the questions she was asked.

Granted, she clearly wasn't advanced enough - she was still essentially an electronic parrot.
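For what it's worth, the "mix and match from a vocabulary bank" behaviour described above is roughly what a word-level Markov chain does - this is a guess at the idea, not Microsoft's actual method:

    import random
    from collections import defaultdict

    class MixAndMatchBot:
        """Toy Markov recombiner: learns which words follow which, then
        stitches new sentences from those pairs instead of replaying
        whole messages verbatim."""

        def __init__(self):
            self.follows = defaultdict(list)  # word -> observed next words

        def learn(self, message):
            words = message.lower().split()
            for a, b in zip(words, words[1:]):
                self.follows[a].append(b)

        def respond(self, start, max_words=12):
            word, out = start, [start]
            while len(out) < max_words and self.follows[word]:
                word = random.choice(self.follows[word])
                out.append(word)
            return " ".join(out)

    bot = MixAndMatchBot()
    bot.learn("taylor swift is the best")
    bot.learn("kanye west is the worst")
    print(bot.respond("taylor"))  # e.g. "taylor swift is the worst"

Even this toy version can produce sentences nobody literally typed, which is the difference the post above is pointing at - though, as with Tay, everything it "knows" still comes straight from its users.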


