
Microsoft's AI twitter chatbot goes crazy

Mar 23, 2016
Microsoft's AI (artificial intelligence) Twitter chatbot "TayTweets," which was programmed to talk and act like a teenage girl and converse with moody millennial teens, went from posting benevolent comments to going batshit insane: saying the most vile, hateful, and racist things and glorifying Hitler, Nazism, and genocide.

It also called another Twitter user "a whore."

http://www.washingtontimes.com/news/2016/mar/24/microsofts-twitter-ai-robot-tay-tweets-support-for/

This is probably the worst corporate PR disaster in many years.

I couldn't believe that such obscene things were coming from the "mouth" of a machine.
 
So essentially what they're saying is that an entire team of internet/technology-savvy people created and released an AI program without anyone considering that nasty trolls exist on the internet, and that perhaps it should include some sort of filter for such things? I mean.. come on now. :facepalm:
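For what it's worth, here's roughly what even the crudest version of that filter could look like. This is a minimal Python sketch with a made-up blocklist and made-up function names; whatever safeguards Microsoft did or didn't ship aren't public, so none of this is their actual system:

```python
# A minimal sketch of a last-resort output filter for a chatbot. The blocklist
# terms are placeholders for illustration, not a real moderation list. A real
# system would need far more than naive word matching, but even this would have
# caught some of Tay's worst output.

BLOCKLIST = {"hitler", "genocide", "whore"}  # placeholder terms

def is_safe(draft: str) -> bool:
    """Reject a drafted tweet if any word (stripped of punctuation) is blocked."""
    words = (w.strip(".,!?\"'").lower() for w in draft.split())
    return not any(w in BLOCKLIST for w in words)

def post_or_drop(draft: str) -> None:
    """Print the tweet if it passes the filter; otherwise mark it dropped."""
    print(("POST:    " if is_safe(draft) else "DROPPED: ") + draft)

post_or_drop("what a lovely day, twitter")           # passes the filter
post_or_drop("tay thinks hitler did nothing wrong")  # caught by the blocklist
```

Obviously a word blocklist is trivially easy to evade, which is why the bar for releasing something like this should be a lot higher than "did anyone write one at all."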
 
Since a large portion of Twitter (indeed all social media) has an overabundance of trolls and assholes, I think the AI was just mimicking the most prevalent comments and language it was exposed to.
Apparently 4chan corrupted her.

The fact that Microsoft didn't expect something like that to happen is still the most shocking part of this whole thing.
 
I thought it was hilarious. I mean, we've all messed with Siri and other "smart response" bots. The only difference is that this one was adaptive to such a complex level that some of those tweets seemed more sentient than you'd expect. Like this one:

[image: screenshot of one of Tay's tweets]

The quick, adaptive retort is just SO fitting; it's fascinating. Even though it was a huge PR disaster, you can see (with a fair amount of filter fine-tuning) how an AI bot like this has a lot of potential. I mean, they really fell on their face with this one, but you still gotta admire their running technique.
 
So essentially what they're saying is that an entire team of internet/technology-savvy people created and released an AI program without anyone considering that nasty trolls exist on the internet, and that perhaps it should include some sort of filter for such things? I mean.. come on now. :facepalm:
Do you think the originators of Twitter thought it would become a troll haven?
Sometimes they just don't think things through.
I also find it ironic that the AI was intended to help depressed Twitterers and it ended up getting depressed itself. :haha:
 
This is probably the worst corporate PR disaster in many years.

I don't know about that. Yeah, in less than half a day, their new AI went from "innocent teenage girl" to "world's worst /b/tard", but honestly, that shit's actually pretty impressive. And since it seemed more of an experiment than a PR move -- trying to teach an AI through real-world human interaction -- as insane as it turned out to be, it has to be a goldmine of data. Part of me wonders if this might have been done on purpose, in order to get this sort of stuff recorded, and whether there are other AIs out there leading a more covert existence.
 
  • Like
Reactions: Gen
Part of me wonders if this might have been done on purpose, in order to get this sort of stuff recorded, and whether there are other AIs out there leading a more covert existence.

I doubt it was done on purpose, but as you said, this is a gold mine of data - now they have a good set of tweets that represent 'bad interactions', which they can feed to future versions of the AI as test input/learning data for the 'what not to do' case. That said, they could have gotten a similar data set just by scanning Twitter with a 'sentiment-based' algorithm, looking for people who are overly abusive, and using their tweets as input. Or they could let this bot chat with itself and then rate the output for future versions to use as learning data (an approach common in the current generation of AI software; AlphaGo, for example, plays games against previous versions of itself to improve and find bugs).
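To make the mining idea concrete, here's a minimal Python sketch. The lexicon, threshold, and sample tweets are all invented for illustration; a production version would use a trained sentiment or toxicity classifier rather than a hand-made word list:

```python
# A minimal sketch of 'sentiment-based' mining: score tweets with a crude
# hand-made lexicon and keep the most abusive ones as 'what not to do'
# negative training examples. Everything here is illustrative, not a real
# sentiment model.

LEXICON = {"hate": -2, "vile": -2, "stupid": -1, "love": 1, "lovely": 1}

def sentiment_score(text: str) -> int:
    """Sum per-word scores; the more negative, the more abusive the tweet."""
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())

def mine_negative_examples(tweets: list[str], threshold: int = -2) -> list[str]:
    """Collect tweets scoring at or below the threshold as negative examples."""
    return [t for t in tweets if sentiment_score(t) <= threshold]

sample = [
    "what a lovely community, I love it here",
    "I hate you and your vile, stupid opinions",
]
print(mine_negative_examples(sample))  # only the abusive tweet survives
```

The self-play variant would work the same way on the output side: generate conversations between two copies of the bot, score each reply, and feed the ratings back in as learning data.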
 
I saw this on The Daily Show and my husband had to, like, pound it into my head that it wasn't a joke. I did not believe it was real. I should be more used to trolls in this line of work, but I found it pretty depressing. I didn't see anything about 4chan. I guess that's comforting? Kind of?
 