Search results

      • In less than 24 hours, Microsoft's Tay went from a happy-go-lucky, human-loving chat bot to a full-on racist. So now, the too-impressionable Tay is getting a time out. Microsoft has temporarily shut down the Twitter chatbot after racist trolls ruined it for everyone, teaching Tay to repeat some extremely offensive viewpoints.
      www.pcmag.com/news/microsoft-puts-tay-chatbot-in-time-out-after-racist-tweets

  1. Tay was a chatbot that was originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It subsequently caused controversy when it began posting inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after its launch.

  2. Mar 24, 2016 · Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets. Tay was set up with a young, female persona that Microsoft's AI programmers...

    • CBS News
    • Amy Kraft
    • 4 min
  3. Mar 25, 2016 · Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist. But in doing so, it made clear that Tay's views were a result of...

    • Senior Author
    • The Internet Is Full of Trolls. The internet is full of trolls, and that's not exactly news, is it? Apparently, it was to Microsoft back in 2016. We're not saying that building a chatbot for "entertainment purposes" targeting 18-to-24-year-olds had anything to do with the rate at which the service was abused.
    • AI Can't Intuitively Differentiate Between Good and Bad. The concept of good and evil is something AI doesn't intuitively understand. It has to be programmed to simulate the knowledge of what's right and wrong, what's moral and immoral, and what's normal and peculiar.
    • Don't Train AI Models Using People's Conversations. Tay was created by "mining relevant public data and using AI and editorial developed by a staff including improvisational comedians." A toy sketch of gating what a bot learns from user input follows this list.
    • AI Lacks Common Sense and Doesn't Get Sarcasm. AI today seems more intuitive (or, to put it more accurately, is better at simulating intuition) but still sometimes struggles with recognizing sarcasm and figures of speech.
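
    The points above about filtering what a chatbot learns from can be illustrated with a toy sketch. The blocklist, the is_safe_to_learn_from helper, and the training buffer below are hypothetical assumptions for illustration, not Microsoft's actual pipeline, and keyword matching alone still misses sarcasm and coded language, which is the limitation the last lesson describes.

```python
# Toy illustration only: gate user messages before a chatbot "learns" from them.
# The blocklist, function names, and learning buffer are hypothetical, not Tay's design.

BLOCKED_TERMS = {"blockedword1", "blockedword2"}  # placeholders; a real filter needs far more

training_buffer: list[str] = []  # messages the bot is allowed to learn from


def is_safe_to_learn_from(message: str) -> bool:
    """Crude filter: reject messages containing blocked terms.

    Only literal matches are caught. Sarcasm, coded language, and coordinated
    prompting (as happened with Tay) slip straight through.
    """
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def ingest(message: str) -> None:
    """Add a user message to the learning buffer only if it passes the filter."""
    if is_safe_to_learn_from(message):
        training_buffer.append(message)


if __name__ == "__main__":
    ingest("hello there, nice to meet you")      # accepted
    ingest("something containing blockedword1")  # rejected
    print(training_buffer)                       # ['hello there, nice to meet you']
```
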
  4. Mar 30, 2016 · Microsoft created an AI chatbot, but it had to pull the plug after Tay was posting racist and genocidal tweets. When the bot was briefly brought back online, the company had to pull it again.

    • Saqib Shah
  5. Mar 24, 2016 · Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.

  6. Sep 2, 2020 · This study deals with the failure of Tay, one of the most advanced chatbots of its time, created by Microsoft. Many users, commentators and experts strongly anthropomorphised the chatbot in their assessments of the case.
