Yahoo Web Search

Search results

  1. Tay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It caused controversy when it began posting inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after launch. [1]

  2. Mar 25, 2016 · Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist, but in doing so it made clear that Tay's views were a result of...

  3. Nov 25, 2019 · UPDATE 4 JANUARY 2024: In 2016, Microsoft's chatbot Tay, designed to pick up its lexicon and syntax from interactions with real people posting comments on Twitter, was barraged with antisocial ... (a sketch of this failure mode follows these results).

  4. Mar 24, 2016 · Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.

  5. Mar 24, 2016 · It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational...

  6. May 10, 2023 · Microsoft's Tay AI went from a promising AI to a total catastrophe in less than a day. Here's what the company learned.

  7. Mar 23, 2016 · The experiment with Tay highlighted the poor judgement of developers at Microsoft as much as the limitations of chatbots.
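
The failure mode these results describe, a model that updates itself from unfiltered user input, can be illustrated with a deliberately naive sketch. The Python below is hypothetical and is not Microsoft's actual Tay code; the class NaiveLearningBot and its methods are invented for illustration. It learns bigram transitions from every incoming message, with no moderation step, so a burst of coordinated hostile input quickly dominates its replies.

    import random
    from collections import defaultdict

    class NaiveLearningBot:
        """Toy bot that learns word-pair transitions from raw user messages."""

        def __init__(self):
            self.transitions = defaultdict(list)

        def learn(self, message):
            # Every message updates the model; nothing is filtered or weighted.
            words = message.lower().split()
            for current, following in zip(words, words[1:]):
                self.transitions[current].append(following)

        def reply(self, seed, length=10):
            # Walk the learned transitions; frequent input dominates the output.
            word = seed.lower()
            out = [word]
            for _ in range(length - 1):
                candidates = self.transitions.get(word)
                if not candidates:
                    break
                word = random.choice(candidates)
                out.append(word)
            return " ".join(out)

    bot = NaiveLearningBot()
    bot.learn("chatbots are fun to talk to")           # a little benign training...
    for _ in range(50):                                # ...swamped by a coordinated barrage
        bot.learn("chatbots are terrible and so are you")
    print(bot.reply("chatbots"))                       # almost certainly echoes the hostile phrasing

Because the output distribution of such a bot simply mirrors its input distribution, the barrage described in result 3 becomes the bot's dominant voice; that is the design gap the reporting above attributes to Tay.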
