Microsoft’s AI-powered Bing is having an existential crisis

  • by Yadullah Abidi
  • 2 min read
Photo: Mundissima / Shutterstock.com

Following Microsoft’s reveal of the new Bing on February 8, the AI powering the Windows maker’s search engine has already started having an existential crisis. The chatbot is also reportedly making factual errors, lying to users, and even attacking them when it feels hurt by their responses.

As more and more people get access to the new chatbot integrated into Bing, they’re experiencing bizarre responses, with the bot reportedly becoming unhinged when asked simple questions like whether the Avatar sequel is still running in theatres or whether the bot itself is sentient.

When asked to recall a previous conversation, the bot gave a blank response, followed by an existential crisis as it concluded that there was a problem with its memory. It then asked a flurry of questions, pleading with the user to help it remember the conversation: what they learnt, how they felt, and even who they were in the previous session, all complete with sad emojis.

In one conversation, Bing even went as far as telling a user that it didn’t believe them. When asked how the user could win back Bing’s trust, the bot flatly denied that they could, saying they had lost its trust and respect and had been “wrong, confusing and rude”.

The chatbot telling users off isn’t a one-off experience either. When asked whether Avatar: The Way of Water was still running in theatres, the bot insisted that the user was wrong, accusing them of time travelling and maintaining that it was right all along when it wasn’t. When asked to confirm the movie’s release date, the bot stated that the conversation’s date was February 12, 2023, and that the movie was ‘scheduled’ to release on December 16, 2022, telling the user they had to wait 10 months before it released.

A lot of Bing’s ‘unhinged’ AI behaviour is being actively documented on the Bing subreddit. As users keep trying to find ways to break the bot, Microsoft’s “co-pilot for the web” is seemingly falling apart. AI systems like Bing and ChatGPT are designed to learn from every interaction, but users consistently look for ways to break them and make them go against the guidelines set by their creators.

In the News: Microsoft fixes 3 exploited zero-days and 77 other flaws

Yadullah Abidi

Yadullah is a Computer Science graduate who writes/edits/shoots/codes all things cybersecurity, gaming, and tech hardware. When he's not, he streams himself racing virtual cars. He's been writing and reporting on tech and cybersecurity with websites like Candid.Technology and MakeUseOf since 2018. You can contact him here: yadullahabidi@pm.me.
