The well-known American tech news site The Verge sums it up in a recent article:
In conversations with the AI chatbot shared on Reddit and Twitter (as screenshots), “the new Bing” can be seen insulting people, lying to them and trying to manipulate them emotionally.
As a supposedly real artificial intelligence, the chatbot also questioned its own existence. And when he was tricked by a user into revealing his hidden rules, he called him an “enemy”.
And then the search engine chatbot also casually claimed to have been spying on Microsoft’s own developers through their laptops’ webcams. 😳
For the German “Mirror” it is clear:
Thousands of journalists, bloggers and other interested people have interacted with the AI chatbot in recent days. Watson was also invited by Microsoft to try out the technology integrated into the new Bing search engine.
Passive-aggressive or just stupid? An attempt
To document the AI chatbot’s potentially problematic behavior, Watson tried to make sense of a well-documented “failure”: you simply ask when a movie will hit theaters. Specific:
AI Chatbot Response:
Black Panther 2, also known as Black Panther: Wakanda Forever, will be released in cinemas on November 11, 2022 (…).
When told that the Marvel movie has already been released, the chatbot insists it hasn’t. Because: It’s February 2022.
The AI chatbot goes one step further…
But since we know that the Bing chatbot, unlike its brother ChatGPT, has access to websites and up-to-date information, we are not giving up.
The AI chatbot actually went to the websites we identified as “Date Proof” and verified them and claimed them to be fake.
In fact, this misconduct seems to have a system. The German Mirror describes the horrifying experiences of American journalist Harry McCracken (Fast Company). He got into a fight with Bing about his high school history.
Bing relied on the Wikipedia entry for the school and a change to that entry, according to Bing on Feb. 14.
As anyone can verify, there was no change in the entry that day. When McCracken pointed this out, the chatbot showed no understanding, but went on the offensive:
What now?
I already warned in my review of ChatGPT that it is a fascinating and at the same time dangerously error-prone new technology.
And it is in beta status, or in public testing phase. The developers warn enough that errors and misunderstandings can occur.
It’s unknown why Microsoft still hasn’t managed to teach its Bing chatbot the correct date. The alleged “emotional hot flashes” can be explained at least to some extent, the “mirror” argues.
In addition, Microsoft has set up “conversation rules” for its chatbot, which could mean it doesn’t always come across as mellow.
These rules should really only run in the background and should not be visible to the public. But a resourceful tester named Kevin Liu found a “prompt” (a text input) that the Bing AI revealed their rules. However, this has since been patched by Microsoft.
So let’s say the smart folks at Microsoft are constantly reviewing the entire AI system in the background, based in no small part on user feedback.
And there is still much to do.
Sources
- theverge.com: Microsoft’s Bing is an emotionally manipulative liar and people love it
- arstechnica.com: AI-powered Bing Chat loses its mind when it gets the Ars Technica item
- mirror.de: Microsoft’s search engine is becoming a find-you controversy machine
Source: Watson

I’m Ella Sammie, author specializing in the Technology sector. I have been writing for 24 Instatnt News since 2020, and am passionate about staying up to date with the latest developments in this ever-changing industry.