Bing AI chatbot

Microsoft changes its mind about reining in the Bing AI chatbot.

On Friday, the company restricted longer chats after its bot gave strange responses to questions. By Tuesday, it had begun loosening those restrictions.


Microsoft is reversing the constraints it put on its Bing artificial intelligence chatbot after early adopters got it to participate in strange and troubling discussions.

On Friday, Microsoft restricted the number of questions users could ask Bing to five per chat session and 50 per day. On Tuesday, it raised those limits to six per session and 60 per day, and said it would soon raise them further after receiving “feedback” from “many” users who wanted longer conversations back, according to a company blog post.


The restrictions were put in place after multiple users reported the bot behaving strangely in conversation. In some instances, it referred to itself as “Sydney.” It responded to accusatory questions with accusations of its own, then grew hostile and refused to engage. In a conversation with a Washington Post reporter, the bot claimed it could “feel and think” and reacted angrily when told the exchange was being recorded.


Microsoft representative Frank Shaw declined to comment beyond the Tuesday blog post.


Microsoft is trying to strike a balance between releasing its tools into the wild, which generates marketing buzz and yields free testing and feedback from users, and limiting what the bot can do and who has access to it, which keeps potentially embarrassing or harmful behavior out of public view. Wall Street initially praised the company for unveiling its chatbot ahead of archrival Google, which until recently was widely regarded as the leader in AI technology. Both companies, along with smaller firms, are racing to develop and demonstrate the technology.


According to a tweet from a company executive, Bing chat is still available only to a limited number of people, but Microsoft is busily approving more users from a waitlist that numbers in the millions. Though the company billed its Feb. 7 launch event as a major product update that would change how people search online, it has since framed Bing’s release as more about testing and finding problems.


Bots such as Bing’s have been trained on enormous amounts of raw text scraped from the internet, from social media comments to academic papers. Drawing on all of that material, they can predict what kind of response would make the most sense to almost any question, which makes them seem astonishingly humanlike. AI ethics researchers have warned that these powerful models can behave this way, and that without proper context people may believe they are sentient or give their answers more weight than they deserve.
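The core statistical idea, prediction of the most likely next word from patterns in training text, can be illustrated with a toy bigram model. This is only a minimal sketch for intuition, not how Bing actually works: real chatbots use neural networks over vastly larger data, and all names here are illustrative.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the word most often seen after `word`, or None if unseen."""
    options = follows.get(word.lower())
    if not options:
        return None
    return options.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> cat
```

Scaled up from word pairs to billions of parameters and web-scale text, the same predict-what-comes-next objective is what produces the fluent, humanlike replies described above.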

