In internal discussions, Google staff repeatedly criticized the company's chatbot Bard, calling it "a pathological liar" and pleading with the company not to launch it.
This is according to an eye-opening Bloomberg report, which cites interviews with 18 current and former Google employees as well as screenshots of internal messages.
In those discussions, one employee noted that Bard frequently gave users dangerous advice on topics such as how to land a plane or how to scuba dive.
Another employee wrote, "Bard is worse than useless: please do not launch."
According to Bloomberg, the company even "overruled a risk evaluation" from an internal safety team that concluded the system was not ready for general use.
Bloomberg's report suggests Google has sidelined ethical concerns in order to keep pace with rivals such as Microsoft and OpenAI.
The company fired AI ethics researchers Timnit Gebru and Margaret Mitchell in late 2020 and early 2021 after they co-authored a research paper identifying flaws in the same kind of AI language systems that underpin chatbots like Bard.
Google spokesperson Brian Gabriel told Bloomberg that AI ethics remains a top priority for the company. "We are continuing to invest in the teams that work on applying our AI Principles to our technology," Gabriel said.