According to a report by the business news agency Bloomberg, Google’s own employees apparently consider the company’s AI chatbot Bard extremely deficient. Employees reportedly tested the AI before its release as a test version in March and described it as “embarrassing” and a “notorious liar”. Bloomberg cites conversations with 18 current and former employees, as well as internal documents, as its sources. Bard is currently publicly available only to trial users in the US and UK, who must join a waitlist.
Employees also uncovered serious errors: when asked how to land an airplane, Bard repeatedly gave instructions that, if followed, would lead to a crash. The AI also generated answers to questions about scuba diving that “would likely result in serious injury or death,” Bloomberg writes, citing assessments by Google employees.
“Please do not launch”
In February, an employee raised the problems in an internal discussion group: “Bard is worse than useless: please do not launch”. Nearly 7,000 people read the message, and many agreed that Bard gave contradictory or outright wrong answers to simple factual questions. Google nevertheless decided to move ahead with Bard, at least in part, and made a test version of the conversational AI available to a limited number of users.
Google Bard was first shown publicly in February – and an AI blunder sent the share price of parent company Alphabet plummeting: during the demonstration, Bard answered a question about the James Webb Space Telescope incorrectly. Bard is based on the large language model LaMDA and is currently available only in English. Publicly, Google describes Bard as an “experiment”. The company also points out that such AIs, for example, “reflect the prejudices and stereotypes from the real world” and can provide misleading information.
Microsoft leads, Google lags behind
But such caveats do not change the fact that Google sees itself under great pressure to act since Microsoft integrated the far-from-error-free chat AI ChatGPT into its search engine. Microsoft’s plans to integrate it into its other services and applications only increase the pressure on Google to have something comparable ready. Accordingly, Google’s list of announcements promising AI in nearly every product is long.
According to reports, Google is working at high speed on Project Magi, an integration of AI into Google Search. Because of the rushed timetable, the new functions are initially expected to reach only a smaller group of users, and only in the US. Google could offer text AI in its search engine as early as May. In parallel to Magi, an entirely new search engine is also said to be in development.
Turn a blind eye?
Google employees concerned with safety and the ethical impact of products have been instructed not to stand in the way of developing generative AI services, Bloomberg writes. Jen Gennai, head of Google’s AI governance department, also reportedly called for more willingness to compromise in meetings about when a product is ready to launch.
Google told Bloomberg that responsible AI remains a top priority for the company. “We continue to invest in the teams working to apply our AI principles to our technology,” a company spokesperson said.