Google released its AI chatbot Bard to rival ChatGPT and Microsoft’s Bing chatbot, but comparisons have not been flattering to Bard. Sundar Pichai says upgrades are on the way.
Google CEO Sundar Pichai has responded to criticism of the company’s experimental AI chatbot Bard, promising that Google will be upgrading Bard soon.
“We clearly have more capable models,” Pichai said in an interview on The New York Times’ Hard Fork podcast. “Pretty soon, perhaps as this [podcast] goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities; be it in reasoning, coding, it can answer maths questions better. So you will see progress over the course of next week.”
“In some ways I feel like we took a souped-up Civic and put it in a race with more powerful cars.”
Pichai noted that Bard is running on a “lightweight and efficient version of LaMDA,” an AI language model that focuses on delivering dialog. “In some ways I feel like we took a souped-up Civic and put it in a race with more powerful cars,” said Pichai. PaLM, by comparison, is a more recent language model; it’s larger in scale and Google claims it is more capable when dealing with tasks like common-sense reasoning and coding problems.
Bard was first released to public users on March 21st, but failed to garner the attention or acclaim won by OpenAI’s ChatGPT and Microsoft’s Bing chatbot. In The Verge’s own tests of these systems, we found that Bard was consistently less useful than its rivals. Like all general-purpose chatbots, it is able to respond to a wide range of questions, but its answers are generally less fluent and imaginative, and fail to draw on reliable data sources.
Pichai suggested that part of the reason for Bard’s limited capabilities was a sense of caution within Google. “To me, it was important to not put [out] a more capable model before we can fully make sure we can handle it well,” he said.
Pichai also confirmed that he was talking with Google co-founders Larry Page and Sergey Brin about the work (“Sergey has been hanging out with our engineers for a while now”) and that while he himself never issued the infamous “code red” to scramble development, there were probably people in the company who “sent emails saying there is a code red.”
Pichai also discussed concerns that development of AI is currently moving too fast and perhaps poses a threat to society. Many in the AI and tech communities have been warning about the dangerous race dynamic currently in play between companies including OpenAI, Microsoft, and Google. Earlier this week, an open letter signed by Elon Musk and top AI researchers called for a six-month pause on the development of these AI systems.
“This is going to need a lot of debate, no-one knows all the answers.”
“In this area, I think it’s important to hear concerns,” said Pichai regarding the open letter calling for the pause. “And I think there is merit to be concerned about it ... This is going to need a lot of debate, no-one knows all the answers, no one company can get it right.” He also said that “AI is too important an area not to regulate,” but suggested it was better to simply apply regulations in existing industries — like privacy regulations and regulations in healthcare — than create new laws to tackle AI specifically.
Some experts worry about immediate risks, like chatbots’ tendency to spread misinformation, while others warn about more existential threats, suggesting that these systems are so difficult to control that once they are connected to the wider web they could be used destructively. Some suggest that current programs are also drawing closer to what’s known as artificial general intelligence, or AGI: systems that are at least as capable as a human across a wide range of tasks.
“It is so clear to me that these systems are going to be very, very capable, and so it almost doesn’t matter whether you’ve reached AGI or not,” said Pichai. “Can we have an AI system which can cause disinformation at scale? Yes. Is it AGI? It really doesn’t matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment.”