Google is not releasing its rival to ChatGPT because of ‘risk’


Google is not releasing its own conversational AI bot because of fears about reputational risk, the company has said.

The company is worried that the system will give answers that sound legitimate but could be significantly wrong, according to a CNBC report of an internal meeting.

In recent days, the ChatGPT system created by OpenAI has proven hugely popular. Its ability to create everything from fake TV scripts to programming code has become the basis for viral tweets and fears about the future of many industries.

The popularity of the system has led many to wonder whether Google would make its own system public, and whether it had missed an opportunity by not doing so. That same question was asked during an all-hands meeting at the company this week, CNBC reported.

But Alphabet chief executive Sundar Pichai and Google’s head of AI Jeff Dean said that the company had to move “conservatively” because of its size and the “reputational risk” that such an app could pose.

Google’s system is called LaMDA, which stands for Language Model for Dialogue Applications. It provoked a minor scandal earlier this year when a Google engineer claimed that it had become “sentient”, an assertion dismissed by most experts.

Google says that the technology built as part of the development of LaMDA is already used in its search offering. The system can spot when people may need personal help, for instance, and will direct them to organisations that can offer it.

But it will stay primarily in those contexts for now, Google reportedly said, until it can be relied upon with more confidence.

“We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we’ve been using them to date,” Mr Dean said. “But, it’s super important we get this right.”

The problems of such AI systems have been repeatedly detailed. They include fears about bias, especially when the system is trained on limited or skewed data, and the fact that it can be hard to know whether an answer is truly correct.

Many of those issues have already been seen in ChatGPT, despite the advanced technologies underpinning it. The system will often confidently and convincingly reply to questions with answers that are wildly wrong, for instance.