Company that made an AI its chief executive sees stocks climb

An AI artwork on display at the Misalignment Museum, which opened to the public in San Francisco on 9 March 2023 (AFP via Getty Images)

A video game company that appointed an artificial intelligence bot as its chief executive has announced a market-beating stock increase.

China-based NetDragon Websoft named the AI program Tang Yu as its chief executive in August, tasking it with supporting decision-making for the company’s daily operations.

The “AI-powered virtual humanoid robot” has managed to outperform Hong Kong’s Hang Seng Index in the six months since it was appointed.

The share price of NetDragon Websoft is now up 10 per cent, pushing the company’s valuation above $1 billion.

“We believe AI is the future of corporate management, and our appointment of Ms Tang Yu represents our commitment to truly embrace the use of AI to transform the way we operate our business and ultimately drive our future strategic growth,” company founder Dejian Liu said at the time of the bot’s hiring.

Dr Liu added that the appointment was part of the firm’s strategy to transform into a “metaverse-based working community”.

The company claims to be the first in the world to put an AI-powered bot in charge of its operations, though Alibaba founder Jack Ma has predicted that in the future “a robot will likely be on the cover of Time magazine as the best CEO”.

Reports from China of the bot’s success come amid a surge in interest in generative AI technology, with several leading artificial intelligence firms launching new AI tools.

On Tuesday, OpenAI unveiled the successor to its hugely popular ChatGPT software, named GPT-4, billing it as a “much more nuanced” version of its predecessor.

GPT-4 has already proved itself capable of passing a wide range of exams, including the bar exam, the LSAT, and the SAT’s reading and maths tests.

Despite its abilities, OpenAI has warned users not to rely on the technology for anything critical, as it has a tendency to “hallucinate” facts and is not fully reliable.

“Great care should be taken when using language model outputs,” the company said, “particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.”