ChatGPT's new feature, Copilot AI updates, and an AI translator: This week in new AI launches
Each week, Quartz rounds up product launches, updates, and funding news from artificial intelligence-focused startups and companies.
Here’s what’s going on this week in the ever-evolving AI industry.
ChatGPT’s new feature, canvas
OpenAI introduced canvas, a new feature for ChatGPT that allows users to “collaborate” with the chatbot on writing and coding projects. Canvas, which was made available in beta to ChatGPT Plus and Team users this week, was built with the startup’s GPT-4o model. OpenAI plans to make canvas available to free ChatGPT users when it’s out of beta mode.
The feature opens in a separate window and “can give inline feedback and suggestions” on projects the way a copy editor or code reviewer would, OpenAI said.
Microsoft’s new AI features for Copilot+ PCs and Windows 11
Microsoft (MSFT) announced new AI features for Windows 11 and its Copilot+ PCs, which are built to handle AI-powered tasks. The “next wave” of AI features includes Recall, a tool that saves screenshots to remind users of what they previously saw on their PC, and Click to Do, which suggests “quick actions” over images and text.
The company also released a Windows 11 update that moves the operating system onto a foundation designed to run AI applications and services.
DeepL’s first U.S. tech hub
German AI startup DeepL launched its first U.S.-based tech hub in New York City this week. The startup, which develops leading-edge AI translation and writing tools, said the hub will “focus on accelerating research and product development” and strengthen its market leadership in the U.S.
The company also added two new members to its C-suite: chief technology officer Sebastian Enderlein and chief marketing officer Steve Rotter.
“Launching DeepL’s first US tech hub in New York City positions us at the center of one of the largest talent pools in the market and brings us closer to our customers, including many Fortune 500 companies,” Jarek Kutylowski, founder and chief executive of DeepL, said in a statement. “This hub will drive our focus on product innovation and engineering, empowering us to deliver cutting-edge language AI solutions that help our clients scale and break down language barriers.”
Google Cloud’s partnership with AI startup Augmented Intelligence
Augmented Intelligence (AUI), a startup focused on AI agents, announced a partnership with Google Cloud (GOOGL) “aimed at accelerating the deployment of AI agents for consumer and enterprise companies.”
Apollo, the startup’s “agentic” language model, “is built with a neuro-symbolic architecture to enable a new generation of conversational agents,” AUI said. The model combines generative AI’s “conversational skills” with the predictability and actionability of rule-based AI.
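AUI has not published Apollo’s internals. As a rough, hypothetical sketch of the general neuro-symbolic pattern the company describes (a generative model proposes replies while a rule-based layer keeps behavior predictable), here is a minimal example; the rules and the stubbed generator are invented for illustration and are not part of AUI’s product.

```python
# Hypothetical sketch of a rule-gated conversational agent (not AUI's Apollo).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    allows: Callable[[str], bool]  # returns True if a proposed reply is permitted

# The "symbolic" side: explicit, auditable rules.
RULES = [
    Rule("no_refund_promises", lambda reply: "refund" not in reply.lower()),
    Rule("no_account_numbers", lambda reply: "account number" not in reply.lower()),
]

def generate_reply(user_message: str) -> str:
    # The "neural" side: stand-in for a call to a generative language model.
    return f"Happy to help with that: {user_message}"

def respond(user_message: str) -> str:
    proposal = generate_reply(user_message)
    for rule in RULES:
        if not rule.allows(proposal):
            # A violated rule overrides the generated text, keeping behavior predictable.
            return f"Let me connect you with a human agent (policy: {rule.name})."
    return proposal

if __name__ == "__main__":
    print(respond("I'd like to change my delivery address."))
```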
“We’re thrilled to have the support of Google Cloud, and our newly announced strategic partnership will help us reach and co-sell to a vast pool of customers who have come to expect the highest standards of security and compliance from Google Cloud,” Ohad Elhelo, chief executive of Augmented Intelligence, said in a statement shared with Quartz. “AUI provides Google Cloud customers with a powerful solution tailored to any conversational use-case or industry.”
AITran’s debut in China
AITran, a translation platform powered by AI, made its debut at the China-ASEAN Expo this week. The platform, which has “military-grade encryption and cutting-edge AI,” makes “near-instantaneous” translations, supports multiple users, and uses “advanced noise-cancelling technology” so users can translate in loud environments.
The platform is available in the Asian market, but AITran plans to expand to Europe and North America. Its app can be downloaded from Google Play and the App Store.
Liquid AI’s new foundation models
Liquid AI, a spin-off of MIT, launched its Liquid Foundation Models (LFMs), which it says “bring unprecedented levels of performance and efficiency to generative AI.” Compared to GPT models, Liquid’s LFMs are more cost-effective and power-efficient, the startup said.
“We build AI systems from a new set of algorithms for data curation, pre-, mid-, and post-training, model architecture design, and evaluation metrics,” Ramin Hasani, Liquid AI’s chief executive, said in a statement. “Our new methods in designing foundation models unlock a new scaling law for LLMs. We’ve improved quality, cost-effectiveness, and power efficiency compared to today’s models at every scale.”
Mostly AI’s synthetic text functionality
Mostly AI, a pioneer of structured synthetic data, launched its synthetic text functionality, which gives Fortune 500 companies, including Databricks and Amazon Web Services (AMZN), access to a “vast amount of proprietary text” to train and fine-tune large language models, or LLMs — without compromising user privacy, it said.
On the Mostly AI platform, users can upload original text data, such as emails and transcripts of customer support calls, and choose an open-source language model from Hugging Face to generate the synthetic data. The original data is used to fine-tune the LLM on the Mostly AI platform, which then generates synthetic text that can be downloaded or stored in a database.
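Mostly AI’s own pipeline runs on its platform and is not public, but the general pattern it describes (fine-tune an open-source Hugging Face model on proprietary text, then sample synthetic text from it) can be sketched roughly as follows. The model choice, file name, and training settings below are illustrative assumptions, not Mostly AI’s actual configuration.

```python
# Rough illustration of the fine-tune-then-generate pattern (not Mostly AI's API).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # any open-source causal LM from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Step 1: load the original (proprietary) text, e.g. support-call transcripts.
raw = load_dataset("text", data_files={"train": "support_transcripts.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Step 2: fine-tune the open-source model on that data.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Step 3: sample synthetic text from the fine-tuned model for downstream use.
prompt = tokenizer("Customer:", return_tensors="pt")
output = model.generate(**prompt, max_new_tokens=200, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```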
“Today, AI training is hitting a plateau as models exhaust public data sources and yield diminishing returns,” Tobias Hann, chief executive of Mostly AI, said in a statement. “To harness high-quality, proprietary data, which offers far greater value and potential than the residual public data currently being used, global enterprises must take the leap and leverage both structured and unstructured synthetic data to safely train and deploy forthcoming generative AI solutions.”