OpenAI expands its custom model training program
OpenAI is expanding Custom Model, its program to help enterprise customers develop tailored generative AI models built on its technology for specific use cases, domains and applications.
Custom Model launched last year at OpenAI's inaugural developer conference, DevDay, offering companies an opportunity to work with a group of dedicated OpenAI researchers to train and optimize models for specific domains. "Dozens" of customers have enrolled since. But OpenAI says that, in working with this initial crop of users, it's come to realize the program needs to grow in order to further "maximize performance."
Hence assisted fine-tuning and custom-trained models.
Assisted fine-tuning, a new component of the Custom Model program, leverages techniques beyond standard fine-tuning -- such as "additional hyperparameters and various parameter efficient fine-tuning methods at a larger scale," in OpenAI's words -- to help organizations set up data training pipelines, evaluation systems and other supporting infrastructure that bolster model performance on particular tasks.
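OpenAI doesn't say which parameter-efficient methods it has in mind, but a common example of the technique is low-rank adaptation (LoRA), where the pretrained weights stay frozen and only a small low-rank update is trained. The PyTorch sketch below is purely illustrative of that idea -- the layer, dimensions and hyperparameters are hypothetical and not drawn from OpenAI's tooling.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update (LoRA)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the adapter factors are trained.
        for p in self.base.parameters():
            p.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        # Low-rank factors: delta_W = B @ A, far fewer parameters than W itself.
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Toy usage: wrap a single (hypothetical) projection layer of a pretrained model.
layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # the adapter is a small fraction of the layer
```

The appeal for enterprises is that only the adapter weights need to be stored and trained per use case, which is what makes this family of methods practical "at a larger scale."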
As for custom-trained models, they're models built with OpenAI -- using the company's base models and tools (e.g. GPT-4) -- for customers that "need to more deeply fine-tune their models" or "imbue new, domain-specific knowledge," OpenAI says.
OpenAI gives the example of SK Telecom, the Korean telecommunications giant, which worked with OpenAI to fine-tune GPT-4 to improve its performance in "telecom-related conversations" in Korean. Another customer, Harvey -- which is building AI-powered legal tools with support from the OpenAI Startup Fund, OpenAI's AI-focused venture arm -- teamed up with OpenAI to create a custom model for case law that incorporated hundreds of millions of words of legal text and feedback from licensed expert attorneys.
"We believe that in the future, the vast majority of organizations will develop customized models that are personalized to their industry, business, or use case," OpenAI writes in a blog post. "With a variety of techniques available to build a custom model, organizations of all sizes can develop personalized models to realize more meaningful, specific impact from their AI implementations."
OpenAI is flying high, reportedly nearing an astounding $2 billion in annualized revenue. But there's surely internal pressure to maintain pace, particularly as the company plans a $100 billion data center co-developed with Microsoft (if reports are to be believed). The cost of training and serving flagship generative AI models isn't coming down anytime soon, after all, and consulting work like custom model training might just be the thing to keep revenue growing while OpenAI plots its next moves.
Fine-tuned and custom models could also lessen the strain on OpenAI's model serving infrastructure. Tailored models are in many cases smaller and more performant than their general-purpose counterparts, and -- as the demand for generative AI reaches a fever pitch -- no doubt present an attractive solution for a historically compute-capacity-challenged OpenAI.
Alongside the expanded Custom Model program and custom model building, OpenAI today unveiled new fine-tuning features for developers working with GPT-3.5, including a new dashboard for comparing model quality and performance, support for integrations with third-party platforms (starting with the AI developer platform Weights & Biases) and other tooling enhancements. Mum's the word on new fine-tuning options for GPT-4, however; fine-tuning for that model launched in early access during DevDay.
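For context, these features sit on top of OpenAI's existing self-serve fine-tuning API. A minimal job creation with the Python SDK looks roughly like the sketch below; the file name, project name and the exact shape of the Weights & Biases integration payload are illustrative assumptions rather than values from the announcement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples (hypothetical file name).
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a GPT-3.5 fine-tuning job. The integrations payload mirrors the
# Weights & Biases hookup mentioned above; its exact field names are an assumption.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=training_file.id,
    hyperparameters={"n_epochs": 3},
    integrations=[{"type": "wandb", "wandb": {"project": "gpt35-finetune-demo"}}],
)

print(job.id, job.status)
```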