The emerging types of language models and why they matter


AI systems that understand and generate text, known as language models, are the hot new thing in the enterprise. A recent survey found that 60% of tech leaders said their budgets for AI language technologies increased by at least 10% in 2020, while 33% reported a 30% increase.

But not all language models are created equal. Several types are emerging as dominant, including large, general-purpose models like OpenAI’s GPT-3 and models fine-tuned for particular tasks (think answering IT desk questions). A third category exists at the edge: models that tend to be highly compressed and limited to a few capabilities, designed specifically to run on internet of things devices and workstations.

These different approaches have major differences in strengths, shortcomings and requirements — here’s how they compare and where you can expect to see them deployed over the next year or two.

Large language models

Large language models are, generally speaking, tens of gigabytes in size and trained on enormous amounts of text data, sometimes at the petabyte scale. They’re also among the biggest models in terms of parameter count, where a “parameter” refers to a value the model can change independently as it learns. Parameters are the parts of the model learned from historical training data and essentially define the skill of the model on a problem, such as generating text.

“Large models are used for zero-shot scenarios or few-shot scenarios where little domain-[tailored] training data is available and usually work okay generating something based on a few prompts,” Fangzheng Xu, a Ph.D. student at Carnegie Mellon specializing in natural language processing, told TechCrunch via email. In machine learning, “few-shot” refers to the practice of training a model with minimal data, while “zero-shot” implies that a model can learn to recognize things it hasn’t explicitly seen during training.

“A single large model could potentially enable many downstream tasks with little training data,” Xu continued.
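To make the distinction concrete, here is a minimal sketch of zero-shot and few-shot prompting for a simple sentiment task. The prompts are hypothetical, and generate() is a stand-in for whichever large-model API or library a team actually calls.

```python
# Hypothetical sketch: the same classification task framed zero-shot and few-shot.
# generate() is a placeholder for a call to any large language model API.

zero_shot_prompt = (
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: The installation process took all weekend.\n"
    "Sentiment:"
)

few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n"
    "Review: Setup was painless and support answered within minutes.\n"
    "Sentiment: Positive\n"
    "Review: The dashboard crashes every time I export a report.\n"
    "Sentiment: Negative\n"
    "Review: The installation process took all weekend.\n"
    "Sentiment:"
)

# A sufficiently large model can often answer either prompt; the few-shot
# version simply shows it a handful of labeled examples before the real query.
# print(generate(zero_shot_prompt))
# print(generate(few_shot_prompt))
```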

The usage of large language models has grown dramatically over the past several years as researchers develop newer — and bigger — architectures. In June 2020, AI startup OpenAI released GPT-3, a 175 billion-parameter model that can generate text and even code given a short prompt containing instructions. Open research group EleutherAI subsequently made available GPT-J, a smaller (6 billion parameters) but nonetheless capable language model that can translate between languages, write blog posts, complete code and more. More recently, Microsoft and Nvidia open-sourced a model dubbed Megatron-Turing Natural Language Generation (MT-NLG), which at 530 billion parameters is among the largest models for reading comprehension and natural language inference developed to date.

“One reason these large language models remain so remarkable is that a single model can be used for tasks” including question answering, document summarization, text generation, sentence completion, translation and more, Bernard Koch, a computational social scientist at UCLA, told TechCrunch via email. “A second reason is because their performance continues to scale as you add more parameters to the model and add more data … The third reason that very large pretrained language models are remarkable is that they appear to be able to make decent predictions when given just a handful of labeled examples.”

Startups including Cohere and AI21 Labs also offer models akin to GPT-3 through APIs. Other companies, particularly tech giants like Google, have chosen to keep the large language models they’ve developed in house and under wraps. For example, Google recently detailed — but declined to release — a 540 billion-parameter model called PaLM that the company claims achieves state-of-the-art performance across language tasks.

Large language models, open source or no, all have steep development costs in common. A 2020 study from AI21 Labs pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. Inference — actually running the trained model — is another drain. One source estimates the cost of running GPT-3 on a single AWS instance (p3dn.24xlarge) at a minimum of $87,000 per year.
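That annualized figure is roughly what an always-on GPU instance costs at a discounted hourly rate. The back-of-envelope arithmetic below assumes a reserved price of about $10 per hour, an illustrative assumption rather than a quoted AWS rate; on-demand pricing runs several times higher.

```python
# Rough annual cost of keeping one GPU instance running around the clock.
# The $10/hour figure is an assumed discounted (reserved) rate, not a quote.
hourly_rate_usd = 10.0
hours_per_year = 24 * 365            # 8,760 hours

annual_cost = hourly_rate_usd * hours_per_year
print(f"~${annual_cost:,.0f} per year")  # ~$87,600 per year
```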

“Large models will get larger, more powerful, versatile, more multimodal, and cheaper to train. Only big tech and extremely well-funded startups can play this game,” Vu Ha, a technical director at the AI2 Incubator, told TechCrunch via email. “Large models are great for prototyping, building novel proof-of-concepts, and assessing technical feasibility. They are rarely the right choice for real-world deployment due to cost. An application that processes tweets, Slack messages, emails, and such on a regular basis would become cost prohibitive if using GPT-3.”

Large language models will continue to be the standard for cloud services and APIs, where versatility and enterprise access matter more than latency. But despite recent architectural innovations, these types of language models will remain impractical for the majority of organizations, whether in academia, the public sector or the private sector.

Fine-tuned language models

Fine-tuned models are generally smaller than their large language model counterparts. Examples include OpenAI’s Codex, a direct descendant of GPT-3 fine-tuned for programming tasks. While it still contains billions of parameters, Codex is both smaller than GPT-3 and better at generating — and completing — strings of computer code.

Fine-tuning can improve a model’s ability to perform a task, for example answering questions or generating protein sequences (as in the case of Salesforce’s ProGen). But it can also bolster a model’s understanding of certain subject matter, like clinical research.

“Fine-tuned … models are good for mature tasks with lots of training data,” Xu said. “Examples include machine translation, question-answering, named entity recognition, entity linking [and] information retrieval.”

The advantages don’t stop there. Because fine-tuned models are derived from existing language models, they don’t take nearly as much time — or compute — to train or run. (Larger models like those mentioned above can take weeks to train, or require far more computational power to train in days.) They also don’t require as much data: GPT-3 was trained on 45 terabytes of text, versus the 159 gigabytes on which Codex was trained.
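To illustrate why fine-tuning is comparatively cheap, here is a minimal sketch that adapts a small pretrained model to a labeled classification dataset with the Hugging Face Transformers library. The checkpoint, dataset and hyperparameters are placeholder assumptions, not a recipe from any of the systems described above.

```python
# Minimal fine-tuning sketch (illustrative; checkpoint and dataset are assumptions).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"   # a small pretrained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")           # stand-in for a domain-specific dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-model",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model,
                  args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)))
trainer.train()  # minutes to hours on a single GPU, versus weeks for pretraining from scratch
```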

Fine-tuning has been applied to many domains, but one especially strong, recent example is OpenAI’s InstructGPT. Using a technique called “reinforcement learning from human feedback,” OpenAI collected a data set of human-written demonstrations on prompts submitted to the OpenAI API and prompts written by a team of human data labelers. They leveraged these data sets to create fine-tuned offshoots of GPT-3 that — in addition to being a hundredth the size of GPT-3 — are demonstrably less likely to generate problematic text while closely aligning with a user’s intent.
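OpenAI has not published that training code, but the reward-modeling step at the heart of reinforcement learning from human feedback can be sketched generically: a model assigns a scalar score to each of two candidate responses, and the loss pushes the human-preferred response’s score above the other’s. The function below is an illustration of that idea in PyTorch, not InstructGPT itself.

```python
# Illustrative pairwise loss for an RLHF-style reward model (not OpenAI's code).
# score_chosen / score_rejected are scalar scores the reward model assigns to the
# human-preferred and the less-preferred response for the same prompt.
import torch
import torch.nn.functional as F

def reward_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # The loss shrinks as the preferred response's score rises above the rejected one's.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy scores for a batch of four comparison pairs.
print(reward_loss(torch.tensor([1.2, 0.3, 0.8, 2.0]),
                  torch.tensor([0.5, 0.9, -0.1, 1.1])))
```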

In another demonstration of the power of fine-tuning, Google researchers in February published a study claiming that a model far smaller than GPT-3 — fine-tuned language net (FLAN) — bests GPT-3 “by a large margin” on a number of challenging benchmarks. FLAN, which has 137 billion parameters, outperformed GPT-3 on 19 of the 25 tasks the researchers tested it on, and surpassed its performance by a wide margin on 10 of them.

“I think fine-tuning is probably the most widely used approach in industry right now, and I don’t see that changing in the short term. For now, fine-tuning on smaller language models allows users more control to solve their specialized problems using their own domain-specific data,” Koch said. “Instead of distributing [very large language] models that users can fine tune on their own, companies are commercializing few-shot learning through API prompts where you can give the model short prompts and examples.”

Edge language models

Edge models, which are purposefully small in size, can take the form of fine-tuned models — but not always. Sometimes, they’re trained from scratch on small data sets to meet specific hardware constraints (e.g., phone or local web server hardware). In any case, edge models — while limited in some respects — offer a host of benefits that large language models can’t match.

Cost is a major one. With an edge model that runs offline and on-device, there aren’t any cloud usage fees to pay. (Even fine-tuned models are often too large to run on local machines; MT-NLG can take over a minute to generate text on a desktop processor.) Tasks like analyzing millions of tweets may rack up thousands of dollars in fees on popular cloud-based models.

Edge models also offer greater privacy than their internet-bound counterparts, in theory, because they don’t need to transmit or analyze data in the cloud. They’re also faster — a key advantage for applications like translation. Apps such as Google Translate rely on edge models to deliver offline translations.
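One common way to make a language model small and fast enough for a phone or an offline workstation is post-training quantization, which stores weights at lower numerical precision. The sketch below applies PyTorch’s dynamic quantization to a small off-the-shelf model; the specific checkpoint is an assumed example.

```python
# Post-training dynamic quantization sketch for an edge-sized model (illustrative).
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Convert Linear layers to 8-bit integer weights; activations stay in floating point.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized model is typically several times smaller and faster on CPU,
# usually at the cost of a small drop in accuracy.
```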

“Edge computing is likely to be deployed in settings where immediate feedback is needed … In general, I would think these are scenarios where humans are interacting conversationally with AI or robots or something like self-driving cars reading road signs,” Koch said. “As a hypothetical example, Nvidia has a demo where an edge chatbot has a conversation with clients at a fast food restaurant. A final use case might be automated note taking in electronic medical records. Processing conversation quickly in these situations is essential.”

Of course, small models can’t accomplish everything that large models can. They’re bound by the hardware found in edge devices, which ranges from single-core processors to GPU-equipped systems-on-chips. Moreover, some research suggests that the techniques used to develop them can amplify unwanted characteristics, like algorithmic bias.

“[There’s usually a] trade off between power usage and predictive power. Also, mobile device computation is not really increasing at the same pace as distributed high-performance computing clusters, so the performance may lag behind more and more,” Xu said.

Looking to the future

As large, fine-tuned, and edge language models continue to evolve with new research, they’re likely to encounter roadblocks on the path to wider adoption. For example, while fine-tuning models requires less data compared to training a model from scratch, fine-tuning still requires a data set. Depending on the domain — e.g., translating from a little-spoken language — the data might not exist.

“The disadvantage of fine-tuning is that it still requires a fair amount of data. The disadvantage of few-shot learning is that it doesn’t work as well as fine-tuning, and that data scientists and machine learning engineers have less control over the model because they are only interacting with it through an API,” Koch continued. “And the disadvantages of edge AI are that complex models cannot fit on small devices, so performance is strictly worse than models that can fit on a single desktop GPU — much less cloud-based large language models distributed across tens of thousands of GPUs.”

Xu notes that all language models, regardless of size, remain understudied in certain important aspects. She hopes that areas like explainability and interpretability — which aim to understand how and why a model works and expose this information to users — receive greater attention and investment in the future, particularly in “high-stakes” domains like medicine.

“Provenance is really an important next step that these models should have,” Xu said. “In the future, there will be more and more efficient fine-tuning techniques … to accommodate the increasing cost of fine-tuning a larger model in whole. Edge models will continue to be important, as the larger the model, the more research and development is needed to distill or compress the model to fit on edge devices.”
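Distillation, which Xu alludes to, trains a compact “student” model to match a larger “teacher” model’s output distribution rather than only the ground-truth labels. Below is a minimal sketch of the standard distillation loss; the temperature and weighting are illustrative defaults rather than values from any particular paper.

```python
# Minimal knowledge-distillation loss sketch (illustrative hyperparameters).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: the student mimics the teacher's softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: the student still learns from the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy batch: three examples, five-way classification.
print(distillation_loss(torch.randn(3, 5), torch.randn(3, 5), torch.tensor([0, 2, 4])))
```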
