OpenAI, the company that developed ChatGPT, has announced that customers and businesses using GPT-3.5 Turbo – which it describes as the most capable and cost-effective model in the GPT-3.5 family – can now fine-tune the model with their own data.
Fine-tuning for GPT-3.5 Turbo will enable developers to customise models that perform better for their use cases and run these custom models at scale.
According to OpenAI, early tests have shown that “a fine-tuned version of GPT-3.5 Turbo can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks.”
Benefits of fine-tuning
In OpenAI’s private beta, fine-tuning customers have been able to improve model performance across common use cases.
Improved steerability: Fine-tuning enabled businesses to make the model follow instructions better, such as making outputs terse or always responding in a given language.
Reliable output formatting: Fine-tuning also improved the model’s ability to consistently format responses. This means that a developer can use fine-tuning to reliably convert user prompts into high-quality JavaScript Object Notation (JSON) snippets.
Custom tone: Fine-tuning can also hone the qualitative feel of the model output, such as its tone, so it better fits the voice of businesses’ brands.
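Fine-tuning for GPT-3.5 Turbo is driven by example conversations supplied in a chat-format JSONL file, where each line shows the model the exact output wanted. A minimal sketch for the JSON-formatting use case above (the prompts, order schema, and file name are illustrative, not from OpenAI's documentation):

```python
import json

# Illustrative training examples: each JSONL line is one chat exchange
# demonstrating the desired behaviour (here, responding in strict JSON).
training_examples = [
    {
        "messages": [
            {"role": "system", "content": "Extract the order as JSON."},
            {"role": "user", "content": "Two lattes and a muffin, please."},
            {"role": "assistant", "content": json.dumps(
                {"items": [{"name": "latte", "qty": 2},
                           {"name": "muffin", "qty": 1}]}
            )},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "Extract the order as JSON."},
            {"role": "user", "content": "Just one espresso."},
            {"role": "assistant", "content": json.dumps(
                {"items": [{"name": "espresso", "qty": 1}]}
            )},
        ]
    },
]

# Write one JSON object per line, the layout the fine-tuning API expects.
with open("train.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```

In practice, many such examples would be uploaded to OpenAI before creating a fine-tuning job; the consistent assistant outputs are what teach the model to emit well-formed JSON reliably.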
OpenAI said that fine-tuning also enables businesses to shorten their prompts while ensuring similar performance.
“Fine-tuning with GPT-3.5-Turbo can also handle 4k tokens—double our previous fine-tuned models. Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs,” the company said.
Fine-tuning for more LLMs
OpenAI said that support for fine-tuning with function calling and with the gpt-3.5-turbo-16k model will arrive later, with fine-tuning for GPT-4 to follow.