
OpenAI launches fine-tuning for GPT-4o

Developers can now fine-tune GPT-4o to get more customized responses suited to their specific needs.

With fine-tuning, GPT-4o can be improved using custom datasets, resulting in better performance at a lower cost, according to OpenAI.

For example, developers can use fine-tuning to customize the structure and tone of a GPT-4o response, or have it follow domain-specific instructions.
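
To illustrate, below is a minimal sketch of what chat-formatted training examples for tone and structure customization might look like, written to a JSONL file with the Python standard library. The product name, file name, and example content are invented for this sketch; the "messages" structure follows OpenAI's chat fine-tuning data format.

    import json

    # Each line is one chat-formatted example. The assistant replies model the
    # structure and tone the fine-tuned model should adopt (content here is
    # purely illustrative for a hypothetical "Acme Cloud" support assistant).
    examples = [
        {
            "messages": [
                {"role": "system", "content": "You are a support assistant for Acme Cloud. Answer in two short sentences, formal tone."},
                {"role": "user", "content": "How do I rotate my API key?"},
                {"role": "assistant", "content": "Open Settings > API Keys and select Rotate. The previous key remains valid for 24 hours."},
            ]
        },
        {
            "messages": [
                {"role": "system", "content": "You are a support assistant for Acme Cloud. Answer in two short sentences, formal tone."},
                {"role": "user", "content": "Can I export my billing history?"},
                {"role": "assistant", "content": "Yes, billing history can be exported as a CSV from the Billing page. Exports cover the previous 12 months."},
            ]
        },
    ]

    # Write one JSON object per line, the format expected for fine-tuning uploads
    with open("tone_examples.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")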

According to OpenAI, developers can start seeing results with as few as a dozen examples in a training data set.
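
Once a training file like the one above exists, a fine-tuning job can be started with a few API calls. The sketch below assumes the official openai Python SDK (v1.x) with an OPENAI_API_KEY set in the environment; the snapshot name "gpt-4o-2024-08-06" is the fine-tunable GPT-4o snapshot referenced at launch, but the current documentation should be checked for supported models.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the JSONL training file prepared earlier
    training_file = client.files.create(
        file=open("tone_examples.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tuning job against a fine-tunable GPT-4o snapshot
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-2024-08-06",  # assumed snapshot name; verify in the docs
    )

    # Check on the job; once it completes, the job's fine_tuned_model field
    # holds the ID of the new custom model
    print(client.fine_tuning.jobs.retrieve(job.id).status)

When the job finishes, the returned fine-tuned model ID can be used in place of the base model name in chat completion requests.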

“From coding to creative writing, fine-tuning can have a large impact on model performance across a variety of domains. This is just the start—we’ll continue to invest in expanding our model customization options for developers,” OpenAI wrote in a blog post.

The company also explained that it has put safety guardrails in place for fine-tuned models to prevent misuse. It continuously runs safety evaluations and monitors usage to ensure that these models adhere to its usage policies.

In addition, the company announced it would be giving companies 1 million free training tokens per day through September 23.

