OpenAI customers can now customize GPT-3.5 Turbo to their needs using the newly unveiled fine-tuning feature. OpenAI announced the update in a blog post on Tuesday, following the customization updates the company announced earlier in July.
The fine-tuning feature allows developers to train GPT-3.5 Turbo on their own data and applications through the OpenAI API. This way, businesses and developers can tailor the model's behavior to their specific use cases.
According to the AI company, “Fine-tuning lets you train the model on your company’s data and run it at scale.”
The GPT-3.5 Turbo Customization Process
The fine-tuning process is straightforward. Companies must prepare their data, upload it through the API, and then create a fine-tuning job. Once the job completes, the fine-tuned model is available for use. In the future, OpenAI will launch a fine-tuning UI with a dashboard for checking the status of ongoing fine-tuning workloads.
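For illustration, a minimal sketch of that workflow with the openai Python package (pre-v1.0 interface) could look like the following; the file name, training examples, and polling step are placeholders rather than OpenAI's exact sample code.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the developer

# 1. Prepare chat-formatted training examples in a JSONL file, e.g. each line:
#    {"messages": [{"role": "system", "content": "..."},
#                  {"role": "user", "content": "..."},
#                  {"role": "assistant", "content": "..."}]}

# 2. Upload the training file through the API.
training_file = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 3. Create the fine-tuning job against GPT-3.5 Turbo.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 4. Check the job status; once it finishes, fine_tuned_model holds the
#    name of the customized model, usable like any other chat model.
status = openai.FineTuningJob.retrieve(job.id)
print(status.status, status.fine_tuned_model)
```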
The cost of using a fine-tuned model is $0.012 per 1,000 input tokens and $0.016 per 1,000 output tokens, considerably more than the base GPT-3.5 Turbo rates of $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens. Depending on the data volume, companies will also incur extra costs for the initial training run.
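As a rough, back-of-the-envelope illustration of those usage rates (the token counts below are made up), the per-call cost can be estimated like this:

```python
# Per-token usage rates for a fine-tuned GPT-3.5 Turbo model,
# derived from the quoted per-1,000-token prices.
INPUT_RATE = 0.012 / 1000   # USD per input token
OUTPUT_RATE = 0.016 / 1000  # USD per output token

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single request to the fine-tuned model."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical request: 1,500 prompt tokens and 500 completion tokens.
print(f"${call_cost(1500, 500):.4f}")  # -> $0.0260
```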
To enforce its safety standards, OpenAI will pass fine-tuning data through its Moderation API and a GPT-4-powered moderation system. This means OpenAI retains some control over the kind of data users can feed into their models.
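The screening described above happens on OpenAI's side, but the same Moderation API is also exposed to developers. As a sketch (again using the pre-v1.0 openai package, with illustrative input text), a moderation check looks roughly like this:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Ask the Moderation API whether a piece of training text violates policy.
response = openai.Moderation.create(input="Example training sentence to screen.")
result = response["results"][0]

if result["flagged"]:
    print("Flagged categories:", [k for k, v in result["categories"].items() if v])
else:
    print("Text passed moderation.")
```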
For the best results, OpenAI recommends combining fine-tuning with other techniques such as prompt engineering, information retrieval, and function calling. OpenAI also noted that any data used to fine-tune the model remains under the ownership of the customer and would not be used by the company to train other models.
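For example, once a fine-tuning job finishes, the resulting model can be combined with an engineered system prompt in an ordinary chat completion call. In the sketch below, the model identifier and messages are placeholders, not values from OpenAI's announcement:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Combine the fine-tuned model with prompt engineering: a carefully written
# system message steers the customized model further toward the use case.
completion = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo:my-org:custom-suffix:id",  # hypothetical fine-tuned model ID
    messages=[
        {"role": "system", "content": "You are a concise support assistant for Acme Corp."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(completion.choices[0].message["content"])
```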
GPT-4 Fine-tuning Coming after Initial Success
OpenAI has already run private beta tests of the feature. Early tests suggest that a fine-tuned version of GPT-3.5 Turbo can match or even outperform GPT-4 on narrow tasks.
Later this fall, OpenAI plans to make fine-tuning available for GPT-4 as well. Unlike GPT-3.5, that model is expected to understand both images and text. However, the company did not provide any further specifics about its plans.