
OpenAI announces new generative text features for GPT-4 and GPT-3.5 Turbo as prices fall


Photo: Koshiro K/Shutterstock.com

OpenAI has announced updates to its text generation models as new GPT-3.5-turbo and GPT-4 versions arrive with function-calling capabilities. At the same time, pricing for the original GPT-3.5-turbo is being reduced by 25%, while text-embedding-ada-002 is getting a 75% price reduction.

The more exciting announcement, however, is the addition of function calling to GPT-4-0613 and GPT-3.5-turbo-0613, the newer versions announced by OpenAI. The feature lets developers describe functions to these models, which can then “intelligently choose” to output a JSON object containing the arguments needed to call them. This means GPT models can now connect to external tools and APIs far more reliably.
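To make this concrete, here is a minimal sketch of what a function description looks like. The `get_current_weather` function and its fields are illustrative assumptions, not taken from the article; the shape follows the JSON Schema style the new models accept:

```python
import json

# A hypothetical function description a developer might pass to the model.
# The model sees this schema and can choose to "call" the function by
# emitting a JSON object of arguments matching it.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and state, e.g. San Francisco, CA",
            },
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# When the model decides to call the function, it returns a JSON string of
# arguments; the developer parses it and invokes their own code with it.
model_output = '{"location": "San Francisco, CA", "unit": "celsius"}'
args = json.loads(model_output)
print(args["location"])  # San Francisco, CA
```

The key point is that the model never executes anything itself; it only emits structured arguments, and the application decides what to do with them.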

For example, chatbots using this feature can query external tools like ChatGPT plugins, natural language can be converted into API calls or database queries, and structured data can be extracted from text.

However, these new additions don’t come without caveats. OpenAI warned in its announcement that untrusted data in a tool’s output can instruct the model to perform “unintended actions”, and it has asked developers to protect their applications by consuming information only from trusted tools and by implementing user-confirmation steps before performing actions with real-world impact.
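A user-confirmation step of the kind OpenAI recommends might look like the sketch below. The function names and wiring are hypothetical, purely to show the pattern of gating a model-requested action behind explicit approval:

```python
def confirm_and_run(function_name, arguments, ask_user, dispatch):
    """Run a model-requested function only after the user approves it."""
    prompt = f"The assistant wants to call {function_name}({arguments}). Proceed?"
    if not ask_user(prompt):
        # Refuse the action; nothing side-effecting ever runs.
        return {"status": "rejected", "reason": "user declined"}
    return {"status": "ok", "result": dispatch(function_name, arguments)}

# Example wiring with a stubbed user prompt and dispatcher.
approved = confirm_and_run(
    "delete_record",
    {"id": 42},
    ask_user=lambda msg: False,          # simulate the user saying no
    dispatch=lambda name, args: "done",  # would route to real code
)
print(approved["status"])  # rejected
```

The design point is that the confirmation happens in the application layer, outside the model, so even a prompt-injected tool output cannot bypass it.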

New GPT-3.5-turbo and GPT-4 models make their debut with big improvements

The new feature announcement also brings new GPT models. GPT-4-0613 includes an updated and improved model with function calling. GPT-4-32k-0613 brings the same improvements with an extended context length for better comprehension of larger texts.

If you’re still stuck on GPT-4’s waitlist, there’s even more good news. OpenAI will be inviting “many more people” from the waitlist to try its flagship model over the coming weeks and intends to remove the waitlist entirely.

As for GPT-3.5-turbo-0613, it gets function calling alongside more reliable steerability via the system message, meaning developers can tailor the model’s responses more effectively. GPT-3.5-turbo-16k, on the other hand, offers four times the context length of GPT-3.5-turbo at twice the price: $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens. The 16k context means the model can now support roughly 20 pages of text in a single request.
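Steering via the system message simply means putting instructions in a dedicated first message that the model now follows more reliably. A minimal request payload might look like this (the persona and user text are made up for illustration):

```python
# Sketch of a Chat Completions request body steering the 0613 model
# through the system message; the content strings are illustrative.
payload = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [
        {"role": "system",
         "content": "You are a terse assistant. Answer in one sentence."},
        {"role": "user",
         "content": "Explain what function calling does."},
    ],
}
print(payload["messages"][0]["role"])  # system
```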

Photo: Tada Images/Shutterstock.com

Of course, this also means that the upgrade and deprecation process for the initial GPT-4 and GPT-3.5-turbo versions will begin. Applications using the stable model names will be automatically upgraded to the newer models on June 27. That said, developers who need more time to transition can continue using the older models until September 13.

More AI for less money

Last but not least, GPT-3.5-turbo and text-embedding-ada-002 are seeing significant price cuts. GPT-3.5-turbo’s input tokens will get a price reduction of 25%, meaning the model now costs $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which works out to roughly 700 pages per dollar.
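The 700-pages-per-dollar figure checks out as a back-of-the-envelope calculation, assuming roughly 1,000 input tokens per page (an assumption, not a number from the article):

```python
# Rough check of the "700 pages per dollar" claim at the new input price.
input_price_per_1k = 0.0015                       # USD per 1,000 input tokens
tokens_per_dollar = 1_000 / input_price_per_1k    # about 666,667 tokens
pages_per_dollar = tokens_per_dollar / 1_000      # at ~1,000 tokens per page
print(round(pages_per_dollar))  # 667
```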

As for text-embedding-ada-002, OpenAI’s most popular embedding model, it gets a price cut of 75% bringing the cost down to a mere $0.0001 per 1,000 tokens. 

In the News: EU files antitrust complaint against Google’s Ad business

Yadullah Abidi


Yadullah is a Computer Science graduate who writes/edits/shoots/codes all things cybersecurity, gaming, and tech hardware. When he's not, he streams himself racing virtual cars. He's been writing and reporting on tech and cybersecurity with websites like Candid.Technology and MakeUseOf since 2018. You can contact him here: yadullahabidi@pm.me.
