OpenAI Launch Chat GPT-4o Mini: Small But Effective

Chat GPT-4o was announced to launch on Thursday. This is the latest and smaller version of GPT-4o which was launched in May this year and an updated version of GPT-3.5 Turbo. GPT-4o mini is now available for free users to experience. Moreover, ChatGPT Enterprise will be experienced for ChatGPT Plus or Team users.

GPT-4o mini is introduced as multimodal as GPT-4o because the input of image enabled in the API. Through OpenAI’s announcement, GPT-4o mini is capable of reading audio, images and text and outputting images in the future.

GPT-4o mini can process up to 128,000 tokens (small pieces of data) in one input. This allows the model to understand and respond based on a larger amount of information than previous versions. The model was trained with data up to October 2023, which means it has knowledge of the world up to then. In addition, compared to GPT-3.5 Turbo, GPT-4o mini is 60% cheaper. The usage fee is 15 cents for one million input tokens and 60 cents for one million output tokens.

Significantly, a new feature of GPT-4o mini that OpenAI plans to release is called “instruction hierarchy”. Instruction hierarchy is a new technique that helps AI models prioritize some instructions then other instructions. This aims to enhance security, prevent attacks such as prompt injection, jailbreak, and system prompt extraction. Chat GPT-4o has not integrated this feature into the model yet. Therefore, when GPT-4o mini becomes public, more people will test the effectiveness of this new protection method.

Performance Of Chat GPT-4o Mini

GPT-4o mini outperforming GPT-4 Turbo on eight benchmarks

Benchmarks index:

Like big brother GPT-4o, GPT-4o mini also scored high in standard benchmark tests such as MMLU and HumanEval. However, these indicators do not have much meaning and not exactly reflect the actual performance of the AI model in everyday use. Because people do not only rely on the accuracy of information or calculation ability like benchmarks, they also evaluate based on the way it is presented (style) and the way the information is arranged (structure). This makes evaluation subjective and makes it difficult to clearly define quality standards.

Comparison with GPT-4 Turbo:

GPT-4o mini is evaluated better than GPT-4 Turbo on LMSYS Chatbot Arena rankings, based on user reviews. However, this index did not reflect expectations as expected in the AI community, as users found that GPT-4o often produced less useful output despite its higher score. For example, GPT-4o performs tasks that users do not require or respond to long-winded.

In summary, although the GPT-4o mini achieved good results in standard tests such as eight benchmarks, the actual assessment of the model’s quality is still limited. Current metrics are not enough to accurately reflect user experience.

Benefits of Launching Smaller Language Models

A popular development trend in the AI ​​community today is to release smaller versions of large language models. This makes smaller models often designed to perform simple tasks at lower cost, such as making lists, summarizing, or suggesting words. And the target audience for micromodels is often API users, who pay input and output token fees to use the model in their applications.

OpenAI is not the initial company to release a smaller version of an existing AI language model. Before that, there were vendors like Meta, Google and Anthropic doing this. However, OpenAI still has the benefit when they launch a smaller version. GPT-4o mini is available for free in ChatGPT, but using the smaller model can still save costs for OpenAI. Moreover, smaller language models require less computation, which leads to reduced hardware and power costs.

In short, smaller language models can provide acceptable performance for many applications at a lower cost and OpenAI can expand AI accessibility to many users.

Seamus Wilbor

Seamus Wilbor

Seamus Wilbor, CEO and Founder at Quarule. He has over 20 years of expertise as an AI Consultant in evaluating AI technology and developing AI strategies.