OpenAI unveiled GPT-5.4, latest update to its flagship AI model
OpenAI today unveiled GPT-5.4, the latest update to its flagship AI model, bringing improvements in reasoning, coding, and real-world task automation. The model is rolling out across ChatGPT, the API, and developer tools, with new variants aimed at both everyday users and enterprise workloads, El.kz reports citing Digital Trends.
One of the biggest changes is the model’s ability to interact with computers more directly. GPT-5.4 can interpret screenshots, operate browsers, and issue keyboard and mouse commands to complete tasks across different apps and services. That makes it capable of carrying out multi-step workflows that previously required human input, marking a major step toward more autonomous AI agents.
The update also improves the model’s ability to research complex questions. OpenAI says GPT-5.4 can run multiple rounds of information gathering and combine its findings into clearer, more structured answers. The company claims the model is its most factual yet, reducing false claims by about 33 percent compared with GPT-5.2.
A new “Thinking” mode for tougher questions
Alongside the core model, OpenAI has introduced GPT-5.4 Thinking inside ChatGPT. It is designed for more complex prompts and shows a visible outline of the model’s reasoning as it works through a problem. Users can adjust their instructions mid-response, steering the AI toward the desired outcome without restarting the conversation.
GPT-5.4 is also designed to handle longer and more complex tasks, retaining information across multiple steps and extended workflows. These improvements could be particularly useful in coding tools like OpenAI Codex, where the model can help automate large or time-consuming development tasks.