GPT-4o, o3, and the full OpenAI family — all inside Nagent agents.
OpenAI's GPT-4o and reasoning-focused o-series models bring best-in-class multimodal and logical reasoning to your Nagent agents. From real-time voice interfaces to vision-enabled document processing to advanced chain-of-thought problem-solving with o3, the full OpenAI spectrum is accessible in a single configuration panel.
| Model | Highlights | Context Window | Max Output | Input Types | Output Types |
|---|---|---|---|---|---|
| GPT-4o | Best multimodal flagship | 128K tokens | 16K tokens | Text, Images, Audio | Text, Code, JSON |
| GPT-4o Mini | Cost-efficient for high volume | 128K tokens | 16K tokens | Text, Images | Text, Code |
| o3 | State-of-the-art reasoning | 200K tokens | 100K tokens | Text, Images | Text, Code |
| o4-mini | Fast reasoning, lower cost | 200K tokens | 100K tokens | Text, Images | Text, Code |
Process screenshots, diagrams, and photos alongside text — for support bots, visual QA, or invoice extraction.
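As a minimal sketch of how an image travels alongside text, the snippet below builds a user message in the multi-part content format that OpenAI's Chat Completions API accepts (a list mixing `text` and `image_url` parts). The prompt and URL are illustrative placeholders; no network call is made.

```python
import json


def build_vision_message(prompt: str, image_url: str) -> dict:
    """Build a Chat Completions user message pairing text with an image."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }


message = build_vision_message(
    "Extract the invoice total from this scan.",
    "https://example.com/invoice.png",  # placeholder URL
)
print(json.dumps(message, indent=2))
```

A message shaped like this can be passed to a vision-capable model such as GPT-4o in the `messages` list of a chat request.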
Use o3/o4-mini for scientific analysis, financial modelling, and multi-step logical deductions.
GPT-4o Mini handles millions of lightweight tasks at a fraction of the cost without sacrificing quality.
JSON mode and function-calling make it trivial to pull structured data from unstructured sources.
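To make the extraction idea concrete, here is a sketch of the two pieces involved: a function-calling tool schema (in the JSON Schema format OpenAI's API expects) describing the fields to pull out, and a reply parsed under JSON mode. The `extract_invoice` schema and the reply string are invented for illustration; with JSON mode enabled (`response_format={"type": "json_object"}`), the model's output is guaranteed to be parseable JSON.

```python
import json

# Hypothetical tool schema for pulling structured fields from free text,
# in the JSON Schema format OpenAI's function-calling expects.
extract_invoice = {
    "type": "function",
    "function": {
        "name": "extract_invoice",
        "description": "Pull structured fields from an invoice.",
        "parameters": {
            "type": "object",
            "properties": {
                "vendor": {"type": "string"},
                "total": {"type": "number"},
                "currency": {"type": "string"},
            },
            "required": ["vendor", "total"],
        },
    },
}

# Simulated model reply under JSON mode: always valid JSON, so it
# parses directly into a dict with no prompt hacking.
reply = '{"vendor": "Acme Corp", "total": 1249.5, "currency": "USD"}'
parsed = json.loads(reply)
print(parsed["total"])  # 1249.5
```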
Nagent adds enterprise orchestration, observability, and workflow automation on top of OpenAI's raw model capabilities.
Function-calling and parallel tool use natively supported in Nagent agent definitions
JSON mode enforced at the platform level — no prompt hacking required
Switch between GPT-4o and o3 mid-workflow based on task complexity
Built-in rate-limit handling and automatic failover between model tiers
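The failover behaviour described above can be pictured with this self-contained sketch: walk down an ordered list of model tiers and drop to the next one when a rate limit is hit. The `call_model` stub and the tier names are illustrative stand-ins, not Nagent's actual implementation.

```python
# Illustrative sketch of tiered failover; not Nagent's actual code.
class RateLimitError(Exception):
    """Stand-in for a provider rate-limit error."""


def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call; pretend o3 is currently rate-limited.
    if model == "o3":
        raise RateLimitError(model)
    return f"[{model}] answer to: {prompt}"


def complete_with_failover(prompt: str,
                           tiers=("o3", "gpt-4o", "gpt-4o-mini")) -> str:
    """Try each model tier in order, falling back on rate limits."""
    for model in tiers:
        try:
            return call_model(model, prompt)
        except RateLimitError:
            continue  # drop to the next, cheaper/less-loaded tier
    raise RuntimeError("all model tiers rate-limited")


result = complete_with_failover("Summarise Q3 revenue.")
print(result)  # -> "[gpt-4o] answer to: Summarise Q3 revenue."
```

The same ordered-tier idea underlies mid-workflow switching: the tier list just becomes a per-task routing decision instead of an error path.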
Navigate to Agent Studio in your Nagent workspace.
Choose OpenAI under Model Configuration and select your preferred GPT or o-series model.
Toggle function-calling or JSON mode in the model settings and connect your data sources.
Get started in minutes — no API key management required.
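For readers who think in config rather than UI, the steps above might look roughly like the structure below. This is a hypothetical sketch: every field name here is invented for illustration and is not Nagent's actual agent-definition schema.

```python
# Hypothetical agent definition mirroring the UI steps above.
# Field names are illustrative, NOT Nagent's real schema.
agent_definition = {
    "name": "invoice-reader",
    "model": {
        "provider": "openai",       # step 2: choose OpenAI
        "name": "gpt-4o",           # or an o-series model for harder reasoning
        "json_mode": True,          # step 3: platform-enforced JSON output
        "function_calling": True,   # step 3: toggle function-calling
    },
    "data_sources": ["invoices"],   # step 3: connect your data sources
}

print(agent_definition["model"]["name"])  # gpt-4o
```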