Friday, May 1, 2026

Day-0 Support at 400 Tokens Per Second


We’re excited to announce day-0 support for NVIDIA Nemotron 3 Nano Omni on Clarifai. Available now on the Clarifai Reasoning Engine, Nano Omni brings fast multimodal reasoning to developers building agentic systems, delivering throughput of 400+ tokens per second.

NVIDIA Nemotron 3 Nano Omni is a 30B A3B multimodal reasoning model built for workloads that span documents, images, video, and audio. With a 256K context window and support for text, image, video, and audio inputs with text output, it gives developers a single model for handling rich multimodal context within agentic workflows.

That makes it a strong fit for sub-agents in workflows where multimodal understanding and speed have to go together.

A Multimodal Model for Specialized Sub-Agents

As agent systems grow more capable, they also become more specialized. Different models and components take on planning, execution, retrieval, and verification, each operating within a broader workflow. In that architecture, the model handling multimodal inputs has to do more than process isolated inputs. It has to interpret multiple modalities together, preserve context across steps, and respond fast enough to stay within the operational loop.

As a lightweight multimodal model for sub-agents, Nemotron 3 Nano Omni can reason across screens, documents, charts, audio, and video without routing each modality through a separate stack. Rather than splitting vision, speech, and language across multiple models, it gives developers a more unified way to handle multimodal reasoning while keeping the overall system easier to manage.

Built for Computer Use, Documents, and Audio-Video Reasoning

Nano Omni is especially relevant for the kinds of workloads that are becoming central to enterprise agentic systems.

For computer use, agents need to read interfaces, track UI state over time, and verify whether actions completed as expected. For document intelligence, they need to reason across text, tables, charts, screenshots, scanned pages, and mixed visual structure in the same pass. For audio and video workflows, they need to connect what was said, what was shown, and what changed over time.

These are all cases where multimodal capability has to work reliably in production, with a model that can handle multiple modalities efficiently without splitting the workflow across separate models.

The model represents a significant leap in capability from earlier models in the Nemotron family. Notable improvements on benchmarks like OCRBenchV2, OCR_Reasoning, MathVista_MINI, and OSWorld reflect the model’s improved performance on the real-world workloads today’s agents are likely to serve.

[Figure: Multimodal accuracy benchmarks for Nemotron models]

That’s where Nano Omni fits naturally, giving developers a single multimodal reasoning stream for the tasks sub-agents are increasingly expected to handle.

Agent-Friendly Tokenomics

In agent systems, sub-agents take on recurring tasks across documents, screens, audio, and video within a larger workflow. Each invocation adds to the cost, throughput, and infrastructure demands of the overall system. NVIDIA Nemotron 3 Nano Omni consolidates vision, speech, and language into a single multimodal model, reducing inference hops, orchestration logic, and cross-model synchronization compared with separate perception stacks.

Nano Omni delivers roughly 2x higher throughput on average, along with about 2.5x lower compute for video reasoning via temporal-aware perception and efficient video sampling. For multimodal agent workflows, that means higher throughput and lower compute overhead without adding complexity to the stack.

The model uses a hybrid Mixture-of-Experts architecture with a Transformer-Mamba design, along with 3D convolution layers and Efficient Video Sampling for temporal and video inputs. It can run on a single H100, H200, or B200, making it practical to deploy multimodal sub-agents without stretching infrastructure requirements.

High-Throughput Inference on Clarifai

On the Clarifai Reasoning Engine, NVIDIA Nemotron 3 Nano Omni runs at 400+ tokens per second, giving developers the throughput needed for production multimodal agent workflows. That matters in systems where sub-agents are called repeatedly to process documents, interfaces, audio, and video as part of an ongoing workflow.

The Clarifai Reasoning Engine is built for inference acceleration, combining optimized kernels, speculative decoding, and adaptive performance techniques to improve throughput for reasoning models without compromising accuracy.

Getting Started on Clarifai

Developers can try NVIDIA Nemotron 3 Nano Omni in the Clarifai Playground and can also access it via an OpenAI-compatible API, making it easier to integrate into existing applications, tools, and agentic frameworks.
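Because the endpoint is OpenAI-compatible, a multimodal request is just a standard chat-completions POST. The sketch below uses only the Python standard library; the base URL and model identifier are illustrative assumptions, so check the model page on Clarifai for the exact values.

```python
# Minimal sketch of calling Nemotron 3 Nano Omni through an
# OpenAI-compatible chat-completions endpoint. The BASE_URL and
# MODEL_ID values are assumptions, not confirmed identifiers.
import json
import os
import urllib.request

BASE_URL = "https://api.clarifai.com/v2/ext/openai/v1"  # assumed endpoint
MODEL_ID = "nvidia/nemotron-3-nano-omni"                # hypothetical model ID


def build_payload(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style multimodal chat request (text + image)."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


def chat(prompt: str, image_url: str) -> str:
    """POST the request with a Clarifai Personal Access Token and
    return the assistant's text reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt, image_url)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['CLARIFAI_PAT']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same payload shape works with the official `openai` client by pointing its `base_url` at the endpoint, which is what makes existing tools and agentic frameworks easy to reuse.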

For larger-scale or more controlled deployments, Clarifai provides a direct path to production with Compute Orchestration. Developers can run Nano Omni on the Clarifai Reasoning Engine or deploy it across their own cloud, VPC, on-prem, or air-gapped environments while managing deployments through a unified control plane.

NVIDIA Nemotron 3 Nano Omni is available on Clarifai today.

If you have any questions about accessing NVIDIA Nemotron 3 Nano Omni on Clarifai, join our Discord.
