
Overview

The platform now offers updated isaac-0.2-2b-preview and isaac-0.2-1b models alongside Isaac 0.1 and a hosted Qwen3VL option.
| Feature | Isaac 0.2 2B Preview | Isaac 0.2 1B | Isaac 0.1 | Qwen3VL |
| --- | --- | --- | --- | --- |
| Description / best for | Best-in-class image VLM with reasoning. | Best-in-class small image VLM; for local, low-latency perception. | Original image VLM for grounded perception. | Qwen's 235B hosted VLM; for large, complex documents/scenes. |
| Model size | 2B parameters | 1B parameters | 2B parameters | 235B parameters |
| Model ID (API) | isaac-0.2-2b-preview | isaac-0.2-1b | isaac-0.1 | qwen3-vl-235b-a22b-thinking |
| Access / open source | Hosted API + open weights on Hugging Face | Hosted API + open weights on Hugging Face | Hosted API + open weights on Hugging Face | Hosted API; open weights on Hugging Face |
| Reasoning enabled | Yes | Yes | No | Yes (always on) |
| Comparative latency | Fast | Fastest | Fast | Slow |
| Context window | 8K tokens | 8K tokens | 8K tokens | 127K tokens |
| Max input + output | 8K tokens | 8K tokens | 8K tokens | 160K tokens |
| Pricing | $0.15/M input, $1.25/M output | $0.15/M input, $1.25/M output | $0.15/M input, $1.25/M output | $0.40/M input, $4.00/M output |
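As a quick sanity check on the pricing rows, a small helper can estimate per-request cost. The rates and model IDs below are taken from the table; the helper itself is illustrative, not part of the SDK:

```python
# Per-million-token rates (USD input, USD output) from the pricing table above.
PRICING = {
    "isaac-0.2-2b-preview": (0.15, 1.25),
    "isaac-0.2-1b": (0.15, 1.25),
    "isaac-0.1": (0.15, 1.25),
    "qwen3-vl-235b-a22b-thinking": (0.40, 4.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request for the given model ID."""
    in_rate, out_rate = PRICING[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 1M input + 1M output tokens on isaac-0.2-1b.
print(round(estimate_cost("isaac-0.2-1b", 1_000_000, 1_000_000), 2))
```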

isaac-0.2-2b-preview

isaac-0.2-2b-preview is a 2B-parameter, best-in-class image VLM with tool-calling-ready reasoning. It succeeds Isaac 0.1 with stronger perception and the same flexible deployment options.

Highlights
  • Best-in-class perception + reasoning across VQA, OCR, detection, pointing, counting, and tool calls.
  • Rapid response with sub-200 ms time-to-first-token and predictable latency.
  • Focus capabilities that natively zoom, refocus, and reason over critical image regions.
  • Few-shot in-context learning so you can specialize with prompt-only examples.
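The few-shot bullet above works by packing labeled examples directly into the prompt. A minimal sketch of that pattern, assuming the common OpenAI-style chat message shape (the field names here are an assumption, not confirmed SDK API):

```python
def build_few_shot_messages(examples, query_image_url, question):
    """Pack (image_url, question, answer) demonstrations into a chat
    message list so the model can specialize from prompt-only examples."""
    messages = []
    for image_url, q, answer in examples:
        # Each demonstration is a user turn (image + question)
        # followed by the desired assistant answer.
        messages.append({
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": q},
            ],
        })
        messages.append({"role": "assistant", "content": answer})
    # The real query comes last, in the same shape as the demonstrations.
    messages.append({
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": query_image_url}},
            {"type": "text", "text": question},
        ],
    })
    return messages

msgs = build_few_shot_messages(
    [("https://example.com/a.jpg", "How many forklifts?", "2")],
    "https://example.com/b.jpg",
    "How many forklifts?",
)
print(len(msgs))  # 2 messages per example + 1 final query
```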
Access
  • Python SDK: set model="isaac-0.2-2b-preview" with the Perceptron SDK.
  • REST: hit /v1/chat/completions with model=isaac-0.2-2b-preview.
  • Self-hosting: download the open weights on Hugging Face; the repo includes the tokenizer, processor, and reference configs. Commercial use requires a commercial license; contact us for details.
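The REST bullet above can be exercised with any HTTP client. The sketch below only builds the request, so nothing is sent; the endpoint path and model ID come from this page, while the base URL and auth header are placeholders you must replace with the values from your account:

```python
import json

API_BASE = "https://api.example.com"  # placeholder -- substitute the real base URL
API_KEY = "YOUR_API_KEY"             # placeholder -- substitute your key

def build_chat_request(model: str, image_url: str, prompt: str):
    """Build the URL, headers, and JSON body for a POST to /v1/chat/completions."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",  # assumed bearer-token scheme
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        }],
    }
    return f"{API_BASE}/v1/chat/completions", headers, json.dumps(payload)

url, headers, body = build_chat_request(
    "isaac-0.2-2b-preview",
    "https://example.com/dock.jpg",
    "Point to every pallet in the image.",
)
```

From here, any client (e.g. `requests.post(url, headers=headers, data=body)`) completes the call.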

isaac-0.2-1b

isaac-0.2-1b is a compact image VLM for grounded perception in local, memory-constrained, or low-latency deployments.

Highlights
  • Best-in-class perception + reasoning at small scale for detection, pointing, and VQA.
  • Local-friendly footprint for CPU+GPU hybrids, Apple silicon laptops, Jetsons, and lightweight inference stacks.
Access
  • Python SDK: set model="isaac-0.2-1b" with the Perceptron SDK.
  • REST: hit /v1/chat/completions with model=isaac-0.2-1b.
  • Self-hosting: download the open weights on Hugging Face; the repo includes the tokenizer, processor, and reference configs. Commercial use requires a commercial license; contact us for details.

Isaac 0.1

Isaac 0.1 is the prior 2B image VLM focused on grounded perception, still supported for customers who have integrated it.

Access
  • Python SDK: set model="isaac-0.1" with the Perceptron SDK.
  • REST: hit /v1/chat/completions with model=isaac-0.1.
  • Self-hosting: download the open weights on Hugging Face; the repo includes the tokenizer, processor, and reference configs. Commercial use requires a commercial license; contact us for details.

Qwen3VL

Perceptron hosts Qwen3-VL-235B-A22B-Thinking to unlock additional capabilities for customers. Try it when:
  • You need multi-step chain-of-thought over complex documents or scenes.
  • Your workload tolerates higher latency/cost.
  • You want a single integration that covers both the efficient Isaac models and a frontier-scale VLM.
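The criteria above reduce to a simple routing rule. A sketch, with illustrative thresholds drawn from the comparison table (the 8K cutoff matches the Isaac context window; this is not official guidance):

```python
def pick_model(needs_long_context: bool, est_input_tokens: int,
               latency_sensitive: bool, local_only: bool) -> str:
    """Route a request to a model ID using the selection criteria above."""
    if local_only:
        return "isaac-0.2-1b"                 # open weights, smallest footprint
    if needs_long_context or est_input_tokens > 8_000:
        return "qwen3-vl-235b-a22b-thinking"  # 127K context; higher latency/cost
    if latency_sensitive:
        return "isaac-0.2-1b"                 # fastest comparative latency
    return "isaac-0.2-2b-preview"             # default: best perception + reasoning

print(pick_model(False, 2_000, False, False))
```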
Access
  • SDK: set model="qwen3-vl-235b-a22b-thinking" with the Perceptron SDK.
  • REST: hit /v1/chat/completions with model=qwen3-vl-235b-a22b-thinking.

Evaluation snapshots

Our latest public benchmarks for isaac-0.2-2b-preview and isaac-0.2-1b are shown below. Please reach out if you have questions.

[Figure: isaac-0.2 benchmark comparison]