The `question()` helper accepts a `video()` node alongside a natural-language prompt and returns a textual answer. Combine it with `reasoning=True` for step-by-step analysis of long-horizon episodes: assembly walkthroughs, training videos, customer-session recordings, and other content where the answer depends on watching what happens over time.
## Basic usage
| Parameter | Type | Default | Description |
|---|---|---|---|
| `media_obj` | `VideoNode` | - | Wrap your MP4 or WebM (URL or local file path) with `video()` |
| `question_text` | `str` | - | The question to ask about the video |
| `reasoning` | `bool` | `False` | Set `True` to enable reasoning and include the model's chain-of-thought |
| `expects` | `str` | `"text"` | Output structure for the SDK (`"text"`, `"clip"`, `"point"`, `"box"`, `"polygon"`) |
`question()` returns a `PerceiveResult` object:

- `text` (`str`): The answer to your question.
- `reasoning` (`str | None`): The model's chain-of-thought when `reasoning=True`.
- `clips`, `points`, `boxes`, `polygons` (`list | None`): Populated when the corresponding `expects` is requested.
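To make the call and result shapes concrete, here is a runnable sketch that stubs out the SDK. The parameter and field names follow the table above; the stub bodies (and the sample answer they return) are placeholders, not the real SDK's behavior — in practice you would import `question` and `video` from the installed package instead.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for the SDK's result object; field names follow the docs above.
@dataclass
class PerceiveResult:
    text: str
    reasoning: Optional[str] = None
    clips: Optional[list] = None
    points: Optional[list] = None
    boxes: Optional[list] = None
    polygons: Optional[list] = None

# Stubbed helpers so the call shape can be exercised offline.
# The real SDK performs model inference here.
def video(src: str) -> dict:
    return {"kind": "video", "src": src}  # URL or local file path

def question(media_obj, question_text, reasoning=False, expects="text"):
    answer = "The robot is assembling a chassis."  # placeholder answer
    cot = "1) pick part, 2) align, 3) fasten." if reasoning else None
    return PerceiveResult(text=answer, reasoning=cot)

# Call shape: media node, prompt, optional flags.
result = question(
    video("assembly.mp4"),
    "What is the robot trying to do?",
    reasoning=True,  # populate result.reasoning with chain-of-thought
)
print(result.text)
print(result.reasoning)
```

Note that `result.reasoning` stays `None` unless `reasoning=True` is passed.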
## Example: Robot assembly walkthrough
In this example we download a short robot-assembly clip, ask Isaac to identify the overall goal and the sub-goals it observes, and let it think through the episode before answering.

## Best practices
- Reach for `expects="clip"` when you need timestamps: if the answer needs to point at when something happens in the video, switch to the Video Clipping workflow instead.
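One way to see how `expects` relates to the result fields is a small dispatch table. This helper and the stand-in result below are illustrative (only the `expects` values and `PerceiveResult` field names come from the docs above; the clip tuple format is an assumption):

```python
from types import SimpleNamespace

# Map each `expects` value to the PerceiveResult field it populates.
EXPECTS_FIELD = {
    "text": "text",
    "clip": "clips",
    "point": "points",
    "box": "boxes",
    "polygon": "polygons",
}

def extract(result, expects="text"):
    """Return the part of a result that matches the requested output type."""
    return getattr(result, EXPECTS_FIELD[expects])

# Stand-in result carrying one clip (start/end seconds are hypothetical).
demo = SimpleNamespace(text="see clips", reasoning=None,
                       clips=[(12.0, 18.5)], points=None,
                       boxes=None, polygons=None)
print(extract(demo, "clip"))
```

The unmatched fields stay `None`, so checking the field named by your `expects` value is the reliable way to read a result.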
Run through the full Jupyter notebook here. Reach out to Perceptron support if you have questions.