
SAM3 Video

This model was released on 2025-11-19 and added to Hugging Face Transformers on 2025-11-19.

PyTorch | SDPA | FlashAttention

SAM3 (Segment Anything Model 3) was introduced in the paper SAM 3: Segment Anything with Concepts.

SAM3 Video performs Promptable Concept Segmentation (PCS) on videos. PCS takes a text prompt as input (e.g., “yellow school bus”) and predicts instance and semantic masks for every object matching the concept, while preserving object identities across video frames.

The model combines a detection module (SAM3) with a tracking module (SAM2-style tracker) to enable robust object tracking across video frames using text prompts.

The abstract from the paper is the following:

We present Segment Anything Model (SAM) 3, a unified model that detects, segments, and tracks objects in images and videos based on concept prompts, which we define as either short noun phrases (e.g., “yellow school bus”), image exemplars, or a combination of both. Promptable Concept Segmentation (PCS) takes such prompts and returns segmentation masks and unique identities for all matching object instances. To advance PCS, we build a scalable data engine that produces a high-quality dataset with 4M unique concept labels, including hard negatives, across images and videos. Our model consists of an image-level detector and a memory-based video tracker that share a single backbone. Recognition and localization are decoupled with a presence head, which boosts detection accuracy. SAM 3 doubles the accuracy of existing systems in both image and video PCS, and improves previous SAM capabilities on visual segmentation tasks. We open source SAM 3 along with our new Segment Anything with Concepts (SA-Co) benchmark for promptable concept segmentation.

This model was contributed by yonigozlan and ronghanghu.

Process a video whose frames are all available up front (pre-loaded video inference) using a text prompt:

>>> from transformers import Sam3VideoModel, Sam3VideoProcessor
>>> from accelerate import Accelerator
>>> import torch
>>> device = Accelerator().device
>>> model = Sam3VideoModel.from_pretrained("facebook/sam3").to(device, dtype=torch.bfloat16)
>>> processor = Sam3VideoProcessor.from_pretrained("facebook/sam3")
>>> # Load video frames
>>> from transformers.video_utils import load_video
>>> video_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/bedroom.mp4"
>>> video_frames, _ = load_video(video_url)
>>> # Initialize video inference session
>>> inference_session = processor.init_video_session(
...     video=video_frames,
...     inference_device=device,
...     processing_device="cpu",
...     video_storage_device="cpu",
...     dtype=torch.bfloat16,
... )
>>> # Add text prompt to detect and track objects
>>> text = "person"
>>> inference_session = processor.add_text_prompt(
...     inference_session=inference_session,
...     text=text,
... )
>>> # Process all frames in the video
>>> outputs_per_frame = {}
>>> for model_outputs in model.propagate_in_video_iterator(
...     inference_session=inference_session, max_frame_num_to_track=50
... ):
...     processed_outputs = processor.postprocess_outputs(inference_session, model_outputs)
...     outputs_per_frame[model_outputs.frame_idx] = processed_outputs
>>> print(f"Processed {len(outputs_per_frame)} frames")
Processed 51 frames
>>> # Access results for a specific frame
>>> frame_0_outputs = outputs_per_frame[0]
>>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects")
>>> print(f"Object IDs: {frame_0_outputs['object_ids'].tolist()}")
>>> print(f"Scores: {frame_0_outputs['scores'].tolist()}")
>>> print(f"Boxes shape (XYXY format, absolute coordinates): {frame_0_outputs['boxes'].shape}")
>>> print(f"Masks shape: {frame_0_outputs['masks'].shape}")

You can also track multiple object categories simultaneously by providing multiple prompts. The model efficiently reuses vision features across all prompts:

>>> # Add multiple text prompts (or use a list in add_text_prompt)
>>> multi_prompt_session = processor.init_video_session(
...     video=video_frames,
...     inference_device=device,
...     processing_device="cpu",
...     video_storage_device="cpu",
...     dtype=torch.bfloat16,
... )
>>>
>>> prompts = ["person", "bed", "lamp"]
>>> processor.add_text_prompt(multi_prompt_session, prompts)
>>>
>>> # Process video - detects objects from ALL prompts in a single pass
>>> multi_outputs_per_frame = {}
>>> for model_outputs in model.propagate_in_video_iterator(
...     inference_session=multi_prompt_session, max_frame_num_to_track=50
... ):
...     processed_outputs = processor.postprocess_outputs(multi_prompt_session, model_outputs)
...     multi_outputs_per_frame[model_outputs.frame_idx] = processed_outputs
>>>
>>> # Check which objects were detected by each prompt
>>> frame_0_outputs = multi_outputs_per_frame[0]
>>> prompt_to_obj_ids = frame_0_outputs["prompt_to_obj_ids"]
>>> for prompt, obj_ids in prompt_to_obj_ids.items():
... print(f"{prompt}: {len(obj_ids)} objects")
person: 2 objects
bed: 1 objects
lamp: 1 objects
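
Since `postprocess_outputs` returns the prompt-to-object mapping for every frame, you can aggregate results over the whole clip, for example counting how many distinct object IDs each prompt ended up tracking. A small sketch in plain Python over the `multi_outputs_per_frame` dictionary built above (no additional API assumed):

>>> # Count, per prompt, the distinct object IDs seen across all processed frames
>>> from collections import defaultdict
>>> ids_per_prompt = defaultdict(set)
>>> for frame_outputs in multi_outputs_per_frame.values():
...     for prompt, obj_ids in frame_outputs["prompt_to_obj_ids"].items():
...         ids_per_prompt[prompt].update(int(obj_id) for obj_id in obj_ids)
>>> for prompt, ids in sorted(ids_per_prompt.items()):
...     print(f"{prompt}: {len(ids)} distinct tracked object(s)")
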
For real-time applications, SAM3 Video also supports streaming inference, processing video frames one at a time as they arrive:

⚠️ **Note on Streaming Inference Quality**: Streaming inference disables the hotstart heuristics that remove unmatched and duplicate objects, because these heuristics need access to future frames to make informed decisions. Streaming may therefore produce more false positive detections and duplicate object tracks than pre-loaded video inference. For best results, use pre-loaded video inference when all frames are available.

>>> # Initialize session for streaming
>>> streaming_inference_session = processor.init_video_session(
...     inference_device=device,
...     processing_device="cpu",
...     video_storage_device="cpu",
...     dtype=torch.bfloat16,
... )
>>> # Add text prompt
>>> text = "person"
>>> streaming_inference_session = processor.add_text_prompt(
...     inference_session=streaming_inference_session,
...     text=text,
... )
>>> # Process frames one by one (streaming mode)
>>> streaming_outputs_per_frame = {}
>>> for frame_idx, frame in enumerate(video_frames[:50]): # Process first 50 frames
...     # First, process the frame using the processor
...     inputs = processor(images=frame, device=device, return_tensors="pt")
...
...     # Process frame using streaming inference - pass the processed pixel_values
...     model_outputs = model(
...         inference_session=streaming_inference_session,
...         frame=inputs.pixel_values[0],  # Provide processed frame - this enables streaming mode
...         reverse=False,
...     )
...
...     # Post-process outputs with original_sizes for proper resolution handling
...     processed_outputs = processor.postprocess_outputs(
...         streaming_inference_session,
...         model_outputs,
...         original_sizes=inputs.original_sizes,  # Required for streaming inference
...     )
...     streaming_outputs_per_frame[frame_idx] = processed_outputs
...
...     if (frame_idx + 1) % 10 == 0:
...         print(f"Processed {frame_idx + 1} frames...")
>>> print(f"✓ Streaming inference complete! Processed {len(streaming_outputs_per_frame)} frames")
✓ Streaming inference complete! Processed 50 frames
>>> # Access results
>>> frame_0_outputs = streaming_outputs_per_frame[0]
>>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects in first frame")
>>> print(f"Boxes are in XYXY format (absolute pixel coordinates): {frame_0_outputs['boxes'].shape}")
>>> print(f"Masks are at original video resolution: {frame_0_outputs['masks'].shape}")
For faster inference or lower memory usage, you can run the model at a lower resolution:

⚠️ **Performance Note**: Custom resolutions may degrade accuracy. The model is meant to be used at its native 1008px resolution.

>>> from transformers import Sam3VideoConfig, Sam3VideoModel, Sam3VideoProcessor
>>> config = Sam3VideoConfig.from_pretrained("facebook/sam3")
>>> config.image_size = 560
>>> model = Sam3VideoModel.from_pretrained("facebook/sam3", config=config).to(device, dtype=torch.bfloat16)
>>> processor = Sam3VideoProcessor.from_pretrained("facebook/sam3", size={"height": 560, "width": 560})

[[autodoc]] Sam3VideoConfig

[[autodoc]] Sam3VideoProcessor
    - __call__
    - postprocess_outputs
    - init_video_session
    - add_text_prompt

[[autodoc]] Sam3VideoInferenceSession

[[autodoc]] Sam3VideoSegmentationOutput

[[autodoc]] Sam3VideoModel
    - forward
    - propagate_in_video_iterator