Meta just dropped two major AI models that could reshape how we edit videos and understand 3D space. SAM 3 brings text-based object detection to images and video, while SAM 3D reconstructs 3D objects from single photos. Both models are live on Meta's new Segment Anything Playground, marking a significant leap in computer vision capabilities.
Meta is making a bold play in the computer vision race with two models that address problems the industry has wrestled with for years. The company's latest Segment Anything releases - SAM 3 and SAM 3D - move the workflow from pointing and clicking to simply describing what you want to isolate or reconstruct.
The bigger story here isn't just the technical achievement, but how Meta is positioning these tools for mass adoption. Unlike previous AI models that required computer vision expertise, both SAM 3 and SAM 3D are designed for everyday creators and businesses.
SAM 3 tackles one of the most persistent challenges in video editing: precise object segmentation driven by natural language. Where previous models could segment a "car," SAM 3 understands "yellow school bus" or even compound queries like "people sitting down, but not wearing a red baseball cap." That level of nuance reflects training on massive datasets, according to Meta's research paper.
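Conceptually, a compound query like "people sitting down, but not wearing a red baseball cap" behaves like set operations over per-concept masks. The sketch below illustrates that idea with synthetic NumPy arrays; it is not Meta's actual SAM 3 API, and the masks are made-up stand-ins for real model output.

```python
import numpy as np

# Dummy per-concept masks for a 4x4 frame (illustrative only; a model
# like SAM 3 would produce real masks from the image and text prompt).
sitting = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=bool)

red_cap = np.array([
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
], dtype=bool)

# "sitting down, but not wearing a red baseball cap" reads as a set
# difference over the two concepts: sitting AND NOT red_cap.
query_mask = sitting & ~red_cap

print(query_mask.sum())  # → 6 pixels selected
```

The point is that a single free-text prompt collapses what used to be several manual selection passes into one boolean combination over learned concepts.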
The implications hit immediately in Meta's ecosystem. The company is already integrating SAM 3 into its Edits video creation app, promising effects that creators can apply to specific people or objects. More integrations are coming to Vibes on the Meta AI app and meta.ai, suggesting Meta sees this as a core differentiator across its product suite.
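Applying "effects to specific people or objects" boils down to gating an image operation with a segmentation mask. Here is a minimal sketch of that pattern with a synthetic frame and mask; the helper name and gain value are illustrative, not part of any Meta API.

```python
import numpy as np

h, w = 2, 3
frame = np.full((h, w, 3), 120, dtype=np.uint8)  # flat gray frame
mask = np.zeros((h, w), dtype=bool)
mask[0, :2] = True  # stand-in for "the selected person/object"

def apply_masked_brightness(frame, mask, gain=1.5):
    """Brighten only the masked region, leaving the rest untouched."""
    out = frame.astype(np.float32)
    out[mask] = np.clip(out[mask] * gain, 0, 255)
    return out.astype(np.uint8)

edited = apply_masked_brightness(frame, mask)
print(edited[0, 0, 0], edited[1, 2, 0])  # → 180 120 (masked vs. unmasked)
```

Swap the brightness function for any per-pixel effect and the same mask-gating structure applies, which is what makes text-driven segmentation so useful for editors.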
But SAM 3D might be the more commercially significant release. The model's ability to reconstruct 3D objects from single images is already powering Facebook Marketplace's new "View in Room" feature, helping shoppers visualize furniture in their homes before buying. This practical application shows how Meta is thinking beyond research labs to real revenue opportunities.
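A "View in Room"-style preview rests on familiar geometry: place the reconstructed object's 3D points in camera space, then project them into the photo with a pinhole model. The sketch below shows that projection step for a furniture-sized box; the camera intrinsics and box dimensions are assumed values for illustration, not anything from Meta's implementation.

```python
import numpy as np

fx = fy = 500.0        # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0  # principal point (assumed)

# Corners of a 1m-wide, 0.8m-tall, 0.5m-deep box placed 2m in front
# of the camera, in camera coordinates (x right, y down, z forward).
corners = np.array([
    [x, y, z]
    for x in (-0.5, 0.5)
    for y in (-0.4, 0.4)
    for z in (2.0, 2.5)
])

def project(points):
    """Pinhole projection of Nx3 camera-space points to pixel coords."""
    u = fx * points[:, 0] / points[:, 2] + cx
    v = fy * points[:, 1] / points[:, 2] + cy
    return np.stack([u, v], axis=1)

pixels = project(corners)
print(pixels[:, 0].min(), pixels[:, 0].max())  # → 195.0 445.0
```

Once the eight corners land in pixel space, the app can render the object's footprint at the right scale for the room, which is why a single-image 3D reconstruction is enough to drive the shopping feature.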
The technical benchmarks tell the story of just how significant these advances are. SAM 3D Objects "significantly outperforms existing methods," according to Meta's announcement, while introducing what the company calls "a new standard for measuring research progress in 3D." Meta also collaborated with artists to build SAM 3D Artist Objects, a first-of-its-kind evaluation dataset that challenges existing 3D reconstruction methods.
What's particularly striking is Meta's open-source approach. The company is releasing SAM 3 model weights, evaluation benchmarks, and research papers alongside the consumer-facing playground. For SAM 3D, Meta is sharing model checkpoints and inference code. This represents a calculated bet that giving away the technology will accelerate adoption and cement Meta's position as the go-to platform for AI-powered creative tools.