One reason I’ve been underwhelmed by AI is that companies consistently frame it as a solution to every problem under the sun. That’s why Meta’s new Segment Anything Model (SAM 2) is so intriguing to me. SAM 2 doesn’t answer questions, write code, generate images, or compose music. Instead, as its name suggests, the new AI model simply segments objects in images and videos — but it does its one job really, really well.
Meta describes SAM 2 as “the first unified model for real-time, promptable object segmentation in images and videos.” You can use it to select multiple objects and track them in real time, even as they move around erratically in a video.
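Meta has also released the model’s code and weights openly on GitHub. For readers who want a sense of what “promptable” segmentation means in practice, here is a minimal sketch based on the open-source sam2 Python package; the checkpoint filename, config name, frame directory, and click coordinates are placeholder assumptions that may differ from the version you install:

```python
# A minimal sketch of promptable video segmentation with SAM 2.
# Assumes the open-source `sam2` package from Meta's GitHub release;
# the checkpoint, config, and paths below are illustrative placeholders.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2_hiera_large.pt"  # assumed filename
model_cfg = "sam2_hiera_l.yaml"                   # assumed config name
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

# Mixed-precision inference on GPU, as in Meta's released examples.
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Load the video (a directory of JPEG frames in the released code).
    state = predictor.init_state("./video_frames")

    # The "prompt": a single positive click on the object in frame 0.
    predictor.add_new_points(
        inference_state=state,
        frame_idx=0,
        obj_id=1,                                         # your own ID for this object
        points=np.array([[210, 350]], dtype=np.float32),  # (x, y) pixel coordinates
        labels=np.array([1], dtype=np.int32),             # 1 = foreground click
    )

    # Propagate that mask through the rest of the video in one pass.
    for frame_idx, object_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()         # boolean mask per object
```

The click is the prompt: one foreground point on one frame is enough for the model to produce a mask and carry it through the whole clip, and further clicks or boxes can refine the selection or add more objects.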
Here are just a few exciting examples of the new Segment Anything Model in action:
Meta’s new Segment Anything Model 2 (SAM 2) is incredible!
I fed it this video and it instantly was able to track Craig’s complex movements and add text overlays. pic.twitter.com/RbLDpPb7nD
— MindBranches (@MindBranches) July 30, 2024
A quick test using SAM 2 (Meta Segment Anything)
I can’t wait until this is real-time AR via the quest I have a lot of ideas already pic.twitter.com/uZmqMJ6xvU
— I▲N CURTIS (@XRarchitect) July 30, 2024
According to Meta, SAM 2’s improvements over the first model include more accurate segmentation in images, better object tracking in videos, and roughly a third of the interaction time required by existing interactive video segmentation methods.
If you want to try it out for yourself, Meta has a demo on its site that lets you track several objects in a short video and then make various edits, such as changing the background or adding effects to the selected objects.
If you want to read more about SAM 2, check out Meta’s latest blog post.