How digital artist Josephine Miller uses Meta Segment Anything to help design the future of fashion

17 January 2025

In a recent Instagram post, Josephine Miller appears to be dressed for a formal event in a gold floor-length evening gown when, suddenly, the color of her dress changes. One moment it’s red, then turquoise with a floral print, followed by a rainbow design. Her dress would be a conversation piece on any red carpet, except for one thing: The illusion doesn’t exist in the physical world.

Instead, it’s one that Josephine, a London-based XR creative designer and content creator, created using AI and Meta’s Segment Anything 2 model.

Instead of buying a few similar dresses, Josephine says she was eager to show how many different looks could be achieved without having to change the physical dress she was wearing.

“With AI and digital fashion, we have creative ways to express ourselves online that don’t rely on having to ‘buy into’ trends and fast fashion mass consumption,” she says. “This shift not only celebrates individuality but also supports a more sustainable future.”

For her work, Josephine built a computer with an RTX 4090 GPU to accommodate heavier tasks. She created her red carpet looks in ComfyUI, an open source node-based interface for running Stable Diffusion models. She then used Meta’s Segment Anything Model (SAM) to segment her outfit, a computer vision task that requires identifying the pixels in an image that correspond to the object of interest. Until the release of the first version of SAM in 2023, object segmentation was a time-consuming task that wasn’t always precise. SAM transformed the process by enabling interactive and automatic object segmentation in images, and its successor, SAM 2, extended those flexible segmentation capabilities from still images to video.
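For a sense of where SAM fits into a pipeline like this, the sketch below segments a garment in a single frame from a couple of point clicks, using the image predictor from Meta’s open source sam2 package. The checkpoint name is a real release, but the file path and click coordinates are hypothetical stand-ins, not details from Josephine’s actual workflow.

```python
import numpy as np
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Load one extracted video frame (hypothetical path).
frame = np.array(Image.open("frame_0001.png").convert("RGB"))

# Build the SAM 2 image predictor from a released checkpoint.
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
predictor.set_image(frame)

# Two positive clicks on the dress (hypothetical pixel coordinates).
point_coords = np.array([[512, 800], [540, 950]])
point_labels = np.array([1, 1])  # 1 = foreground click, 0 = background click

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=False,
)

# masks[0] is a frame-sized array that is nonzero wherever the dress is.
dress_mask = masks[0].astype(bool)
```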

“SAM was transformative in helping execute clean segmentation on top of the existing video,” Josephine says. “As soon as I started using it, the quality of my outcomes increased significantly. For example, I can select a dress and jacket, instead of manually having to do it.”
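Selecting a dress and a jacket with a few clicks maps onto SAM 2’s video predictor, which assigns each prompted object an ID and tracks its mask across frames. Below is a minimal sketch under the same assumptions (placeholder paths, object IDs, and click coordinates; a CUDA GPU is assumed for the autocast context):

```python
import numpy as np
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("facebook/sam2-hiera-large")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # A directory of extracted JPEG frames (hypothetical path).
    state = predictor.init_state(video_path="frames/")

    # One foreground click each for the dress (obj_id=1) and the
    # jacket (obj_id=2) on the first frame; coordinates are made up.
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[512, 800]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=2,
        points=np.array([[480, 400]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate both masks through every remaining frame.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # one boolean mask per object
        # ...hand this frame's masks to the compositing step
```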

It’s Josephine’s artistic vision and expertise in working with open source AI that have propelled her to become one of the go-to creative minds for global brands seeking to create activations across augmented, virtual, and mixed reality.

She remembers when she dove head-first into learning how to work with AI as a university student during the COVID lockdown in 2020.

“I taught myself AI workflows for content creation primarily through experimenting, driven by a genuine curiosity,” she says. “I began with creating images from text prompts and gradually moved into working with open source AI video models in ComfyUI. That’s where I started integrating SAM to segment AI outfits into my videos. It was a lot of trial and error, but after two months of persistence, I finally developed a workflow that works for me today.”
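The “segment AI outfits into my videos” step she describes boils down to mask-based compositing: wherever the mask covers the garment, take pixels from the restyled render; everywhere else, keep the original footage. Here is a minimal per-frame sketch in NumPy, with illustrative array names rather than anything from her pipeline:

```python
import numpy as np

def composite_outfit(original: np.ndarray,
                     restyled: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Blend a restyled garment into the original frame.

    original, restyled: (H, W, 3) uint8 frames of the same size.
    mask: (H, W) boolean garment mask, e.g. from SAM 2.
    """
    # Hard alpha blend; in practice the mask edges would usually be
    # feathered (blurred) first so the seam isn't visibly cut out.
    alpha = mask.astype(np.float32)[..., None]
    out = alpha * restyled + (1.0 - alpha) * original
    return out.astype(np.uint8)
```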

While working with these models requires a powerful GPU, like the RTX 4090 in her custom build, and can’t yet be done locally on a standard laptop, Josephine says she sees a future where open source models like SAM will be used by many more people to help supercharge their creativity.

“SAM has genuinely saved me time in segmenting AI outfits, with results that are impressively clean-cut compared to traditional methods,” she says. “I highly recommend it for creators and video editors looking to streamline their workflow—it’s really been a game changer for me. AI tools like SAM 2 allow you to focus more time on areas of creativity you enjoy, speeding up the time spent on mundane tasks.”


Source: Meta