Adobe just showed off something that'll make video editors either very excited or very worried about their jobs. At its Max conference, the company demonstrated Project Frame Forward, an experimental AI tool that can edit entire videos by making changes to just the first frame. Remove a person, add an object, change lighting - whatever you do to frame one gets applied across the whole video automatically, no masks required.
The demonstrations were part of what Adobe calls "sneaks" - experimental projects that offer a glimpse into the future of its creative software.

The standout, Project Frame Forward, lets video editors make changes to a single frame and watch those edits ripple across an entire clip. It's the kind of capability that traditionally required hours of frame-by-frame work, now happening in a few clicks. In the demonstration, the tool identified and removed a woman from the first frame of a video, then automatically applied that removal to every subsequent frame. It goes beyond simple object removal: users can insert new elements by drawing where they want them placed and describing what to add through AI prompts. The system makes these additions contextually aware - in one demo, a generated puddle reflected the movement of a cat that was already in the video. That level of scene understanding represents a significant leap over current AI editing tools.

Project Light Touch tackles photo editing with similar sophistication, using generative AI to reshape light sources in images. The tool can change lighting direction, make rooms appear illuminated by lamps that weren't on in the original photo, and control light diffusion and shadows. More impressively, it can insert dynamic lighting that users drag across the editing canvas in real time, bending light around people and objects. One demo showed a pumpkin illuminated from within while the surrounding environment shifted from day to night.

The third major tool, Project Clean Take, focuses on audio editing. It can change how speech is pronounced using AI prompts, alter the emotion or delivery of someone's voice, or replace words entirely while preserving the speaker's vocal characteristics.
The tool can also separate background noise into individual sources, letting editors selectively adjust or mute specific sounds while preserving voice clarity.

These aren't the only experimental tools Adobe showed off. Project Surface Swap instantly changes materials and textures on objects, Project Turn Style lets users rotate objects in images as if they were 3D models, and another demo enables photo editing in 3D space, automatically handling occlusion when new objects are inserted.

The creative industry is watching these developments closely. Traditional video editing workflows often involve tedious masking for object removal or complex compositing for scene changes. Adobe's approach eliminates much of that manual work, potentially democratizing advanced editing techniques that previously required specialized expertise.

These are still experimental projects, however. Adobe's sneaks program has a mixed track record: some features eventually make it into Creative Cloud applications, while others never see public release. Several recent Photoshop features started out as sneaks projects.

The timing of these announcements isn't coincidental. Adobe faces increasing competition from AI-native editing tools and needs to demonstrate that its established Creative Cloud ecosystem can evolve with the technology. The company has been integrating AI capabilities across its product line, but these experimental tools represent a more aggressive push into AI-first workflows.

For creative professionals, the implications are significant. These tools could dramatically reduce project turnaround times and make complex edits accessible to non-specialists. But they also raise questions about the future role of traditional editing skills and the potential for AI to homogenize creative output.






