Adobe just dropped its biggest AI update yet at Max 2025, rolling out conversational AI assistants across Creative Cloud and launching Firefly's new audio generation tools. The company is betting that natural language editing will transform how creators work: instead of hunting through complex menus and toolbars, users will soon be able to simply describe the changes they want.
The rollout starts with Express and Photoshop for web, where Adobe's new AI Assistant lets users edit projects through natural language commands. "Make this wedding invitation more fall-themed" or "turn this into a retro science fair poster" are the kinds of prompts that now trigger comprehensive design changes, according to The Verge's hands-on coverage. The feature launches in public beta today, marking Adobe's most aggressive push into conversational AI yet.
But the real surprise came with Firefly's expansion into audio generation. Adobe's Generate Soundtrack tool analyzes uploaded videos and creates synchronized instrumental tracks that match the footage's mood and pacing. Users can guide the AI with style presets like lofi, hip-hop, or classical, or describe the desired vibe in plain text. The tool launched in public beta alongside Generate Speech, which creates AI voice-overs for video projects.
The audio push puts Adobe in direct competition with emerging players like Suno and Udio, but with a key advantage: tight integration with existing video editing workflows. "We're seeing creators spend hours hunting for the right royalty-free music," Adobe product managers said. "This eliminates that friction entirely."