Google is making its bet on AI-generated music official. The company just released Lyria 3, its newest music generation model, through a paid preview accessible via the Gemini API. Developers can start building with it immediately in Google AI Studio, the company's testing and prototyping environment.
The timing isn't accidental. AI music generation has exploded over the past year, with startups like Suno and Udio capturing mainstream attention with consumer apps that create full songs from text prompts. But Google is taking a different approach: targeting developers and enterprise users rather than going direct to consumers. It's a classic Google play: build the infrastructure, let others create the products.
Alisa Fortin, Product Manager at Google DeepMind, announced the release through the company's developer blog, though technical specifications remain sparse. What we do know is that Lyria 3 represents the third generation of Google's music AI technology, which first surfaced in experimental projects like Dream Track for YouTube Shorts.
The paid preview model signals Google's intent to commercialize its generative AI research beyond search and productivity tools. By integrating Lyria 3 into the Gemini API ecosystem, Google is positioning the technology alongside its language models and multimodal AI capabilities. Developers building on Google's platform can now, at least in principle, create applications that combine text, image, and music generation in a single workflow.