Google just turned its Gemini chatbot into a music studio. The company is rolling out beta access to Lyria 3, DeepMind's latest audio generation model, directly within the Gemini app, letting users create 30-second tracks from text descriptions, uploaded images, or even video clips without ever leaving the chat window. It's a bold move that puts AI music creation in the hands of millions of Gemini users and signals Google's intent to dominate the emerging generative audio space, where startups like Suno and Udio have been making noise.
The integration marks a significant expansion of Gemini's capabilities and represents Google's most aggressive push yet into the rapidly evolving generative audio market. Unlike standalone music generation tools that require users to jump between platforms, Lyria 3 lives natively inside the Gemini interface: you describe what you want, and the AI composes it right there in the chat thread.
According to The Verge's report, the feature launches globally today with support for eight languages: English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese. Google's limiting access to users 18 and older, likely due to copyright and content moderation concerns that have plagued the AI music generation space.
The timing couldn't be more strategic. Generative audio has exploded in the past year, with startups like Suno and Udio attracting millions of users and raising significant venture capital. Both companies let users create full-length songs from simple text prompts, and they've demonstrated how hungry consumers are for accessible music creation tools. By embedding Lyria 3 directly into Gemini, Google's betting it can capture that demand while leveraging its massive existing user base.