Spotify just handed artists something they've been demanding for months - actual control over their own profiles. The company's new Artist Profile Protection feature, now in beta testing, lets musicians review and approve any release before it appears under their name. It's a direct response to the surge of AI-generated fakes and impostor tracks that have turned artist pages into potential liability zones.
The announcement comes via Spotify's artist blog, where the company frames it as a solution to both innocent metadata mix-ups and deliberate bad actors. But the timing tells the real story - AI voice cloning has evolved from novelty to nuisance to genuine threat over the past year.
Artists from Drake and Beyoncé to experimental composer William Basinski and psych-rock outfit King Gizzard and the Lizard Wizard have found fake tracks appearing under their names. Some are obvious metadata errors - two artists with the same name getting their catalogs crossed. But increasingly, the fakes are AI-generated voice clones designed to siphon streams and royalties from legitimate artists.
The new system works like a gatekeeper. When a distributor submits a release to an artist's profile, the artist gets a notification through Spotify for Artists. They can review the track details, listen if they want, then approve or decline it. Declined releases get bounced back to the distributor to sort out. It's manual, it's time-consuming, and for artists dealing with impostor problems, it's probably a relief.
What makes this particularly significant is how it shifts responsibility. Previously, artists had to play whack-a-mole with fake uploads, reporting them after they'd already gone live and potentially racked up thousands of streams. Now they can stop the fakes before they ever reach listeners. It's the difference between damage control and actual prevention.