Grammarly just pulled the plug on one of its most controversial AI features. The writing assistant disabled its "Expert Review" tool after users discovered it was cloning the voices of real journalists and writers without permission - including The Verge's own editor-in-chief. The move marks a rare retreat for an AI company and highlights growing tensions around consent and identity in AI products.
Grammarly has learned a hard lesson about AI and consent. The company's parent, Superhuman, pulled its "Expert Review" feature after users discovered the AI was essentially impersonating real writers and journalists to provide editing suggestions - all without asking permission first.
The feature claimed its feedback was "inspired by" actual experts in various fields. But when The Verge staff members started seeing their own names attached to AI-generated writing advice, the backlash was swift. Editor-in-chief Nilay Patel and other staffers found themselves unwittingly "reviewing" documents they'd never seen, their professional reputations borrowed to lend authority to AI suggestions.
"After careful consideration, we have decided to disable Expert Review as we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented - or not represented at all," Ailian Gan, Superhuman's director of product management, told The Verge in a statement. The admission was blunt: "Based on the feedback we've received, we clearly missed the mark. We are sorry and will do things differently going forward."
The incident reveals a troubling pattern in enterprise AI development. Companies are racing to add AI features that feel personalized and authoritative, but they're doing it by borrowing real people's expertise and identities without building in basic consent mechanisms. Grammarly apparently assumed it could synthesize expert personas the same way large language models absorb training data - treating them as raw material rather than as real people with rights.