Grammarly kills feature that unethically used experts — alive and dead — to fix your words

Grammarly’s AI “Expert Review” Feature Shut Down After Backlash Over Unapproved Use of Writers’ Identities

In a dramatic reversal that underscores the growing tension between AI innovation and intellectual property rights, Grammarly’s parent company Superhuman has pulled the plug on its controversial Expert Review feature after discovering it was using writers’ work and likenesses without their knowledge or consent.

The feature, launched in August, allowed users to receive writing suggestions “inspired by” influential writers and experts through third-party large language models. The critical flaw: the experts themselves had no idea their work and names were being repurposed to power the feature’s stylistic suggestions.

The Discovery That Sparked Outrage

The controversy erupted when The Verge’s editor-in-chief and several staff members discovered their names being used as style references within the tool. The realization that their writing voices were being synthesized and offered as AI suggestions without permission sent shockwaves through the journalism community.

“What makes this particularly egregious is that Grammarly didn’t just scrape content—they used people’s actual names and reputations as selling points for their AI feature,” said one affected writer who requested anonymity. “It’s one thing to train on publicly available data; it’s another to put someone’s name on it and profit from their professional identity.”

Superhuman’s Damage Control

Initially, Superhuman attempted damage control by creating an opt-out email inbox for affected writers. However, this half-measure only intensified the backlash, as writers questioned why they needed to opt out of something they never opted into in the first place.

“We clearly missed the mark,” admitted Ailian Gan, Superhuman’s director of product management. “We are sorry and will do things differently going forward.”

The company has now completely disabled the Expert Review feature while it reassesses its approach to AI development and content usage.

CEO Promises “Opt-In” Future

Superhuman CEO Shishir Mehrotra took to LinkedIn to apologize and outline a new vision centered on consent and creator participation. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has,” Mehrotra wrote. “But in this world, experts choose to participate, shape how their knowledge is represented, and control their business model.”

This pivot toward an opt-in model represents a significant shift in how AI companies approach content usage. Rather than assuming permission, Superhuman now acknowledges that creators should have agency over how their work and identity are utilized in AI systems.

Industry Implications

The Grammarly incident highlights a broader issue facing the AI industry: the ethical and legal implications of training models on copyrighted material and personal likenesses. While many AI companies operate in legal gray areas regarding data scraping, Grammarly’s explicit use of individual names and reputations crossed a line that even its competitors have largely avoided.

“This isn’t just about copyright—it’s about identity theft,” said Sarah Chen, a technology ethicist at Stanford University. “These writers spent decades developing their unique voices, and Grammarly essentially cloned them without permission to sell a product.”

The incident also raises questions about the sustainability of current AI business models that rely heavily on unlicensed data. As more creators push back against unauthorized use of their work, companies may need to fundamentally rethink how they develop and train their models.

What This Means for Users

For Grammarly users who tried the Expert Review feature, the shutdown means losing access to writing suggestions styled after well-known authors and journalists. The core grammar and spelling correction features, however, remain intact.

More importantly, this controversy serves as a wake-up call for users about the hidden costs of AI-powered tools. The convenience of AI assistance often comes at the expense of creators’ rights and intellectual property—a trade-off many users may not have considered.

The Road Ahead

Superhuman’s promise of an opt-in future for expert participation represents a potential path forward, but it also highlights the complexity of creating sustainable AI systems that respect creator rights. Questions remain about how compensation would work, what level of control creators would have, and whether enough experts would participate to make such a system viable.

The Grammarly debacle may ultimately accelerate industry-wide conversations about AI ethics, consent, and the value of human creativity in an increasingly automated world. As one industry insider noted, “This isn’t just about Grammarly—it’s about whether we’re comfortable with AI companies treating human creativity as an unlimited resource to be mined without permission.”

For now, writers can breathe a sigh of relief knowing their work won’t be impersonated by AI without their consent. But the larger battle over AI training data and creator rights is far from over.

