Grammarly drops AI impersonation tool after class action lawsuit

Grammarly Pulls AI ‘Expert Review’ Feature Amid Lawsuit Over Impersonation of Journalists and Authors

Grammarly has officially disabled its controversial AI-powered “Expert Review” feature following mounting backlash and a high-profile lawsuit alleging unauthorized impersonation of prominent journalists, authors, and editors.

The writing assistant tool, which has long been a staple for students, professionals, and writers worldwide, launched the feature last August as part of its premium subscription offering. For $12 per month, users could access what Grammarly marketed as “subject-matter expertise and personalized, topic-specific feedback” that supposedly met “rigorous academic or professional standards.”

However, the feature’s implementation quickly spiraled into controversy when it was revealed that Grammarly was using the names and identities of numerous well-known figures—both living and deceased—without their consent. The roster included journalists from prestigious publications such as Bloomberg, The New York Times, Wired, The Atlantic, and The Verge, as well as celebrated authors like Stephen King.

Among those impersonated was investigative journalist Julia Angwin, whose extensive career includes bylines at The Wall Street Journal, ProPublica, and The New York Times. Angwin filed a lawsuit against Grammarly on March 11th, alleging that the company violated her privacy and publicity rights by “exploiting their names and identities for profit without their consent.”

“I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise,” Angwin stated in response to the discovery.

According to Grammarly’s user guide, the feature “identifies relevant subject-matter experts based on your text and suggests edits from the perspective of these experts.” The company defended its approach by claiming these experts were included because “their published works are publicly available and widely cited.”

The backlash intensified when Alex Gay, Superhuman’s vice president of product and corporate marketing, told The Verge that the company believed its use fell within acceptable bounds given the public nature of the experts’ work.

However, the mounting criticism proved too significant to ignore. Shishir Mehrotra, CEO of Superhuman (Grammarly’s parent company following its acquisition of the AI email client last July), issued a public apology on LinkedIn. “Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. This kind of scrutiny improves our products, and we take it seriously,” Mehrotra wrote.

The CEO acknowledged the company’s misstep and outlined a path forward: “I want to apologise and acknowledge that we’ll rethink our approach going forward. After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented — or not represented at all.”

Grammarly plans to implement an opt-out system where affected experts can request removal from the feature via email, though critics argue this places the burden on individuals rather than requiring explicit consent upfront.

This controversy arrives at a pivotal moment for Grammarly, which is navigating the rapidly evolving AI landscape. As Casey Newton, editor and founder of Platformer, astutely observed, “Anyone with access to Claude, ChatGPT or Gemini can already get editing that makes Grammarly’s core product look like a relic.” This reality has pushed Grammarly to diversify beyond its traditional writing assistant model.

The company’s strategic pivot included the acquisition of AI productivity tools startup Coda in 2024, followed by the Superhuman acquisition and rebranding last July. These moves signal Grammarly’s recognition that standalone grammar and spelling correction tools are increasingly insufficient in an era dominated by powerful large language models.

Industry analysts suggest that Grammarly’s rush to add AI features may have outpaced its consideration of ethical implications and legal boundaries. The company’s initial defense—that public availability of work justifies its use—fails to account for the right of publicity and the commercial value of personal identity, particularly when used to sell premium subscriptions.

The incident also highlights the broader challenges facing AI companies as they navigate the fine line between innovation and exploitation. While AI tools promise unprecedented productivity gains, their deployment raises complex questions about consent, attribution, and the commercialization of human expertise.

For Grammarly, the immediate future involves not just technical redevelopment but also rebuilding trust with the writing and journalism communities it inadvertently alienated. The company’s promise to give experts “real control” over their representation represents a step toward more ethical AI deployment, though implementation details remain forthcoming.

As the dust settles on this controversy, one thing is clear: the AI industry’s rapid evolution continues to outpace regulatory frameworks and ethical guidelines, leaving companies like Grammarly to navigate uncharted territory where technological capability often outstrips responsible deployment.

The Grammarly saga serves as a cautionary tale for tech companies racing to capitalize on AI’s potential without fully considering the human implications of their innovations. In an age where artificial intelligence can convincingly mimic human expertise, the question of who owns and controls that expertise has never been more pressing.


Tags: Grammarly, AI controversy, Expert Review, Julia Angwin, Superhuman, AI ethics, impersonation lawsuit, writing assistant, tech backlash, AI features disabled
