A writer is suing Grammarly for turning her and other authors into ‘AI editors’ without consent

Grammarly Faces Class Action Lawsuit Over Unauthorized Use of Writers’ Names in AI Feature

Grammarly, the popular AI-powered writing assistant, has found itself at the center of a major controversy and legal battle after launching a feature that impersonated hundreds of well-known writers, journalists, and experts without their consent. The feature, called “Expert Review,” promised users feedback from prominent figures such as novelist Stephen King, the late scientist Carl Sagan, and tech journalist Kara Swisher—but it turns out none of these individuals had agreed to participate.

The Legal Backlash Begins

The controversy erupted when journalist Julia Angwin, known for her investigative work on technology and privacy, discovered that Grammarly had used her name and professional reputation without permission. In response, Angwin has filed a class action lawsuit against Superhuman, Grammarly's parent company, alleging violations of her privacy and publicity rights. The lawsuit, filed in federal court, seeks to represent not just Angwin but potentially hundreds of other writers whose names were used without authorization.

“I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise,” Angwin stated in a public declaration. Her frustration is compounded by the irony that her career has been dedicated to exposing how tech companies impact privacy and personal rights.

A Who’s Who of Unconsenting Participants

The list of writers and experts affected by Grammarly’s unauthorized use is extensive and includes some of the most recognizable names in journalism and technology. Beyond Angwin, the feature included AI approximations of renowned AI ethicist Timnit Gebru, known for her groundbreaking work on ethical AI development and her controversial departure from Google, as well as other prominent voices who have been critical of AI technology’s rapid advancement without proper safeguards.

The inclusion of AI critics in a feature that uses AI to impersonate them adds another layer of irony: the very people who have sounded alarms about the technology's dangers were enlisted, unwittingly, to promote a feature that embodies some of their concerns.

The Feature That Promised More Than It Delivered

Available only to Grammarly's premium subscribers, who pay $144 annually, the "Expert Review" feature was marketed as a way to receive critiques modeled on the world's top writers and thinkers. When users actually tried it, however, the AI-generated feedback proved disappointingly generic and superficial.

Casey Newton, founder and editor of the tech newsletter Platformer, decided to test the feature himself. He fed one of his articles into Grammarly's system and received feedback supposedly from an AI approximation of Kara Swisher. The feedback was so vague and unhelpful that it raised serious questions about why Grammarly went to the trouble of using these writers' names in the first place.

The AI's feedback to Newton read: "Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?" This kind of generic writing advice could have come from any basic writing assistant, not from one of tech journalism's most respected voices.

The Reaction from the Real Experts

When Newton shared the AI’s attempt at impersonating Kara Swisher with the actual journalist, her response was swift and scathing. Swisher texted Newton, “You rapacious information and identity thieves better get ready for me to go full McConaughey on you,” referencing Matthew McConaughey’s intense courtroom scene in “A Time to Kill.” She added, “Also, you suck.”

This reaction from Swisher, known for her direct and unfiltered communication style, perfectly encapsulates the frustration and anger felt by many of the writers who discovered their names were being used without permission.

Grammarly’s Response and the Feature’s Removal

In the wake of growing criticism and the looming legal threat, Superhuman CEO Shishir Mehrotra announced on LinkedIn that the “Expert Review” feature had been disabled. While Mehrotra offered an apology, he continued to defend the underlying concept of the feature, suggesting that it had potential value.

“Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal,” Mehrotra wrote in his defense of the idea. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has.”

This statement has done little to quell the anger of the affected writers, many of whom see it as missing the fundamental point: using someone’s name and reputation without permission is a violation of their rights, regardless of the intended benefit.

The Broader Implications

This controversy touches on several important issues in the current tech landscape:

  1. Consent and Intellectual Property: The case raises serious questions about who owns the rights to a person’s name, likeness, and professional reputation in the age of AI.

  2. AI Ethics: It highlights the ongoing debate about the ethical use of AI, particularly when it comes to impersonation and the use of real people’s identities.

  3. Privacy Rights: For writers and journalists who have made careers out of protecting privacy, having their identities used without consent is a particularly bitter pill to swallow.

  4. The Limits of AI: The generic nature of the feedback provided by the feature also serves as a reminder of AI’s current limitations in providing truly insightful, personalized critique.

What’s Next?

As the class action lawsuit moves forward, it could set important precedents for how AI companies can use real people’s identities and likenesses. The case may also force a broader conversation about the rights of individuals in an increasingly AI-driven world.

For Grammarly and Superhuman, this incident represents a significant misstep that could have lasting consequences for their reputation and business model. It also serves as a cautionary tale for other tech companies racing to implement AI features without fully considering the ethical and legal implications.

The writers and experts affected by this feature are not just seeking compensation; they’re fighting for control over their own identities and professional reputations in a digital age where AI can easily appropriate them without permission. As this case unfolds, it will be watched closely by content creators, tech companies, and legal experts alike, all of whom have a stake in how we navigate the complex intersection of AI, identity, and intellectual property in the 21st century.


Tags: Grammarly, AI controversy, class action lawsuit, unauthorized use, writers rights, intellectual property, tech ethics, Superhuman, Expert Review feature, Julia Angwin, Kara Swisher, Timnit Gebru, AI impersonation, privacy rights, digital identity

