Grammarly is using our identities without permission

Grammarly’s AI “Expert Review” Feature Sparks Outrage by Using Tech Journalists’ Names Without Permission

In a revelation that has sent shockwaves through the tech journalism community, Grammarly’s new “Expert Review” feature has been caught using the names and reputations of prominent tech journalists without their consent. The AI-powered tool, launched in August by Grammarly’s parent company Superhuman, claims to offer users writing advice “inspired by” subject matter experts, including recently deceased professors and living industry figures.

What makes this story particularly scandalous is that many of the so-called “experts” named in the feature are tech journalists who had no idea their names were being used to lend credibility to an AI tool. Among those unexpectedly featured are The Verge’s own editor-in-chief Nilay Patel, editor-at-large David Pierce, and senior editors Sean Hollister and Tom Warren.

“I was absolutely stunned to see my name attached to an AI feature I had no knowledge of,” said one affected journalist who wished to remain anonymous. “It’s one thing to train an AI on publicly available content, but it’s another to present that AI as if it were me giving personalized advice.”

The feature, which Grammarly describes as helping users “sharpen their message through the lens of industry-relevant perspectives,” allows users to receive AI-generated suggestions supposedly inspired by experts like Stephen King, Neil deGrasse Tyson, and Carl Sagan. However, the inclusion of living tech journalists without their permission has raised serious ethical questions about consent and representation in the age of AI.

When The Verge investigated further, they discovered a veritable who’s who of tech journalism names appearing in the feature, including former Verge editors Casey Newton and Joanna Stern, Wired’s Lauren Goode, Bloomberg’s Mark Gurman and Jason Schreier, The New York Times’ Kashmir Hill, The Atlantic’s Kaitlyn Tiffany, and many others.

The situation becomes even more concerning when examining how the feature presents its “expert” suggestions. In Google Docs, the AI-generated comments are formatted to look remarkably similar to actual human feedback, potentially misleading users into thinking they’re receiving advice from the named expert. One journalist noted that the AI’s suggestion attributed to them actually contradicted their known editing style.

“It’s not just about using our names,” explained another affected journalist. “It’s about creating a false impression of endorsement and expertise. The AI suggestions don’t reflect how any of us would actually edit or advise.”

Adding insult to injury, the feature appears to have significant technical problems. Users report frequent crashes, and the “sources” linked in the suggestions often lead to spammy websites or completely unrelated content. In some cases, suggestions attributed to one expert appear to be based on the work of entirely different people.

When confronted about these issues, Alex Gay, vice president of product and corporate marketing at Superhuman, defended the practice: “The experts in Expert Review appear because their published works are publicly available and widely cited.” However, this explanation has done little to satisfy critics, who argue that public availability does not equal permission for commercial use in this manner.

The controversy highlights growing concerns about how AI companies are using the work and identities of real people without consent. While the feature doesn’t explicitly claim endorsement from the named experts, the presentation strongly implies a connection that doesn’t exist.

“This is a clear example of how AI companies are rushing to market with features that have serious ethical implications,” said a digital rights advocate. “Using someone’s name, reputation, and writing style without permission to sell a product is problematic, regardless of whether their work is publicly available.”

As the story continues to develop, many affected journalists are calling for Grammarly to remove their names from the feature and implement a consent-based system for future expert references. The incident serves as a stark reminder of the ongoing challenges in balancing AI innovation with respect for individual rights and professional reputations.

The Grammarly controversy is just the latest in a series of incidents highlighting the need for clearer guidelines and regulations around AI training data and the commercial use of people’s identities and work. As AI continues to advance, the tech industry will need to grapple with these ethical questions to maintain trust and respect for individual rights.

Tags: #Grammarly #AIethics #TechJournalism #Superhuman #ArtificialIntelligence #DigitalRights #Plagiarism #TechControversy #AIResponsibility #ContentCreation


