Grammarly’s AI Backlash: A Turning Point for Digital Identity and AI Ethics
Grammarly, the popular writing assistant, recently faced a significant public relations crisis and a class-action lawsuit over its “Expert Review” feature. The feature, now disabled, used the names and implied expertise of authors, journalists, and academics – even those deceased – to provide writing suggestions without their consent. This incident isn’t just about Grammarly; it signals a broader reckoning with how AI tools leverage personal identity and expertise.
The Rise of AI Personas and the Problem of Misappropriation
Grammarly’s “Expert Review” aimed to elevate its service by offering advice “inspired by” leading professionals. However, the execution sparked outrage. Tech journalist Kara Swisher publicly condemned the practice, calling Grammarly “rapacious information and identity thieves.” The core issue wasn’t simply the use of famous names, but the implication of endorsement and the potential for misrepresentation. Even dummy text triggered suggestions attributed to Stephen King, highlighting the feature’s flawed logic.
This incident underscores a growing trend: AI tools creating “personas” based on real individuals. Although intended to enhance user experience, this practice raises critical questions about digital identity, intellectual property, and the right to control one’s public image. The lawsuit filed against Grammarly alleges “misappropriation” of identities, citing California Civil Code § 3344(a)(1), which protects against unauthorized use of a person’s name for commercial purposes.
Beyond Grammarly: The Wider Implications for AI Development
The Grammarly controversy isn’t isolated. It reflects a broader anxiety surrounding the rapid advancement of large language models (LLMs) and their potential for misuse. As AI becomes more sophisticated, the line between inspiration and imitation blurs. The temptation to leverage established reputations to build trust and credibility is strong, but doing so without consent is ethically problematic and legally risky.
This situation highlights the need for clearer guidelines and regulations regarding the use of personal data and intellectual property in AI development. Companies must prioritize transparency and obtain explicit consent before using anyone’s name, likeness, or work to train or operate AI models. The disclaimer buried in Grammarly’s documentation – stating that expert references were “for informational purposes only” – proved insufficient to quell the backlash, demonstrating that transparency alone isn’t enough.
The Future of AI and Expert Endorsement
Grammarly CEO Shishir Mehrotra has pledged to “reimagine” the feature, focusing on giving experts “real control” over their representation. This suggests a potential shift towards a more collaborative model, where experts actively participate in and benefit from the use of their expertise in AI tools. Several possible paths forward exist:
- Verified Endorsements: AI tools could seek explicit permission and compensation from experts for using their names and insights.
- AI-Generated Expertise: Focus on developing AI models that can generate original insights based on publicly available knowledge, rather than mimicking specific individuals.
- Transparent Attribution: Clearly distinguish between AI-generated content and advice from human experts.
The incident also raises questions about the future of work for writers and journalists. As AI tools become capable of generating increasingly sophisticated content, the value of human expertise may be diminished – or, conversely, become even more critical in ensuring accuracy, originality, and ethical standards.
FAQ
Q: What was Grammarly’s “Expert Review” feature?
A: It was a premium feature that offered writing suggestions attributed to well-known authors, journalists, and academics.
Q: Why did people object to the feature?
A: The feature used people’s names and implied expertise without their consent, raising concerns about misappropriation and misrepresentation.
Q: Is Grammarly facing legal action?
A: Yes, a class-action lawsuit has been filed alleging the feature violated California law regarding the unauthorized use of personal identities.
Q: What is Grammarly doing about the issue?
A: Grammarly has disabled the “Expert Review” feature and is working on a revised version that will give experts more control over their representation.
Did you know? Even placeholder text triggered suggestions from renowned authors, demonstrating the feature’s flawed association logic.
Pro Tip: Always review the terms of service and privacy policies of AI tools to understand how your data and identity are being used.
What are your thoughts on AI tools using the names of experts? Share your opinion in the comments below, and explore our other articles on the ethical implications of artificial intelligence.
