Confronting the CEO of the AI company that impersonated me
AI Summary
Shishir Mehrotra, CEO of Superhuman (formerly Grammarly), appeared on The Verge's Decoder podcast to address the controversy surrounding Grammarly's 'Expert Review' feature, launched in August of the prior year. The feature generated AI-based writing suggestions attributed to real journalists and public figures, including The Verge editor-in-chief Nilay Patel, journalist Casey Newton, and investigative journalist Julia Angwin, without their consent. Angwin subsequently filed a class-action lawsuit against the company; Mehrotra said he believes the suit is without merit, arguing that the feature constituted attribution rather than impersonation.

Superhuman initially responded to complaints by offering an email-based opt-out before discontinuing the feature entirely, a decision Mehrotra said predated the lawsuit and was driven by the feature being 'off-strategy' and poorly executed by a small team of a product manager and a few engineers. The company, which rebranded from Grammarly to Superhuman late last year, now encompasses Grammarly, the document tool Coda, an email client called Mail, and a new agent platform called Superhuman Go; it reports approximately 40 million daily active users across one million unique app surfaces per day and employs around 1,500 people. Mehrotra acknowledged the feature was of poor quality and apologized, while defending the company's broader platform strategy, including a 70/30 revenue-share model for creators who voluntarily build and monetize agents on the Superhuman platform.
Why it matters
The controversy highlights a critical and unresolved tension in the AI industry around the use of personal names, likenesses, and published work in commercial AI products without consent or compensation, an issue that extends well beyond Superhuman to major foundation model providers currently facing multiple copyright lawsuits from outlets including The New York Times and Vox Media. For investors in AI-adjacent software companies, the case underscores emerging legal and reputational risks tied to AI feature development, particularly as courts have yet to establish clear precedent on whether AI-generated attributions constitute actionable misappropriation of likeness under New York and California law. Mehrotra's comments also reflect broader competitive pressures facing SaaS companies built on top of foundation models, including the risk of disintermediation by OpenAI, Anthropic, and Google, which could compress margins for application-layer AI businesses like Superhuman.
Scoring rationale
The article covers an AI product controversy (Grammarly/Superhuman's AI-cloned 'Expert Review' feature) and broader AI industry economics, but is primarily a podcast interview focused on ethics, creator rights, and company strategy rather than a market-moving financial story.
This summary was generated by AI from the original article published by The Verge. AIMarketWire does not provide trading advice. Always refer to the original source for complete reporting.