Grammarly Is Facing a Class Action Lawsuit Over Its AI 'Expert Review' Feature
Source: Wired
Superhuman, the tech company behind the writing software Grammarly, is facing a class action lawsuit over an AI tool that presented editing suggestions as if they came from established authors and academics, none of whom consented to have their names appear within the product.
Julia Angwin, an award-winning investigative journalist who founded The Markup, a nonprofit news organization that covers the impact of technology on society, is the only named plaintiff in the suit, which does not call for a specific amount in damages but argues that damages across the plaintiff class are in excess of $5 million. She was among the many individuals, alongside Stephen King and Neil deGrasse Tyson, offered up via Grammarly's Expert Review tool as a kind of virtual editor for users.
The federal suit, filed Wednesday afternoon in the Southern District of New York, states that Angwin, "on behalf of herself and others similarly situated, challenges Grammarly's misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman."
The complaint comes as Superhuman has already decided to discontinue the feature amid significant public backlash. "After careful consideration, we have decided to disable Expert Review as we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented, or not represented at all," said Ailian Gan, Superhuman's director for product management, in a statement to WIRED shortly before the claim was filed. "We built the agent to help users tap into the insights of thought leaders and experts and to give experts new ways to share their knowledge and reach new audiences. Based on the feedback we've received, we clearly missed the mark. We are sorry and will do things differently going forward."
-snip-
Read more: https://www.wired.com/story/grammarly-is-facing-a-class-action-lawsuit-over-its-ai-expert-review-feature/
Earlier threads about what Grammarly was doing:
Grammarly Is Offering 'Expert' AI Reviews From Your Favorite Authors--Dead or Alive (Wired & Verge staff were also used)
https://www.democraticunderground.com/100221078055
Grammarly ripped off famous writers using their names for AI-generated Expert Reviews. They're now allowing opt-outs
https://www.democraticunderground.com/100221086007
The lawsuit filed today says that "Contrary to the apparent belief of some tech companies, it is unlawful to appropriate people's names and identities for commercial purposes, whether those people are famous or not."
erronis
(23,593 posts)
They're pouring money into this snake oil and need to show profits.
highplainsdem
(61,611 posts)
of them. I just wish a lot of them would serve prison sentences, with their built-on-theft tools destroyed and the companies having to start over with what's in the public domain plus what they can afford to pay for after the lawsuits. At which point it would be obvious to everyone that almost the entire value of genAI was in the stolen IP.
AZJonnie
(3,611 posts)
This is why I often say the biggest fuckup about this whole thing is that there should have been hard, clear regulations, and even criminal laws in some cases, about this sort of infringement on people's intellectual output, and indeed even on their names and reputations, in place before anyone could incorporate AI into a commercial product. And we needed a serious, permanent Bureau of AI Oversight, created in as independent a way as possible (like the CFPB was meant to be, for example), and Congressional committees for it too.
I know you are also deeply troubled by the theft involved in building the models, and of course I think that is also problematic. From a practical perspective, though, that may have been tough sledding to regulate legislatively. But when it comes to actual commercial products doing shit like what Grammarly is doing here, THIS should have been a much easier piece to legislate and come to agreement on
and should've been done a LONG time ago!!!
Regulating how the "stolen" (to your thinking, and mine to a slightly lesser extent) information can be used in a for-profit setting should be a no-brainer, and easier to reach consensus on than the review of materials to make the AI "smart-like" in the first place. I don't LIKE that part, but I do see two arguments about it. These Grammarly shenanigans, I do NOT.
highplainsdem
(61,611 posts)
Yes, we should have had more regulations and laws in place. But the AI companies already knew they were breaking laws by training on stolen IP, and they knew "publicly available" is not the same as "public domain" and copyright-free. And they're still stealing IP, and trying to get the laws they've been breaking for years changed.
One law we didn't have, and needed, was a law against releasing AI that sounded convincing but hallucinated a lot and could not be stopped from hallucinating. But I don't think anyone would have imagined the tech industry ever marketing a fancy but hallucinating autocomplete as artificial intelligence, and burying the "it makes mistakes" and "check everything it does for errors because we have no idea where they might pop up" caveats in the fine print.
The genAI industry has been criminally irresponsible and exploitative. If we survive the military lunatics adding the AI lunatics' hallucinating autocompletes to weapons, I hope we'll be able to do something about that.
In the meantime, maybe some of these companies can be put out of business, and more people will begin to understand how unethical this tech is, and why it shouldn't be used.
Casey Newton of Platformer wrote a couple of days ago that what Grammarly was doing wasn't all that different from what the genAI companies had already done:
https://www.platformer.news/grammarly-expert-review-reviewed/
-snip-
The difference is that Grammarly took a latent capability, one that exists in every LLM, and turned it into a product feature. It curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription. That's a deliberate choice to monetize the identities of real people without involving them, and it sucks.
But for both expert review and chatbots in general, my underlying discomfort is the same. Most of my published work appears already to be inside these models, shaping their outputs in ways I never agreed to and will never fully understand. Grammarly just had the bad manners to put my name on it.
The bigger problem, though, is the one that's still invisible: all the ways my work, and the work of every other writer, is being used, right now, by systems that are smart enough not to tell us about it.
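For anyone wondering what Newton means by a "latent capability": persona prompting is all it takes. Here's a minimal, hypothetical sketch in Python, assuming the standard OpenAI chat-completions client; the function, model name, and prompts are invented for illustration and say nothing about how Grammarly actually built Expert Review.

```python
# Hypothetical sketch of persona prompting, the "latent capability" the
# Platformer excerpt describes. Invented for illustration only; this is
# not Grammarly's implementation. Assumes the openai package (>= 1.0)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def expert_review(expert_name: str, draft: str) -> str:
    """Return editing advice generated *in the voice of* expert_name.

    The model never consults the named person; it simply produces
    plausible-sounding text styled after them, which is exactly the
    problem the lawsuit describes.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do for this sketch
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are {expert_name}. Review the user's draft and "
                    "offer editing suggestions in your own voice."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Using a generic placeholder rather than any real person's name:
print(expert_review("a veteran investigative journalist",
                    "Our new AI feature delights users everywhere."))
```

Point being, any chatbot will do this if asked; the objection in the suit is to packaging it with real, named people and selling it.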
AZJonnie
(3,611 posts)
for work (for anything I care remotely about accuracy with) and I find it to be consistently pretty spot-on. Yes, it makes some reasoning mistakes (esp. overstating certainty levels about some things), but wholesale misattributions or outright hallucinations are just not something I really ever see. I realize it's a separate question from the way the models were built, and probably 98% of people are using the consumer-grade free stuff, which is a different animal. Use the higher-end models and it's honestly downright freaky smart (seeming). The jump in Gemini from 2.5 to 3.0 WRT its coding skills has been outright mind-blowing to me, I have to say, and 3.1 (maybe it's a beta) is already out, though I've not used it yet. Again, I *have to* use it; I cannot possibly hand-code at the level that the market/our clients will pay for these days.
All this being said, it's REALLY dangerous to give it autonomous control over ANYONE's freaking LIFE, such as in military, medical, and surveillance-type applications. It ain't THAT good, by any stretch. Scary, scary stuff there.