Yes, we should have had more regulations and laws in place. But the AI companies already knew they were breaking laws by training on stolen IP, and they knew that "publicly available" is not the same as "public domain" and copyright-free. And they're still stealing IP, and trying to get the laws they've been breaking for years changed.
One law we didn't have, and needed, was a law against releasing AI that sounded convincing but hallucinated a lot and could not be stopped from hallucinating. But I don't think anyone would have imagined the tech industry marketing a fancy but hallucinating autocomplete as artificial intelligence, and burying the "it makes mistakes" and "check everything it does for errors, because we have no idea where they might pop up" caveats in the fine print.
The genAI industry has been criminally irresponsible and exploitative. If we survive the military lunatics adding the AI lunatics' hallucinating autocompletes to weapons, I hope we'll be able to do something about that.
In the meantime, maybe some of these companies can be put out of business, and more people will begin to understand how unethical this tech is, and why it shouldn't be used.
Casey Newton of Platformer wrote a couple of days ago that what Grammarly was doing wasn't all that different from what the genAI companies had already done:
https://www.platformer.news/grammarly-expert-review-reviewed/
All that said, what Grammarly is doing here is not that different from the companies who build the underlying large language models. Paste a draft of your writing into a chatbot, type "edit this the way Casey Newton would," and the chatbot will cheerfully oblige. It won't ask for my permission, either. It certainly won't pay me. And unlike Grammarly, it won't even bother to remind you (in fine print) that I am not meaningfully involved.
-snip-
The difference is that Grammarly took a latent capability, one that exists in every LLM, and turned it into a product feature. It curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription. That's a deliberate choice to monetize the identities of real people without involving them, and it sucks.
But for both expert review and chatbots in general, my underlying discomfort is the same. Most of my published work appears already to be inside these models, shaping their outputs in ways I never agreed to and will never fully understand. Grammarly just had the bad manners to put my name on it.
The bigger problem, though, is the one that's still invisible: all the ways my work, and the work of every other writer, is being used, right now, by systems that are smart enough not to tell us about it.