This is why I often say the biggest fuckup in this whole thing is that there should have been hard, clear regulations, and even criminal laws in some cases, covering this sort of infringement on people's intellectual output, and even on their names and reputations, put in place at minimum before anyone could incorporate AI into their commercial products. And we needed a serious, permanent Bureau of AI Oversight, created in as independent a way as possible (like the CFPB was meant to be, for example), with Congressional Committees for it too.
I know you are also deeply troubled by the theft involved in building the models, and I agree that is problematic too, but from a practical perspective it may have been tough sledding to regulate legislatively. At minimum, though, when it comes to actual commercial products doing shit like what Grammarly is doing here, THIS should have been a much easier piece to legislate and reach agreement on
and should've been done a LONG time ago!!!
Regulating how the "stolen" (to your thinking, and mine to a slightly lesser extent) information can be used in a for-profit setting should be a no-brainer, and far easier to reach consensus on than regulating the review of materials used to make the AI "smart-like" in the first place. I don't LIKE that part, but I do see two sides to the argument there. These Grammarly shenanigans, I do NOT.