I think that large-language-model-based AI is rapidly becoming unusable. This is probably going to be something like the tulip bulb craze that nearly brought down the Dutch economy in the 1630s.
A lot of companies and a lot of people got really, really excited about this stuff, and everybody forgot all the caveats. Suddenly there was software that could pass the Turing test. All that really showed is that the Turing test was completely inadequate, and that building software that could pass it didn't mean anything. It certainly didn't mean that artificial intelligence had actually been achieved. Large language models are, on a certain level, just very sophisticated pattern matching: they are built to mimic the content that has been loaded into them.
That is what we're seeing now. When prompted to come up with legal decisions that buttress the case being argued, well, that's what they do. It doesn't matter that the case law is fictional.
That seems to be the first law of large language models: an answer has to be generated, even if that answer doesn't exist.
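To make the "sophisticated pattern matching" point concrete, here is a deliberately crude sketch, not how any real LLM works, but a toy bigram model over a made-up three-sentence "legal" corpus. The corpus, the `generate` helper, and its parameters are all invented for illustration. Notice that it always emits a fluent-looking continuation, because producing the next plausible token is the only thing it can do; whether the result corresponds to any real case never enters into it.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "the content that has been loaded into" the model.
corpus = (
    "the court held that the statute applies . "
    "the court found that the claim fails . "
    "the statute applies to the claim ."
).split()

# Bigram table: for each word, the words that followed it in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8, seed=0):
    """Always produces a fluent-looking continuation, whether or not
    any real 'answer' exists -- it can only echo observed patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Ask it for a "holding" and it will cheerfully stitch one together.
print(generate("the"))
```

Every word it emits came from somewhere in its training data, yet the sentence it assembles need not describe any decision that ever existed. That is the first-law problem in miniature: an answer is always generated.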
What this means for all the companies trying to roll this out to reduce headcount and lower costs is that they will not be able to trust the output of LLMs. And that means this wonderful new tool will rapidly become completely unusable.
The sad truth is that people will still continue to try to create real AI. If that ever happens, the universe as we know it may cease to exist.