From their excuses blog post:
We focused the RSP on the principle of conditional, or if-then, commitments. If a model exceeded certain capability levels (for example, biological science capabilities that could assist in the creation of dangerous weapons), then the policy stated that we should introduce a new and stricter set of safeguards (for example, against model misuse and the theft of model weights).
"Model misuse" - this would have been a safeguard against this Department of "War". But this is what they're giving up.
A race to the top. We hoped that announcing our RSP would encourage other AI companies to introduce similar policies. This is the idea of a race to the top (the converse of a race to the bottom), in which different industry players are incentivized to improve, rather than weaken, their models' safeguards and their overall safety posture. Over time, we hoped RSPs, or similar policies, would become voluntary industry standards or go on to inform AI laws aimed at encouraging safety and transparency in AI model development.
...
The idea of using the RSP thresholds to create more consensus about AI risks did not play out in practice, although there was some of this effect. We found pre-set capability levels to be far more ambiguous than we anticipated: in some cases, model capabilities have clearly approached the RSP thresholds, but we have had substantial uncertainty about whether they have definitively passed those thresholds. The science of model evaluation isn't well-developed enough to provide dispositive answers. In such cases, we have taken a precautionary approach and implemented the relevant safeguards, but our internal uncertainty translates into a weak external case for taking multilateral action across the AI industry.
In other words, the other players have loose morals, and we can't afford to have tighter ones.
Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly. The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust. But this is proving to be a long-term project, not something that is happening organically as AI becomes more capable or crosses certain thresholds.
https://www.anthropic.com/news/responsible-scaling-policy-v3
"This US government is doing fuck all to regulate the industry, so we don't see why we should be more responsible than we're forced to be."
The tragedy of the timing of the 2nd Trump Regime is not that he gets to preen at the 250th anniversary celebrations, the World Cup and the Olympics; it's that the most immoral gang of deviants ever to get close to power in the USA is in charge at the very moment climate change and AI demand a responsible, intelligent, selfless, forward-looking attitude.