
muriel_volestrangler

(105,956 posts)
6. Uggh. Peer pressure and the potential loss of the Pentagon contract made them cave
Wed Feb 25, 2026, 02:33 PM

From their excuses blog post:

We focused the RSP on the principle of conditional, or if-then, commitments. If a model exceeded certain capability levels (for example, biological science capabilities that could assist in the creation of dangerous weapons), then the policy stated that we should introduce a new and stricter set of safeguards (for example, against model misuse and the theft of model weights).

"Model misuse" - this would have been a safeguard against misuse by the Department of "War". But that is exactly what they're giving up.

A race to the top. We hoped that announcing our RSP would encourage other AI companies to introduce similar policies. This is the idea of a “race to the top” (the converse of a “race to the bottom”), in which different industry players are incentivized to improve, rather than weaken, their models’ safeguards and their overall safety posture. Over time, we hoped RSPs, or similar policies, would become voluntary industry standards or go on to inform AI laws aimed at encouraging safety and transparency in AI model development.
...
The idea of using the RSP thresholds to create more consensus about AI risks did not play out in practice—although there was some of this effect. We found pre-set capability levels to be far more ambiguous than we anticipated: in some cases, model capabilities have clearly approached the RSP thresholds, but we have had substantial uncertainty about whether they have definitively passed those thresholds. The science of model evaluation isn’t well-developed enough to provide dispositive answers. In such cases, we have taken a precautionary approach and implemented the relevant safeguards, but our internal uncertainty translates into a weak external case for taking multilateral action across the AI industry.

In other words, the other players have loose morals, and we can't afford to have tighter ones.
Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly. The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust. But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.

https://www.anthropic.com/news/responsible-scaling-policy-v3

"This US government is doing fuck all to regulate the industry, so we don't see why we should be more responsible than we're forced to be."

The tragedy of the timing of the 2nd Trump Regime is not that he gets to preen at the 250th celebrations, the World Cup and the Olympics; it's that the most immoral gang of deviants ever to get even close to power in the USA is in charge at the very moment climate change and AI demand a responsible, intelligent, selfless, forward-looking attitude.

