Anthropic
This page is a seed article. You can help Issuepedia water it – contact me to offer suggestions or additional sources! (Anything tossed in the tip jar also helps ^.^)
Links
Reference
News
- 2026-02-24 20:00 UTC [L..T] Exclusive: Anthropic Drops Flagship Safety Pledge «Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.» ... chief science officer Jared Kaplan: «“We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”» ... «Kaplan [...] denied the company’s decision to change course was a capitulation to market incentives as the race for superintelligence accelerates. He framed it instead as a pragmatic response to emerging political and scientific realities.» It's a "pragmatic response" to "political ... realities", but definitely not "capitulation"...
- 2026-02-23 18:00 UTC [L..T] Statement from Dario Amodei on our discussions with the Department of War «The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards» ... «these threats do not change our position: we cannot in good conscience accede to their request.» ...and yet the TIME article above reports that they did.