Anthropic is an AI company founded in 2021 by Dario Amodei, Daniela Amodei and several other former senior OpenAI researchers. The founders left OpenAI over strategic disagreements and established Anthropic with the explicit mission of developing AI systems that are safe, beneficial and understandable.
Anthropic has released the Claude family of large language models since March 2023: Claude 1 (March 2023), Claude 2 (July 2023), the Claude 3 family (March 2024), Claude 3.5 Sonnet (June 2024), Claude 3.7 Sonnet (February 2025), the Claude 4 family (from May 2025) and Claude Opus 4.7 (2026). The Claude models are trained using Constitutional AI, Anthropic's alignment methodology, which uses AI-generated feedback rather than human feedback for the harmlessness component of training.
Anthropic has positioned itself as the safety-conscious frontier AI lab, with substantial investments in:
- Mechanistic interpretability: the Transformer Circuits programme, which reverse-engineers the internal mechanisms of trained models
- Responsible scaling policies: explicit commitments tying capability evaluations to corresponding safety measures
- AI safety research: alignment research that goes beyond the engineering work needed to deploy current models
Anthropic has been valued at hundreds of billions of dollars (around $300 billion as of 2025) and has commercial partnerships with Amazon (a major investor and AWS deployment partner) and Google Cloud. As of 2026 it is one of the three or four organisations at the AI frontier, alongside OpenAI, Google DeepMind and (arguably) DeepSeek/xAI/Meta.
Related terms: dario-amodei, daniela-amodei, Constitutional AI, Claude
Discussed in:
- Chapter 1: What Is AI?, A Brief History of AI