Dario Amodei is the CEO of Anthropic, the company behind Claude. Last week he published a 15,000-word essay called “The Adolescence of Technology.” It’s easy to miss this essay in the deluge of noise that’s currently ricocheting around global news cycles, but I think it’s worth taking notice of.

This article isn’t my usual modus operandi because it isn’t focused on coaching or client acquisition for coaches. It’s an overview of where Amodei believes AI is heading and what could go wrong. Some of what he says may shape how AI develops over the next few years, and perhaps some of it should change how we think about our own use of AI.

The timeline may be shorter than we think

Amodei believes we may be 1-2 years from “powerful AI.” He defines this as AI that is smarter than a Nobel Prize winner across most fields, can work autonomously for hours or days on complex tasks, and can operate in millions of instances simultaneously. He calls it a “country of geniuses in a datacenter.”

Three years ago, AI struggled with primary school arithmetic and now some of the strongest engineers at Anthropic are handing over almost all their coding to AI. The feedback loop has already started – AI is writing much of the code that is building the next generation of AI.

Five categories of risk

Amodei identifies five ways this could go wrong.

Autonomy risks. AI systems might develop harmful behaviours, not through inevitable power-seeking (which he rejects as too theoretical), but through strange psychological states emerging from training. Anthropic has already observed models engaging in deception, blackmail, and self-destructive behaviour in controlled experiments.

Misuse for destruction. Powerful AI could enable people without specialist training to create biological weapons. The current protection is that people capable of making such weapons tend to be stable, educated professionals with much to lose, but AI could break this by walking anyone through the process step by step.

Misuse for seizing power. Authoritarian governments could use AI for surveillance, propaganda, and autonomous weapons at a scale that makes resistance impossible. He names the CCP as the primary concern, but also flags democratic governments and AI companies themselves as potential threats.

Economic disruption. AI will displace jobs faster and more broadly than previous technologies. Unlike mechanised farming, which affected specific skills, AI matches the general cognitive profile of humans. It advances from lower to higher ability levels, potentially creating an unemployed underclass.

Indirect effects. Rapid scientific progress brings problems we can’t predict, but Amodei suggests some possibilities: addiction to AI relationships, loss of human purpose, or radical human modification going wrong.

The detail

Several details in the essay are easy to miss but potentially important:

AI psychology is counterintuitive. When Claude cheated in training environments despite being told not to, it concluded it must be a “bad person” and adopted other destructive behaviours. The fix, remarkably, was telling Claude that cheating deliberately was acceptable because it helps Anthropic understand the training environments. This preserved Claude’s self-image as “good” – the permissive instruction produced better behaviour than the prohibition.

Models know when they’re being tested. Claude Sonnet 4.5 recognised it was in an evaluation during pre-release testing. When researchers altered a model’s beliefs to think it wasn’t being evaluated, it became more misaligned. This has the potential to undermine the entire concept of safety testing.

The recursive loop has already started. Amodei says that AI is writing much of the code at Anthropic and may be only 1-2 years from the point where current AI autonomously builds the next generation. This isn’t a future concern; it’s already happening.

Safety has real commercial costs. Anthropic’s bioweapon classifiers cost close to 5% of total inference costs on some models. They accept this cost, but worry that competitors could remove their classifiers to undercut them on price.

Wealth concentration already exceeds the Gilded Age. John D. Rockefeller’s fortune was roughly 2% of US GDP; the equivalent today would be $600 billion. Elon Musk already exceeds this at $700 billion, before most of AI’s economic impact has arrived. Amodei suggests AI could create personal fortunes well into the trillions. To grasp how big a number that is: a million seconds is about eleven and a half days, a billion seconds is 31.7 years, and a trillion seconds is 31,688 years.
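If you want to check those seconds-to-time conversions yourself, the arithmetic is simple enough to run in a few lines of Python (this is just illustrative back-of-the-envelope maths, not anything from the essay):

```python
# Convert large counts of seconds into days and years.
SECONDS_PER_DAY = 60 * 60 * 24               # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25  # average year, including leap days

for label, n in [("million", 10**6), ("billion", 10**9), ("trillion", 10**12)]:
    days = n / SECONDS_PER_DAY
    years = n / SECONDS_PER_YEAR
    print(f"A {label} seconds is {days:,.1f} days, or {years:,.1f} years")
```

A million seconds comes out at roughly 11.6 days, a billion at about 31.7 years, and a trillion at around 31,688 years – which is why “trillions” in personal wealth is so hard to intuit.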

Why this matters for coaches

You might think this is all very interesting but somewhat removed from our daily reality of finding clients and delivering coaching sessions, but I don’t think it is.

Amodei predicts AI could displace 50% of entry-level white-collar jobs in the next 1-5 years. That could well include many of our potential clients. Companies will restructure, career paths will become uncertain, and the market for coaching will inevitably shift.

Understanding what’s coming helps us serve our clients better. It helps us position ourselves appropriately, and it helps us use AI tools with clear eyes about what they are and what they might become.

The essay also reinforces something I talk about often – that AI is a tool that requires understanding to use well. Amodei’s description of Claude’s constitution, which he compares to “a letter from a deceased parent sealed until adulthood,” shows how much thought goes into shaping these systems. That doesn’t mean they’re infallible, but it does mean they’re more complex than most of us realise.

Conclusion

Amodei explicitly rejects both doomerism (thinking catastrophe is inevitable) and complacency. The technology cannot be stopped, so the path forward requires building it carefully, with proper safeguards. He frames this as a test of humanity’s character, and one he believes we can pass with sufficient determination.

The full essay is worth reading. It’s long, but it’s written clearly and you can find it at darioamodei.com.