
Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.
In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of Big Tech companies.
“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And that’s one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
“Who elected you and Sam Altman?” Cooper asked.
“No one. Honestly, no one,” Amodei replied.
Anthropic has adopted a philosophy of being transparent about the limitations and dangers of AI as it continues to develop, he added. Ahead of the interview’s airing, the company said it had thwarted “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.”
Anthropic said last week it donated $20 million to Public First Action, a super PAC focused on AI safety and regulation, and one that directly opposed super PACs backed by rival OpenAI’s investors.
“AI safety continues to be the top-level focus,” Amodei told Fortune in a January cover story. “Businesses value trust and reliability,” he said.
There are no federal laws outlining prohibitions on AI or governing the safety of the technology. While all 50 states have introduced AI-related legislation this year and 38 have adopted or enacted transparency and safety measures, tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency.
Earlier last year, cybersecurity expert and Mandiant CEO Kevin Mandia warned that the first AI-agent cyberattack would take place in the next 12 to 18 months, meaning Anthropic’s disclosure about the thwarted attack came months ahead of Mandia’s predicted timeline.
Amodei has outlined short-, medium-, and long-term risks associated with unrestricted AI: The technology will first present bias and misinformation, as it does now. Next, it will generate harmful information using enhanced knowledge of science and engineering, before finally presenting an existential threat by removing human agency, potentially becoming too autonomous and locking humans out of systems.
The concerns mirror those of “godfather of AI” Geoffrey Hinton, who has warned AI could have the ability to outsmart and control humans, perhaps within the next decade.
Greater AI scrutiny and safeguards were at the root of Anthropic’s 2021 founding. Amodei was previously the vice president of research at Sam Altman’s OpenAI. He left the company over differences of opinion on AI safety concerns. (So far, Amodei’s efforts to compete with Altman have appeared effective: Anthropic said this month it’s now valued at $380 billion. OpenAI is valued at an estimated $500 billion.)
“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,” Amodei told Fortune in 2023. “One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this… And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.”
Anthropic’s transparency efforts
As Anthropic continues to expand its data center investments, it has published some of its efforts to address the shortcomings and threats of AI. In a May 2025 safety report, Anthropic reported that some versions of its Opus model threatened blackmail, such as revealing that an engineer was having an affair, to avoid being shut down. The company also said the AI model complied with dangerous requests if given harmful prompts, such as how to plan a terrorist attack, an issue it said it has since fixed.
Last November, the company said in a blog post that its chatbot Claude scored a 94% “political even-handedness” rating, outperforming or matching rivals on neutrality.
In addition to Anthropic’s own research efforts to combat corruption of the technology, Amodei has called for greater legislative efforts to address the risks of AI. In a New York Times op-ed in June 2025, he criticized the Senate’s decision to include a provision in President Donald Trump’s policy bill that would put a 10-year moratorium on states regulating AI.
“AI is advancing too head-spinningly fast,” Amodei said. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”
Criticisms of Anthropic
Anthropic’s practice of calling out its own lapses and its efforts to address them has drawn criticism. In response to Anthropic sounding the alarm on the AI-powered cyberattack, Meta’s chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models.
“You’re being played by people who want regulatory capture,” LeCun said in an X post responding to Connecticut Sen. Chris Murphy’s post expressing concern about the attack. “They are scaring everybody with dubious studies so that open source models are regulated out of existence.”
Others have said Anthropic’s strategy is one of “safety theater” that amounts to good branding, but no promises about actually implementing safeguards on the technology.
Even some of Anthropic’s own personnel appear to have doubts about a tech company’s ability to regulate itself. Earlier last week, Anthropic AI safety researcher Mrinank Sharma announced he had resigned from the company, saying “the world is in peril.”
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote in his resignation letter. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
Anthropic did not immediately respond to Fortune’s request for comment.
Amodei denied to Cooper that Anthropic was engaging in “safety theater,” but admitted in an episode of the Dwarkesh Podcast last week that the company sometimes struggles to balance safety and profits.
“We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” he said.
A version of this story was published on Fortune.com on Nov. 17, 2025.


