“The world is in peril,” Anthropic’s Safeguards Research Team lead wrote in his resignation letter
A leading artificial intelligence safety researcher, Mrinank Sharma, has resigned from Anthropic with an enigmatic warning about global “interconnected crises,” announcing his plans to become “invisible for a period of time.”
Sharma, an Oxford graduate who led the Claude chatbot maker’s Safeguards Research Team, posted his resignation letter on X on Monday, describing a growing personal reckoning with “our situation.”
“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding at this very moment,” Sharma wrote to colleagues.
The departure comes amid mounting tensions surrounding the San Francisco-based AI lab, which is simultaneously racing to develop ever more powerful systems while its own executives warn that those same technologies could harm humanity.
I will be moving back to the UK and letting myself become invisible for a period of time.
— mrinank (@MrinankSharma) February 9, 2026
It also follows reports of a widening rift between Anthropic and the Pentagon over the military’s desire to deploy AI for autonomous weapons targeting without the safeguards the company has sought to impose.

Sharma’s resignation, which lands days after Anthropic released Opus 4.6, a more powerful iteration of its flagship Claude model, hinted at internal friction over safety priorities.
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he wrote. “I’ve seen this within myself, across the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
The researcher’s team was established just over a year ago with a mandate to address AI safety threats including “model misuse and misalignment,” bioterrorism prevention, and “crisis prevention.”

Sharma noted with pride his work developing defenses against AI-assisted bioweapons and his “final project on understanding how AI assistants could make us less human or distort our humanity.” Now he intends to move back to the UK to “explore a poetry degree” and “become invisible for a period of time.”
Anthropic’s chief executive, Dario Amodei, has repeatedly warned of the dangers posed by the very technology his company is commercializing. In a near-20,000-word essay last month, he cautioned that AI systems of “almost unimaginable power” are “imminent” and will “test who we are as a species.”
Amodei warned of “autonomy risks” where AI could “go rogue and overpower humanity,” and suggested the technology could enable “a global totalitarian dictatorship” through AI-powered surveillance and autonomous weapons.