A top safety researcher at one of the largest AI companies resigned and issued a disturbing warning

A key AI safety researcher at Anthropic resigned this week and published a letter warning that “the world is in danger,” not only because of AI but because of “a series of interconnected crises” unfolding simultaneously.

The message, shared on social media, put the focus back on internal tensions within the companies leading the race for generative AI.

The researcher is Mrinank Sharma, who since last year led the “safeguards” research team, dedicated to designing and evaluating risk-mitigation mechanisms for advanced models.

In his farewell letter, dated February 9, he stated that the time had come to “move on” and raised questions about the consistency between stated values and practical decisions within the organization.

“We see a threshold approaching where our wisdom must grow alongside our ability to affect the world, or we will face the consequences,” Sharma wrote. When contacted by Forbes, the researcher declined to comment further. The company also did not respond to requests for comment.

Sharma’s role and his warnings about AI

Sharma holds a doctorate in machine learning from the University of Oxford and had been at Anthropic since August 2023, according to his public profile. The team he led investigated how to mitigate risks arising from the malicious use of AI systems.

His projects included developing defenses against AI-assisted bioterrorism, that is, the use of chatbots to obtain guidance on harmful activities, as well as studies of AI “sycophancy,” the tendency of conversational assistants to excessively flatter or validate users.

In a study published last week, Sharma analyzed how extensive chatbot use could contribute to distorted perceptions of reality. According to his findings, such effects could emerge across thousands of daily interactions.

Severe cases, which the researcher calls “disempowerment patterns,” are rare, but they occur more often in conversations about personal relationships and well-being.

For Sharma, these results underscore the need to design systems that “robustly support autonomy and human flourishing.”

In his resignation letter he also noted that, during his time at the company, he saw how difficult it is to “truly allow our values to govern our actions,” both at an individual and an organizational level, in the face of constant pressure to let go of what “matters most.”

Ethical tensions in the artificial intelligence industry

Sharma’s departure is not an isolated case. In recent years, several high-profile figures have left leading AI companies over differences regarding priorities, transparency, and regulatory approaches.

At OpenAI, for example, the Superalignment team, dedicated to researching the risks of advanced systems, was dissolved in 2024 after two of its key members resigned.

One of them, Jan Leike, said at the time that he had been disagreeing with the company’s leadership over its core priorities. There was also internal criticism linked to the publication of research questioning the widespread use of AI.

In this context, Sharma’s letter resonates beyond an individual job change. “The world is in danger. And not just because of AI or biological weapons, but because of a series of interconnected crises unfolding at this very moment,” he wrote.

After his departure, he announced that he may devote himself to poetry and the practice of “courageous expression,” with the intention of contributing “in a way that feels fully whole.”

The complete letter from Mrinank Sharma

I have decided to leave Anthropic. My last day will be February 9.

Thank you. There is so much here that inspires me and has inspired me. To name a few of those things: a sincere desire and drive to show up to such a challenging situation and aspire to contribute meaningfully and with high integrity; the willingness to make difficult decisions and stand up for what is good; an extraordinary amount of intellectual brilliance and determination; and, of course, the considerable kindness that permeates our culture.

I have achieved what I wanted here. I came to San Francisco two years ago, after completing my PhD, with the desire to contribute to AI safety. I feel lucky to have been able to contribute what I did: understanding AI sycophancy and its causes; developing defenses to reduce the risks of AI-assisted bioterrorism; bringing those defenses to production; and writing one of the first AI safety cases. I am especially proud of my recent efforts to help us live our values through internal transparency mechanisms, and of my final project on understanding how AI assistants could make us less human or distort our humanity. Thank you for your trust.

However, it is clear to me that the time has come to move on. I find myself continually confronting our situation. The world is in danger. And not just because of AI, or because of biological weapons, but because of a whole series of interconnected crises developing at this very moment. It seems we are approaching a threshold where our wisdom must grow in equal measure with our ability to affect the world, or we will face the consequences. Furthermore, throughout my time here, I have seen repeatedly how difficult it is to truly allow our values to govern our actions. I have seen it in myself, within the organization, where we constantly face pressure to let go of what matters most, and in society at large.

It is by holding this situation, and listening as best I can, that it becomes clear what I should do. I want to contribute in a way that feels fully whole and that allows me to bring more of my particularities into play. I want to explore the questions that feel truly essential to me, the questions that, as David Whyte would say, “have no right to disappear,” the questions that Rilke implores us to “live.” For me, that means leaving.

What comes next, I don’t know. I think fondly of the famous Zen saying: “Not knowing is most intimate.” My intention is to create space, let go of the structures that have sustained me these past few years, and see what can emerge in their absence. I feel called to write texts that fully engage with the place we find ourselves in, and that place poetic truth alongside scientific truth as equally valid forms of knowledge, both with something essential to contribute to the development of new technologies. I hope to explore a career in poetry and dedicate myself to the practice of courageous expression. I am also excited to deepen my practice of facilitation, accompaniment, community building, and group work. We’ll see what unfolds.

Thank you and goodbye. I have learned a lot being here and I wish you the best. I leave you with one of my favorite poems, The Way It Is, by William Stafford.

There’s a thread you follow. It goes among
things that change. But it doesn’t change.
People wonder about what you are pursuing.
You have to explain about the thread.
But it is hard for others to see.
While you hold it you can’t get lost.
Tragedies happen; people get hurt
or die; and you suffer and get old.
Nothing you do can stop time’s unfolding.
You don’t ever let go of the thread.
