Half-a-million-dollar salary job at OpenAI deemed ‘unfeasible’

OpenAI is recruiting for an AI risk-prevention role with a salary of 555,000 USD per year, but the job is considered an “almost impossible task”.

“This will be a stressful job, facing difficult challenges right away,” OpenAI CEO Sam Altman wrote on X late last month, describing the “Head of Preparedness” position the company is recruiting for.

According to the job description, the position “plays an important role in helping the world”. The head will be directly responsible for preventing risks that artificial intelligence poses to human mental health, cybersecurity and biological weapons. This person will also assess and mitigate emerging threats, as well as “monitor and prepare for advanced capabilities that create new risks that cause serious damage”.

The company’s blog adds that the position requires deep technical judgment, clear communication and the ability to lead complex work across multiple risk areas. In addition to a salary of $555,000 per year, the hire will also receive equity when OpenAI goes public. According to CNBC, after its most recent funding rounds, the company is valued at roughly $850 billion to $1 trillion.

The Guardian described the job at OpenAI as “so challenging that even superhumans have to hold their breath”. In fact, the position has been held unofficially by several people, but each only for a short time.

Illustration of AI safety work at OpenAI. Image: ChatGPT

Professor Maura Grossman of the School of Computer Science at the University of Waterloo (Canada) likewise told Business Insider that the Head of Preparedness role is “almost impossible”. She compared it “to stopping a rock rushing down a hill”, because besides ensuring safety, the leader would also have to “slow down the speed or prevent some goals that Altman wants to achieve in the future”.

OpenAI’s search for an AI-safety leader comes as experts issue a series of warnings about the risks of artificial intelligence. On December 29, 2025, Mustafa Suleyman, CEO of Microsoft AI, said the risks AI can pose to humanity deserve serious attention. “Honestly, if you don’t feel scared right now, you haven’t been fully following the advances of AI,” Suleyman said on BBC Radio 4’s Today programme.

Earlier, many experts argued that companies, engrossed in the race to build high-performance, feature-rich AI, had neglected safety, while regulation in most countries remains limited. In early December 2025, Demis Hassabis, co-founder of Google DeepMind, said artificial intelligence could “go astray in some way that is harmful to humanity”. Computer scientist Yoshua Bengio also remarked that “a sandwich is more tightly regulated than AI”.

By Editor