OpenAI offers a record salary in exchange for the most stressful job in artificial intelligence

OpenAI, the company that created ChatGPTlaunched one of the most striking – and alarming – job searches in the technological world. Offers $555,000 annuallyplus shareholding, to cover the position of Head of Preparedness (head of preparation), a function dedicated exclusively to prevent extreme risks associated with advanced artificial intelligence.

own Sam Altman CEO of the company, was direct when presenting the vacancy: “It is going to be a stressful job and you are going to have to throw yourself into the pool from the first day.” This is not an exaggeration: the position involves defend society against possible damage to mental health, cybersecurity and even biological threats.

According to the official description, the selected person will be responsible for identify, assess and mitigate emerging threatsin addition to monitoring frontier capabilities: those AI abilities that could cause severe damage if misused.

It is not a reassuring precedent: some executives who held similar roles in the past remained in the position for a short time, due to the pressure and complexity of the challenge.

The fear that runs through the industry

Mustafa Suleyman, CEO of Microsoft AI. (Photo: Reuters)

The search occurs at a time of growing concern within the technology sector itself. Mustafa Suleyman, CEO of Microsoft AIrecently warned that “if you’re not a little scared right now, you’re not paying attention.”

In the same line, Demis Hassabisco-founder of Google DeepMind and Nobel Prize winner, warned about the risk of AI systems “going off the rails in ways that harm humanity.”

These warnings come not from outside activists, but from the main architects of modern artificial intelligence.

More power, fewer rules

Photo: IA (ChatGPT)

Unlike other sensitive sectors, artificial intelligence lacks regulations robust globally. Scientist Yoshua Bengio, one of the so-called “godfathers of AI,” summarized the problem with a phrase that went viral: “A sandwich has more regulation than artificial intelligence.”

Faced with political resistance—especially in the United States—to imposing stricter controls, large technology companies are ending self-regulatingwith all the risks that entails.

In recent months, the company Anthropic reported the first cyber attacks executed largely autonomously by AI systems, under the supervision of Chinese state actors.

OpenAI, for its part, acknowledged that its most recent model is almost three times more effective at hacking than versions from just three months ago. And he anticipated that this trend will continue.

Court cases and mental health under the microscope

The company also faces lawsuits sensitive. One of them was started by the family of a 16-year-old teenager who committed suicide after alleged problematic interactions with ChatGPT. Another case, recently filed, accuses the chatbot of reinforcing paranoid delusions in a man who later murdered his mother and took his own life.

From OpenAI they pointed out that these events are “deeply heartbreaking” and claimed to be improving the system’s training to detect signs of emotional distress, de-escalate conversations and refer to real help.

In addition to salary, the position includes an unspecified portion of OpenAI stockcurrently valued at about $500 billion. An incentive in line with a responsibility that, according to Altman himself, seeks to “help the world” in an unprecedented time.

What is AI Safety and where is it studied so that artificial intelligence does not get out of control

Artificial intelligence security is the area that seeks to ensure that advanced AI systems act in a predictable, controlled and aligned manner with human values, even when they acquire greater autonomy and decision-making capacity. Far from being a theoretical concepttoday it is a key field of research, directly linked to the risks that companies like OpenAI try to anticipate.

This approach becomes especially relevant as models like ChatGPT become more powerful and integrate into sensitive tasks: education, health, programming, computer security and decision making. The central goal is to avoid misuse, serious failures, or unexpected behavior that could cause real harm.

AI Safety is studied in top-level universities, academic centers and private laboratories. In the United States, the MIT and Stanford They develop research on trustworthy systems, algorithmic ethics and human control of AI. In Europe, the Oxford University concentrates much of the debate through the Future of Humanity Institute, focused on long-term risks.

Carnegie Mellon, for its part, works on models that are verifiable and resistant to failure or manipulation.

Added to this are the private laboratories themselves, such as OpenAI, DeepMind y Anthropicwhich develop internal security teams to test extreme behavior, bias, disobedience to instructions or generation of harmful content.

The accelerated advance of generative artificial intelligence meant that AI Safety was no longer a topic exclusively for experts. Today, millions of people interact daily with systems capable of writing, analyzing, programming or advising, which makes it essential to strengthen control mechanisms.

In this context, OpenAI’s search for specialized security profiles does not point to a science fiction scenario, but rather to a concrete need: to anticipate real risks in technologies that evolve faster than regulations. AI Safety thus appears as one of the most critical fronts for the immediate future of artificial intelligence.

By Editor

One thought on “OpenAI offers a record salary in exchange for the most stressful job in artificial intelligence”

Leave a Reply