Microsoft AI chief anticipates “human-like” performance in 18 months

Artificial intelligence is once again putting office work in check. In an interview with the British newspaper Financial Times, Mustafa Suleyman, CEO of Microsoft AI, said that in just 18 months AI could reach a “human” level of performance in most professional office tasks.

The forecast is sweeping: almost everything that involves “sitting in front of a computer” would be automatable in the short term. According to the executive, accounting, law, marketing and project management, the so-called “white-collar” jobs, are among the most exposed areas.

The warning adds to a wave of statements from technology leaders who anticipate a radical transformation of skilled employment.

Suleyman attributes this leap to the exponential evolution of computing capacity. As the models become more powerful, he says, they will be able to program better than most human developers and execute complex tasks with minimal supervision.

His mission at the head of Microsoft AI, he said, is to move towards “superintelligence” and reduce the company’s technological dependence on third parties.

The message is not an isolated one. In recent months, executives such as Dario Amodei, CEO of Anthropic, have warned that AI could eliminate up to half of entry-level office jobs. Elon Musk has also maintained that so-called artificial general intelligence, capable of equaling or surpassing human intelligence, could arrive sooner than expected.

Between the promise and the data

However, empirical evidence still shows a more nuanced scenario. A recent report from Thomson Reuters revealed that lawyers, accountants and auditors use AI primarily for specific tasks, such as document review or routine analysis. Productivity improvements exist, but they are marginal and do not point, for now, to mass replacement.

There are even counterintuitive results. A study by the independent institute Model Evaluation and Threat Research (METR) found that AI-assisted software developers took, on average, 20% longer to complete certain tasks. Instead of speeding up work, the technology introduced friction and new oversight burdens.

In macroeconomic terms, the impact seems concentrated in the technology sector itself. According to data cited by market analysts, the profit margins of large tech companies grew more than 20% by the end of 2025, while the rest of the broad stock index showed virtually no changes attributable to AI.

Even so, signs of adjustment are beginning to appear. Consulting firm Challenger, Gray & Christmas estimated that around 55,000 layoffs in 2025 were linked in some way to AI-driven automation.

Although it did not attribute the move directly to that cause, Microsoft cut 15,000 jobs last year amid an internal restructuring aimed at “reimagining” its strategy for the new technological era. And more cuts are planned.

The market also reacted with volatility. Shares of software companies suffered sharp falls on fears that new autonomous AI systems would replace part of the traditional software-as-a-service business.

The big question is whether Suleyman’s forecast reflects an imminent disruption or an optimistic projection typical of an industry that is competing to lead the next technological wave.

For now, artificial intelligence looks more like a support tool than a total substitute. But if the timelines the CEOs anticipate hold, the debate on the future of work will stop being theoretical much sooner than many imagine.

Musk and the “end” of traditional programming

To the warnings about office employment is added an even more specific prediction. Elon Musk claimed that programming as a profession could “practically end” by the end of 2026, as artificial intelligence systems become capable of generating code directly in machine language, without going through languages written by humans.

In a video that circulated on social media, the CEO of Tesla and SpaceX argued that before the end of this year many people will no longer need to “bother with programming,” because AI models could produce more efficient binary code than that generated by traditional compilers.

In his vision, AI could skip the classic development process entirely: writing code in C++ or Python, compiling it, and then translating it into machine-understandable instructions.
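As a minimal illustration of that classic pipeline (not of Musk’s proposal itself), here is a trivial C++ function, with comments approximating the machine-level instructions a typical optimizing compiler emits for it. The exact output varies by compiler and flags, so the assembly shown is indicative only.

```cpp
// A minimal example of the classic pipeline: human-readable source code
// that a compiler translates into machine instructions.
#include <iostream>

// With optimizations (e.g. `g++ -O2 -S add.cpp`), a typical x86-64
// compiler reduces this function to a couple of instructions, roughly:
//     lea eax, [rdi + rsi]   ; add the two integer arguments
//     ret                    ; return the result in eax
// The assembler then encodes those instructions as the raw bytes the
// CPU executes. Musk's claim is that AI could emit such binaries
// directly, skipping the source-code and compiler steps entirely.
int add(int a, int b) {
    return a + b;
}

int main() {
    std::cout << add(2, 3) << '\n';  // prints 5
    return 0;
}
```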

The statement comes at a time when big technology companies are deepening their use of developer-support tools. Microsoft recently asked its engineers to test Claude Code alongside GitHub Copilot, while Nvidia allowed its employees to use Codex, OpenAI’s code generation system.

These platforms today function as co-pilots: they suggest code snippets, fix bugs and speed up tasks.

The difference in Musk’s approach is one of scale: he speaks not of assistance but of replacement. If models begin to produce optimized binaries without direct human intervention, the programmer’s role could shift from writing code to defining problems, monitoring results and auditing AI-generated outputs.

For now, most of the technical community sees these tools as productivity enhancers rather than total substitutes. But the timeline Musk proposes, just months for a structural change, reignites the debate over whether the AI revolution will be gradual or abrupt for one of the most emblematic professions in the digital economy.

“The world is in danger”: resignation and alarm from the heart of AI safety

Mustafa Suleyman’s warning comes in parallel with another disturbing signal from within the industry. This week Mrinank Sharma, until now head of the “safeguards” area (risk-mitigation mechanisms) at Anthropic, one of the most influential companies in the development of advanced artificial intelligence models, resigned.

In a public letter posted on the social network X, Sharma announced his departure. The message exceeded one million views within hours and revived the debate about internal tensions at the companies leading the generative AI race.

With a PhD in machine learning from the University of Oxford, Sharma led a team dedicated to researching how to prevent harmful uses of the models, from potential chatbot-assisted bioterrorism to subtler phenomena such as “algorithmic sycophancy,” when systems tend to over-validate users.

In a recent study, he warned that intensive use of conversational assistants can contribute to distorting perceptions of reality, especially on topics linked to personal relationships and well-being.

In his letter he also hinted at tensions between the values companies declare and the practical decisions they make under competitive pressure. “We see a threshold approaching where our wisdom must grow at the same pace as our ability to affect the world,” he wrote.

Sharma’s departure adds to other ruptures in the sector. At OpenAI, for example, the Superalignment team was disbanded in 2024 following the resignation of key researchers who cited differences over priorities and governance.

By Editor