What to expect from artificial intelligence in 2025, the advance that marked a turning point in the history of technology

Artificial intelligence (AI) is marking a turning point in the history of technology, and 2025 will bring more surprises.

It is not easy to predict what awaits us, but it is possible to highlight the trends and challenges that will define the immediate future of AI next year. Among them is the challenge of the so-called “centaur doctor” or “centaur teacher”, a key issue for those of us immersed in the development of AI.

The explosion of AI-based science

AI has become a fundamental tool for addressing major scientific challenges. Areas such as health, astronomy and space exploration, neuroscience and climate change, among others, will benefit even more than they already do.

AlphaFold (whose creators won the Nobel Prize in Chemistry in 2024) has predicted the three-dimensional structure of 200 million proteins, practically all those known.

Its development represents a major advance in molecular biology and medicine, since it facilitates the design of new drugs and treatments. 2025 will see these predictions come into widespread use, and they are freely accessible.
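
For readers who want to try that free access, the AlphaFold Protein Structure Database can be queried programmatically. The following is a minimal sketch in Python, assuming the public REST endpoint and response fields currently documented at alphafold.ebi.ac.uk (they may change over time); the UniProt identifier used here is just an example.

```python
# Minimal sketch: retrieving an AlphaFold-predicted structure from the free
# AlphaFold Protein Structure Database. Endpoint and field names follow the
# public API as currently documented and may change.
import requests

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha, used only as an example

response = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}", timeout=30
)
response.raise_for_status()

entry = response.json()[0]          # the API returns a list of predictions
print(entry["uniprotDescription"])  # protein name
print(entry["pdbUrl"])              # link to the predicted 3D structure (PDB file)
```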

For its part, ClimateNet uses artificial neural networks to perform precise spatial and temporal analysis of large volumes of climate data, essential to understanding and mitigating global warming.

The use of ClimateNet will be essential in 2025 to predict extreme weather events with greater accuracy.

The systems used to make weather forecasts will become more accurate in the coming months.
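
To give an idea of how this kind of system works: ClimateNet treats the detection of extreme events (tropical cyclones, atmospheric rivers) as a segmentation problem over gridded climate variables. The sketch below is purely illustrative and is not ClimateNet’s real architecture: a tiny convolutional network in PyTorch that assigns an event class to every grid cell.

```python
# Illustrative sketch (not ClimateNet's actual model): a tiny convolutional
# network that labels each cell of a gridded climate field as background,
# tropical cyclone or atmospheric river.
import torch
import torch.nn as nn

class TinyClimateSegmenter(nn.Module):
    def __init__(self, in_channels: int = 4, n_classes: int = 3):
        # in_channels: climate variables per grid cell (e.g. wind, humidity, pressure)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),  # per-cell class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # shape: (batch, n_classes, lat, lon)

# One synthetic batch: 4 variables on a 64x128 latitude-longitude grid.
fields = torch.randn(1, 4, 64, 128)
logits = TinyClimateSegmenter()(fields)
print(logits.argmax(dim=1).shape)  # predicted event class per cell: (1, 64, 128)
```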

Medical diagnoses and court rulings: the role of AI

Justice and medical diagnosis are considered high-risk scenarios. In these areas, more than in any other, it is urgent to establish systems that ensure humans always have the final say.

AI experts are working to guarantee user trust, ensure that systems are transparent, protect people and keep humans at the center of decisions.

Here the challenge of the “centaur doctor” comes into play. Centaurs are hybrid human-algorithm models that combine the formal analytics of machines with human intuition.

The combination of a “centaur doctor” and an AI system produces better decisions than either humans or AI systems make on their own.

A doctor will always be the one who presses the accept button, and a judge will always be the one who determines whether a sentence is fair.

Experts believe that in the near future doctors will have programs capable of diagnosing diseases without them.
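
A minimal sketch of what the centaur pattern could look like in practice, with hypothetical function names (propose_diagnosis, centaur_decision) standing in for a real diagnostic system: the algorithm proposes and reports its confidence, but only the doctor’s confirmation or override is recorded.

```python
# Minimal sketch of the "centaur" pattern: the algorithm proposes, the human decides.
# All names here are hypothetical placeholders, not a real clinical system.
from dataclasses import dataclass

@dataclass
class Proposal:
    diagnosis: str
    confidence: float  # shown to the doctor as context, never decisive on its own

def propose_diagnosis(patient_record: dict) -> Proposal:
    # Stand-in for a real diagnostic model.
    return Proposal(diagnosis="type 2 diabetes", confidence=0.87)

def centaur_decision(patient_record: dict) -> str:
    proposal = propose_diagnosis(patient_record)
    print(f"AI suggests: {proposal.diagnosis} (confidence {proposal.confidence:.0%})")
    answer = input("Doctor, accept this diagnosis? [y/N] ")
    if answer.strip().lower() == "y":
        return proposal.diagnosis               # the doctor pressed the accept button
    return input("Enter your own diagnosis: ")  # the doctor overrides the AI

if __name__ == "__main__":
    final = centaur_decision({"id": "patient-001"})
    print("Recorded decision:", final)
```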

The AI that will make decisions for us

Autonomous AI agents based on language models are the 2025 goal of large technology companies such as OpenAI (ChatGPT), Meta (LLaMA), Google (Gemini) and Anthropic (Claude).

So far, these AI systems make recommendations; in 2025, the aim is for them to make decisions for us.

AI agents will perform personalized and precise actions in tasks that are not high risk, always adjusted to the user’s needs and preferences. For example: buying a bus ticket, updating the calendar, or recommending a specific purchase and completing it.

They will also be able to answer our email, a task that takes up a lot of our time each day.
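
Conceptually, such an agent maps a user’s intention to a concrete tool and executes it. The sketch below is a deliberately simplified illustration with made-up stub functions (buy_bus_ticket, update_calendar, draft_email_reply); it is not any vendor’s real agent API.

```python
# Illustrative sketch of an agent acting on low-risk tasks. The "tools" here
# are hypothetical stubs, not a real booking, calendar or email service.
from typing import Callable

def buy_bus_ticket(destination: str) -> str:
    return f"Ticket to {destination} purchased."

def update_calendar(event: str) -> str:
    return f"Calendar updated with '{event}'."

def draft_email_reply(sender: str) -> str:
    return f"Draft reply to {sender} ready for review."

TOOLS: dict[str, Callable[[str], str]] = {
    "buy_bus_ticket": buy_bus_ticket,
    "update_calendar": update_calendar,
    "draft_email_reply": draft_email_reply,
}

def run_agent(tool_name: str, argument: str) -> str:
    # A real agent would let a language model choose the tool and argument;
    # here the choice is hard-coded to keep the sketch self-contained.
    if tool_name not in TOOLS:
        raise ValueError(f"Unknown tool: {tool_name}")
    return TOOLS[tool_name](argument)

print(run_agent("buy_bus_ticket", "Valencia"))
print(run_agent("draft_email_reply", "maria@example.com"))
```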

Along these lines, OpenAI has launched AgentGPT and Google has launched Gemini 2.0, platforms for the development of autonomous AI agents.

For its part, Anthropic offers two updated versions of its Claude language model: Haiku and Sonnet.

The development of artificial intelligence will allow computers to perform operations such as answering emails automatically.

How AI will use our computers

Sonnet can use a computer the way a person would. This means it can move the cursor, click buttons, type text and navigate between screens.

It also enables desktop automation: users can grant Claude access to, and control over, certain aspects of their personal computers, operating them just as a person would.

This capability, dubbed “computer use”, could revolutionize the way we automate and manage our daily tasks.
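
As an illustration of the idea, the sketch below shows how model-proposed desktop actions (move the cursor, click, type) could be executed locally with the pyautogui library, with a human approving each step. It is a generic sketch of the concept, not Anthropic’s actual “computer use” tool protocol.

```python
# Illustrative sketch of the "computer use" idea: an assistant proposes desktop
# actions and a thin layer executes them locally after human approval.
# Generic example built on pyautogui, not Anthropic's real tool protocol.
import pyautogui  # pip install pyautogui

def execute_action(action: dict) -> None:
    """Execute a single model-proposed action after it has been approved."""
    kind = action["type"]
    if kind == "move_cursor":
        pyautogui.moveTo(action["x"], action["y"], duration=0.2)
    elif kind == "click":
        pyautogui.click()
    elif kind == "type_text":
        pyautogui.write(action["text"], interval=0.05)
    else:
        raise ValueError(f"Unsupported action: {kind}")

# Actions a model might propose; in a real system they would come from the API.
proposed = [
    {"type": "move_cursor", "x": 400, "y": 300},
    {"type": "click"},
    {"type": "type_text", "text": "Hello from the assistant"},
]

for action in proposed:
    if input(f"Approve {action}? [y/N] ").strip().lower() == "y":
        execute_action(action)
```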

In e-commerce, autonomous AI agents will be able to make a purchase for the user.

They will provide advice on business decision-making, manage inventory automatically, work with suppliers of all kinds, including logistics providers, to optimize the replenishment process, update shipping statuses and even generate invoices.

In the education sector, they will be able to customize study plans for students. They will identify areas for improvement and suggest appropriate learning resources.

We will move towards the concept of “centaur teacher”, aided by AI agents in education.

To avoid errors, some systems will always require people to approve any sensitive operations.

The approve button

The notion of autonomous agents raises profound questions about the concept of “human autonomy and human control.” What does “autonomy” actually entail?

These AI agents will introduce the need for pre-approval. What decisions will we allow these entities to make without our direct approval (without human control)?

We face a crucial dilemma: knowing when it is better to let autonomous AI agents act automatically and when we need to make the decision ourselves, that is, to resort to “human control” or “human-AI interaction”.

The concept of pre-approval is going to acquire great relevance in the use of autonomous AI agents.
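
A minimal sketch of what a pre-approval policy could look like: actions the user has delegated in advance run automatically, sensitive ones wait for explicit confirmation, and anything unknown is blocked. The action categories here are invented purely for illustration.

```python
# Minimal sketch of a pre-approval policy for an autonomous agent.
# Low-risk actions run automatically; sensitive ones require a human "yes".
PRE_APPROVED = {"update_calendar", "summarize_email"}
REQUIRES_HUMAN = {"send_payment", "sign_contract", "send_medical_report"}

def authorize(action: str) -> bool:
    if action in PRE_APPROVED:
        return True                            # delegated in advance to the agent
    if action in REQUIRES_HUMAN:
        reply = input(f"Approve '{action}'? [y/N] ")
        return reply.strip().lower() == "y"    # explicit human control
    return False                               # unknown actions are never executed

for action in ["update_calendar", "send_payment", "delete_account"]:
    print(action, "->", "executed" if authorize(action) else "blocked")
```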

The small ChatGPT that will fit in our cell phones

2025 will be the year of the expansion of small, open language models (SLMs).

These are language models that may soon be installed directly on a mobile device. They will allow us to control our phone by voice in a much more personal and intelligent way than assistants like Siri, and they will reply to our email for us.

SLMs are compact and more efficient, and they do not require massive servers to run. They are open-source solutions that can be trained for specific application scenarios.

They can be more respectful of user privacy and are perfect for use on low-cost computers and cell phones.

They will also be attractive for business adoption, since SLMs offer lower cost, more transparency and, potentially, greater auditability.

SLMs will make it possible to build applications for medical recommendations, education, automatic translation, text summarization or instant spelling and grammar correction, all running on small devices without the need for an internet connection.
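
As a taste of how accessible this is becoming, here is a minimal sketch that runs a small open model locally with the Hugging Face transformers library. The model name is just one example of an SLM of roughly half a billion parameters; after the initial download, inference runs entirely on the device.

```python
# Minimal sketch of running a small open language model locally with the
# Hugging Face transformers library. The model name is one example of a small
# open model; any similar SLM available for download would work.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters, runs on modest hardware
)

prompt = "Correct the spelling and grammar of this sentence: 'He go to school yesterdays.'"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])  # after the initial download, this runs offline
```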

Among their important social advantages, they can facilitate the use of language models in education in disadvantaged areas.

Specialized health SLMs can improve access to diagnoses and recommendations in areas with limited resources.

Their development is essential to support communities with fewer resources.

They can accelerate the arrival of the “centaur teacher” or “centaur doctor” in any part of the planet.

Smartphones of the future will become much smaller, but also more powerful.

Advances in European AI regulation

The European AI Act was approved on June 13, 2024 and will become fully applicable two years later. During 2025, evaluation norms and standards will be developed, including ISO and IEEE standards.

Earlier, in 2020, the European Commission published the first Assessment List for Trustworthy AI (ALTAI). This list covers seven requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. These requirements form the basis of future European standards.

Having evaluation standards is key to auditing AI systems. Let’s look at an example: what happens if an autonomous vehicle has an accident? Who takes responsibility? The regulatory framework will address issues such as these.

Governance mechanisms

Dario Amodei, CEO of Anthropic, in his essay Machines of Loving Grace (October 2024), sets out the vision of the large technology companies: “I think it is essential to have a truly inspiring vision of the future, and not just a plan to put out fires.”

There are contrasting views from other, more critical thinkers, such as the one represented by Yuval Noah Harari and discussed in his book Nexus.

That is why we need regulation. It provides the balance necessary to develop reliable and responsible AI and to make progress on the great challenges for the good of humanity that Amodei highlights.

And alongside regulation, we need the governance mechanisms that serve as the “plan to put out fires”.

By Editor
