This chip can solve the dirtiest problem in artificial intelligence

Artificial intelligence (AI) is not cheap. Chatting with tools like ChatGPT or Bard requires an enormous amount of energy. To give you an idea, a query to OpenAI's chatbot consumes roughly three times more energy than a Google search, and a conversation of about 20 questions forces the company to use half a liter of water to cool its data centers. Inevitably, this makes both the maintenance costs and the carbon emissions of these machines very high.

IBM is trying to find a solution to this problem. The technology company is betting on a new type of analog AI chip which, according to the company, has already been shown to be up to fourteen times more energy efficient than conventional digital chips.

In a study published in ‘Nature’, the company explains that its chip is fabricated in a 14-nanometer process and contains 35 million phase-change memory cells spread across 34 tiles. In today’s conventional computer chips, data is shuttled between memory and the central processing unit (CPU) every time a computation is performed, wasting a massive amount of energy, far in excess of what the operation requested by the user actually requires. IBM’s analog chip offers a partial solution to this problem by placing processing elements directly alongside the memory, so computations happen where the data lives.

Fourteen times more efficient

To test the chip, the team of researchers, led by Stefano Ambrogio, evaluated its ability to process language using two speech recognition networks that convert spoken audio to text: a small one (Google Speech Commands) and a large one (Librispeech).

The performance of the analog chip on the first network was almost equivalent to that of current technology. However, in tests with Librispeech (which contains recordings of books read aloud, with a much larger vocabulary than Google’s dataset), the device built by Ambrogio’s team proved up to 14 times more energy efficient.

According to the researchers, this test shows that analog chips can offer performance similar to existing AI technology without the need for such high power consumption. However, Hechen Wang, a researcher at Intel, points out in a companion article on IBM’s research that there is still a long way to go before this technology fully matures.

Among other things, algorithms would have to be modified for the chips to be functional, and applications and platforms would need to be adapted to them. In other words, much of the existing software stack would have to be torn down and rebuilt before the alternative proposed by IBM could operate. And that is not easy, far from it: it could take decades to get there, Wang explains.

“The good news is that Ambrogio and his colleagues, along with other researchers in this area, are steering the ship, and we have set sail towards the realization of the goal,” says the researcher.

A difficult problem to solve

Indeed, the high cost of developing artificial intelligence remains a difficult problem to solve. Today, the data centers of large technology companies are estimated to consume between 1 and 2% of total electricity, and everything indicates that this will worsen. According to data gathered in a McKinsey study, data centers in the United States are expected to reach 35 gigawatts of power demand by 2030, up from 17 gigawatts last year.

Beyond possible solutions to the problem, such as IBM’s analog chip, experts stress the importance of investing in new systems to cool servers. A recent study from Purdue University, also published in ‘Nature’, found that training a language model such as ChatGPT requires the consumption of 4.9 million liters of water.
