Deep neural networks: how to access the hidden thoughts of Artificial Intelligence

Researchers at Kyushu University have developed a new method to understand how deep neural networks interpret information and sort it into groups.

Deep neural networks are a type of artificial intelligence (AI) that mimics the way human brains process information, but understanding how these networks “think” has long been a challenge.

Published in IEEE Transactions on Neural Networks and Learning Systems, the new study addresses the need to ensure that AI systems are accurate, robust, and able to meet the standards required for safe use.

Deep neural networks process information in many layers, similar to how humans solve a puzzle step by step. The first layer, known as the input layer, receives the raw data. Later layers, called hidden layers, analyze the information. The earliest hidden layers focus on basic features, such as detecting edges or textures, much like examining individual puzzle pieces. Deeper hidden layers combine these features to recognize more complex patterns, such as identifying a cat or a dog, similar to connecting the pieces of a puzzle to reveal the big picture.
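To make that layered picture concrete, here is a minimal sketch (not code from the study; the layer sizes and ReLU activation are arbitrary choices for illustration) of raw input passing through two hidden layers:

```
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    # Pass the raw input through successive hidden layers; each layer
    # transforms the previous layer's output into a more abstract feature.
    activation = x
    for W, b in zip(weights, biases):
        activation = relu(activation @ W + b)
    return activation

# Toy example: 8 raw input features, two hidden layers of 16 and 4 units.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]
biases = [np.zeros(16), np.zeros(4)]
print(forward(rng.normal(size=8), weights, biases).shape)  # (4,)
```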

“However, these hidden layers are like a closed black box: we see the input and the output, but it is not clear what happens inside,” says Danilo Vasconcellos Vargas, associate professor at the Faculty of Information Science and Electrical Engineering, Kyushu University, in a statement. “This lack of transparency becomes a serious problem when AI makes mistakes, sometimes caused by something as small as changing a single pixel. AI may seem smart, but understanding how it arrives at its decisions is key to ensuring it is trustworthy.”

AI has begun to imitate the functioning of the human brain. (Photo: europapress.com)

Currently, methods for visualizing how AI organizes information are based on simplifying high-dimensional data into 2D or 3D representations. These methods allow researchers to observe how AI classifies data points (for example, grouping images of cats near other cats and separating them from dogs). However, this simplification has critical limitations.
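As a rough illustration of that conventional approach (the article does not name any specific tool; scikit-learn's PCA is assumed here purely for the example), projecting high-dimensional hidden-layer activations down to two dimensions might look like this:

```
# Sketch of the conventional approach: flatten high-dimensional
# hidden-layer activations into 2D so they can be plotted.
import numpy as np
from sklearn.decomposition import PCA  # assumed tool, not named in the article

rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 512))   # e.g. 200 samples, 512-dim hidden layer
projected = PCA(n_components=2).fit_transform(activations)
print(projected.shape)                      # (200, 2): most dimensions are discarded
```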

“When we simplify high-dimensional information into fewer dimensions, it’s like flattening a 3D object into 2D: we lose important details and can’t see the full picture. Furthermore, this method of visualizing how the data is grouped makes it difficult to compare between different neural networks or classes of data,” explains Vargas.

In this study, researchers developed a new method, called the k* distribution method, that more clearly visualizes and evaluates how well deep neural networks classify related items.

The method works by assigning each input data point a “k* value” that indicates its distance to the nearest unrelated data point. A high k* value means the data point is well separated (e.g., a cat far from any dog), while a low k* value suggests possible overlap (e.g., a cat closer to a dog than to other cats). By looking at all the data points within a class, such as cats, this approach produces a distribution of k* values that provides a detailed picture of how the data is organized.
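The study's exact definition of the k* value is not spelled out in this article, so the sketch below simply follows the description above, treating a point's k* value as its distance to the nearest point of a different class; it is an illustration, not the researchers' own code:

```
# Illustrative sketch only -- the published definition of the k* value may
# differ; here it is taken literally from the description above as the
# distance from each point to its nearest point of a different class.
import numpy as np

def k_star_values(points, labels):
    """For each point, return the distance to the closest point of another class."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    values = np.empty(len(points))
    for i, (p, lab) in enumerate(zip(points, labels)):
        other = points[labels != lab]                    # all "unrelated" points
        values[i] = np.linalg.norm(other - p, axis=1).min()
    return values

# Toy data: two classes in a 2-D latent space.
cats = np.array([[0.0, 0.0], [0.2, 0.1], [1.9, 2.0]])   # the last cat sits near the dogs
dogs = np.array([[2.0, 2.0], [2.1, 1.8]])
points = np.vstack([cats, dogs])
labels = np.array(["cat"] * 3 + ["dog"] * 2)

print(k_star_values(points, labels))   # the third cat gets a very low value
```

Collecting these values for every point in one class, such as all the cats, yields the per-class distribution the method is named after.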

“Our method preserves the higher dimensional space, so no information is lost. It is the first and only model that can provide an accurate view of the ‘local neighborhood’ around each data point,” Vargas emphasizes.

Using their method, the researchers revealed that deep neural networks classify data into clustered, fractured, or overlapping arrangements. In a clustered arrangement, similar items (e.g., cats) are grouped closely together, while unrelated items (e.g., dogs) are clearly separated, meaning the AI can classify the data well. In contrast, fractured distributions indicate that similar items are dispersed over a wide space, while overlapping distributions occur when unrelated items occupy the same space; both make classification errors more likely.
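A rough way to picture how such a distribution might be read is sketched below; the thresholds are invented for illustration and are not the paper's criteria:

```
# A made-up heuristic (not the paper's criteria) for reading a k* distribution:
# many very small k* values hint at overlap, a wide spread hints at a fractured
# class, and uniformly large values suggest a well-clustered class.
import numpy as np

def describe_distribution(k_star, low_threshold=0.1):
    k_star = np.asarray(k_star, dtype=float)
    low_fraction = np.mean(k_star < low_threshold)     # share of points next to another class
    spread = k_star.std() / (k_star.mean() + 1e-12)    # relative spread of the distribution
    if low_fraction > 0.3:
        return "overlapping: many points sit right next to another class"
    if spread > 1.0:
        return "fractured: the class is scattered across the space"
    return "clustered: the class stays well separated from others"

print(describe_distribution([0.05, 0.02, 0.08, 1.5]))  # mostly tiny values -> overlapping
```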

Like in a warehouse

Vargas compares this to a warehouse system: “In a well-organized warehouse, similar items are stored together, making retrieval easy and efficient. If items are interspersed, they become harder to find, increasing the risk of selecting the wrong item.”

AI is increasingly used in critical systems such as autonomous vehicles and medical diagnostics, where accuracy and reliability are essential. The k* distribution method helps researchers, and even policymakers, evaluate how AI organizes and classifies information, pointing out potential weaknesses or errors. This not only supports the legal and regulatory processes needed to safely integrate AI into daily life, but also provides valuable insights into how AI “thinks.” By identifying the root causes of errors, researchers can refine AI systems so that they are not only accurate but also robust, able to handle fuzzy or incomplete data and adapt to unexpected conditions.

“Our ultimate goal is to create AI systems that maintain accuracy and reliability, even when faced with the challenges of real-world scenarios,” concludes Vargas.

By Editor
