Geoffrey Hinton – the ‘godfather’ who paved the way for deep learning technology

Geoffrey Hinton is often called the “godfather of deep learning” for his foundational contributions to artificial intelligence and machine learning.

At VinFuture 2024, Professor Geoffrey E. Hinton and four fellow scientists, Yoshua Bengio, Jen-Hsun Huang, Yann LeCun, and Fei-Fei Li, were honored with the main prize, worth 3 million USD (more than 76 billion VND), for their contributions to advancing deep learning.

The award committee recognized him for his leadership and foundational research in neural network architectures. His 1986 paper with David Rumelhart and Ronald Williams showed that neural networks trained with the backpropagation algorithm can learn useful distributed representations. This method has become a standard tool in artificial intelligence and has enabled advances in image and speech recognition.


Scientist Geoffrey Everest Hinton. Image: The Globe and Mail

Geoffrey Everest Hinton, a British-Canadian cognitive psychologist and computer scientist, is widely recognized for his pioneering research in artificial intelligence (AI). Born on December 6, 1947, in Wimbledon, London, Hinton is a descendant of the logician George Boole, whose work laid the foundation for the theory of digital circuit design.

Hinton earned a bachelor’s degree in Experimental Psychology from King’s College, Cambridge in 1970. He then earned a doctorate in artificial intelligence at the University of Edinburgh in 1978, supervised by Christopher Longuet-Higgins, a pioneer in AI and cognitive science. This research laid the foundation for Hinton’s future contributions to machine learning and neural networks.

In the 1980s, working with David Rumelhart and Ronald J. Williams, Hinton studied neural networks, simplified computational models of the brain. Together they popularized the backpropagation algorithm, a method for training neural networks that has become a standard in machine learning.

Hinton’s career took him to the United States, where he was a professor at Carnegie Mellon University from 1982 to 1987. He then moved to Canada and joined the Computer Science department at the University of Toronto. Hinton’s research at the University of Toronto has focused on deep learning, a subset of machine learning that involves training neural networks to recognize patterns in data.

In 2013, Hinton joined Google’s Brain team, where he continued to work on deep learning and neural networks. During his time there, the team developed TensorFlow, an open-source software library for machine learning, and applied deep learning to many Google products and services.

Hinton’s contributions to AI have been recognized through many awards, including the 2018 Turing Award, which he shared with Yoshua Bengio and Yann LeCun for deep learning research. Hinton’s work has had a profound impact on AI, influencing the development of many technologies, from voice recognition software to self-driving cars. Together with John Hopfield, Hinton was also honored with this year’s Nobel Prize in Physics for contributions to the field of machine learning.

Backpropagation: Hinton’s pioneering research

Hinton helped introduce backpropagation, a method used throughout AI and machine learning, in the 1980s. The algorithm, essential to training neural networks, is based on the mathematical technique of gradient descent: it adjusts the weights of a neural network by propagating errors backward from the last layer to the first.

Hinton’s backpropagation research was groundbreaking because it provided a viable method for training multilayer neural networks. Previously, training such networks was a daunting task because there was no practical way to adjust the weights of the hidden layers. The backpropagation algorithm solved this problem by computing the gradient of the error function with respect to the network weights, then adjusting the weights in the direction that minimizes the error.
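To make this concrete, here is a minimal sketch of the procedure just described: a tiny two-layer network trained with backpropagation and gradient descent on the XOR problem. The network size, dataset, learning rate, and iteration count are illustrative assumptions, not details from Hinton’s work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic task a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: one hidden layer, one output layer.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # gradient-descent step size (illustrative)

for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)    # hidden-layer activations
    out = sigmoid(h @ W2)  # network output

    # Backward pass: propagate the error from the last layer to the first,
    # applying the chain rule at each layer to get the weight gradients.
    d_out = (out - y) * out * (1 - out)  # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    # Gradient descent: move each weight against its error gradient.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(out.round(2))  # should move toward the targets [[0], [1], [1], [0]]
```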

The backpropagation algorithm is based on the chain rule, a basic rule of calculus. The chain rule expresses the derivative of a composite function as the product of the derivatives of its component functions. In backpropagation, the chain rule is used to compute the derivative of the error function with respect to each network weight.
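Written out for a single weight, this is the standard textbook factorization (a general illustration, not a formula quoted from Hinton’s paper):

```latex
% For a unit with weighted input z = \sum_i w_i x_i, activation a = \sigma(z),
% and error E, the chain rule factors the weight gradient as:
\[
  \frac{\partial E}{\partial w_i}
    = \frac{\partial E}{\partial a} \cdot
      \frac{\partial a}{\partial z} \cdot
      \frac{\partial z}{\partial w_i}
    = \frac{\partial E}{\partial a}\, \sigma'(z)\, x_i
\]
% Backpropagation applies this factorization layer by layer, reusing the
% shared factor \partial E / \partial a as it moves back toward the input.
```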

At the time, many researchers were skeptical about neural networks and their potential for practical applications. However, Hinton’s persistence and the subsequent success of backpropagation in many applications attracted renewed interest in neural networks in the 1990s. This period has been referred to as the “second wave” of neural networks.

Although successful, backpropagation has some limitations. Training a neural network requires large amounts of data and computing resources. It can also lead to overfitting, where the network performs well on its training data but poorly on new, previously unseen data. Even so, backpropagation remains a mainstay of modern AI and machine learning.
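One common way practitioners detect and limit overfitting is to hold out a validation set and stop training once validation error stops improving, a practice known as early stopping. The sketch below illustrates the idea on a simple least-squares model; the data, model, and patience threshold are illustrative assumptions, not anything from Hinton’s work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data split into training and validation halves.
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=(5, 1)) + 0.1 * rng.normal(size=(200, 1))
X_train, y_train = X[:100], y[:100]
X_val, y_val = X[100:], y[100:]

W = np.zeros((5, 1))
lr, best_val, patience = 0.01, np.inf, 0

for step in range(10_000):
    # One gradient-descent step on the training set only.
    grad = X_train.T @ (X_train @ W - y_train) / len(X_train)
    W -= lr * grad

    # Track error on data the model has never seen.
    val_err = float(np.mean((X_val @ W - y_val) ** 2))
    if val_err < best_val - 1e-6:
        best_val, patience = val_err, 0
    else:
        patience += 1
    if patience > 50:  # validation error has stopped improving
        print(f"early stop at step {step}, validation MSE {val_err:.4f}")
        break
```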

Hinton and the rise of deep learning

Hinton’s AI research was instrumental in developing the deep learning algorithms used today in countless applications, from speech recognition software to self-driving cars. He focused on artificial neural networks, especially backpropagation and unsupervised learning techniques. Hinton’s research in the 1980s with David Rumelhart and Ronald Williams produced a practical method for applying backpropagation to multilayer neural networks.

In addition, Hinton made major contributions to unsupervised learning, a type of machine learning that searches for previously undetected patterns in unlabeled data with minimal human supervision. In particular, Hinton developed a model called the Restricted Boltzmann Machine (RBM), a stochastic neural network that can learn a probability distribution over its set of inputs. Google’s speech recognition system, for example, uses deep learning techniques based on Hinton’s research.
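To show the shape of the idea, here is a minimal sketch of an RBM trained with one step of contrastive divergence (CD-1), the approximate learning rule Hinton proposed for these models. The layer sizes, toy binary patterns, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible-unit biases
b_h = np.zeros(n_hidden)   # hidden-unit biases

# Toy binary data: two repeating patterns the RBM can learn to model.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(1000):
    v0 = data
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct the visible units, then recompute hidden.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 update: difference between data-driven and model-driven statistics.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
```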

Google Brain Project: Hinton’s influence and role

Hinton’s research on artificial neural networks and backpropagation greatly influenced the development of the Google Brain project, Google’s deep learning AI research team. His influence is clear in the project’s use of deep learning algorithms, which stack many layers of artificial neurons. These layers gradually distill high-level features from raw input data: in image processing, for example, lower layers might identify edges, while higher layers recognize concepts meaningful to humans, such as digits, letters, or faces.

Hinton’s role in the Google Brain project goes beyond theory. In 2013, he joined Google and worked part-time with the Google Brain team. The team’s work included large-scale artificial neural networks, such as a network running on 16,000 processor cores that learned to recognize cats by watching YouTube videos. This was an important milestone for AI, demonstrating the power of unsupervised learning in neural networks.

The backpropagation algorithm that Hinton helped pioneer is the foundation on which deep learning systems work, including those developed by the Google Brain team. Hinton’s Capsule Network research has also been integrated into the Google Brain project. Capsule networks are artificial neural networks designed to better preserve hierarchical relationships and to recognize the same object in different contexts, regardless of orientation or viewpoint. This is a big step forward compared with traditional neural networks, which often struggle with such tasks.

Hinton’s Capsule Networks: a revolution in image recognition

Hinton’s Capsule Networks, a new approach to image recognition, have been called a major upgrade in AI. These networks aim to address the limitations of the standard convolutional neural networks (CNNs) used for image recognition tasks. Although effective at recognizing patterns in images, CNNs struggle to capture the spatial hierarchy between simple and complex objects.

The basic building block of Capsule Networks is the “capsule”, a group of neurons that learns to recognize an object in an image along with its characteristics, such as position, size, and orientation. Unlike CNNs, which treat each feature as a separate entity, Capsule Networks treat features as related aspects of the same object. This allows Capsule Networks to maintain high accuracy even when objects are observed from different angles or positions.

The key innovation in Capsule Networks is the dynamic routing algorithm. This algorithm helps the network decide where to send the output of each capsule based on the current input. Dynamic routing makes Capsule Networks more flexible and adaptable, and thus better suited to complex image recognition tasks.

Capsule Networks also do a good job of preserving detailed information throughout the network. In CNNs, pooling layers are used to reduce the scale of the data, which can discard important information. In contrast, Capsule Networks do not use pooling layers. Instead, they use a process called routing by agreement, in which the output of a capsule is sent to every candidate capsule in the layer above, but only the capsules whose predictions agree with that output receive a strong signal. This allows Capsule Networks to maintain fine-grained detail and accuracy throughout the system.
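The sketch below illustrates this routing-by-agreement step in the spirit of the dynamic routing procedure Hinton published with Sara Sabour and Nicholas Frosst in 2017; the capsule counts, vector dimensions, and random inputs are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Capsule nonlinearity: keep the vector's direction but shrink its
    # length into [0, 1), so length can be read as "entity is present".
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, n_iters=3):
    """u_hat: prediction from each lower capsule i for each upper capsule j,
    shape (n_lower, n_upper, dim). Returns the upper-capsule output vectors."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))  # routing logits, start uniform
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # couplings
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum per upper capsule
        v = squash(s)                           # upper-capsule outputs
        # Agreement: predictions that align with v get a stronger route.
        b += np.einsum('ijd,jd->ij', u_hat, v)
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 4, 16))  # 8 lower capsules, 4 upper, 16-dim
print(route(u_hat).shape)            # (4, 16)
```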

Hinton’s prediction about the future of AI

One of Hinton’s most notable predictions is that AI will soon be able to understand and produce natural language at a level comparable to that of humans. This prediction is based on rapid advances in machine learning and reinforcement learning algorithms.

Another area of Hinton’s research is unsupervised learning, a type of machine learning in which algorithms learn from unlabeled data. Most AI systems today are based on supervised learning, where the algorithm is trained on a large labeled data set. However, Hinton believes that unsupervised learning is key to AI more closely simulating how humans learn. He has been developing new algorithms for unsupervised learning, aiming to create AI systems that can learn from their environment the way a child does.

By Editor
