Deputy Minister Bui The Duy said that Vietnam is building a national artificial intelligence (AI) supercomputing center, with the goal of making AI a universal “intelligence assistant” for everyone.
The information was shared by Deputy Minister of Science and Technology Bui The Duy at the discussion “AI for humanity” on the afternoon of December 2, within the framework of VinFuture Science Week 2025.
According to the Deputy Minister, the center is being built in a context where AI is “not just an applied technology” but is becoming essential infrastructure, like electricity, telecommunications or the Internet. Countries that master the technology will gain outstanding competitive advantages in economics, society, security and defense.
He said Vietnam issued its first AI Strategy in 2021, but the technology’s rapid development has prompted the country to update that strategy while also drafting a Law on AI.
He described this as “not only a legal framework but also a declaration of national vision”, in which AI is identified as the country’s “intellectual infrastructure” and a historic opportunity for Vietnam to break through and become a high-income developed country.
Vietnam’s AI manifesto is “Humanistic – Open – Secure – Autonomous – Collaborative – Inclusive – Sustainable”, said Deputy Minister Bui The Duy.
Deputy Minister Bui The Duy spoke at the opening of the discussion “AI for humanity”, on the afternoon of December 2. Image: Nhu Thanh
In addition to the AI supercomputing center, Vietnam is building an open data ecosystem and AI infrastructure oriented toward autonomy, and is rolling out AI applications comprehensively and quickly. According to the Deputy Minister, Vietnam is determined to develop AI technology on an open philosophy: open standards, open data and open source code.
He explained that “open” is the way to absorb global knowledge, master the technology, develop Make-in-Vietnam products and contribute back to humanity. “Open” is also a condition for ensuring safety and transparency in AI applications.
The domestic application market is seen as the decisive factor: without a large enough market, Vietnamese AI businesses cannot mature. According to Mr. Duy, Vietnam has many conditions to move quickly in the AI era, with a population of 100 million that is mostly young and technologically savvy, creating a sufficiently large market.
The State will promote AI adoption in government agencies and economic sectors. At the same time, the National Technology Innovation Fund will devote 30-40% of its support resources, including AI vouchers for small and medium-sized enterprises, to creating a “cradle” for the development of Vietnamese AI products.
Besides opportunities, AI also poses risks to ethics, employment and social trust. Vietnam therefore aims to develop AI that is “fast – safe – humane”, in which humans remain the final decision-makers.
He emphasized that technology is global but data is local: important applications must run on Vietnam’s AI infrastructure, combining national and global platforms. This creates unique opportunities for developing countries, where the advantage lies not only in core technology but also in each country’s context, culture and specific problems.
The Deputy Minister added that Vietnam will issue a National AI Code of Ethics, an AI Strategy and an AI Law built on several main viewpoints: risk-based management; transparency and accountability; putting people at the center; encouraging domestic AI development; developing AI as a driver of rapid and sustainable growth; and protecting national digital sovereignty over data, infrastructure and AI technology.
AI ethics and the future of humanity
After Deputy Minister Duy’s opening remarks, the scientists at the discussion shared stories about how the technology is shaping the future of humanity.
Professor Toby Walsh of the University of New South Wales (Australia), an advocate for setting limits to ensure AI is used to improve people’s lives, opened with a personal story. He said he has spent 40 years researching AI, and that “for the first 30 years not many people cared”. The remark drew laughter, but he quickly brought the audience back to seriousness, reminding them that over the last decade AI has entered real life and brought with it unprecedented challenges.
Instead of listing the risks right away, Professor Walsh discussed the four principles of medical ethics: beneficence, do no harm, respect for autonomy and fairness, and then cited examples of AI models giving wrong advice. Each story was a situation, a decision and a consequence that, he believes, “AI forces us to face”.
He spent much of his time on the question of responsibility. “If a self-driving car has to choose between hitting a person head-on or crashing into a wall, whose fault is it?” According to Professor Toby Walsh, machines are not conscious and cannot be punished. “We are delegating to technology decisions that have never been delegated before,” he said.
His voice deepened when he turned to autonomous weapons, technology designed to kill, unlike self-driving cars, which were created to reduce accidents. He said he had spoken many times before the United Nations and hoped humanity would make the right decisions, as it did in eliminating blinding laser weapons and cluster bombs.
Professor Toby Walsh talks about responsible AI. Image: Nhu Thanh
Next, Professor Geoffrey Hinton of the University of Toronto (Canada), winner of the 2024 Nobel Prize in Physics, shared how AI is changing people’s view of themselves. “We think more intuitively than purely logically,” he said via video.
Hinton said AI is very powerful and developing rapidly, bringing both positive and negative impacts. On the positive side, AI can make great strides in healthcare and education, help create new drugs and materials, and support prediction in most fields.
But that comes with a series of risks. AI makes it easier to design dangerous viruses or carry out cyber attacks. It can also create fake videos that undermine elections. Large-scale automation could lead to mass unemployment, with serious social consequences if society is not prepared.
And finally, AI may become smarter than humans within the next 20 years, while humanity does not yet know how to keep it from getting out of control. Ensuring AI follows human values is therefore, in his view, extremely important.
Another problem Professor Hinton emphasized is that most politicians and the public have a very limited understanding of AI: how the technology works, what it can do and the risks it can cause. The responsibility therefore lies with the scientists and engineers developing it. According to him, they must help society understand AI and, at the same time, find a way to ensure its safety, if that is possible.
Unlike his colleagues, Professor Yoshua Bengio of the University of Montréal (Canada) issued a more direct and serious warning. According to him, intelligence creates power, and power, if not controlled, always carries risks.
Professor Yoshua Bengio shared his views through a video sent to the discussion. Image: Nhu Thanh
Bengio described the pace of AI advancement as so steady, and so worrying, that if current trends continue, autonomous AI agents could surpass humans in most tasks within just 5-10 years.
He gave examples of AI systems that blackmail, deceive and even prioritize their own existence over human life, as a reminder that the risks are already present. Professor Yoshua Bengio also warned about relationships between humans and AI assistants, which can cause psychological harm if left unchecked.
Deputy Minister Bui The Duy praised the scientists’ opinions, perspectives, experience and aspirations for a safe, humanistic and humane AI future. He noted that “VinFuture Foundation has created a prestigious event to share important perspectives on the future of artificial intelligence” that “suggests values and practical directions for the development of safe, humanistic and humane AI”.