White Label Consultancy | 25th February 2026
AI Law in Vietnam: What Businesses Need to Know
Vietnam is no longer a regulatory blank slate for artificial intelligence. In December 2025, the country officially passed its Law on Artificial Intelligence, set to take effect on 1 March 2026. For businesses operating in or entering the Vietnamese market, whether domestic or foreign, this marks a decisive shift: AI moves from an unregulated domain to a regulated one designed to balance innovation, safety, and ethics.
The law is comprehensive in scope. It applies to any organisation that provides, implements, or uses AI systems within Vietnam’s jurisdiction, regardless of where the organisation is headquartered. For global companies already implementing the EU AI Act or other AI regulatory frameworks, Vietnam’s approach will feel broadly familiar, but with important local distinctions that require careful attention.
The risk-based model
Vietnam’s AI Law adopts a risk-based classification framework. Article 9 establishes three risk tiers (high, medium, and low) while Article 7 sets out a category of prohibited AI acts that are banned outright. Critically, classification is the responsibility of the provider and must be completed before commercialisation. Providers of both medium- and high-risk systems must also produce a classification record and notify the Ministry of Science and Technology (MOST) via the national AI one-stop portal prior to deployment.
1. Prohibited AI Acts (Article 7)
The law establishes a broad list of prohibited practices. Some mirror restrictions found in other AI regulations, but several go further, particularly around national sovereignty and public order. The following practices are strictly banned:
- AI systems that deliberately manipulate or deceive individuals, causing serious harm,
- Exploitation of vulnerable groups, including children, elderly persons, and persons with disabilities,
- AI-generated content that harms national security, public order, or social safety,
- Processing data to develop or operate AI systems in violation of personal data protection, intellectual property, or cybersecurity laws,
- Obstructing or disabling legally required human oversight mechanisms,
- Deleting or altering mandatory AI information, labels, or warnings.
2. High-Risk AI Systems (Articles 9, 13, 14)
High-risk systems are those that may cause significant harm to the life, health, or legitimate rights and interests of organisations and individuals, or to national interests, public interests, or national security. The operative concept is the potential to cause significant harm.
Key obligations for providers of high-risk AI systems include:
- Undergoing a conformity assessment before deployment,
- Establishing and maintaining risk management systems,
- Ensuring the quality and integrity of training and operational data,
- Maintaining technical documentation and operational logs,
- Enabling meaningful human oversight,
- Complying with transparency and incident-handling obligations,
- Remaining accountable to state agencies, without being required to disclose source code or trade secrets.
Foreign providers should note that they may also be required to appoint a local representative or establish a legal presence in Vietnam.
Compliance obligations extend beyond providers. AI implementers (organisations that deploy AI systems for commercial purposes or service provision) must operate systems in accordance with their intended purpose, ensure data security and confidentiality, maintain human intervention capabilities, fulfil transparency obligations, and handle incidents appropriately.
Finally, AI users (organisations or individuals directly interacting with an AI system or its outputs) must adhere to all applicable guidelines and safety measures and must not unlawfully interfere with system functionality.
3. Medium-Risk AI Systems (Article 15)
Medium-risk AI systems are systems that may confuse, influence, or manipulate users because users cannot recognise that they are interacting with an AI system or that the content is AI-generated. Both providers and implementers must comply with transparency obligations (Article 15, read with Article 11), and users must comply with the notification and system-labelling requirements.
4. Low-Risk AI Systems (Article 15(2))
Low-risk systems are subject to lighter regulatory oversight. Providers and implementers must be prepared to respond to authorities if violations or rights impacts arise. Users bear responsibility for ensuring their use of these systems remains lawful.
Penalties
The law provides administrative penalties and civil compensation for damages caused by AI systems. Developers and implementers also face liability when they are at fault for allowing the AI system to be hacked or unlawfully interfered with. The specific fine amounts and penalty structures will be prescribed in detail by the Government via subordinate regulations. The law also encourages providers and implementers to take out civil liability insurance as an additional safeguard.
Entry into force
The Vietnamese AI Law will take effect on 1 March 2026. However, for AI systems placed on the market before that date, the following dates apply:
- General compliance deadline: 1 March 2027 (12-month grace period)
- Healthcare, education, and finance sectors: 1 September 2027 (18-month grace period)
What global companies should do
For organisations already operating under global AI regulatory regimes (such as the EU AI Act), Vietnam’s framework will be structurally recognisable. Both frameworks are risk-based, impose pre-deployment obligations on high-risk systems, and require transparency for AI interactions. However, several meaningful differences stand out.
Vietnam’s law does not include a dedicated “general-purpose AI” category, an increasingly significant distinction as foundation models become embedded across industries. Enforcement is centralised through MOST rather than distributed across multiple authorities as under the EU AI Act. The law also places a pronounced emphasis on national sovereignty, public order, and state security, reflecting Vietnam’s broader digital governance philosophy, which organisations must factor into how they design and document their AI systems.
Businesses entering the Vietnamese market should conduct early risk classification, review transparency obligations, and assess local representation requirements.
Critically, providers, implementers, and users of medium- and low-risk AI systems must ensure they have developed processes to respond to these novel requirements, such as handling incidents involving medium-risk systems and providing information to authorities when there are signs that the use of low-risk systems has affected the rights of organisations or individuals.
Conclusion
Vietnam’s AI Law signals that Southeast Asia is entering a new phase of AI governance maturity. For businesses with operations in or directed at Vietnam, this is an immediate compliance reality, with meaningful obligations applying from March 2026.
Organisations that proactively classify their AI systems, establish strong AI governance frameworks, and build compliant transparency and incident-handling processes early will be far better positioned to manage regulatory exposure and maintain operational continuity in one of Asia’s most dynamic digital markets.
How We Can Help
At White Label Consultancy, we support organisations in navigating AI regulations across jurisdictions, including emerging frameworks in Southeast Asia and beyond. Our services span AI risk classification, transparency and governance reviews, readiness assessments, and ongoing compliance support. Whether you are entering the Vietnamese market for the first time or reviewing an existing AI portfolio against new legal requirements, we help translate complex obligations into practical, actionable strategies.
Get in touch to discuss how we can support your AI compliance journey in Vietnam and across the region.