
The ASEAN Economic Community Digest: A Business-friendly ASEAN Guide for AI Ethics and Governance


5 min read

AI has made business operations more efficient, competitive, and productive. For example, GrabMaps uses AI to cleanse and process street-view data aggregated from merchants, drivers, and delivery partners, creating a digital map that helps drivers reach their destinations quickly. Meanwhile, Kata.ai offers custom chatbots that automate customer service in Bahasa Indonesia.

According to Kearney, AI could add a 10 to 18 per cent GDP uplift across ASEAN, worth nearly 1 trillion US dollars by 2030. Meanwhile, 80 per cent of surveyed businesses in the region are already in the early stages of AI adoption.

Nonetheless, the upsurge of AI permeating the economy and social life is not without risk. AI relies heavily on data generated by users’ activities, and limitations in the data pool or in the code may result in biased decision-making. AI also poses risks such as privacy breaches and, where cybersecurity is inadequate, vulnerability to attacks on AI systems. AI governance in ASEAN is therefore imperative to mitigate these risks by ensuring inclusivity, data protection, and cyber resilience, while realising AI’s full potential for the ASEAN economy.

Against this background, the ASEAN Digital Ministers recently endorsed the ASEAN Guide on AI Governance and Ethics. The Guide serves as a practical reference for organisations in the region that wish to design, develop, and deploy traditional AI technologies in commercial and non-military or dual-use applications. It focuses on encouraging alignment within ASEAN and fostering the interoperability of AI frameworks across jurisdictions. It also includes recommendations on national-level and regional-level initiatives that governments in the region can consider implementing so that AI systems are designed, developed, and deployed responsibly.

ASEAN Guide on AI Governance and Ethics

The ASEAN Guide on AI Governance and Ethics aims to empower organisations and governments in ASEAN to design, develop, and deploy traditional AI systems responsibly and to increase users’ trust in AI. The Guide sets out seven guiding principles to ensure trust in AI and to promote ethical AI systems that consider their broader societal impact. It also recommends a governance framework for the responsible use of AI, covering four components that organisations and businesses should adopt: internal governance structures, the level of human involvement in AI-augmented decision-making, operations management, and stakeholder interaction and communication.

The Seven Guiding Principles

  • Transparency and explainability

Transparency refers to disclosing whether an AI system has been used in decision-making, what data it uses, and for what purposes. Explainability is the ability to communicate the reasoning behind an AI system’s decision in a way that all relevant stakeholders can understand. Together, these principles build public trust by ensuring that users know when AI technology is being used, how information from their interactions is utilised, and how the AI system reaches its decisions from the data provided.
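As a concrete illustration of explainability, the sketch below (the credit-scoring scenario, features, and weights are hypothetical and not taken from the Guide) shows how a simple linear model’s output can be broken down into per-feature contributions that a deployer could communicate to the affected user, alongside a disclosure that AI was used.

```python
# Illustrative sketch only: a hypothetical linear credit-scoring model whose
# decision is explained by reporting each feature's signed contribution.
WEIGHTS = {"income": 0.6, "years_employed": 0.3, "existing_debt": -0.8}  # hypothetical values
BIAS = -0.2
THRESHOLD = 0.0

def explain_decision(applicant: dict) -> dict:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

if __name__ == "__main__":
    # Disclosure plus explanation: the user is told AI was used and why it decided as it did.
    result = explain_decision({"income": 1.2, "years_employed": 0.5, "existing_debt": 1.0})
    print("This decision was made with the help of an AI system.")
    print(result)
```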

  • Fairness and equity

To ensure fairness, deployers are encouraged to put measures in place so that algorithmic decisions do not exacerbate or amplify existing discriminatory or unjust impacts across different demographics. The design, development, and deployment of AI systems should not result in unfair bias or discrimination. In addition, the datasets used to train AI systems should be diverse and representative, and appropriate measures should be taken to mitigate potential biases during data collection, pre-processing, training, and inference.
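One common way to operationalise such a measure, offered here only as an illustration rather than a metric prescribed by the Guide, is to compare the model’s rate of favourable outcomes across demographic groups and flag large gaps for further investigation.

```python
# Illustrative fairness check: compare the positive-outcome rate across groups.
from collections import defaultdict

def selection_rates(predictions, groups):
    """predictions: 0/1 model outcomes; groups: matching group labels."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)              # per-group selection rates
    print(f"gap={gap:.2f}")   # a large gap warrants investigation and mitigation
```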

  • Security and safety

Safety refers to protecting the developers, deployers, and users of AI systems. Impact or risk assessments should therefore be conducted to identify and mitigate the risks an AI system may pose, and deployers should carry out relevant testing or certification and implement an appropriate level of human intervention so that harm is prevented when unsafe decisions occur. Security refers to the cybersecurity of AI systems, including mechanisms against malicious attacks specific to AI, such as data poisoning, model inversion, the tampering of datasets, Byzantine attacks in federated learning, and other attacks designed to reverse-engineer the personal data used to train the AI.

  • Human-centricity

AI systems should respect human-centred values and pursue benefits for human society, including well-being, nutrition, and happiness. Especially when AI systems are used to make decisions about humans or to assist them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

  • Privacy and data governance

AI systems should have proper mechanisms to ensure data privacy and protection and to maintain the quality and integrity of data throughout their entire lifecycle. Data protocols must therefore be set up to govern who can access data and when. The way data is collected, stored, generated, and deleted throughout the AI system lifecycle must comply with applicable data protection laws, data governance legislation, and ethical principles.
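A minimal sketch of such a data protocol follows; the roles, data categories, and retention periods are hypothetical and purely for illustration. Access is granted only when the requester’s role is permitted for the data category and the data is still within its retention period.

```python
# Illustrative data-access protocol: role-based access plus a retention check.
from datetime import date, timedelta
from typing import Optional

# Hypothetical policy: permitted roles and retention period per data category.
ACCESS_POLICY = {
    "customer_pii":      {"roles": {"data_steward"}, "retention_days": 365},
    "training_features": {"roles": {"data_steward", "ml_engineer"}, "retention_days": 730},
}

def may_access(role: str, category: str, collected_on: date,
               today: Optional[date] = None) -> bool:
    """Grant access only if the role is permitted and the data is within retention."""
    today = today or date.today()
    policy = ACCESS_POLICY.get(category)
    if policy is None or role not in policy["roles"]:
        return False
    return today - collected_on <= timedelta(days=policy["retention_days"])

if __name__ == "__main__":
    print(may_access("ml_engineer", "customer_pii", date(2024, 1, 10)))   # False: role not permitted
    print(may_access("data_steward", "customer_pii", date(2024, 1, 10)))  # depends on the retention window
```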

  • Accountability and integrity

Deployers should be accountable for the decisions made by AI systems and for ensuring compliance with applicable laws and respect for AI ethics and principles. AI actors, meaning those involved in at least one stage of the AI system lifecycle, should act with integrity when designing, developing, and deploying AI systems. Organisations should therefore adopt clear reporting structures for internal governance, clearly stating the different roles and responsibilities of those involved in the AI system lifecycle.

  • Robustness and reliability

AI systems should be sufficiently robust to cope with execution errors, unexpected or erroneous input, and stressful environmental conditions. Deployers should conduct rigorous testing before deployment to ensure robustness and consistent results across a range of situations and environments.
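One simple pre-deployment test of this kind, sketched below with a stand-in model and an arbitrary noise level (neither is specified by the Guide), checks how often a prediction stays the same when the input is slightly perturbed.

```python
# Illustrative robustness check: prediction stability under small input perturbations.
import random

def predict(features):
    """Stand-in for the deployed model; any callable returning a label would do."""
    return 1 if sum(features) > 1.0 else 0

def stability_under_noise(features, noise=0.05, trials=200, seed=42):
    """Fraction of noisy copies of the input that keep the original prediction."""
    rng = random.Random(seed)
    baseline = predict(features)
    unchanged = 0
    for _ in range(trials):
        noisy = [x + rng.uniform(-noise, noise) for x in features]
        unchanged += predict(noisy) == baseline
    return unchanged / trials

if __name__ == "__main__":
    score = stability_under_noise([0.6, 0.45])
    print(f"prediction unchanged in {score:.0%} of perturbed trials")
    # A low score suggests the model is fragile near this input and needs further testing.
```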

The AI Governance Framework

  • Internal governance structures and measures

Organisations need to establish internal governance structures to monitor how AI systems are designed, developed, and deployed. For example, an organisation could establish a multi-disciplinary, central governing body to oversee AI governance, provide independent advice, and develop the standards, guidelines, tools, and templates that help teams design, develop, and deploy AI responsibly. Deployers also need to ensure that proper guidance and training resources are provided to the individuals involved in the governance process and that broader awareness is raised across the organisation. In applying these recommendations, developers and deployers should take into account factors such as a company’s size and capacity, so that governance remains relevant and proportionate to the business.

  • Determining the level of human involvement in AI-augmented decision-making

Businesses should determine the risk level of an AI-augmented decision and, accordingly, the category of human involvement it requires. The assessment evaluates the AI solution on two axes: the probability and the severity of harm to users and to individuals involved in the AI system lifecycle. For example, AI systems with a high severity and likelihood of harm should adopt a human-in-the-loop approach, in which humans retain full control of the system and decide when it is safe to execute decisions. The assessment should be made for all user types, and deployers are encouraged to give special consideration to the impact on vulnerable and/or marginalised populations.
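The two-axis assessment can be pictured as a simple decision matrix. The sketch below uses the three oversight categories commonly associated with this framework (human-in-the-loop, human-over-the-loop, and human-out-of-the-loop); the coarse low/high thresholds and the example use cases are illustrative assumptions, not thresholds set by the Guide.

```python
# Illustrative mapping from (probability of harm, severity of harm) to an oversight category.

def oversight_level(probability_of_harm: str, severity_of_harm: str) -> str:
    """probability_of_harm and severity_of_harm are each 'low' or 'high' (illustrative scale)."""
    if probability_of_harm == "high" and severity_of_harm == "high":
        # Humans retain full control and approve every decision before execution.
        return "human-in-the-loop"
    if probability_of_harm == "low" and severity_of_harm == "low":
        # The system may decide autonomously, with periodic review.
        return "human-out-of-the-loop"
    # Mixed cases: humans monitor the system and can intervene or override.
    return "human-over-the-loop"

if __name__ == "__main__":
    print(oversight_level("high", "high"))  # e.g. a medical triage recommendation (hypothetical example)
    print(oversight_level("low", "low"))    # e.g. a product recommendation (hypothetical example)
```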

  • Operations management

AI governance should be built into every stage of the AI system lifecycle, which consists of (1) project governance and problem statement definition, (2) data collection and processing, (3) modelling, (4) outcome analysis, and (5) deployment and monitoring. Deployers should conduct risk-based assessments of AI systems before starting any data collection, processing, or modelling, and should put mitigation measures in place for the risks identified. Throughout data collection and processing, the datasets used and the model’s performance across different sub-groups of the target population should be monitored continuously to mitigate the risk of unjust bias. Even after an AI system has been developed and deployed, deployers need to keep reviewing the system, datasets, and model metrics periodically and make reasonable efforts to ensure the accuracy, relevance, and reliability of data and outcomes. Developers may also refer to the relevant ISO standards for data robustness, quality, and other data governance practices.
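As an illustration of the kind of monitoring described above (the metric, sub-group labels, and tolerance threshold are assumptions made for the sketch, not requirements of the Guide), the following snippet tracks model accuracy per sub-group of the target population and flags sub-groups that trail the overall accuracy.

```python
# Illustrative lifecycle monitoring: per-sub-group accuracy with a simple drift flag.
from collections import defaultdict

def accuracy_by_subgroup(records):
    """records: iterable of (subgroup, prediction, actual) tuples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for subgroup, pred, actual in records:
        totals[subgroup] += 1
        correct[subgroup] += pred == actual
    return {g: correct[g] / totals[g] for g in totals}

def flag_underperforming(records, tolerance=0.10):
    """Return sub-groups whose accuracy trails the overall accuracy by more than `tolerance`."""
    records = list(records)
    per_group = accuracy_by_subgroup(records)
    overall = sum(1 for _, p, a in records if p == a) / len(records)
    return {g: acc for g, acc in per_group.items() if overall - acc > tolerance}

if __name__ == "__main__":
    log = [("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 1, 0),
           ("rural", 0, 1), ("rural", 1, 0), ("rural", 1, 1), ("rural", 0, 0)]
    print(accuracy_by_subgroup(log))
    print(flag_underperforming(log))  # sub-groups needing data or model review
```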

  • Stakeholder interaction and communication

Businesses must build trust with all relevant stakeholders throughout the design, development, and deployment of AI. Deployers should consider providing a general disclosure when AI is used in their product and/or service offering, and could develop a standardised policy defining what information to provide to stakeholders, to whom, and how. Deployers could also consider providing information tailored to users’ needs as they interact with the system. Lastly, deployers should put in place mechanisms for users and other stakeholders to give feedback on the performance and output of the AI system.

A Business-friendly AI Governance

A business-friendly ASEAN Guide for AI Ethics and Governance is crucial for the region’s technological development. It fosters innovation by providing clear guidelines without imposing excessive burdens, allowing companies to invest in AI with confidence while ensuring responsible development. By addressing ethical concerns, the Guide promotes trust and attracts a wider range of users and investors. This creates a win-win situation in which businesses thrive alongside a future-proofed ASEAN landscape for AI. The Guide is business-friendly for the following reasons:

  • Inclusive development of the Guide

The Guide was developed in extensive consultation with the private sector. Close consultation and references to actual cases ensure that the Guide is business-friendly for all businesses operating in ASEAN and will enable them to thrive by using AI responsibly.

  • Establishing user trust in AI-powered products and services

The Guide includes the necessary principles to establish users’ trust in AI-powered products and services, such as transparency and explainability. It recognises that fostering users’ trust in AI is a linchpin to a vibrant AI ecosystem. As users trust that their data are protected and that AI is developed and deployed in a way that is not adverse to their interests, they will be more open to AI-powered services and products, increasing usage and loyalty.

  • Encouraging innovation

The Guide promotes innovation by leaving implementation details to companies and local regulators within the regional AI risk assessment and governance framework. It recognises that there are better approaches than a one-size-fits-all approach to AI governance, giving businesses more freedom to experiment, which benefits long-term technological development in ASEAN. It also positions ASEAN as a conducive place to test ideas and conduct business experiments for AI-powered services and products amid growing market demand in the global digital landscape.

  • Facilitating economic and cultural diversity in ASEAN

ASEAN Member States have different levels of economic development and digital readiness, as well as vast cultural and linguistic diversity. The Guide encourages business and tech players to take these differences into account so that AI can advance social progress, become a growth engine for all ASEAN Member States, and avoid widening inequalities. The Guide also creates room for regional collaboration on AI policy development in ASEAN, facilitating the interoperability of AI frameworks in the region.


This article appeared in The ASEAN Secretariat (https://theaseanmagazine.asean.org/article/the-asean-economic-community-digest-a-business-friendly-asean-guide-for-ai-ethics-and-governance/).
