
Introduction
The third AI Action Summit, hosted in France and co-chaired by France and India, has come to a close without the United States of America (US) and the United Kingdom (UK) signing the summit’s declaration on inclusive and sustainable artificial intelligence (AI). Sixty other countries, including China, France and India, have signed the declaration.
In a speech to delegates, US Vice President J.D. Vance claimed that excessive regulations would stifle innovation and that the US would champion AI that is free of ideological bias.
Following the release of new, cheaper AI models by DeepSeek, a Chinese AI company, the US is concerned that it may no longer be the leading AI player, a worry that has intensified the global AI race and its associated geopolitics.
By criticising regulation, the US appears to be prioritising the recovery of its position in the AI industry, setting aside earlier indications, however slight, that it would join the emerging shared consensus on global AI governance built around inclusion, safety and sustainability.
Current US priorities on AI contradict previous efforts on global governance
The current US stance espoused by Vance is at odds with the ASEAN-U.S. leaders’ statement on promoting safe, secure and trustworthy AI adopted in Laos last October. In that statement, the US joined the Association of Southeast Asian Nations (ASEAN) in committing to strengthening the safety, security, and trustworthiness of AI systems and to collaborating on the development of interoperable AI governance approaches and frameworks, among other things. It remains to be seen whether the US will still cooperate on advancing AI in the region, including on capacity building, technology transfer and facilitating research, as described in the statement.
Furthermore, the notion that AI can be free of ideological bias is misguided. Researchers around the world, including those at KRI, have repeatedly made the case for how AI model code, training data and reinforcement of AI outputs all bear the values of developers, history and society baked into them, implicitly and explicitly. The very idea that AI should be free of ideological bias is itself an ideological bias.
Nor is regulation necessarily a bad thing. KRI’s recent report on AI Governance in Malaysia finds that AI policy stakeholders in Malaysia, whether in government, industry, academia or civil society, recognise that an appropriate balance of regulations, guidelines and standards will enable safe, trustworthy and responsible AI development and deployment in the country.
An agile regulatory framework is needed to minimise AI risks while remaining flexible enough to accommodate new developments in AI technology. Such a framework would combine hard regulations (such as laws) and soft regulations (such as best practices), not to stifle innovation, but to enable the safe and responsible development of trustworthy AI.
Malaysia as ASEAN Chair can advance global AI governance
Malaysia, in its current position as ASEAN chair, has the opportunity to drive these conversations forward at a regional level. KRI’s report recommends that Malaysia engage in international initiatives on AI governance in three ways.
First, Malaysia should develop its position on debates regarding global AI governance. Once Malaysia has decided how it will deal with AI-related matters of national and public interest such as AI-generated content on social media, it can develop best practices and guidelines that it can take to international discussions.
Second, Malaysia should identify feasible avenues of participation in global governance discussions and international rule-setting. While Malaysia’s voice may not carry far at a Global North-dominated summit such as the one in France, it can play an important role in regional initiatives. In fact, Malaysia is currently leading the establishment of an ASEAN AI Safety Network (ASEAN AI Safe).
According to the Ministry of Digital, this network will “facilitate AI safety research, promoting safe and responsible development and adoption of AI across public and private sectors, and encouraging harmonisation and interoperability of AI safety within ASEAN Member States.”
Third, Malaysia should engage strategically through these international avenues of participation. Alliance-building with like-minded countries can amplify Malaysia’s voice in global AI governance discussions. Regular knowledge sharing with international experts will also build local AI governance capacity. For example, understanding the standard-setting process and the considerations involved in specifying safe AI products at the International Organization for Standardization (ISO) can inform how national standards for safe AI deployment are set.
Conclusion
As KRI puts it, “the push towards AI adoption needs to be accompanied by governance measures to ensure that the technology is built and used in a beneficial and safe manner, bringing positive outcomes and minimising negative impacts. AI governance is therefore an important component in the pursuit of meaningful technology adoption.”
If the US does not see the importance of AI governance, then countries that do, such as Malaysia, should seize the opportunity to play a bigger role in global AI governance.