Views
Jun 28, 2024
6 minutes read

Introducing the AI Risk Series

Author
Dr Jun-E Tan
Senior Research Associate
Key Takeaway

What is artificial intelligence (AI), and what is it not? How can AI go wrong?

The rapid spread of AI systems into our daily life has raised concerns about unmitigated risk factors brought about by the emerging technology. KRI is releasing a series of articles to chronicle our research around risks associated with the fast-developing area of AI. Each article within the series will explore literature within a dimension of risk (e.g. vulnerabilities within the technology, systemic impacts, etc.) made relevant to the context of Malaysia, and consider potential policy solutions.

Within this introductory article, we establish the baseline understanding of AI that will carry us through the series. This includes a working definition of AI, as well as some key assumptions and variables that shape the quality and impacts of AI systems. To set the scene, some background on Malaysia's policy direction in AI adoption and governance is also provided.


Introduction

Artificial intelligence (AI) has come a long way since its beginnings in the 1950s, when a group of academics gathered to "find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves¹." In the seven decades that have passed, the abilities of machines to perform these tasks have improved by leaps and bounds, due to advancements in computational architectures and hardware, the amount of data available for processing, and improvements in supporting technology such as sensors and the Internet of Things². At the same time, the rapid spread of AI systems into our daily life has raised concerns about unmitigated risk factors brought about by the emerging technology.

The objective of the AI Risk Series is to chronicle KRI's research around risks associated with the fast-developing area of AI. Each article within the series will explore literature within a dimension of risk (e.g. vulnerabilities within the technology, systemic impacts, etc.) made relevant to the context of Malaysia, and consider potential policy solutions.

Within this introductory article, I will establish the baseline understanding of AI that will carry us through the series. This includes a working definition of AI, as well as some key assumptions and variables that shape the quality and impacts of AI systems. To set the scene, some background on Malaysia's policy direction in AI adoption and governance will also be provided.

What is AI?

Defining artificial intelligence is tricky, in part because many definitions pin the concept to human intelligence, referring to the ability of machines to perform tasks traditionally in the realm of what humans can do, such as perception, reasoning, learning and planning³. As advancements in technology narrow the gap between human and machine, the goalposts of what AI means also tend to shift.

Within our discussions to follow, we will use the OECD definition, which is sufficiently specific yet broad in its framing. According to the OECD, an AI system is "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."⁴

Most of the technologies that we refer to as AI fall under the field of machine learning, which uses statistical methods to infer patterns from data provided. An important branch of machine learning is deep learning, which uses a computational architecture called artificial neural networks, inspired by the human brain, with layers of abstraction of the input data enabling computers to draw complex connections for more effective outputs. Techniques and approaches to AI are not limited to machine learning and deep learning⁵, but these are the most relevant to ground our discussion.

In terms of function, there are two types of machine learning models: discriminative and generative. Discriminative models classify or discriminate between different types of data instances (e.g. determining if an email is spam, or if an image contains a cat), while generative models create new data instances (e.g. large language models generating text, or diffusion models generating images⁶). These two types of models bring forth countless applications that we encounter in everyday life, from recommendation systems on social media to AI chatbots. Combined with other technologies such as sensors, robotic arms and internet connectivity, AI is able to perceive its surroundings, perform physical tasks and communicate with a wider network.
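To make the discriminative case concrete, here is a minimal sketch of a spam classifier in the naive Bayes style. The training messages and word-frequency approach are purely illustrative assumptions, not any production system:

```python
from collections import Counter
import math

# Hypothetical training data: (text, label) pairs for a discriminative
# task, i.e. deciding whether a message is "spam" or "ham".
TRAIN = [
    ("win cash prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
]

def train(examples):
    """Count word frequencies per label, the statistical 'pattern' learned."""
    word_counts, label_counts = {}, Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Score each label by log-probability and return the likelier one."""
    vocab = len({w for counts in word_counts.values() for w in counts})
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        # Start from the prior probability of the label.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            # Laplace smoothing so unseen words do not zero out the score.
            score += math.log((word_counts[label][word] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("free cash prize", word_counts, label_counts))      # → spam
print(classify("team meeting monday", word_counts, label_counts))  # → ham
```

The model never "understands" the messages; it only infers which word frequencies discriminate between the two labels in the data it was given, which is the essence of the statistical pattern-matching described above.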

What AI is not

The vast capabilities of AI systems have ignited public imagination. As nations and corporations race each other to build and adopt the newest and shiniest applications to remain competitive, there is a sense of techno-optimism and a reluctance to regulate for fear of curbing innovation. Following the release of ChatGPT in 2022 and the subsequent boom of generative AI, the AI hype thickened, this time towards speculation of AI causing societal-scale risks to the point of human extinction⁸.

Amid these mixed signals, there are three key assumptions that will underlie our discussions of AI risk, as follows:

AI is not a cure-all solution
It is a powerful technology for seeing patterns and making predictions based on existing data, but it is not a panacea for all the problems that we have, especially social problems whose causes are often rooted elsewhere. It is therefore important to first study the problem at hand to consider whether AI is an appropriate solution, and to design a tool that is fit for purpose. We should not rule out the option of low-tech solutions if they are found to be effective and sufficient.

AI is not sentient
Industry leaders and even heads of state have warned about the dangers of an imminent AI superintelligence surpassing human intelligence and posing existential threats to humankind. Such concerns remain hypothetical and overblown, and risk distracting us from crafting policy solutions to current problems that AI has already created in society¹⁰. Attributing human-like qualities to machines can also mislead users into placing undue trust and overreliance on AI systems, which are not perfect or infallible¹¹.

AI is not value-neutral
The choices made by AI are often perceived as objective, in part because they are data-driven. However, scholars have pointed out that developers and researchers embed their own values and assumptions into design decisions, such as choosing the datasets used to train AI models or defining weights and thresholds for data classification¹². Government agencies and private companies that own the technologies also shape the priorities of the systems according to their aims, such as maximising efficiency or making profit.

How AI can go wrong

A comprehensive mapping of how AI systems might go wrong is a Herculean task, given the breadth and evolution of sectoral applications powered by the technology. Instead, we can approach the question by looking at some key variables underlying the technology, which link directly to the quality of the outputs and the possibility of unintended consequences for human rights and societal well-being.

According to researchers from the Berkman Klein Center for Internet & Society of Harvard University¹³, three such variables are:

  • Quality of training data - As machines produce outputs and predictions based on available data, the quality of training data is fundamental and inextricably linked to the quality of the AI system. An oft-mentioned problem is "garbage in, garbage out", describing the situation whereby AI systems fed with inaccurate or biased data produce outputs of the same nature.
  • System design - As mentioned in the previous section, designers and builders may encode their biases and policy choices into AI systems. The choice of algorithms can also obscure the ability to trace and troubleshoot problems, as in the case of deep learning algorithms, which perform well but are opaque in how their decisions and outputs are generated (a.k.a. the "black-box effect")¹⁴.
  • Complex interactions - AI systems, once deployed and operational, interact with humans, the physical environment, and society's social, political and cultural institutions. These interactions can generate any number of complications and unintended consequences. For instance, a suicide in Belgium was linked to the deceased's conversations with an AI chatbot¹⁵. There can also be deliberate weaponisation of AI technologies by actors with harmful motives, such as using deepfake technologies for scams or automating cyberattacks with AI.
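The "garbage in, garbage out" problem above can be illustrated with a toy sketch: a model trained on skewed historical decisions simply memorises and reproduces the skew. The dataset and the majority-vote "model" are invented purely for illustration:

```python
from collections import Counter

# Hypothetical historical loan decisions, heavily skewed against group "B".
history = (
    [("A", "approve")] * 90 + [("A", "reject")] * 10 +
    [("B", "approve")] * 20 + [("B", "reject")] * 80
)

def train(records):
    """For each group, memorise the majority outcome found in the data."""
    outcomes = {}
    for group, decision in records:
        outcomes.setdefault(group, Counter())[decision] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in outcomes.items()}

model = train(history)
print(model["A"])  # → approve: the model inherits the historical pattern
print(model["B"])  # → reject: the bias in the data becomes the rule
```

Nothing in the training procedure is malicious; the skewed outputs follow directly from the skewed inputs, which is why data quality is treated as a foundational variable.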

For more real-life examples, interested readers can refer to the AIAAIC Repository for crowd-sourced reports of AI, Algorithmic, and Automation incidents and controversies¹⁶.

AI in Malaysia

According to the Government AI Readiness Index (2023) by Oxford Insights¹⁷, Malaysia ranks 23rd globally with a high overall score of 68.71, much higher than the average score of 43.69 for upper-middle income countries. A breakdown of the score reveals that Malaysia does well in the government (79.99) and data & infrastructure (72.00) pillars, but has room for improvement in the technology sector (54.13) pillar¹⁸.

These findings are broadly consistent with the digital landscape in Malaysia, where internet connectivity and device penetration are close to 100%¹⁹ and the government is actively moving towards digital transformation²⁰. AI-specific policy documents such as the National AI Roadmap (2021-2025) have been mooted in recent years, with national guidelines for AI governance and ethical use to be launched in 2024 to ensure responsible adoption²¹.

Other notable AI initiatives include AI for Rakyat²², a nationwide public literacy programme²³, as well as the National AI Sandbox, which aims to spur the creation of 900 AI startups and 13,000 new AI talents by 2026²⁴. These leverage public-private partnerships with global tech companies (the former with Intel, the latter with Nvidia) to support the local ecosystem.

Malaysia is moving ahead in pushing for AI awareness and adoption, and the AI governance community is growing and coalescing. Stakeholders recognise that there are multiple dimensions to AI risk, and much work needs to be done to ensure that safeguards and accountability mechanisms are in place²⁵. We hope that our AI Risk Series will be able to contribute to the discourse, and support Malaysia's journey towards an AI-enabled future that is inclusive and beneficial for all.

References

Abail, Issam Eddine, Gopal Nadadur, and Enrico Santus. 2023. 'Technology Primer: Artificial Intelligence & Machine Learning'. Harvard Kennedy School Belfer Center for Science and International Affairs. https://www.belfercenter.org/publication/technology-primer-artificial-intelligence-machine-learning.

AIAAIC. 2024. 'AIAAIC AI, Algorithmic, and Automation Incidents'. 2024. https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents.

Banoo, Sreerema. 2024. 'Accountability: Cracking the Code: Tackling the Complexities of AI Governance'. The Edge Malaysia, 22 January 2024. https://theedgemalaysia.com/node/698005.

Center for AI Safety. 2024. 'Statement on AI Risk | CAIS'. Center for AI Safety. 2024. https://www.safe.ai/work/statement-on-ai-risk.

Department of Statistics Malaysia. 2022. 'ICT Use and Access by Individuals and Households Survey Report 2021'. Putrajaya: Department of Statistics Malaysia.

EngageMedia. 2021. 'Governance of Artificial Intelligence (AI) in Southeast Asia'. EngageMedia. https://engagemedia.org/wp-content/uploads/2021/12/Engage_Report-Governance-of-Artificial-Intelligence-Al-in-Southeast-Asia_12202021.pdf.

Gerchick, Marissa, Tobi Jegede, Tarak Shah, Ana Gutierrez, Sophie Beiers, Noam Shemtov, Kath Xu, Anjana Samant, and Aaron Horowitz. 2023. 'The Devil Is in the Details: Interrogating Values Embedded in the Allegheny Family Screening Tool'. In 2023 ACM Conference on Fairness, Accountability, and Transparency, 1292-1310. Chicago IL USA: ACM. https://doi.org/10.1145/3593013.3594081.

Government of Malaysia. 2021. 'Malaysia Digital Economy Blueprint'. Ministry of Economy. Putrajaya.

Google for Developers. 2022. 'Background: What Is a Generative Model? | Machine Learning'. Google for Developers. 18 July 2022. https://developers.google.com/machine-learning/gan/generative.

Koya, Zakiah. 2024. '"AI for Rakyat" a Step towards Making M'sia Top Global AI Hub, Says Rafizi'. The Star, 16 January 2024. https://www.thestar.com.my/news/nation/2024/01/16/ai-for-rakyat-a-step-towards-making-m039sia-top-global-ai-hub-says-rafizi.

Murugiah, Surin. 2024. 'Mosti Targets Creation of 900 Startups by 2026 through AI Sandbox Programme'. The Edge Malaysia, 19 April 2024. https://theedgemalaysia.com/node/708580.

O'Shaughnessy, Matt. 2023. 'How Hype Over AI Superintelligence Could Lead Policy Astray'. 14 September 2023. Carnegie Endowment for International Peace. https://carnegieendowment.org/2023/09/14/how-hype-over-ai-superintelligence-could-lead-policy-astray-pub-90564.

Oxford Insights. 2023. 'Government AI Readiness Index'. Oxford Insights. 2023. https://oxfordinsights.com/ai-readiness/ai-readiness-index/.

Personal Data Protection Commission Singapore. 2019. 'Model AI Governance Framework'. 2019. https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework.

Raso, Filippo A., Hannah Hilligoss, Vivek Krishnamurthy, Christopher Bavitz, and Levin Kim. 2018. 'Artificial Intelligence & Human Rights: Opportunities & Risks'. SSRN Scholarly Paper. Rochester, NY. https://doi.org/10.2139/ssrn.3259344.

Russell, Stuart. 2019. Human Compatible: AI and the Problem of Control. Penguin UK.

Russell, Stuart, Marko Grobelnik, and Karine Perset. 2024. 'What Is AI? Can You Make a Clear Distinction between AI and Non-AI Systems? OECD.AI'. 6 March 2024. https://oecd.ai/en/wonk/definition.

Russell, Stuart, Karine Perset, and Marko Grobelnik. 2023. 'Updates to the OECD's Definition of an AI System Explained'. 29 November 2023. https://oecd.ai/en/wonk/ai-system-definition-update.

The Star. 2024. 'AI Code of Ethics, Governance Guidelines Almost Complete, Says Ministry', 8 March 2024, sec. News. https://www.thestar.com.my/news/nation/2024/03/08/ai-code-of-ethics-governance-guidelines-almost-complete-says-ministry.

UNESCO. 2021. 'Recommendation on the Ethics of Artificial Intelligence - UNESCO Digital Library'. 2021. https://unesdoc.unesco.org/ark:/48223/pf0000380455.

Walker, Lauren. 2023. 'Belgian Man Dies by Suicide Following Exchanges with Chatbot'. The Brussels Times, 28 March 2023. https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt.

Weidinger, Laura, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, et al. 2021. 'Ethical and Social Risks of Harm from Language Models'. arXiv. http://arxiv.org/abs/2112.04359.

World Economic Forum. 2019. 'AI Governance: A Holistic Approach to Implement Ethics into AI'. Geneva: World Economic Forum.
