
Introduction
Artificial intelligence (AI) has come a long way since its beginnings in the 1950s, when a group of academics gathered to "find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves¹." In the seven decades since, the ability of machines to perform these tasks has improved by leaps and bounds, thanks to advancements in computational architectures and hardware, the amount of data available for processing, and improvements in supporting technologies such as sensors and the Internet of Things². At the same time, the rapid spread of AI systems into our daily lives has raised concerns about unmitigated risk factors brought about by the emerging technology.
The objective of the AI Risk Series is to chronicle KRI's research on the risks associated with the fast-developing field of AI. Each article in the series will explore the literature within a dimension of risk (e.g. vulnerabilities within the technology, systemic impacts, etc.), made relevant to the context of Malaysia, and consider potential policy solutions.
In this introductory article, I will establish the baseline understanding of AI that will carry us through the series. This includes a working definition of AI, as well as some key assumptions and variables that shape the quality and impacts of AI systems. To set the scene, some background on Malaysia's policy direction in AI adoption and governance will also be provided.
What is AI?
Defining artificial intelligence is tricky, in part because many definitions pin the concept to human intelligence, referring to the ability of machines to perform tasks traditionally in the realm of what humans can do, such as perception, reasoning, learning and planning³. As advancements in technology narrow the gap between human and machine, the goalposts of what AI means also tend to shift.
Within our discussions to follow, we will use the OECD definition, which is sufficiently specific yet broad in its framing. According to the OECD, an AI system is "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."⁴
Most of the technologies that we refer to as AI fall under the field of machine learning, which uses statistical methods to infer patterns from data provided. An important branch of machine learning is deep learning, which uses a computational architecture called artificial neural networks, inspired by the human brain, with layers of abstraction of the input data enabling computers to draw complex connections for more effective outputs. Techniques and approaches to AI are not limited to machine learning and deep learning⁵, but these are the most relevant to ground our discussion.
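To make the idea of layered abstraction concrete, the forward pass of a tiny neural network can be sketched in a few lines of Python. This is an illustrative toy, not a real system: the weights below are invented values, whereas in practice they are learned from training data.

```python
import math

# Minimal sketch of a neural network forward pass: each layer applies
# weights to its inputs followed by a non-linearity, building up layers
# of abstraction. The weights here are fixed, invented values; real
# networks learn them from data.

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, passed through a sigmoid activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # One layer: each neuron sees all outputs of the previous layer.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A two-layer network mapping 3 input features to a single output score.
x = [0.5, -1.0, 2.0]
hidden = layer(x, [[0.2, -0.5, 0.1], [0.7, 0.3, -0.2]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])[0]
print(round(output, 3))  # a score between 0 and 1
```

Stacking more layers of this kind is what gives deep learning its name; the intermediate `hidden` values are the internal representations that make such systems powerful but also hard to inspect.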
In terms of function, there are two types of machine learning models: discriminative and generative. Discriminative models classify, or discriminate between, different types of data instances (e.g. determining whether an email is spam, or whether an image contains a cat), while generative models create new data instances (e.g. large language models generating text, or diffusion models generating images⁶). These two types of models bring forth countless applications that we encounter in everyday life, from recommendation systems on social media to AI chatbots. Combined with other technologies such as sensors, robotic arms and internet connectivity, AI is able to perceive its surroundings, perform physical tasks and communicate with a wider network.
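The spam-filter example of a discriminative model can be sketched as a toy classifier. This is a minimal illustration, not a real spam filter: the training emails and the scoring rule are invented for the example, and real systems use far more sophisticated statistical methods.

```python
from collections import Counter

# Toy discriminative model: labels an email "spam" or "not spam" by
# comparing word frequencies learned from a tiny labelled training set.
# The training data and scoring rule are invented for illustration.

spam_examples = ["win a free prize now", "free money click now"]
ham_examples = ["meeting agenda for monday", "please review the report"]

def word_counts(texts):
    return Counter(word for text in texts for word in text.split())

spam_counts = word_counts(spam_examples)
ham_counts = word_counts(ham_examples)

def classify(email):
    # Score each word by how much more often it appeared in spam
    # than in legitimate training emails.
    score = sum(spam_counts[w] - ham_counts[w] for w in email.split())
    return "spam" if score > 0 else "not spam"

print(classify("claim your free prize"))  # → spam
```

A generative model, by contrast, would not score an input but produce a new one, e.g. sampling words to compose a plausible email of its own.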
What AI is not
The vast capabilities of AI systems have ignited public imagination. As nations and corporations race each other to build and adopt the newest and shiniest applications to remain competitive, there is a sense of techno-optimism and a reluctance to regulate for fear of curbing innovation. Following the release of ChatGPT in 2022 and the subsequent boom of generative AI, the AI hype thickened, this time towards speculation of AI causing societal-scale risks, to the point of human extinction⁸.
Amid these mixed signals, there are three key assumptions that will underlie our discussions of AI risk, as follows:
AI is not a cure-all solution
It is a powerful technology for seeing patterns and making predictions based on existing data, but it is not a panacea for all the problems that we have, especially social problems whose causes are often rooted elsewhere. It is therefore important to first study the problem at hand to consider whether AI is an appropriate solution, and to design a tool that is fit for purpose. We should not rule out low-tech solutions if they are found to be effective and sufficient.
AI is not sentient
Industry leaders and even heads of state have warned about the dangers of an imminent AI superintelligence surpassing human intelligence and posing existential threats to humankind. Such concerns remain hypothetical and overblown, and risk distracting us from crafting policy solutions to the problems that AI has already created in society¹⁰. Attributing human-like qualities to machines can also mislead users into placing undue trust in and overreliance on AI systems, which are neither perfect nor infallible¹¹.
AI is not value-neutral
The choices made by AI are often perceived as objective, in part because they are data-driven. However, scholars have pointed out that developers and researchers embed their own values and assumptions into design decisions, such as choosing the datasets used to train AI models or defining weights and thresholds for data classification¹². The government agencies and private companies that own the technologies also shape the priorities of these systems according to their own aims, such as maximising efficiency or making profit.
How AI can go wrong
A comprehensive mapping of how AI systems might go wrong is a Herculean task, given the breadth and evolution of sectoral applications powered by the technology. Instead, we can approach the question by looking at some key variables underlying the technology, which link directly to the quality of the outputs and the possibility of unintended consequences for human rights and societal well-being.
According to researchers from the Berkman Klein Center for Internet & Society at Harvard University¹³, three such variables are:
- Quality of training data - As machines produce outputs and predictions based on available data, the quality of training data is fundamental and inextricably linked to the quality of the AI system. An oft-mentioned problem is "garbage-in, garbage-out", describing the situation whereby AI systems fed with inaccurate or biased data produce outputs of the same nature.
- System design - As mentioned in the previous section, designers and builders may encode their biases and policy choices into AI systems. The choice of algorithms can also obscure the ability to trace and troubleshoot problems, as in the case of deep learning algorithms, which perform well but are opaque in how their decisions and outputs are generated (the "black box" effect)¹⁴.
- Complex interactions - AI systems, once deployed and operational, interact with humans, the physical environment, and society's social, political and cultural institutions. These interactions can generate any number of complications and unintended consequences. For instance, a suicide in Belgium was linked to conversations with an AI chatbot¹⁵. There can also be deliberate weaponisation of AI technologies by actors with harmful motives, such as using deepfake technologies for scams or automating cyberattacks with AI.
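The "garbage-in, garbage-out" problem above can be made concrete with a small sketch: a model trained on biased historical decisions faithfully reproduces that bias in its predictions. The dataset, districts and decision rule below are invented purely for illustration.

```python
# Toy illustration of "garbage-in, garbage-out": a model that learns
# from biased historical decisions reproduces the same bias.
# The dataset and decision rule are invented for illustration.

# Biased historical loan decisions: applicants from district "B" were
# always rejected, regardless of income.
training_data = [
    {"district": "A", "income": 5000, "approved": True},
    {"district": "A", "income": 2000, "approved": True},
    {"district": "B", "income": 5000, "approved": False},
    {"district": "B", "income": 8000, "approved": False},
]

def train(rows):
    # Learn a per-district approval rate (a minimal "model").
    tallies = {}
    for row in rows:
        approved, total = tallies.get(row["district"], (0, 0))
        tallies[row["district"]] = (approved + row["approved"], total + 1)
    return {d: a / t for d, (a, t) in tallies.items()}

def predict(model, district):
    # Approve if the learned approval rate for the district exceeds 0.5.
    return model.get(district, 0) > 0.5

model = train(training_data)
print(predict(model, "A"))  # True: district A applicants were approved
print(predict(model, "B"))  # False: the historical bias is reproduced
```

Note that income, the factor one might expect to matter, plays no role in the learned model at all; the biased labels dominate, which is exactly the failure mode described in the bullet above.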
For more real-life examples, interested readers can refer to the AIAAIC Repository for crowd-sourced reports of AI, Algorithmic, and Automation incidents and controversies¹⁶.
AI in Malaysia
According to the Government AI Readiness Index (2023) by Oxford Insights¹⁷, Malaysia ranks 23rd globally with a high overall score of 68.71, much higher than the average score of 43.69 for upper-middle-income countries. A breakdown of the score reveals that Malaysia does well in the government (79.99) and data & infrastructure (72.00) pillars, but has room for improvement in the technology sector (54.13) pillar¹⁸.
These findings broadly reflect the digital landscape in Malaysia, where internet connectivity and device penetration are close to 100%¹⁹ and the government is actively pursuing digital transformation²⁰. AI-specific policy documents such as the National AI Roadmap (2021-2025) have been mooted in recent years, with national guidelines for AI governance and ethical use to be launched in 2024 to ensure responsible adoption²¹.
Other notable AI initiatives include the rollout of AI for Rakyat²², a nationwide public literacy programme²³, as well as the National AI Sandbox, which aims to spur the creation of 900 AI startups and 13,000 new AI talents by 2026²⁴. Both leverage public-private partnerships with global tech companies (the former with Intel, the latter with Nvidia) to support the local ecosystem.
Malaysia is moving ahead in pushing for AI awareness and adoption, and the AI governance community is growing and coalescing. Stakeholders recognise that there are multiple dimensions to AI risk, and much work needs to be done to ensure that safeguards and accountability mechanisms are in place²⁵. We hope that our AI Risk Series will contribute to the discourse and support Malaysia's journey towards an AI-enabled future that is inclusive and beneficial for all.