
Addressing Data-centred Rights: Protecting Our Digital Doubles
The rapid advancement of digital technologies presents a multitude of novel and complex challenges to the protection of human rights, especially violations that cut across the digital and the physical. In this article, I offer a conceptual framework for digital rights in general and, from there, zoom in on data-centred rights, one of the less-discussed aspects of digital rights, which deserves more attention given its societal implications for years to come.
Conceptual framework for digital rights
One of the problems of advocating for digital rights is a lack of consensus on what the concept means. In 2019, I conducted a study interviewing 24 digital rights advocates and activists from Malaysia, the Philippines and Thailand, and built a conceptual framework encapsulating four main spheres of digital rights.
The four spheres of digital rights are as follows: 1) conventional human rights translated to digital spaces, 2) data-centred rights, 3) rights to access digital spaces and services, and 4) rights to participate in digital governance.
The study found that within Southeast Asia, the digital rights movement focused mostly on the dimension of human rights translated to digital spaces, notably on issues such as freedom of expression and online gender-based violence. Access to the digital was not a key focus, possibly because governments in the region had prioritised digitalisation and connectivity. Participation in digital governance took place mainly at the national or subnational level, aimed at influencing government ICT policy, with forums for international standards-setting mostly out of reach.
Data-centred Rights
Data-centred rights, also a peripheral issue at that point, were just starting to appear in the discourse, with a few civil society organisations working to raise awareness of why data privacy and security were important. It is not an easy point to articulate: a set of data points constitutes a “digital double” of oneself (or one’s car, or one’s city, etc.), which can be used to make high-level decisions that act for or against the individual’s rights or living conditions.
From one angle, the relentless datafication of ourselves and our lives, covering online activity, biometric data from various parts of our bodies, our medical and financial records, and even how we move and breathe with the advent of the metaverse, has made our digital doubles ever more comprehensive representations of us. From another, data-driven artificial intelligence (AI) systems such as machine learning software are increasingly used to mine insights and automate decision-making in different fields.
As we stand in 2023, this area of digital rights is still quite underdeveloped in the region, even as risks and harms grow in importance with AI technologies permeating our everyday lives. Our online interactions are mediated by recommendation systems offering personalised content (such as social media feeds and online shopping suggestions); offline, our movements are increasingly digitised, tracked, and monitored by corporations and government bodies.
For instance, in Malaysia, we already see some of these technologies in use in public services: the Penang state government uses facial recognition technology in CCTV surveillance to combat crime, and the court systems in Sabah and Sarawak are piloting predictive statistical analysis, with the intention of moving towards machine learning, to assist sentencing decisions for drug and rape offences.
While there has been little reporting on the efficacy and safety of these systems in the Malaysian context, similar implementations elsewhere have raised concerns; for example, facial recognition systems and recidivism risk software used in the US have been shown to amplify racial biases and further marginalise the marginalised. Another example of decision-making based on data and digital doubles that has an outsized impact on society is the social credit systems in China, which rate the behaviour of citizens, companies, and even government agencies, and offer rewards or punishments accordingly.
Governance of Data and AI Applications
There are at least two ways to view the problem of protecting data-centred rights. The first is to protect the data itself, which involves comprehensive data governance policies and procedures to safeguard data privacy and security, typically managed at the organisational level. The scope of data covered should not be limited to personal data representing people, but should also include non-personal data and metadata from which inferences can be made.
The second is to govern the applications that make use of this data. In 2021, the European Union released a draft of its proposed Artificial Intelligence Act, which may set a worldwide standard for AI regulation (as the General Data Protection Regulation, or GDPR, did for data protection). Notably, the Act classifies and regulates AI applications by the level of risk they pose to EU citizens.
Applications that pose “unacceptable risk”, such as subliminal manipulation, exploitation of vulnerable groups, social credit scoring by public authorities, and real-time biometric identification systems in public spaces, are proposed to be prohibited outright. “High risk” applications (e.g. systems for recruitment, assessing consumer creditworthiness, or safety-critical functions) will be subject to more regulatory oversight than “low or minimal risk” applications (e.g. AI chatbots, spam filters, and most other AI systems).
As with most initiatives of this kind, the devil is in the details, and many have offered in-depth analyses and critiques of what is proposed. In general, it is a welcome move away from industry self-regulation, as well as from narrowly defined and poorly applied AI ethics. Situating AI technologies in their application and societal contexts, with a risk-based approach, is an important step towards adding another layer of protection for data-centred rights.
Conclusion
A conceptual breakdown of digital rights offers a clearer view of the problem areas, and we see that each sphere of digital rights comes with its own historical context and stakeholders, as well as existing research and advocacy issues. The breadth of what is covered can then be mapped out, to address gaps and form bridges among state and non-state actors.
While all spheres of digital rights are important and have real-life implications, the area of data-centred rights is the least understood and the fastest growing. In keeping with the global AI regulatory landscape, as Malaysia moves ahead with its National AI Roadmap, Digital Economy Blueprint, and National Fourth Industrial Revolution (4IR) Policy, it will also have to ensure that data and AI regulatory structures are in place to protect its citizens, so that the risks of these technologies do not outweigh their benefits.