Views
Mar 6, 2023
6 minutes read

Addressing Data-centred Rights: Protecting Our Digital Doubles

Author
Dr Jun-E Tan
Senior Research Associate
Key Takeaway
This article offers a conceptual framework for digital rights and zooms in on data-centred rights as a gap to be addressed within digital rights work. Data-centred rights are important because the increasing datafication of our lives creates “digital doubles” of ourselves, which can be used to make high-level decisions with real-life implications. At the same time, data-driven artificial intelligence (AI) technologies are increasingly used to mine insights and automate decision-making in different fields. Two ways of protecting data-centred rights are safeguarding the data itself and governing the AI applications that make use of it. In keeping with global trends moving away from industry self-regulation and narrowly defined AI ethics, Malaysia should ensure that data and AI regulatory structures are in place to protect its citizens so that the risks of these technologies do not outweigh their benefits.

The rapid advancement of digital technologies presents a multitude of novel and complex challenges to the protection of human rights, especially violations that cut across the digital and the physical. In this article, I offer a conceptual framework for digital rights in general and, from there, zoom into data-centred rights, one of the less-discussed aspects of digital rights, which deserves more attention given its societal implications for years to come.

Conceptual framework for digital rights

One of the problems of advocating for digital rights is the lack of consensus on what the concept means. In 2019, I conducted a study interviewing 24 digital rights advocates and activists from Malaysia, the Philippines, and Thailand, and built a conceptual framework encapsulating four main spheres of digital rights.

The four spheres of digital rights are as follows: 1) conventional human rights translated to digital spaces, 2) data-centred rights, 3) rights to access digital spaces and services, and 4) rights to participate in digital governance.

The study found that within Southeast Asia, the digital rights movement focused mostly on the dimension of human rights translated to digital spaces, notably on issues such as freedom of expression and online gender-based violence. Access to the digital was not a key focus, possibly because governments in the region had prioritised digitalisation and connectivity. Participation in digital governance occurred mainly at national or subnational levels, focused on influencing government ICT policy, with forums for international standard-setting mostly out of reach.

Data-centred Rights

Data-centred rights, also a peripheral issue at that point, were just starting to appear as part of the discourse, with a few civil society organisations working to raise awareness of why data privacy and security mattered. It is not an easy point to articulate: that a set of data points constitutes a “digital double” of oneself (or one’s car, or one’s city, etc.), which can be used to make high-level decisions that act for or against the individual’s rights or living conditions.

From one angle, the relentless datafication of ourselves and our lives, from online activity and biometric data of various parts of our bodies to our medical and financial data and even how we move and breathe with the advent of the metaverse, has made our digital doubles ever more comprehensive representations of us. From another, data-driven artificial intelligence (AI) systems such as machine learning software are increasingly used to mine insights and automate decision-making in different fields.

As we stand in 2023, this area of digital rights remains quite underdeveloped in the region, even as risks and harms grow in importance with AI technologies permeating our everyday lives. Our online interactions are mediated by recommendation systems offering personalised content (such as social media feeds and online shopping); offline, our movements are increasingly digitised, tracked, and monitored by corporations and government bodies.

For instance, in Malaysia, we already see some of these technologies in use in public services: the Penang state government uses facial recognition technology in CCTV surveillance to combat crime, while the court systems in Sabah and Sarawak are piloting predictive statistical analysis, with the intention of moving towards machine learning, to assist sentencing decisions for drug and rape offences.

While there has been little reporting on the efficacy and safety of these systems in the Malaysian context, similar implementations elsewhere have raised concerns: facial recognition systems and recidivism risk software used in the US have been shown to amplify racial biases and further marginalise the marginalised. Another example of decision-making based on data and digital bodies with an outsized impact on society is China’s social credit system, which rates the behaviour of citizens, companies, and even government agencies, and offers rewards or punishments accordingly.

Governance of Data and AI Applications

There are at least two ways to view this problem of protecting data-centred rights. The first is to protect the data itself, which includes comprehensive data governance policies and procedures to protect data privacy and security, typically managed at an organisational level. The scope of data covered should not be limited to personal data representing people; it should also include non-personal data and metadata from which inferences can be made.

The second is to govern the applications that make use of this data. In 2021, the European Union released a draft of its proposed Artificial Intelligence Act, which may set a worldwide standard for AI regulation (as the General Data Protection Regulation, or GDPR, did for data protection). Notably, the Act classifies and regulates AI applications according to the level of risk they pose to EU citizens.

Applications that pose “unacceptable risk”, such as subliminal manipulation, exploitation of vulnerable groups, social credit scoring by public authorities, and real-time biometric identification systems in public spaces, are proposed to be prohibited outright. “High risk” applications (e.g. systems for recruitment, assessing consumer creditworthiness, or safety-critical functions) will be subject to more regulatory oversight than “low or minimal risk” applications (e.g. AI chatbots, spam filters, and most other AI systems).

As with most initiatives of this kind, the devil is in the details, and many have offered in-depth analyses and critiques of what is proposed. In general, it is a welcome move away from industry self-regulation, as well as from narrowly defined and poorly applied AI ethics. Situating AI technologies in their application and societal contexts, with a risk-based approach, is an important step towards adding another layer of protection to data-centred rights.

Conclusion

A conceptual breakdown of digital rights offers a clearer view of the problem areas: each sphere of digital rights comes with its own historical context and stakeholders, as well as existing research and advocacy issues. The breadth of what is covered can then be mapped out to address gaps and form bridges among state and non-state actors.

While all spheres of digital rights are important and have real-life implications, data-centred rights are the least understood and the fastest growing. In keeping with the global AI regulatory landscape, as Malaysia moves ahead with its National AI Roadmap, Digital Economy Blueprint, and National Fourth Industrial Revolution (4IR) Policy, it will also have to ensure that data and AI regulatory structures are in place to protect its citizens, so that the risks of these technologies do not outweigh their benefits.


Article Highlight

Protecting digital rights means safeguarding our data and regulating AI, ensuring technology serves society without compromising individual freedoms.

References
["European Commission. 2021. “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.” April 21, 2021. https://eurlex.europa.eu/legal-content/EN/TXT/HTML/?uri=c=CELEX:52021PC0206&from=c=EN.","Future of Life Institute. n.d. “Analyses.” The Artificial Intelligence Act. Accessed February 15, 2023. https://artificialintelligenceact.eu/analyses/.","Khazanah Research Institute. 2021. “#NetworkedNation: Navigating Challenges, Realising Opportunities of Digital Transformation.” Kuala Lumpur: Khazanah Research Institute.","Larson, Jeff, Surya Mattu, Lauren Kirchner, and Julia Angwin. 2016. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica. May 23, 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivismalgorithm.","Lee, Amanda. 2020. “What Is China’s Social Credit System and Why Is It Controversial?” South China Morning Post. August 9, 2020. https://www.scmp.com/economy/chinaeconomy/article/3096090/what-chinas-social-credit-system-and-why-it-controversial.","Mok, Opalyn. 2019. “Penang Launches Country’s First Facial Recognition CCTV Surveillance.” Malay Mail, January 2, 2019, sec. Malaysia. https://www.malaymail.com/news/malaysia/2019/01/02/penang-launches-countrysfirst-facial-recognition-cctv-surveillance/1708422.","Najibi, Alex. 2020. “Racial Discrimination in Face Recognition Technology.” Science in the News, Harvard University Graduate School of Arts and Sciences. Racial Discrimination in Face Recognition Technology (blog). October 24, 2020. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognitiontechnology/.","Tan, Jun-E. 2019. “Digital Rights in Southeast Asia: Conceptual Framework and Movement Building.” In Exploring the Nexus between Technologies and Human Rights: Opportunities and Challenges for Southeast Asia, edited by Ying Hooi Khoo and Deasy Simanjundtak. Bangkok: SHAPE-SEA. . 2020a. “A Critical View of AI Ethics: Looking at the Substance of Ethical Guidelines.” EngageMedia (blog). August 10, 2020. https://engagemedia.org/2020/artificialintelligence-ethical-guidelines/.","Tan, Jun-E. 2020b. “Problems in Putting Principles into Practice: A Critical View of AI Ethics.” EngageMedia (blog). August 17, 2020. https://engagemedia.org/2020/artificialintelligence-ethics-in-practice/."]
Photography Credit
Cover photo by Andy Kelly on Unsplash.
