MY DATA RIGHTS: Feminist Reading of the Right to Privacy and Data Protection in the age of AI
The existing injustices that marginalised groups and sexual minorities face shape how they experience technology, policy, and regulation – either advancing social justice or deepening current inequalities and harms. In the era of AI-based innovations such as automated decision-making and algorithms, this context is important for understanding how we can safeguard against harms and injustices while using these technologies to dismantle barriers of oppression. The development, implementation, and governance of AI-based innovations, especially around the data fed into them, raise important questions about their impact on society and digital rights.
The key policy questions asked were:
1) What would a gender-responsive data protection and privacy law entail to safeguard against gendered AI harms?
2) How can civil society play a role in ensuring a gender transformative law and practice with a focus on the right to privacy and data protection?
The major highlights from the research are:
AI discourse in South Africa
Policy conversations on governing AI have begun in the country, with a focus on securing economic gains by creating employment opportunities through upskilling citizens, being innovative enough to attract global business, and ensuring legal and regulatory compliance for global trade. However, concerns have been raised that the discourse around AI risks perpetuating digital disparities and inequality.
Uneasy Access – Gendered concerns of AI
Issues emerging from a gender perspective include the lack of agency and control over data, the problematic nature of consent in contexts of unequal power dynamics, loss of privacy, and discrimination and bias at the intersection of race, class, and gender.
Perceived concerns of AI harms
The research finds that there is a general lack of documentation of harms in the South African context. However, through interviews and open-ended survey questions, recurring contextual issues emerged: race, economic status, homophobia, and gender-based violence.
Responsiveness of current Protection of Personal Information Act to AI and gender issues
A gendered reading of the Protection of Personal Information Act (2013) found that the current law needs to be contextually responsive and to address its gender-exclusionary language and its lack of nuance about the gender and sexuality harms embedded in it. There are examples of civil society making use of the law to challenge surveillance issues; however, a gendered nuance is missing.
To be gender responsive means to design and implement policies that consider the gendered realities of the society we live in and ensure that injustices are not replicated as we race towards digital development.
The recommendations stemming from this paper cover four important areas:
1) Policy and regulation:
Policy and regulation require context-based implementation and assessment of current and future privacy and data protection laws with regard to Artificial Intelligence innovations. This should be complemented by collaboration between civil society, the legal community, and other relevant stakeholders.
2) Research and documentation:
Research and documentation are necessary to fill the knowledge gap in understanding the context-based impact of AI-based innovations. Case studies documenting the impact of AI on marginalised groups would highlight the responsive means necessary to safeguard against harms and injustices. Research may also support the development of governance models and of AI registers that document the proliferation of AI, where it is used, and the impact it has. Resourcing these initiatives so that they remain public and open is important.
3) Public awareness:
Public awareness requires collaborative, relatable, and innovative campaigns to raise understanding of the opportunities and challenges of AI with regard to privacy and data protection. These efforts would focus on campaigns for diverse marginalised groups, the creation of collaborative spaces that are safe for gender and sexual minorities to learn and raise their concerns, and resourcing from different stakeholders to provide the support needed for public participation.
4) Responsibility of technical community:
The technical community carries the responsibility of conducting public engagement and sharing information on how their systems work, to ensure accountability of, and trust in, AI-based innovations. Civil society can be drawn into developing ethical guidelines that are cognisant of marginalised groups' experiences of injustice. In the design and implementation of AI-based solutions, digital literacy, privacy by design, and context responsiveness should be the underlying guiding principles, so that solutions do not lead to further social injustice.
Download the executive summary and policy paper below: