AI & Gender

Assessing the Adequacy of Data Protection Laws in Africa

This is a tabulation of some of the regulatory instruments on data protection in Africa, followed by a write-up of the key trends in data protection and some recommendations for more robust and harmonised data protection legislation. The table explores the framing of the provisions on automated decision-making systems, the right to consent, the right to be informed and the right to object in the African Union’s Malabo Convention, the SADC Model Law on Data Protection and South Africa’s Protection of Personal Information Act (POPIA). The write-up accompanying this table summarises the findings made on these key regulatory instruments and recommends how they can be improved to offer more protection to data subjects.

AI Framing

The Artificial Intelligence (AI) Framing spreadsheet is a thematic arrangement of how AI is being framed in South Africa, specifically, and across the African continent more generally. The recurring themes in AI framing on the continent were the need for regional data protection and for accountability in the implementation of AI initiatives; AI and the right to privacy; AI and the fourth industrial revolution; cyber security; and the intersection of AI and gender. Most of the writings reviewed on the framing of AI in South Africa, specifically, centred on the fourth industrial revolution and the need to bring into force the data protection legislation that had, at the time, been passed but had not yet come into effect.

On the continent more generally, the areas of focus were data subject rights, the right to privacy in particular, and the need for a regional data protection framework to protect those rights. The prominent voices on these themes were academics, industry experts and journalists.

AI Initiatives

The AI Initiatives table sets out the data collected to map players in the AI space in South Africa. Most of the AI initiatives and start-ups identified make use of facial recognition technologies, automated decision-making systems, machine learning and natural language processing. The data in this table was drawn from websites such as the IndabaX website, Alliance 4 AI, the Center for Artificial Intelligence Research, Zindi, and arXiv. The use of AI in South Africa is scattered across industries including finance, marketing, medicine, security, education, agriculture, and transport and logistics, with the financial sector showing the highest number of AI and tech start-ups.

Data Harms

The Data Harms table contains AI risks identified from a gendered, feminist perspective. These include discrimination, bias, violation of the right to privacy, gender-based violence, and in some cases murder. It sets out the harm occurring; the basis of the harm, which was gender, race, class or a combination of these and other social identity markers; and practical examples of the harm occurring. The disparities in evidence are clear, with most examples drawn from the global North and very few from the global South. Insights were drawn from the 2019 UN Special Rapporteur report on gender and privacy and platforms such as GenderIT, Botpopuli, Coding Rights, the APC GISWatch 2019 report, and the Algorithmic Justice League, amongst other sources. The table is an important resource for illustrating the disproportionate effects of AI on women and the need to advocate for a human rights-based approach to data protection across the world.

Reading List

The reading list is a collation of literature on AI from across the world. It offers an extensive list of all the literature engaged with and serves as a resource pack for most things AI-related. The writings reviewed mostly consisted of newspaper articles, journal articles, research papers, and policy assessments on the protection of the right to privacy around the world; AI in the global South; the fourth industrial revolution; feminist and data justice approaches to AI; automated decision-making systems and their harms and opportunities; and bias and discrimination in AI.