The Rise of Chinese Surveillance Technology in Africa

By Bulelani Jili, EPIC Scholar-in-Residence

Many Chinese tech companies continue to export their facial recognition technologies to African markets while supporting domestic surveillance practices that include the detection of Uyghurs and other ethnic minorities. Facial recognition technology, once framed as a potential remedy for social challenges like crime, is now widely criticized for racial bias and for the risks it poses to privacy and civil liberties. These risks and more are clearly present in China’s current testing and expansion of facial recognition systems in Africa. The technology’s arrival in African countries like Zimbabwe marks China’s growing geopolitical footprint and the expansion of Chinese corporations.

What is facial recognition technology?

Facial recognition is a digitally automated process of comparing images of human faces to determine whether they represent the same individual. The process relies on an algorithm that first detects a face, then rotates, scales, and aligns the image so that every face it is compared against sits in the same position. The algorithm also aims to capture qualities like skin pigmentation and eye color. It then compares the detected face with those in a biometric dataset, issuing a numerical score that reflects the degree of similarity between the detected face and each face in the dataset. Crucially, this probabilistic approach identifies likely matches rather than certainties.
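
To make the matching step concrete, the sketch below compares a probe face against a small gallery using cosine similarity between embedding vectors. It is a minimal illustration rather than any vendor’s actual pipeline: it assumes that detection, alignment, and feature extraction have already produced fixed-length embeddings, and the names, threshold, and random stand-in data are hypothetical.

```python
# Illustrative sketch only: assumes face images have already been detected,
# aligned, and converted into fixed-length embedding vectors by an upstream
# model. Names, threshold, and data below are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score in [-1, 1]; higher means the faces look more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_matches(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Compare one probe embedding against a biometric gallery.

    Returns (identity, score) pairs above the threshold, best first --
    a probabilistic ranking of likely matches, not a certain identification.
    """
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    return sorted(
        ((name, s) for name, s in scores.items() if s >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Example with random stand-in embeddings (a real system would use a
# trained face-embedding network to produce these vectors).
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}
probe = gallery["person_3"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(rank_matches(probe, gallery))
```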

Identifying black faces

These systems do not operate perfectly; they are plagued by inaccuracies and biases, producing false matches that can undermine civil liberties and failures to identify people correctly that can lead to denial of access to services. The substantial disparities in accuracy when identifying dark-skinned people have inspired extensive research and urgent attention from commercial companies. Recent studies show that algorithms trained on biased data produce algorithmic discrimination. For instance, Buolamwini and Gebru have produced extensive work demonstrating racial and gender bias in automated facial analysis algorithms and datasets. Purportedly to improve accuracy in these areas, companies like CloudWalk, a Guangzhou-based start-up, have entered developing markets like Zimbabwe in part to improve their facial recognition systems. By gaining access to a Black population, their algorithms will supposedly become better trained at identifying darker-skinned people.

More to the point, computer vision systems that perform better at identifying dark-skinned people give Chinese companies a comparative advantage over their Western competitors.
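
The accuracy disparities documented by Buolamwini and Gebru are typically quantified by auditing error rates separately for each demographic group. The sketch below is a minimal, hypothetical illustration of that idea; the group labels, trial counts, and function name are assumptions, not figures from their studies. A large gap between groups signals bias likely introduced by unrepresentative training data.

```python
# Minimal sketch (assumed example, not results from the cited research):
# compute the false non-match rate separately for each demographic group.
from collections import defaultdict

def false_non_match_rate_by_group(trials):
    """trials: iterable of (group, genuine_pair_matched) tuples, where
    genuine_pair_matched is True if the system correctly matched two
    images of the same person."""
    counts = defaultdict(lambda: [0, 0])  # group -> [misses, total]
    for group, matched in trials:
        counts[group][1] += 1
        if not matched:
            counts[group][0] += 1
    return {g: misses / total for g, (misses, total) in counts.items()}

# Hypothetical audit data: 100 genuine comparison trials per group.
trials = [("darker-skinned", False)] * 12 + [("darker-skinned", True)] * 88 \
       + [("lighter-skinned", False)] * 2 + [("lighter-skinned", True)] * 98
print(false_non_match_rate_by_group(trials))
# {'darker-skinned': 0.12, 'lighter-skinned': 0.02}
```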

CloudWalk and the Zimbabwean state

The Zimbabwean government, working with CloudWalk, aims to establish a mass facial recognition program, an initiative supplemented by a grant that the Guangzhou municipality gave to CloudWalk. Its stated purpose is to improve administrative and security capacity, and the Zimbabwean government has insisted that these technologies will empower it to fight crime and advance its law enforcement ambitions. Yet digital rights advocates have expressed trepidation over the country’s poor human rights record and the unwarranted surveillance and collection of citizens’ biometric data. Examining how these improvements in the phenotypic and demographic accuracy of facial recognition could be used or abused requires urgent…
