I organised and convened a webinar in collaboration with Open Rights Group Glasgow on the topic of Facial Recognition. The webinar took place on the 13th of October 2020. You can watch the video here.
Recently, activists and civil rights organisations such as Liberty, Open Rights Group and WebRoots Democracy have gathered and published evidence of the ways in which Automated Facial Recognition technologies risk infringing human rights.
Following on from their efforts, we have started to see a real pushback against the trialling and use of these technologies by law enforcement in public spaces. For example, their use by South Wales Police was ruled unlawful and a violation of human rights by the Court of Appeal of England and Wales in August 2020; Police Scotland has abandoned its plans to deploy Facial Recognition technologies over concerns that they discriminate on the basis of gender and race; and US cities such as Boston, San Francisco, and Portland have banned these technologies on the same grounds. These debates have become even more salient in the context of the recent pro-democracy demonstrations in Hong Kong and the Black Lives Matter campaign for racial justice in the US.
It is therefore more important than ever to continue and broaden the societal conversation on the implications of using Automated Facial Recognition technologies: how these technologies are deployed and regulated in different contexts, the risks they pose, and the ways in which they can reinforce existing inequalities.
Join us online for a discussion which will tackle these questions from a historical, legal, techno-social, and human rights perspective.
Benedetta Catanzariti is a PhD candidate in Science, Technology and Innovation Studies at the University of Edinburgh, researching the relationship between surveillance, AI and society. Her academic background is in philosophy, and she is particularly interested in the way technology shapes our identity and contributes to reinforcing or, alternatively, dismantling social inequalities. She is currently looking at the design of the classification techniques underpinning the development and use of automated facial and affect recognition systems.
Areeq Chowdhury is the founder and director of WebRoots Democracy, a think tank advocating for progressive and inclusive technology policy. He has worked at the Foreign and Commonwealth Office; the Department for Digital, Culture, Media and Sport; London City Hall; the UK Parliament; KPMG; and Future Advocacy. He has also provided commentary on technology policy issues for a range of media outlets including Al Jazeera, the BBC, and Sky News.
Lachlan D. Urquhart is a Lecturer in Technology Law at the University of Edinburgh. He is also a core member of the Centre for Data, Culture and Society and Director of the eLL.M in Information Technology Law. Lachlan is currently working on a major research project entitled ‘Emotional AI in Cities: Cross Cultural Lessons from UK and Japan on Designing for An Ethical Life’, which examines the socio-technical, governance and cultural dimensions of affect sensing technologies in urban life. His work sits at the intersection of computer science, information technology law, and computer ethics, and focuses on the technical, sociological, and interactional implications of living with interactive computing.