Racial Profiling and Technology

This project is conducted in partnership with the CJSM (Clinique juridique de St Michel) in Montreal, Quebec.

Artificial intelligence (AI) refers to "systems that demonstrate intelligent behavior by analyzing their environment and taking action, with a degree of autonomy, to achieve specific goals". Predictive policing, body cameras, facial recognition, and automatic license plate recognition are AI-based systems made available to police forces to enhance their capacity to act. As cities become more densely populated, these technologies can serve a social purpose by helping to maintain a level of security that law enforcement agencies can no longer provide on their own; they can also save time and simplify the work of police officers. But these tools also pose a threat of permanent, widespread surveillance of the population. The rights and freedoms protected by the Canadian Charter (the right to privacy and the freedoms of movement, association, opinion, and demonstration) could be in jeopardy. Moreover, numerous studies in the United States have shown that these technologies are biased against racialized populations, revealing algorithmic discrimination. In a context where systemic racism against Black and Indigenous populations persists in institutions such as the police, the convergence of these factors of inequality makes it urgent to analyze them now in order to act more effectively.

What artificial intelligence systems are Canada's police forces using to support their work? What impacts can these tools have on systemic racism? How do police services address the potential for discrimination posed by certain technologies? How can Black and Indigenous communities actively participate in understanding these risks and defend themselves more effectively? Is the legal framework robust enough to ensure that the principle of equality is respected?


This content was last updated on 1 March 2023 at 13:04.