Safiya U. Noble, Ph.D.

This month, the United Nations will issue an investigative report on internet access as part of its ongoing commitment to arguing for free expression, particularly in digital spaces, as a human right. I’m developing enhanced frameworks for thinking about the role of digital technology in human rights, and about how digital media platforms are encroaching upon, and shifting the nature of, human relationships. Indeed, I am arguing that deep machine learning and “artificial intelligence” will become a major human rights issue in the 21st century, and not in the ways we may be inclined to think.

Typically, redlining has referred to practices in real estate and banking circles that create and deepen inequalities by race, where, for example, people of color are more likely to pay higher interest rates or premiums simply because they are Black or Latino, especially if they live in low-income neighborhoods. On the internet, and in our everyday uses of technology, discrimination is also embedded in computer code and, increasingly, in the artificial intelligence technologies that we are reliant upon, by choice or not. This book is about the power of algorithms in the age of neoliberalism, the ways those algorithms reinforce oppressive social relationships, and how they enact new modes of racial profiling that I’ve termed “technological redlining.” By making visible the ways that capital, race, and gender are factors in creating unequal conditions, I am bringing light to forms of technological redlining, which are on the rise. The near-ubiquitous use of algorithmically driven software, both visible and invisible to everyday people, demands a closer inspection of what values are prioritized in such decision-making systems.

— Safiya U. Noble, Algorithms of Oppression

The greater challenge before us will not be access to the internet, but freedom from machine-based decision making and control over our lives. The role of algorithms in shaping discourse about people and communities, or in everyday decisions like access to credit, mortgages, or school lottery systems, is only the beginning. What we need are more thoughtful and rigorous approaches to thinking about the trade-offs at play when we posit deep machine learning as inherently “more objective” or fair than human decision making — as if machines are not embedded with all types of values and features that are the constructs of human decisions. Artificial intelligence is a social construction, and we have only begun to see the consequences of its mediation — from racially biased sentencing in U.S. courtrooms, to facial recognition and profiling in law enforcement technologies, to machine-based discrimination in AI experiments.

The right to be protected from artificial intelligence and deep machine learning will become a human rights issue in the 21st century. We need to engage more critically with how these technologies will only further discrimination and oppression around the world.