Safiya U. Noble, Ph.D.


American Scientist. 106.5 (September-October 2018): p314+.

ARTIFICIAL UNINTELLIGENCE: How Computers Misunderstand the World. Meredith Broussard. 192 pp. MIT Press, 2018. $24.95.

ALGORITHMS OF OPPRESSION: How Search Engines Reinforce Racism. Safiya Umoja Noble. 256 pp. New York University Press, 2018. $28.

Over the past few years, a sizable stack of books on the uses and abuses of big data and artificial intelligence (AI) has hit library and bookstore shelves. The topic has been making headlines, especially since news broke that social media algorithms, in combination with bad actors, may have helped tip the United Kingdom’s Brexit vote and the 2016 U.S. presidential election.

Each of these recent titles presents a different facet of the larger problem. For example, Virginia Eubanks’s Automating Inequality (excerpted in the July-August issue) shows why decision-making by machine widens the gulf between rich and poor. In Technically Wrong, Sara Wachter-Boettcher describes how tech elements as basic as drop-down boxes are designed with bias at their core. Siva Vaidhyanathan’s new book, Antisocial Media, discusses the ways social media is undermining democratic institutions. My book, Programmed Inequality, examines the postwar high-tech industry in the United Kingdom to demonstrate how sexism can wreck an economic sector. Although these titles cover quite a bit of territory, there’s much more room for discussion, particularly because the algorithms under scrutiny drive so much of contemporary life in the United States.

Two recent books, Meredith Broussard’s Artificial Unintelligence and Safiya Noble’s Algorithms of Oppression, stand out for their singular perspective and analysis. They model the kinds of questions we should be asking in our current moment of crisis, as well as once the headlines begin to fade.

Broussard, a data journalist, programmer, and professor, has written some of these headlines herself. Her book serves as a straightforward and necessary primer on the predictable ways–historically speaking–that AI and big data tend to let us down. The artificial unintelligence of the title, Broussard explains, derives from the gap between what tech boosters claim that AI and other data-driven decision-making can do and what they can actually do–as well as what they are likely to be able to do in the future. Broussard shows how the public, including the media and venture capitalists, has been taken in by the idea of general AI (think the movie version of AI: Star Trek’s computer, the Terminator, and the like), when what is really on offer is narrow AI–the very limited, math-based kind of artificial intelligence that can do certain tasks well, such as numerical analysis, but quickly loses true efficacy when deployed on the messier, more socially contingent problems that people want it to be able to solve.

As someone with a grasp of both code and rhetoric, Broussard lays out clearly and firmly just how duped we’ve been by the unique brand of technological boosterism that develops when technologists, and even whole fields of technology, have little to no social accountability. Broussard retains respect for the founders of computer science; nonetheless, she points out that Marvin Minsky, the “father of AI,” had government money all but thrown at his work, regardless of results–as did most of the technophile researchers of his generation.

Minsky reported that he never had to write a grant proposal until he was three decades into his career, which means that for most of his working life, he never had to articulate the broad impact of his projects–nor did he have to justify to an outside audience why the work should be funded or the technologies made to exist.

Broussard’s examples show how technologies in this realm were assumed to be useful simply by virtue of existing–an idea born of military aims that quickly bled out into the rest of society and the American economy during the Cold War and its aftermath. Technology, in the context of the Cold War, was a way to do an end run around democratic consensus. From nuclear weapons to the space race, technology was deployed as a proxy military strategy without an actual declaration of war. But, once developed, technologies funded by the government for military and pseudo-military aims could also hold benefits for other spheres. Increasingly, tools designed for international conflict–from nuclear power to rocketry–began to be applied to other ends.

This process helped foster the conviction that technology was the primary way to solve problems, an understanding of the world that began to infuse American civil society with profoundly uncivil and antidemocratic ideals: If a technology could fix something, then scientists, engineers, and the people paying them had a say, while the will of the public fell by the wayside.

This technochauvinism, as Broussard calls it, leaves us with the idea that technology can do no wrong–or at least no wrong that another technology cannot fix. And the ideal of technology as the solution for almost any problem means that we are conditioned to see technological developments as progress, even when they are exacerbating, rather than ameliorating, deep and longstanding societal problems.

One such example opens Noble’s book, Algorithms of Oppression. Noble, a communications professor with a background in information science and sociology, recounts how, some years ago, she used Google to search for “black girls” using various modifiers. She intended to find information that her young cousins might be interested in–maybe turn up activities and organizations for black girls. Instead, the search returned pornography. Noble unpacks this example to show how it was anything but a technological error: Google makes its money not from serving up accurate results, but from selling ads, thus giving the company a major financial stake in continuing the long tradition of objectification of black people, particularly black girls and women, by majority white-controlled media.

Noble’s explicitly intersectional framework, which highlights how black women and girls experience online spaces, provides a necessary and jarring look at how profoundly our information tools–widely claimed to be neutral and universal–have failed society, particularly groups who are already vulnerable by virtue of history. Intersectionality, a term coined by civil rights scholar Kimberlé Crenshaw, holds that accounting for race, and for how it overlaps with other, intersecting categories of oppression, is essential for understanding the harms that accrue when people engage with structures, institutions, or technologies not designed with them in mind. Noble shows how digital tools, deployed by the wealthiest and most privileged people in our society, make more billionaires while being sold publicly under the guise of advancement for all.

Yet “smarter” technologies have meant that those whose civil rights are the most vulnerable must be ever more vigilant. Facial recognition technologies and artificial intelligence systems designed to search and sort images end up hardening not only stereotypes but existing hierarchies, becoming the next tools in a long line of technologies used by those in power to maintain their position at the top. The structural discrimination that results–wherein existing categories of discrimination are strengthened by being built into new systems and technologies and the institutions that deploy them–means that we confuse what is popular with what is good, and what is offered with what is possible.

To connect our unequal future with our unequal past, Noble uses the concept of digital redlining, a term that references political practices that have sought to designate geographical areas by race, systematically concentrating wealth in white households by working to exclude black families from areas that benefited from publicly funded economic uplift. She draws parallels between past eras of geographic redlining and the present-day digital practice, demonstrating how the racism that has built and organized our online spaces goes mostly unnoticed or unremarked on by white upper- and middle-class Americans.

Ultimately, the same bloc that benefited from the original version of redlining continues to materially benefit from the racism online that keeps black women and girls on the fringes of today’s most powerful and lucrative technologies. Like the red lines drawn on census maps, the lines drawn across our online spaces–which assert who belongs where and determine who can expect to feel welcome or served by online technologies–have the ability to foreclose people’s opportunities. In the not-so-distant future they may be able to change every aspect of a person’s life.

Although many authors have turned their attention to the structural discrimination inherent in high technology, women technologists of color such as Broussard and Noble offer an especially urgent critique that must not be ignored. Both authors present and amply support the idea that constructing our digital infrastructure on the same racist and sexist categories as our previous, nondigital infrastructure does not simply mean that the existing problems are carried forward. Rather, harm is amplified and entrenched. The consequences are particularly acute for groups least likely to be in power, making technological “progress” actually regressive when it comes to the civil rights of black people, women of all races, the LGBTQ+ population, and many others who live on the margins of what practitioners of high tech see as profitable or important.

The scope and reach of these technologies mean they have ever more societal power. In showing us how, through high tech, our culture is structuring the future on the mistakes of the past, Broussard and Noble alert us to the ways in which civil rights are silently and seamlessly rolled back by centralized technologies that masquerade as consumer products. Beyond sounding an alarm, however, both authors offer solutions that aim for a better, more inclusive digital future–such as search engines that require opting in to access pornographic, commercial, or racist content, and online technologies that are underwritten by public tax dollars rather than private corporations’ advertising dollars.

In the end, their books take up the task assigned to the technologies they critique: creating a vision of progress that truly means advancement for all.

The Scientists’ Nightstand, American Scientist’s books section, offers reviews, review essays, brief excerpts, and more. For additional books coverage, please see our Science Culture blog channel, which explores how science intersects with other areas of knowledge, entertainment, and society: americanscientist.org/bbgs/science-culture.

Historian Marie Hicks is an associate professor at the Illinois Institute of Technology, a 2018-2019 fellow at the National Humanities Center in Research Triangle Park, North Carolina, and the author of Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing.