An app will not save us. We will not sort out social inequality lying in bed staring at smartphones. It will not stem from simply sending emails to people in power, one person at a time.

New, neoliberal conceptions of individual freedom (especially in the realm of technology use) are privileged in direct opposition to protections realized through large-scale organizing to ensure collective rights. This is evident in the past 30 years of active antilabor policies put forward by several administrations and in increasing hostility toward unions and twenty-first-century civil rights organizations such as Black Lives Matter. These proindividual, anticommunity ideologies have been central to the antidemocratic, anti-affirmative-action, antiwelfare, antichoice, and antirace discourses that place culpability for individual failure on the moral failings of the individual, not on policy decisions and social systems. Discussions of institutional discrimination and the systemic marginalization of whole classes and sectors of society have been shunted from public discourse and from remediation, and have given rise to viable presidential candidates such as Donald Trump, a man with a history of misogynistic violence and anti-immigrant schemes. Despite resistance to this kind of vitriol in the national electoral body politic, society is also moving toward greater acceptance of technological processes that seem benign and decontextualized, as if these projects were wholly apolitical and without consequence. Collective efforts to regulate or provide social safety nets through public or governmental intervention are rejected. In this conception of society, individuals make choices of their own accord in the free market, which is normalized as the only legitimate source of social change.

Excerpted from Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Umoja Noble (New York University Press).

It is in this broader social and political environment that the Federal Communications Commission and the Federal Trade Commission have been reluctant to regulate the internet, with the exception of the Children’s Internet Protection Act and the Child Safe Viewing Act of 2007. Racist, sexist, and homophobic harms, though arguably matters of decency, have largely gone unaddressed by the FCC, which places the onus of proving harm on the individual. I am trying to make the case, through the mounting evidence, that unregulated digital platforms cause serious harm: trolling is directly linked to harassment offline, to bullying and suicide, to threats and attacks. The entire experiment of the internet is now with us, yet at the level of public policy we lack sufficient scrutiny of its psychological and social impact on the public.

The reliability of public information online must be understood in the context of the real, lived experiences of Americans who are increasingly caught up in the shifts of the information age. An enduring feature of the American experience is gross systemic poverty: the largest percentages of people living below the poverty line and suffering from un- and underemployment are women and children of color. The economic crisis continues to disproportionately impact poor people of color, especially Black / African American women, men, and children.

About the author

Safiya Umoja Noble is an Assistant Professor at the University of Southern California (USC) Annenberg School of Communication. Noble is the co-editor of two books, The Intersectional Internet: Race, Sex, Class, and Culture Online and Emotions, Technology, and Design.

Furthermore, the gap between Black and White wealth has become so acute that a recent report by Brandeis University found that this gap quadrupled between 1984 and 2007, making Whites five times richer than Blacks in the US. This is not the result of moral superiority; it is directly linked to the gamification of financial markets through algorithmic decision making. It is linked to the exclusion of Blacks, Latinos, and Native Americans from high-paying jobs in the technology sector. It is a result of digital redlining and the resegregation of the housing and educational markets, fueled by seemingly innocuous big-data applications that allow the public to set tight parameters on their searches for housing and schools. Never before has it been so easy to set a school rating in a digital real estate application so as to preclude the possibility of attending “low-rated” schools, using data that reflect the long history of separate-but-equal, underfunded schools in neighborhoods where African Americans and low-income people live.

These data-intensive applications that work across vast data sets do not show the microlevel interventions being made to racially and economically integrate schools and foster educational equity. They simply make it easy to take for granted data about “good schools” that almost exclusively map to affluent, White neighborhoods. We need to pay closer attention to how these types of artificial intelligence, under the auspices of the individual freedom to make choices, forestall our ability to see what kinds of choices we are making and the collective impact of those choices in reversing decades of struggle for social, political, and economic equality. Digital technologies are implicated in these struggles.

These dramatic shifts are occurring in an era of US economic policy that has accelerated globalization, moved real jobs offshore, and decimated labor interests. Claims that society is moving toward greater social equality are undermined by data that show a substantive decrease in access to home ownership, education, and jobs, especially for Black Americans. In the midst of this changing social and legal environment, newly invented terms and ideologies of “colorblindness” disingenuously project a more humane and nonracist worldview. This is exacerbated by celebrations of multiculturalism and diversity that obscure structural and social oppression in fields such as education and information sciences, which are shaping technological practices. Research by Sharon Tettegah, a professor of education at the University of Nevada, Las Vegas, shows that people invested in colorblindness are also less empathetic toward others. Making race the problem of those who are racially objectified, particularly when they seek remedy from discriminatory practices, obscures the role of government and the public in solving systemic issues.

Central to these “colorblind” ideologies is a focus on the inappropriateness of “seeing race.” In sociological terms, colorblindness precludes the use of racial information and does not allow any classifications or distinctions. Yet, despite the claims of colorblindness, research shows that those who report higher racial colorblind attitudes are more likely to be White and more likely to condone, or not be bothered by, derogatory racial images viewed on online social networking sites. Silicon Valley executives, as previously noted, revel in their embrace of colorblindness as if it were an asset and not a proven liability. In the midst of reenergized efforts to connect every American and to stimulate the new economic markets and innovations that the internet and global communications infrastructures will afford, the real lives of those on the margins are being reengineered with new terms and ideologies that make a discussion of such conditions difficult, if not impossible, and that place the onus of discriminatory actions on the individual rather than situating the problems affecting racialized groups in social structures.

Formulations of postracialism presume that racial disparities no longer exist, a context within which the colorblind ideology finds momentum. George Lipsitz, a critical Whiteness scholar and professor at the University of California, Santa Barbara, suggests that the challenge of recognizing racial disparities and the social (and technical) structures that instantiate them reflects the possessive investment in Whiteness: the way White hegemonic ideas about race and privilege impair the ability to see real social problems. I often challenge audiences who come to my talks to consider that at the very historical moment when structural barriers to employment were being addressed legislatively in the 1960s, our reliance on modern technologies rose, accompanied by claims that computers could make better decisions than humans. I do not think it a coincidence that just as women and people of color were finally given the opportunity to participate in limited spheres of decision making in society, computers were simultaneously celebrated as a more optimal choice for making social decisions. The rise of big-data optimism is here, and if ever there were a time when politicians, industry leaders, and academics were enamored with artificial intelligence as a superior approach to sense-making, it is now. This should be a wake-up call for people living in the margins, and people aligned with them, to engage in thinking through the interventions we need.
