Diversifying the Digital Economy is a Civil Rights Issue

Facebook announced that it will add 1,000 employees to its global ad review team over the next year, after it was revealed that Russian operatives purchased hundreds of thousands of dollars' worth of ads in an attempt to influence the 2016 US presidential election. The scandal has clear implications not just for campaign and election law, but also for civil rights discourse. The Russian ads aimed to stoke racial tensions and focused on polarizing issues such as LGBT rights and immigration.

The digital economy is plagued by many of the same problems of racism and sexism that exist in the traditional economy, yet because it is evolving so rapidly, it is subject to even less oversight and regulation. Social media is especially prone to discrimination because it amplifies existing social bias. To combat the implicit bias built into technology, the industry needs more women and people of color working in the digital economy.

Reforming an industry as entrenched as tech will prove difficult. The market capitalization of leading firms and the sheer number of monthly users further complicate the issue, as building a critical mass for change will take time. Facebook alone reported $10.3 billion in revenue and 2.07 billion monthly active users in its latest earnings report, for Q3 2017. That same quarter, Twitter boasted $332 million in revenue and 330 million monthly active users. Alphabet (the parent company of Google, YouTube, and other services) reported a whopping $27.77 billion in quarterly revenue. Most of this revenue comes from advertising within these platforms, and that can be problematic.

Exclusionary advertising

In 2016, investigative reporting from ProPublica revealed that companies can purchase Facebook ads that target certain races and exclude others. In a matter of minutes, ProPublica successfully published a fake housing ad on Facebook that excluded African-American users from seeing it. This is illegal: housing advertisements that exclude certain races violate the Fair Housing Act of 1968, just as employment advertisements that exclude certain races violate the Civil Rights Act of 1964. In another report, ProPublica revealed that Facebook ads can target micro-groups such as self-described “Jew haters.” While not against any law, this is certainly not ethical.

Traditional media, such as newspapers, developed processes decades ago to prevent publishing discriminatory housing and employment ads. Yet Facebook hadn’t anticipated the problem. It wasn’t until the ProPublica article, combined with pressure from members of Congress and a class action lawsuit, that Facebook created an automated system to restrict housing, employment, and credit ads that target only certain races or genders. However, ProPublica published another article in November 2017 showing that Facebook’s new ad-review system was failing to block illegal ads.
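
Facebook has not published how its automated screen works, but a minimal sketch of the kind of rule-based check described above, using hypothetical ad fields, category keywords, and audience-segment names, shows why such a system is brittle:

```python
# Minimal sketch of a rule-based ad screen. The ad format, category keywords,
# and audience-segment names are hypothetical; Facebook's real pipeline is not public.

PROTECTED_CATEGORIES = {"housing", "employment", "credit"}
DEMOGRAPHIC_SEGMENTS = {"african_american_affinity", "hispanic_affinity", "women"}
CATEGORY_KEYWORDS = {
    "housing": {"apartment", "rent", "lease", "home for sale"},
    "employment": {"hiring", "job opening", "apply now"},
    "credit": {"loan", "credit card", "financing"},
}


def infer_category(ad_text):
    """Guess a regulated category from keywords in the ad copy."""
    text = ad_text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return None  # reworded or image-only ads fall through the keyword net


def should_hold_for_review(ad_text, declared_category, excluded_segments):
    """Hold the ad if it looks like a regulated ad that excludes a demographic group."""
    category = declared_category or infer_category(ad_text)
    return category in PROTECTED_CATEGORIES and bool(excluded_segments & DEMOGRAPHIC_SEGMENTS)


# A blatant ad is caught; a reworded one slips through.
print(should_hold_for_review("Apartment for rent downtown", None, {"african_american_affinity"}))  # True
print(should_hold_for_review("Cozy 2BR available next month", None, {"african_american_affinity"}))  # False
```

The first ad is flagged only because its text announces its own category; the reworded second ad slips through, which is essentially the failure mode ProPublica documented.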

Similarly, Facebook removed the option to target certain micro-groups such as “Jew haters” from its ad purchasing page. Yet that category existed because users describe themselves that way, and Facebook’s algorithms scrape these self-described identities to create advertising micro-groups. Thus, other categories based on hate speech have the potential to emerge. Facebook has chosen to rely on automated systems to police automated systems. While an “automated system” might sound efficient, objective, and impartial, bias is built into many algorithms and artificial intelligence systems.
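
The pipeline from self-description to targetable category is easy to imagine. The sketch below is purely illustrative rather than a description of Facebook’s actual system; the profile field, the audience-size cutoff, and the blocklist are all assumptions, but it shows why deleting one offensive category does not stop the next one from surfacing.

```python
# Illustrative sketch of how targetable categories could emerge from self-described
# profile fields, and why a static blocklist is a weak backstop. The field name,
# audience cutoff, and blocklist are assumptions, not Facebook's actual system.
from collections import Counter

MIN_AUDIENCE = 3                      # a field becomes a category once enough users share it
BLOCKED_CATEGORIES = {"jew hater"}    # the label removed after public pressure

profiles = [
    {"field_of_study": "History"},
    {"field_of_study": "History"},
    {"field_of_study": "History"},
    {"field_of_study": "Jew hater"},
    {"field_of_study": "Jew hater"},
    {"field_of_study": "Jew hater"},
    {"field_of_study": "new coded hate term"},   # stand-in for a label no blocklist knows yet
    {"field_of_study": "new coded hate term"},
    {"field_of_study": "new coded hate term"},
]

# Step 1: aggregate self-described fields into candidate ad categories.
counts = Counter(profile["field_of_study"].lower() for profile in profiles)
categories = {label for label, count in counts.items() if count >= MIN_AUDIENCE}

# Step 2: the automated system policing the automated system removes known bad labels.
targetable = categories - BLOCKED_CATEGORIES

print(sorted(targetable))  # ['history', 'new coded hate term']
```

Because the blocklist only knows the labels it has already seen, a newly coined label becomes targetable the moment enough users adopt it.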

Racist algorithms?

Algorithms and artificial intelligence carry a veneer of objectivity. They are deployed by machines, but they are written by people, and people are susceptible to bias. There are myriad examples of unintended bias in computer programs, artificial intelligence, and algorithms producing disastrous outcomes, from photo-recognition software that misidentifies darker-skinned faces to ad-delivery systems that skew along lines of race and gender.
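
How bias gets built in without anyone intending it can be shown with a toy model. Everything in the sketch below is synthetic, with the groups, scores, and cutoff rule invented for illustration, but the mechanism is a common one: a model tuned to maximize overall accuracy on data dominated by one group quietly concentrates its errors on the group the data underrepresents.

```python
# Toy demonstration that a model fit on skewed data "learns" bias no one coded.
# All of the data is synthetic; the point is the mechanism, not the numbers.

# (score, truly_qualified, group): group "A" dominates the data, and the score
# under-measures qualified members of group "B" (think of a test built around A).
train = (
    [(0.8, True, "A")] * 40 + [(0.6, True, "A")] * 40 + [(0.3, False, "A")] * 40 +
    [(0.25, True, "B")] * 5 + [(0.15, False, "B")] * 5      # B is under 8% of the data
)


def fit_cutoff(data):
    """Pick the score cutoff that maximizes overall accuracy (a stand-in for any learner)."""
    candidates = sorted({score for score, _, _ in data})
    return max(candidates, key=lambda cutoff: sum((score >= cutoff) == qualified
                                                  for score, qualified, _ in data))


def error_rate(data, cutoff, group):
    """Share of one group's examples that the cutoff misclassifies."""
    rows = [(score, qualified) for score, qualified, g in data if g == group]
    return sum((score >= cutoff) != qualified for score, qualified in rows) / len(rows)


cutoff = fit_cutoff(train)
print(f"learned cutoff: {cutoff}")                                    # 0.6
print(f"error rate, group A: {error_rate(train, cutoff, 'A'):.0%}")   # 0%
print(f"error rate, group B: {error_rate(train, cutoff, 'B'):.0%}")   # 50%
```

The learned cutoff is perfect for the majority group and wrong half the time for the minority group, even though a cutoff fitted to the smaller group alone would have classified it perfectly; no one coded any prejudice, yet the skew in the data did the work.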

The public need for diversity in technology

Facebook and other digital platforms have responded to accusations of bias on a case-by-case basis after investigative reporting created public and private pressure. However, the problems of bias in the digital economy go beyond just particular cases: there is a fog of bias surrounding the entire industry.

Perhaps unsurprisingly, minorities are underrepresented throughout the tech workforce, from leadership down to code monkeys. According to Facebook’s 2016 diversity report, “only 2% of its U.S. workforce is black and only 4% is Hispanic. When it comes to technical workers, the numbers are even worse: 1% is black, 3% Hispanic.” The statistics are similar at other tech companies, from Airbnb to Salesforce. The Congressional Black Caucus (CBC) launched its Tech2020 initiative to increase African-American representation in “all levels of the tech industry.” The CBC asks a salient question: will Facebook prioritize including minorities among its 1,000 new ad-screening hires?

Tech Diversity

Increasing diversity in the tech sector can help dissipate the implicit bias that is built into, and reinforced by, technology. A more diverse workforce brings its own perspectives and experiences of bias to technology development, for example by ensuring that photo recognition software is tested on diverse skin tones. To hire more diverse staff, tech companies should work with schools to train and recruit underrepresented students starting as early as possible.
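
In practice, “tested on diverse skin tones” can mean a disaggregated evaluation: error rates are reported per group rather than only in aggregate, and a large gap blocks release. The group labels, counts, and threshold below are hypothetical; the sketch only shows the shape of such a check.

```python
# Sketch of a disaggregated evaluation gate: the check fails when any group's
# error rate strays too far from the best group's, regardless of overall accuracy.
# The group labels, counts, and threshold are hypothetical.

MAX_ERROR_GAP = 0.05   # fail the release if the worst group trails the best by > 5 points


def passes_gap_check(results):
    """results maps group name -> (errors, total); print per-group rates and apply the gate."""
    rates = {group: errors / total for group, (errors, total) in results.items()}
    for group, rate in sorted(rates.items()):
        print(f"{group}: {rate:.1%} error")
    return max(rates.values()) - min(rates.values()) <= MAX_ERROR_GAP


# Hypothetical photo-recognition test results, broken out by skin tone.
passed = passes_gap_check({
    "lighter skin tones": (12, 1000),
    "darker skin tones": (95, 1000),
})
print("release gate:", "pass" if passed else "fail")   # fail: an 8.3-point gap
```

In aggregate, these hypothetical results amount to roughly 95% accuracy, which would look acceptable on its own; broken out by group, the same numbers fail the gate.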

However, diversifying staff is a necessary, but not sufficient, condition for reducing bias in the tech sector. The burden of addressing implicit bias in tech shouldn’t fall on minority coders. In fact, a common reason that the few minority tech staffers leave the industry is sexual harassment, bullying, and racist stereotyping, which can culminate in massive public scandals such as the repeated allegations of sexism at Uber. The whole tech industry needs effective and systematic training to identify, address, and reduce implicit bias. Additionally, tech companies should have robust human resources departments that are equipped to handle instances of bias fairly.

Decades-old civil rights laws, such as those governing fair housing, equal employment opportunity, and hate speech, are being tested on the new frontier of the digital economy. To reduce sexism and racism there, it is important to increase the representation of minorities in the tech industry and to provide training that ensures civil rights are intentionally respected.

Krista O’Connell is a Master of Public Policy candidate and research assistant at Georgetown University, where she focuses on education and social policy. She previously worked with immigrant youth and on the GradNation campaign to increase the national high school graduation rate.