Facial recognition technology is already experiencing a backlash. The IRS recently announced that it will allow taxpayers to opt out of facial recognition for identity verification, after activists and lawmakers pushed back against the biometric use of algorithmically tagged photos, such as social media “selfies,” as an invasion of privacy. To be sure, the benefits of identity verification in criminal suspect searches, missing children cases and border control are tremendous, but those societal net pluses must be balanced against very real negatives: the aforementioned abuses of privacy, as well as the entrenchment of structural racism and marginalization, to which Big Tech is unfortunately prone. Done irresponsibly by the largest technology companies, facial detection and recognition algorithms present an enormous threat to human rights and civil liberties.
The most tragic consequence of having so few African Americans working at Big Tech companies over the years is the algorithmic bias that ensues. We know, for example, that the algorithms behind facial recognition technology underperform on African Americans because the overwhelmingly white programmers train them on data sets that reflect their own blind spots. Data sets that are almost exclusively white and male produce intransigent gender and racial biases in technology, particularly in facial recognition, that have major real-life consequences. Algorithms increasingly influence our everyday lives, which is why we should all be concerned when those algorithms go sideways and reproduce social inequality. Yet the impulse to embrace such awe-inspiring new technology is strong in big-city mayors like Eric Adams of New York City and LaToya Cantrell of New Orleans. “Machine bias,” the bias embedded in software used around the country in criminal justice, education and health care, has significant consequences in the lives of Black communities in America.
Vast, we now know, were the consequences of the paucity of African Americans working at Big Tech. As the influence of Big Tech grew astronomically over the last decade and more, questions, often unanswered, accumulated about the lack of diversity among the people writing the algorithms that now rule our lives. What are the effects of algorithms on communities of color when those algorithms are written almost entirely by a non-diverse workforce? And, as artificial intelligence evolves, will it exacerbate societal inequalities, particularly against African Americans?
Then 2014 happened. The predominantly white, male workforces and upper management at companies like Apple, Alphabet, Amazon, Microsoft and Facebook took a look at themselves and decided upon a certain amount of transparency. Many Big Tech companies have still not had that initial reckoning, but in that year, in that moment, some of the biggest took a rare inward look. In 2014, which the Wall Street Journal called “the year Silicon Valley spilled its diversity data,” they disclosed their diversity reports for the very first time, and every year since, like clockwork, we are greeted by rounds of we-can-do-betters.
However, there has been notable progress. Over the eight years since Silicon Valley began taking the issue seriously, diversity, especially with respect to gender, has improved. Women now make up over half of Netflix’s global workforce; back in 2014, by contrast, a staggering 83% of Google’s international workforce was male. That same year, Google’s workforce was 61% white, 30% Asian, 3% Hispanic and only 2% Black. As of 2021, Google’s workforce is 33% women, up from 31% in 2020: a real change, but one that still leaves room for improvement. And women in 2022 make up 44.7% of Twitter’s global workforce, up from 42.7% in January 2021. Further, Twitter raised the proportion of Black employees in the US to 9.4%, up from 6.9% the year before. These improvements trace back largely to 2014, when the industry finally acknowledged the problem.
The numbers remind us how much further we still need to go. Between 2014 and 2019, at Google and Microsoft, the share of US technical employees who are Black or Latinx rose by less than a percentage point. Given where the numbers stood when the 2014 conversation started, racial and gender bias in AI services is not entirely surprising. One flows from the other: the lack of Black voices in the rooms where those algorithms are developed is the reason errors amplify down the chain. Algorithmic discrimination is real, and it is unpopular. A recent JustCapital poll found that 73% of Americans support a range of corporate diversity, equity and inclusion policies and actions. Even at the most practical, amoral level, facial recognition errors are just plain bad for business.
What is to be done? Legislation could be substantive. The Council of the District of Columbia is presently considering legislation that would impose obligations on entities that use algorithms, and another solution may soon be at hand. Senators Ron Wyden and Cory Booker, along with Congresswoman Yvette Clarke, introduced the Algorithmic Accountability Act of 2022 earlier this month. “The Algorithmic Accountability Act of 2022 requires companies to complete algorithmic impact assessments that provide key details around a given algorithmic system, directs the FTC to create regulations, and hire additional staff to enforce them,” is how the Electronic Privacy Information Center (EPIC) describes the bill. The 2022 Act updates the Algorithmic Accountability Act of 2019 from the 116th Congress, which was more about transparency than actual accountability. An interesting addition: the updated Act would require impact assessments when companies use automated systems to make critical decisions, and those assessments would have to disclose key details of the algorithms, which Big Tech has guarded like state secrets for years.
But what can be done right now about algorithmic bias? Transparency is key. This year, France vowed transparency in government algorithms, an exemplary move for the public sector. There is also a need to advocate for increased media coverage of diversity reports, to amplify transparency from the private sector, especially Big Tech. The Data for Black Lives blog is a wonderful resource for monitoring data used against communities of color. Another port in the storm of bias in technology is the #Creators4BIPOC movement, a wonderful resource for Black social media influencers. “‘Working twice as hard to be half as good’ is a trite phrase, but it’s true,” Brian Grey of Urbanbohemian told Logitech for Creators. “Our talent and ability are often overlooked, except for the one cultural history month a year where people remember we exist.”
The New York Times Presents docu-series episode “Who Gets to Be an Influencer?”, produced by Lora Moftah and reported by Taylor Lorenz, is very insightful. By following the cast of Black influencers at Collab Crib, the show dramatizes the obstacles and roadblocks that algorithms erect in front of young people of color on social media platforms. Another great documentary is Shalini Kantayya’s 90-minute “TikTok, Boom,” a follow-up to her pathbreaking 2020 Sundance doc “Coded Bias.” In “Coded Bias,” which lit the festival circuit on fire two years ago (and is now streaming on Netflix), Kantayya told the story of Joy Buolamwini, the computer scientist and M.I.T. Media Lab researcher who discovered big flaws in facial recognition technology. Buolamwini found that artificial intelligence had a problem with gender and race bias. “I experienced this firsthand, when I was a graduate student at MIT in 2015 and discovered that some facial analysis software couldn’t detect my dark-skinned face until I put on a white mask,” Buolamwini writes in Time. “These systems are often trained on images of predominantly light-skinned men.” She calls this systemic bias in artificial intelligence, which can lead to discriminatory or exclusionary practices, the Coded Gaze.
Joy Buolamwini is a legend in the struggle against algorithmic bias. Her research with the Gender Shades project uncovered startling gender and racial bias in AI systems sold by tech giants like IBM and Amazon. In her testing, she found that Big Tech facial recognition systems had impressive accuracy on the faces of lighter-skinned men (the “pale male data set”), but error rates increased as much as 35-fold on darker-skinned women. Serena Williams, Michelle Obama and Oprah Winfrey were all misidentified by facial recognition AI. Is it any wonder that so many tech activists, for so many years, have been asking about the consequences of so few African Americans working at Big Tech?
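The core measurement behind an audit like Gender Shades is simple to state: compute a classifier’s error rate separately for each demographic subgroup and compare. A minimal sketch of that disaggregated calculation, using invented, illustrative data rather than Buolamwini’s actual figures:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns {group: error_rate}, the fraction of records in each group
    where the system's prediction did not match the ground truth.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data, loosely echoing the disparity described above.
sample = [
    ("lighter-skinned men", "A", "A"), ("lighter-skinned men", "B", "B"),
    ("lighter-skinned men", "C", "C"), ("lighter-skinned men", "D", "D"),
    ("darker-skinned women", "A", "B"), ("darker-skinned women", "B", "B"),
    ("darker-skinned women", "C", "A"), ("darker-skinned women", "D", "D"),
]
rates = error_rates_by_group(sample)
print(rates)  # {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
```

The point of disaggregating is that an overall accuracy number hides exactly the disparity Buolamwini documented: a system can look excellent on average while failing badly on one subgroup.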
Ron Mwangaguhunga is a Brooklyn-based writer on media, culture and politics. His work has appeared in the Huffington Post, IFC, the Tribeca Film Festival, Kenneth Cole AWEARNESS, New York Magazine, Paper Magazine, CBSNews.com and National Review Online, to name a few. He is currently the editor of the Corsair.