Facebook Plans to Place Greater Priority on Removing Anti-Black Hate Speech Than Anti-White Posts

Photo: Alexandra Popova (Shutterstock)

After years of criticism over its hate speech policies, Facebook plans to overhaul its algorithms so they place greater emphasis on protecting vulnerable groups, including Black Americans, according to internal company documents. Attacks on white people, men and “Americans” will be deprioritized, the Washington Post reports.

The change in the algorithm could mark a big turning point for the company, which has, to the chagrin of marginalized groups, often looked at hate speech from a “race-blind,” “All Hate Speech Matters” perspective. Derogatory posts about white people and men (i.e. “men are trash”) were deemed equally as bad as more vicious posts about historically marginalized groups, such as African Americans, the LGBTQ community and Muslims.


The new approach is called the WoW Project, for the “worst of the worst” hate speech it treats as the highest priority (I don’t know who comes up with these names).


From the Post:

As one way to assess severity, Facebook assigned different types of attacks numerical scores weighted based on their perceived harm. For example, the company’s systems would now place a higher priority on automatically removing statements such as “Gay people are disgusting” than “Men are pigs.”


Facebook will still consider disparaging remarks about white people, men and Americans to be hate speech, but engineers say the company’s systems have been changed to deprioritize those remarks as “low-sensitivity,” or less likely to be harmful, the Post reports. Internal documents estimate that about 10,000 fewer posts will be deleted each day as a result of the change.

“To me this is confirmation of what we’ve been demanding for years, an enforcement regime that takes power and historical dynamics into account,” Arisha Hatch, vice president at the civil rights group Color of Change, told the Post after reviewing the documents.


Since 2019, Facebook has been using algorithms to police hate speech, rather than human moderators. But throughout its history, the world’s most popular social media platform has had problems with evaluating hate speech fairly and precisely.

There have been numerous reports of Black Facebook users being flagged for violating community guidelines, or having their accounts suspended, for discussing racist incidents that happened to them—because those posts typically involve criticism of white people.


The algorithms also had difficulty with different formats of hate speech, which Facebook spokeswoman Sally Aldous hinted at in her statement to the Post.

“We know that hate speech targeted towards underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” Aldous said. “Over the past year, we’ve also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned Holocaust denial.”


This issue has come to a head several times this year, due in part to the massive Black Lives Matter demonstrations that occurred around the country in response to the killing of George Floyd, as well as a contentious election cycle.

In June, Facebook employees walked out over the company refusing to take down President Donald Trump’s “when the looting starts, the shooting starts” post, which encouraged violence against racial justice protesters. Soon after, major corporations such as Unilever, Verizon and Coca-Cola began withdrawing ad spending from Facebook as part of an effort to get the social media giant to address its lackluster hate speech policies (an independent audit of the company found that several decisions management made were “serious setbacks for civil rights”).


But as the Post notes, Facebook was well aware of these issues long before 2020—thanks to Black Facebook employees who have been advocating for the company to improve the user experience for Black users for years. A study called Project Vibe (again, not sure who names these initiatives!) spoke directly to Black Facebook users about their experiences, with Black Facebook employees taking on work related to the project in addition to their other responsibilities.

One former Facebook employee said the project and its findings didn’t seem to go anywhere after being completed in 2018, and employee-initiated solutions related to the study reached a dead end when escalated to senior management.


“The results [from Project Vibe] reflected so negatively on Facebook that they didn’t want the study to be circulated around the company,” one former employee told the Post.

Another spokesperson for Facebook said Project Vibe was shared with the company in 2020, and that proposals that surfaced from the analysis were incorporated into another initiative, Project Blacklight (again…), which launched in 2019.