WILMINGTON, Del. – U.S. Senator Chris Coons (D-Del.), a member of the Senate Judiciary Committee, today led 14 colleagues on a letter to Facebook CEO Mark Zuckerberg calling on the company to fully address the problem of anti-Muslim bigotry on its platform, which has enabled offline violence against Muslims in the United States and elsewhere around the world.

Senator Coons is joined on the letter by Senators Richard Blumenthal (D-Conn.), Mazie Hirono (D-Hawaii), Dick Durbin (D-Ill.), Mark Warner (D-Va.), Robert Menendez (D-N.J.), Patrick Leahy (D-Vt.), Ben Cardin (D-Md.), Michael Bennet (D-Colo.), Gary Peters (D-Mich.), Amy Klobuchar (D-Minn.), Kirsten Gillibrand (D-N.Y.), Elizabeth Warren (D-Mass.), Chris Murphy (D-Conn.), and Bernie Sanders (I-Vt.).

“Facebook is a groundbreaking company that has revolutionized the way we communicate.  Unfortunately, the connectivity that can bring people together in many positive ways also has been used to dehumanize and stoke violence against Muslims, Black people, Latinos, immigrants, the Jewish community, Sikhs, Christians, women, and other communities here and across the world,” the Senators wrote.

Of particular concern is how Facebook has addressed the targeting of mosques and Muslim community events by armed protesters through the platform. In June 2019, Facebook responded to concerns about these practices by creating a “call to arms” policy that prohibits event pages that call for individuals to bring weapons to a location. However, the Senators note that Facebook has not taken adequate steps to enforce this policy, which should have barred an event page in Kenosha, Wisconsin, earlier this year, as well as a 2019 event page used to plan an armed protest at the largest Muslim community convention in the country.

“We recognize that Facebook has announced efforts to address its role in the distribution of anti-Muslim content in some of these areas,” the Senators wrote. “Nevertheless, it is not clear that the company is meaningfully better positioned to prevent further human rights abuses and violence against Muslim minorities today.”

“As members of Congress who are deeply disturbed by the proliferation of this hate speech on your platform, we urge you to do more.”

An independent civil rights audit of Facebook from July 2020 highlighted disturbing examples of anti-Muslim abuse on the platform ranging “[f]rom the organization of events designed to intimidate members of the Muslim community at gathering places, to the prevalence of content demonizing Islam and Muslims, and the use of Facebook Live during the Christchurch massacre…” These concerns have also prompted current Facebook employees to write a letter demanding action on anti-Muslim bigotry and calling for broader structural changes.

In their letter, the Senators urge Facebook to take a number of actions to address these issues, including collecting and publishing the data needed to understand the scope of the problem, publishing readily available information to help the public evaluate its response, and implementing a plan to ensure robust enforcement of its call to arms policy.

“We thank Sen. Coons and his colleagues for holding Facebook accountable for anti-Muslim hate and violence on its platform,” said Muslim Advocates Executive Director Farhana Khera. “Since 2015, Muslim Advocates has warned Facebook that the platform’s event pages were being used by violent militias and white nationalists to organize armed rallies at mosques. With their letter, these senators are raising needed attention to this critical issue. We need to know what Facebook plans to do to end the anti-Muslim hate and violence enabled by their platform—and end it now.”

Groups supporting the letter include Muslim Advocates, The Leadership Conference on Civil and Human Rights, Center for American Progress, Human Rights Watch, Human Rights Campaign, Asian Americans Advancing Justice, Bend the Arc: Jewish Action, Free Press, Global Project Against Hate and Extremism, Interfaith Alliance, Japanese American Citizens League, MediaJustice, National Hispanic Media Coalition, Shoulder to Shoulder, Sikh Coalition, and UltraViolet.

The full text of the letter is below.

November 16, 2020

Mark Zuckerberg
Chairman and Chief Executive Officer
Facebook, Inc.
1601 Willow Road
Menlo Park, CA 94025

Dear Mr. Zuckerberg:

We write to express our deep concern regarding anti-Muslim bigotry on Facebook.  An independent civil rights audit of the company from July 2020 highlighted disturbing examples of anti-Muslim abuse on the platform ranging “[f]rom the organization of events designed to intimidate members of the Muslim community at gathering places, to the prevalence of content demonizing Islam and Muslims, and the use of Facebook Live during the Christchurch massacre . . . .”  These concerns have also prompted current Facebook employees to write a letter demanding action on anti-Muslim bigotry and calling for broader structural changes.  As members of Congress who are committed to protecting the Muslim community, we urge you to take immediate action to combat this bigotry on Facebook’s platforms.

Facebook is a groundbreaking company that has revolutionized the way we communicate.  Unfortunately, the connectivity that can bring people together in many positive ways also has been used to dehumanize and stoke violence against Muslims, Black people, Latinos, immigrants, the Jewish community, Sikhs, Christians, women, and other communities here and across the world.  The enabling of hate speech and violence against any group is not acceptable.  We appreciate that Facebook has taken certain steps to combat these problems.  For instance, you recently reversed a prior decision that had allowed content denying the Holocaust, and you have altered your policies to ban blackface and certain anti-Jewish stereotypes.  But much more must be done to protect these vulnerable communities.  With regard to the Muslim community in particular, the civil rights audit noted advocates’ “alarm that Muslims feel under siege on Facebook” and explained how attacks on Muslims present unique considerations that require separate analysis and response compared to other kinds of attacks.  Yet, the auditors noted, “Facebook has not yet publicly studied or acknowledged the particular ways anti-Muslim bigotry manifests on its platform.”

Of particular concern is how Facebook has addressed the targeting of mosques and Muslim community events by armed protesters through the platform.  In June 2019, Facebook responded to concerns about these practices by creating a “call to arms” policy that prohibits event pages that call for individuals to bring weapons to locations.  Yet, in August 2019, when advocates reported to Facebook that a militia group was using an event page to plan an armed protest at the largest Muslim community convention in the country for the second year in a row, it took Facebook more than a full day to remove the content, a delay that Facebook acknowledged was too long and an “enforcement misstep.”

Other recent events have demonstrated how Facebook has not taken adequate steps to enforce this call to arms policy.  In August 2020, a group called the Kenosha Guard posted an event page titled “Armed Citizens to Protect Our Lives and Property,” calling for armed individuals to gather in Kenosha, Wisconsin, following the shooting of Jacob Blake.  Notwithstanding multiple reports by users that this page violated Facebook policies, Facebook did not take the page down.  An armed 17-year-old traveled from out of state to join this gathering, fatally shot two protesters that night, and is charged with their murder.  You stated that the failure to take down the event page and the Kenosha Guard’s group page was “largely an operational mistake” because contract content moderators without specialized training failed to detect that the pages violated a new militia policy Facebook had established in August 2020.  Your statement was misleading as to the event page, however, because it did not mention that the event page also violated the call to arms policy that had been in place for over a year.  Importantly, we understand that the contractors who review user-reported content are not instructed to enforce a core component of the call to arms policy.  It is not apparent that Facebook ensures meaningful enforcement of this policy, and that is not acceptable.  As the Change the Terms Coalition has explained, that “isn’t an operational mistake – that’s a design defect.”

We have similar concerns about Facebook’s efforts to ensure that the platform is not used to enable systematic violence and discrimination against Muslims around the world.  A United Nations report concluded that the company played a “determining” role in violence against Rohingya Muslims in Myanmar, and Facebook has similarly acknowledged that the platform was used to “foment division and incite offline violence” against the Rohingya.  Unfortunately, this is not an isolated incident.  According to a New York Times report published a month after anti-Muslim violence erupted in Sri Lanka in March 2018, “Facebook’s newsfeed played a central role in nearly every step from rumor to killing,” despite numerous attempts by Sri Lankan activists and government officials to warn Facebook about potential outbreaks of violence.  In an especially horrific episode of anti-Muslim activity on Facebook, in March 2019, a white nationalist gunman broadcast his 17-minute slaughter of 51 Muslims at two mosques in Christchurch, New Zealand, using Facebook Live, for the entire world to see.  Reports indicate that the platform has also been used to support the internment of the Uyghurs in China and other human rights violations against this population, that Facebook and WhatsApp have been used to incite violence against Muslims in India, and that Facebook has been used to promote hate and violence in other areas around the world.

The civil rights audit and other reports have documented the shortcomings of Facebook that have led to these results over the years.  The United Nations explained in 2018 that Facebook launched its Myanmar-specific services without content moderators who spoke the necessary languages, without adequate technology, and without sufficient transparency and coordination with local organizations.  It also documented how speech in clear violation of Facebook’s policies remained on the platform notwithstanding multiple reports, and how even after the speech was taken down, re-posts continued to circulate months later.  Furthermore, the civil rights audit found that Facebook is not sufficiently attuned to how its algorithms “fuel extreme and polarizing content,” and thereby may “driv[e] people toward self-reinforcing echo chambers of extremism,” as seen in Myanmar and Sri Lanka.  Advocacy groups similarly detailed the extent and persistence of anti-Muslim hate content on Facebook India in multiple reports last year, concerns that have been amplified by recent allegations that some high-ranking employees at Facebook India have enabled hate speech against Muslims and others by applying the platform’s content moderation policies in a selective manner.

We recognize that Facebook has announced efforts to address its role in the distribution of anti-Muslim content in some of these areas.  These include, for instance, adding country-specific staff and content moderators proficient in certain local languages, investing in proactive detection technologies, strengthening local fact-checking partnerships, and limiting the ability to reshare certain kinds of messages. 

Nevertheless, it is not clear that the company is meaningfully better positioned to prevent further human rights abuses and violence against Muslim minorities today.  In part, this is because Facebook still does not collect the information needed to evaluate the effectiveness of its responses.  For instance, Facebook reported that it took action on 22 million pieces of hate speech content in the second quarter of 2020, up from over 9 million in the first quarter.  It is not apparent, however, whether this is a sign of an improving or worsening problem, because this data lacks crucial context: Facebook does not calculate or report on the overall prevalence of hate speech on the platform.  It is thus unclear how significant this increase is as a proportion of total hate speech or whether takedowns are increasing only because hate content on the platform is increasing.  Facebook recognizes that the statistic it reports “only tells part of the story,” and it does estimate prevalence in other contexts.  Its failure to do so as to hate speech is concerning.

In addition, the civil rights audit pointed out that for content that Facebook does remove, the company does not collect data about which protected groups were the target of the removed post.  This prevents Facebook and the public from understanding the volume of hate against a particular group, whether attacks against certain groups are consistently not removed, and whether there are gaps in Facebook’s policies that result in perpetuating or increasing hate speech and attacks against particular groups.  It is difficult to understand how Facebook can effectively combat hate speech without this information. 

There is also basic information that Facebook has or could readily make available, but which it has inexplicably declined to make public.  For instance, while pointing to its increases in country-specific staff and language-specific content moderators in certain areas, Facebook has declined repeated requests from advocates to provide detailed information about its country-specific staff or language-specific content moderators across the world.  Such information is necessary to evaluate Facebook’s suggestion that its additions are adequate and to determine whether there are gaps in coverage in other regions that should be addressed proactively before the next violent event.  Facebook similarly does not provide information about how the hate speech it has taken down is disaggregated by language or country of origin, information that would help identify volatile areas in need of further attention from content moderators or others at Facebook.  That is so even though Facebook has conceded that “[t]hese breakdowns are feasible for these count-based metrics” and that it “recognize[s] the value in having different subpopulations of the various metrics.”  The United Nations 2018 Myanmar report expressed “regret[]” that Facebook did not provide country-specific data about hate speech and deemed it “essential” that such information be disclosed.

Though these concerns have been raised for years, Facebook thus far has not taken the steps required to effectively address hate and violence targeting Muslims.  In 2018, Facebook acknowledged that it “can and should do better” after its platform fueled violence in Myanmar and outlined steps it would take.  In 2020, Facebook “apologize[d] for” the human rights impacts that resulted from misuse of its platform in Sri Lanka and outlined more steps.  Despite these experiences, recent reporting suggests that today, Facebook is contributing to the spread of hate speech and violence against ethnic and religious groups in Ethiopia, where Facebook “dominates” the internet.  Meanwhile, it announced a call to arms policy to assuage concerns but has failed to adequately enforce it.  As members of Congress who are deeply disturbed by the proliferation of this hate speech on your platform, we urge you to do more.  We believe Facebook must frankly and openly detail the scope of the problem and take concerted and sustained actions to address this problem fully.  We respectfully request that you respond to the questions below by December 16, 2020.  As to each question, insofar as Facebook will commit to taking action, please provide details of its plan and expected timing.

1. Will Facebook commit to developing and implementing a plan to ensure robust enforcement of its call to arms policy, including through proactive review of event pages, content moderator review of user reports, and prioritization of highly reported events?  If not, why not?

2. Will Facebook commit to collecting and publishing data about the overall amount and prevalence of hate content on the platform and whether hate content is increasing on its platform?  If so, please specify whether Facebook will break down this data by country and language.  If not, why not?

3. Will Facebook commit to collecting and publishing data about which groups were the subject of the hate speech it removes and enforcement rates across groups?  If so, please specify the groups for which Facebook will provide this information.  If not, why not?

4. Will Facebook commit to collecting and publishing country-specific or language-specific data on hate speech that is on or removed from the platform?  If not, why not?

5. Will Facebook publish detailed information about the number of country-specific staff and language-specific content moderators it employs?  If not, why not?

6. Will Facebook commit to studying regularly its civil rights and human rights impacts and making future human rights impact assessments or rights audits public in their entirety?  If not, why not?

7. Will Facebook commit to establishing and publishing criteria that must be met for Facebook to expand or maintain usage of its services in markets at risk of hate content fueling religious and/or ethnic violence to ensure Facebook does not enable human rights violations?  If so, please specify the outside input that Facebook will solicit in developing these criteria.  If not, why not?

8. Will Facebook conduct an analysis of how it can better design its systems and algorithms to not just identify and take down hate speech, but limit the reach of this content and its ability to cause offline violence?  If not, why not?

9. Will Facebook commit to creating a working group led by a senior employee with expertise in anti-Muslim bigotry specifically tasked with monitoring, reviewing, and coordinating efforts to proactively remove anti-Muslim content on the platform?  If not, why not?

Thank you for your consideration of our views.  We appreciate your prompt attention to this matter.

Sincerely,

###