Meta, the company founded on the principle of building connectivity and giving people a voice worldwide, did just the opposite last year, enforcing speech policies that violated Palestinians’ freedom of expression and freedom of assembly. That assessment, which holds that Meta’s policies negatively impacted Palestinians’ basic human rights, didn’t come from a Big Tech critic or an angry ex-employee. Rather, it came from a human rights assessment Meta itself commissioned.
The report, conducted by Business for Social Responsibility (BSR), investigates the impact of Meta’s actions and policy decisions during a brief but brutal Israeli military escalation in the Gaza Strip that reportedly left at least 260 people dead and reduced more than 2,400 housing units to rubble. BSR’s report determined Meta managed to simultaneously over-enforce its rules through erroneous content removals and under-enforce them by leaving truly harmful, violating content online.
“Meta’s actions in May 2021 appear to have had an adverse human rights impact…on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” the report reads. “This was reflected in conversations with affected stakeholders, many of whom shared with BSR their view that Meta appears to be another powerful entity repressing their voice that they are helpless to change.”
The BSR report says Meta over-enforced content removal at a higher per-user rate for Arabic-speaking users. That disparity potentially contributed to the silencing of Palestinian voices. At the same time, the report claims Meta’s “proactive detection” rates for potentially violating Arabic content were much higher than those for Hebrew content. While Meta has built a “hostile speech classifier” for the Arabic language, no equivalent exists for Hebrew. That lack of a Hebrew hostile speech classifier, the report argues, may have contributed to the under-enforcement of potentially harmful Hebrew content.
Facebook and Instagram reportedly saw a surge in potentially violating cases up for review at the onset of the conflict. By BSR’s measures, the platforms saw case volumes increase tenfold on peak days. Meta simply didn’t have enough Arabic- or Hebrew-speaking staff to deal with that outpouring of cases, according to the report.
Meta’s over-enforcement of certain speech metastasized over time. Impacted users would reportedly receive “strikes” that would reduce their visibility on the platforms. That means a user wrongly flagged for expressing themselves would then potentially have an even harder time being heard in future posts. That snowballing effect is troubling in any setting but especially dangerous during times of war.
“The human rights impacts of these errors were more severe given a context where rights such as freedom of expression, freedom of association, and safety were of heightened significance, especially for activists and journalists, and given the prominence of more severe DOI policy violations,” the report reads.
Despite those significant shortcomings, the report still gave Meta some credit for making a handful of “appropriate actions” during the crisis. BSR applauded Meta’s decision to establish a special operations center/crisis response team, prioritize risks of imminent offline harm, and for making efforts to overturn enforcement errors following user appeals.
Overall though, BSR’s report is a damning assessment of Meta’s consequential shortcomings during the crisis. That’s not exactly how Meta framed it in its response, though. In a blog post, Miranda Sissons, Meta’s Director of Human Rights, acknowledged the report but expertly danced around its single most significant takeaway—that Meta’s actions harmed Palestinians’ human rights. Instead, Sissons said the report “surfaced industry-wide, long-standing challenges around content moderation in conflict areas.”
The BSR report laid out 21 specific policy recommendations intended to address the company’s adverse human rights impact. Meta says it will commit to just 10 of those while partially implementing four more.
“There are no quick, overnight fixes to many of these recommendations, as BSR makes clear,” Sissons said. “While we have made significant changes as a result of this exercise already, this process will take time—including time to understand how some of these recommendations can best be addressed, and whether they are technically feasible.”
Though Meta’s moving forward with some of those policy prescriptions, it wants to make damn sure you know it isn’t the bad guy here. In a footnote in its response document, Meta says its “publication of this response should not be construed as an admission, agreement with, acceptance of any of the findings, conclusions, opinions or viewpoints identified by BSR.”
Meta did not immediately respond to Gizmodo’s request for comment.
Meta’s no stranger to human rights issues. Activist groups and human rights organizations, including Amnesty International, have accused the company of facilitating human rights abuses for years. Most memorably, in 2018, the United Nations’ top human rights commissioner said the company’s response to evidence it was fueling state genocide against the Rohingya Muslim minority in Myanmar had been “slow and ineffective.”
Since then, Meta has commissioned multiple human rights impact assessments in Myanmar, Indonesia, Sri Lanka, Cambodia, and India, ostensibly to address some of its critics’ concerns. Meta claims its assessments provide a “detailed, direct form of human rights due diligence,” allowing it and other companies “to identify potential human rights risks and impacts” and “promote human rights” while seeking to “prevent and mitigate risks.”
Digital rights experts who spoke to Gizmodo in the past said these assessments were better than nothing but still fell short of holding the company meaningfully accountable. Meta still hasn’t released a highly sought-after human rights assessment of its platform’s effects in India, leading critics to accuse the company of burying it. Meta commissioned that report in 2019.
In July, Meta released a dense, 83-page Human Rights Report summarizing the totality of its efforts so far. Unsurprisingly, Meta gave itself a high grade. Privacy and human rights experts who spoke with Gizmodo emphatically criticized the report, with one equating it to “corporate propaganda.”
“Let’s be perfectly clear: This is just a lengthy PR product with the words ‘Human Rights Report’ printed on the side,” Accountable Tech co-founder Jesse Lehrich told Gizmodo.