This is the fifth in a series of blog posts focusing on Facebook’s role in inciting genocide against the Rohingya in Myanmar as a lens through which to explore corporate accountability, human rights and social media content regulation. The Myanmar military systematically used Facebook as a tool for ethnic cleansing, employing false news and inflammatory posts to stoke fear and justify their violence against the Rohingya. For years, Facebook ignored warning signs that it was contributing to deadly hate speech in Myanmar. Yesterday, the New York Times revealed that as criticism of Facebook mounted last year, it concealed the extent of its fake news problem and employed a Republican opposition-research firm to discredit its critics and shift public anger toward rival companies, all while publicly claiming a commitment to improving its practices.
My first few posts in this series explained that U.S. law protects web publishers from legal claims that arise out of content posted by third parties and examined the obstacles to holding corporations accountable in the U.S. court system for human rights abuses they commit overseas. These obstacles to accountability are not unique to Facebook or web publishers. Rather, human rights advocates have long called for closing the accountability gap that allows transnational companies to violate human rights with impunity.
As I explained in my last post, the U.N. Guiding Principles on Business and Human Rights aim to bridge that gap by articulating states’ obligation to protect — and corporations’ responsibility to respect — human rights. As their name indicates, however, the U.N. Guiding Principles are merely guidelines — not laws, and thus not binding on anyone.
Similarly, the Organisation for Economic Co-operation and Development (OECD) Guidelines for Multinational Enterprises provide non-binding standards for responsible business conduct, including human rights guidelines consistent with the U.N. Guiding Principles. Unlike the Guiding Principles, however, the OECD Guidelines provide for the creation of National Contact Points in each of the 48 countries that have committed to implementing them. The National Contact Points offer a complaint mechanism where people harmed by companies can bring grievances, but companies’ participation in the process is voluntary, and the provision of any remedy is rare. To date, no complaint has been made against Facebook.
Facebook’s content moderation malfeasance has not gone unanswered, however. Civil society organizations, investors, and the U.N. Special Rapporteur on freedom of expression are among a global chorus of researchers, advocates and activists questioning Facebook’s power over public discourse.
Ranking Digital Rights, for example, uses a set of indicators to measure tech companies’ public disclosures of policies that affect users’ freedom of expression and privacy rights. Its indicators draw in part on the U.N. Guiding Principles on Business and Human Rights. They also draw on principles articulated by the Global Network Initiative (GNI), a multi-stakeholder initiative that counts Facebook as a member and addresses internet and telecommunications companies’ responsibilities to freedom of expression and privacy in the face of government demands to restrict content or hand over user information.
Earlier this year, a group of advocates and academics put forward the Santa Clara Principles on Transparency and Accountability in Content Moderation, which draw on the U.N. Guiding Principles on Business and Human Rights to set minimum standards governing social media content takedowns. Recently, more than 70 civil society organizations called on Facebook specifically to abide by those standards.
These are just some of the many examples of civil society organizations around the globe calling on Facebook to respect human rights. They are grappling with how to close the Facebook accountability gap without crushing freedom of expression online.
So far, Facebook’s response has been reactionary and divisive, and it belies the company’s public commitment to do better. Its response is also entirely typical: yet another example of a corporation using its power and influence to silence critics and avoid regulatory oversight. The New York Times has just confirmed that Facebook is no different, and that it cannot be trusted to fix the havoc it has wrought.