Meta can put anti-vax posts back on Facebook and Instagram

Today, let’s talk about a settled question that Meta has decided to reopen: what should the company do about misinformation related to COVID-19?

Since the earliest days of the pandemic, Meta has tried to remove false claims about the disease from Facebook and Instagram. And for just as long, the company has faced criticism for not doing a very good job. A year ago this month, asked about the role “platforms like Facebook” played in spreading misinformation about the disease, President Biden said “they’re killing people” — though he backtracked a day later.

Still, Biden expressed a fear deeply held among Meta’s critics: that the platform’s vast user base and algorithmic recommendations often combine to help fringe conspiracy theories reach large mainstream audiences, promoting vaccine hesitancy, resistance to mask-wearing, and other health harms.

The pandemic is nowhere near over — an estimated 439 people died of COVID in the past 24 hours, a figure up 34 percent over the past two weeks. And highly contagious Omicron subvariants continue to tear through the country, raising fears of a spike in cases of long COVID — a condition that experts say has already been “a mass disabling event.” An estimated 1 in 13 American adults reported having long COVID symptoms earlier this month, according to the US Centers for Disease Control and Prevention.

Despite that, Meta is now considering whether to relax some of the restrictions it has placed on COVID-related misinformation, including whether to continue removing posts containing false claims about vaccines, masks, social distancing, and related topics. It has asked the Oversight Board – an independent group funded by Meta to help it make difficult calls about speech – for an advisory opinion on how to proceed.

Nick Clegg, the company’s president of global affairs, explained in a blog post on Tuesday:

In many countries, where vaccination rates are relatively high, life is increasingly returning to normal. But this is not the case everywhere, and the course of the pandemic will continue to vary significantly around the world – especially in countries with low vaccination rates and less developed health systems. It is important that any policy that Meta implements is appropriate for the full range of circumstances in which countries find themselves.

Meta is fundamentally committed to freedom of expression, and we believe our apps are an important way for people to make their voices heard. But some misinformation can lead to an imminent risk of physical harm, and we have a responsibility not to let this content spread. Our Community Standards seek to protect free expression while preventing this dangerous content. But it is not easy to resolve the inherent tensions between freedom of expression and safety, especially when we are faced with unprecedented and fast-moving challenges, as we have been in the pandemic. That is why we are asking for the advice of the Oversight Board in this matter. Its guidance will also help us respond to future public health crises.

Despite all the criticism Meta has received over its enforcement of health misinformation rules, the steps it took clearly had some positive effect on the platform. The company estimates it has removed more than 25 million posts under its stricter guidelines, which now require the removal of 80 separate false claims about the disease and its vaccines.

At the same time, the platform has undoubtedly overreached at times. In May 2021, I wrote about Meta’s decision to reverse an earlier ban on discussing the possibility that COVID-19 leaked from a Chinese laboratory. The company had imposed that ban amid a rise in hate violence against Asian people, fearing that conspiracy theories about the disease’s origins could be used to justify further attacks.

But as the debate over the virus’ origins intensified, Meta began to let people speculate again. (To date, no consensus has emerged on the question.) I wrote at the time that the company probably shouldn’t have banned the discussion in the first place, and should instead have relied on its existing hate speech policies to moderate racist posts:

I generally prefer an interventionist approach when it comes to conspiracy theories on social networks: given the damage done by followers of QAnon, Boogaloo and other extremist movements, I see real value in platforms reducing their reach and even removing them altogether.

On some issues, however, platform intervention can do more harm than good. Forbidding the laboratory leak hypothesis gave it the appearance of forbidden knowledge, when acknowledging the reality – that it’s improbable, but an open question – might have been just boring enough to prevent it from catching fire in the fever swamps.

Last week I asked Clegg why the company had decided to ask the board for a second opinion on health misinformation now. One reason, he said, is that Meta assumes there will be future pandemics, each bringing its own policy questions. The company wants expert guidance now so that it can act more thoughtfully next time. And two, he said, the Oversight Board could take months to craft an opinion. Meta wanted to get that process started now.

But more than anything, he said, the company wanted a check on its power — to get the board, with which it signed a new three-year, $150 million operating deal this month, to weigh in on what have been some fairly strict policies.

“This was a very dramatic expansion of our most onerous sanction,” Clegg told me. “We have not done it on this scale in such a short time before. … If you have awesome power, it is all the more important that you exercise that awesome power thoughtfully, responsibly, and openly. In my view, it would be curious and eccentric not to refer this to the Oversight Board.”

Weighing in on policies like this is actually one of the board’s two core tasks. Its primary duty is to hear appeals from users who believe their posts were wrongly removed, or that other posts were wrongly left up. When the board takes up these cases, its decisions are binding, and Meta has so far always honored its rulings.

The board’s other central task is to offer opinions on how Meta should change its policies. Sometimes it attaches these opinions to decisions in individual cases; other times, as with the COVID guidelines, Meta asks the board for guidance directly. Unlike its rulings on individual posts, the board’s policy opinions are not binding – but to date, Meta has adopted roughly two-thirds of the changes the board has proposed.

Some people keep writing off the board anyway. Since even before it began hearing cases in 2020, the board has been the subject of withering complaints from critics who argue it serves as little more than a public relations exercise for a company so beleaguered it had to change its name last year.

And yet it’s also clear that Meta and other social platforms have a deep need for the kind of rudimentary legal system a board like this can provide. In its first year, the board received 1.1 million appeals from Meta’s users. Before the board existed, those users had no recourse when Facebook made a mistake beyond some limited automated systems. And every tough question about speech was ultimately decided by one person – Mark Zuckerberg – with no avenue for appeal.

It seems obvious to me that a system where these matters are heard by a panel of experts, rather than a lone chief executive, is superior, although it still leaves much to be desired.

So what happens now?

One possibility is that Meta’s policy team wants to relax restrictions on COVID-related speech, but wants the cover that a decision from the Oversight Board would give it. The team has reason to believe the board might reach that conclusion: the board is stocked with free-speech advocates, and when it has ruled against Meta, it has generally been in the name of restoring posts the board believes were wrongly removed.

That said, the company is also likely to take flak from left-leaning politicians and journalists, along with a certain number of users, if the board gives it the go-ahead to relax the guidelines and the company does so. Clegg told me that if that were to happen, Facebook and Instagram would use other measures to reduce the spread of misinformation — such as adding fact-checks or reducing the distribution of false posts in feeds. But the mere existence of anti-vax content on Meta’s platforms will draw new criticism – and possibly cause new harm.

Another possibility is that the board does not take the bait. Members may argue that removing health misinformation, while a drastic step, continues to be necessary — at least for the time being. The board is still relatively new, and largely unknown to the general public, and I wonder what appetite members have to stand up for people’s right to spread lies about vaccines.

Whatever the board decides, Clegg said, Meta will proceed cautiously with any changes. At the same time, he said, the company wants to be sensible in how it deletes user posts.

“I think you should use the removal sanction very carefully,” he said. “You should set the bar very high. You don’t want private companies to remove things unless it’s really demonstrably related to imminent, real harm.”
