
Zuckerberg Raises Questions Over Government Suppression Requests


Okay, it’s worth clarifying a key talking point when it comes to social media “free speech” and the perceived interference of government agencies in what social media companies have allowed on their platforms, and why.

Today, Meta CEO Mark Zuckerberg submitted a letter to Representative Jim Jordan in which he expressed regret about the way Meta has handled some government suppression requests in the past, specifically in relation to COVID and the Hunter Biden laptop case.

Both of which are key conservative talking points, and foundational criticisms of modern social apps.

In X’s “Twitter Files” exposé, for example, which was based on internal communications sourced shortly after Elon Musk took over at the app, it was these two incidents that Musk’s hand-picked journalist team sought to highlight as examples of government overreach.

But are they? Well, it depends on how you look at it.

In retrospect, yes, both are examples of government censorship that could point to problematic misuse of public information platforms. But when considering the information available to the platforms and their moderation staff at the time, their responses to both also make sense.

In his letter to Rep. Jordan, Zuckerberg explains that:

“In 2021, senior officials from the Biden Administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn’t agree. Ultimately, it was our decision whether or not to take content down, and we own our decisions, including COVID-19-related changes we made to our enforcement in the wake of this pressure. I believe the government pressure was wrong, and I regret that we were not more outspoken about it.”

Echoing Twitter’s management at the time, Zuckerberg says that government officials were seeking to suppress certain views on the pandemic, especially those related to vaccine hesitancy, in order to maximize vaccine take-up and get the country back to normal.

Indeed, as you may recall, President Biden went on record to say that social media platforms were “killing people” by refusing to remove anti-vax posts. At the same time, White House officials were also pressuring social platforms, by whatever means they could, to get them to police anti-vax speech.

Which, as Zuckerberg further notes, put the platforms in a difficult position:

“I also think we made some choices that, with the benefit of hindsight and new information, we wouldn’t make today. Like I said to our teams at the time, I feel strongly that we should not compromise our content standards due to pressure from any Administration in either direction – and we’re ready to push back if something like this happens again.”

Former Twitter Trust and Safety chief Yoel Roth has acknowledged the same, that Twitter was being asked to remove posts and profiles that were amplifying anti-vax sentiment, while another former Twitter Trust and Safety head, Del Harvey, has also discussed the trade-offs the team had to weigh in addressing such concerns:

“If something was going to lead to somebody dying if they believed it, we wanted to remove that. If something was just … It wasn’t going to immediately kill you, but it wasn’t a great idea, or it was misinfo, then we would want to make sure we made note of that.”

In the context of the time, this statement is really the core of the debate, with government officials and health experts warning that COVID deaths would increase if vaccine take-up wasn’t maximized.

Hence, social platforms did act on more of these cases than they should have. But again, this was based on official information from health authorities, and the calls were being made in response to a rapidly changing pandemic situation.

As such, judging these calls in retrospect unfairly dismisses the uncertainty of the time, in favor of ideological perspectives on the broader pandemic response. Social platforms were a reflection of this, yes, but they were not the root source of the decisions being made at the time.

So is that a violation of “free speech”? Again, it depends on your perspective, but the logic and context of the time do suggest that such calls were made in line with official advice, and were not imposed as a means of information control or suppression.

Which then brings us to the Hunter Biden laptop story.

In one of the most controversial political cases in modern history, the perception among conservatives is that social media platforms worked in collusion with the Democrats to suppress the Hunter Biden laptop story, in order to ensure that it was not given broader reach, and could not, therefore, impact Biden’s Presidential campaign.

As Zuckerberg explains:

“In a separate situation, the FBI warned us about a potential Russian disinformation operation about the Biden Family and Burisma in the lead-up to the 2020 election. That fall, when we saw a New York Post story reporting on corruption allegations involving then-Democratic Presidential candidate Joe Biden’s family, we sent that story to fact-checkers for review, and temporarily demoted it while waiting for a reply. It’s since been made clear that the reporting was not Russian disinformation, and in retrospect, we shouldn’t have demoted the story. We’ve changed our policies and processes to make sure this doesn’t happen again – for instance, we no longer temporarily demote things in the U.S. while waiting for fact-checkers.”

As the explanation goes, all social platforms were warned of a story that sounded too ridiculous to be real: that Hunter Biden, the son of Joe Biden, had taken his laptop, loaded with confidential information, in for repairs at The Mac Shop in Wilmington, Delaware. Hunter Biden was seeking to recover the data from the device, but after he failed to collect it, or pay his bill, for over 90 days, the store’s owner handed it over to authorities, who then found incriminating evidence on the hard drive.

Based on these initial reports, the story did sound like it couldn’t be true, that some random computer repairman had coincidentally gained access to such damning information in the midst of an election campaign. As such, the suggestion was that it could be a Russian disinformation operation, which is what social platforms were being warned about, and some of them acted on those warnings by restricting the reach of the report. But further investigation, which concluded after the 2020 election, confirmed that the report was correct, sparking new accusations of suppression.

But again, as Zuckerberg notes, social platforms were being warned that this was misinformation, and they acted accordingly. Which points to questionable fact-checking by the FBI more so than by the platforms themselves, which, on balance, were operating in good faith, based on the information they were receiving from official intelligence sources.

That still suggests that the story may have been suppressed at some level. But again, the suggestion that social platforms were working in collusion with the government to benefit one side seems incorrect, based on what we know of the case.

But in retrospect, both incidents raise questions about the impartiality of social platforms, how they moderate content, and what motivates them to act. Both, based on these explanations, do seem like reasonable responses by moderation teams working from official information, but at what point should social platforms reject official sources, and simply let information flow, regardless of whether it’s true or not?

Because there have been a lot of incidents where social platforms have correctly suppressed mis- and disinformation, and those efforts have arguably lessened real-world harm.

Which then brings us back to Del Harvey’s observation about the role of social platform moderation teams: that the job is to stop the spread of information that could lead to somebody, or many people, dying as a result. Anything less than that should be tagged with labels, or, on X, marked with a Community Note.

Does that go far enough? Does that go too far, and should we just, as Elon sees it, allow all opinions to be heard, no matter how incorrect they may be, in order to then debate them in the public domain?

There are no easy answers on this, as what might be viewed as deadly misinformation to one group could be harmless chatter to another. And while relying on the merits of free debate does hold some appeal, the fact is that when Elon, in particular, shares something with his 200 million followers, it carries extra weight, and people will act on that as truth. Whether it is or not.

Is that the situation we want, enabling the most influential social media users to dictate truth as they see it? And is that any better than allowing government influence over social apps?

Are we moving towards an era of greater free speech, or one where narratives can be shifted by those with the most to lose, simply by creating alternative scenarios and pitching them as truth?




