Facebook, Google and Twitter must appear before the U.S. Senate Commerce Committee to answer for how they moderate online content, a thorny, complicated issue with no single solution.
This Wednesday, the CEOs of Google, Twitter and Facebook will testify virtually before the United States Senate.
A slight sense of déjà vu is understandable, because this has perhaps been the most intense (and tense) year in the relationship between the large technology firms and the U.S. Congress. In July, executives from some of these companies had already appeared before Congress to discuss competition and monopolies, and just a few weeks ago the U.S. House of Representatives released a scathing report on these giants' business practices.
And just a week ago, the Justice Department filed a lawsuit against Google over alleged practices that harm competition in the online advertising and search engine markets.
So, yes, the heads of big tech are once again appearing before Congress. And this time the issue is anything but minor: it is steeped in politics, with less than a week to go before an election that is decisive for the country, for the world and, incidentally, for the internet, in an era in which the sitting president operates more like a Twitter star than a statesman.
The Senate Commerce Committee will hear from Jack Dorsey (CEO of Twitter), Mark Zuckerberg (CEO of Facebook) and Sundar Pichai (CEO of Google) on the law known as Section 230, which shields online services from liability for content posted by others.
If you don’t know what Section 230 (part of a 1996 law) is, don’t worry: neither does the vast majority of the US electorate, even though the Republican presidential candidate has been bringing the subject up over and over for the past couple of weeks.
In short, Section 230 grants online platforms a degree of immunity for the content that circulates through their tools: the user is responsible for the content, not the company that facilitates its transmission.
That is one part. The other is that the law also gives platforms the freedom to moderate that content as they see fit, without losing the immunity just described.
And this is where the political tensions come in, linked to this election but deeply tied to the last four years of political life in the United States: Republicans (and the far right) would like the platforms to stay out of content moderation, while Democrats (and the left of the political spectrum) call for more involvement in policing this information.
Proponents of the law argue that it is a cornerstone of the internet, allowing online services to flourish without fear of a flood of litigation, but attacks on it are mounting across the political spectrum.
Dorsey, who will appear virtually before the Senate Commerce Committee, will call Section 230 of the Communications Decency Act “the internet’s most important law for free expression and safety,” and will argue that repealing it would lead to more content policing, not less.
“Eroding the foundations of Section 230 could collapse the way we communicate on the Internet, leaving only a small number of giant and well-funded technology companies,” he will tell the Senate on Wednesday.
Zuckerberg will make a similar argument. “Platforms are likely to censor more content to avoid legal risks and are less likely to invest in technologies that allow people to express themselves in new ways,” a copy of his testimony reads. However, the CEO of Facebook seems open to amending the law. “I think Congress should update the law to make sure it works as intended,” he plans to say.
Underlying all of this is the debate over the moderation of online content, a subject that, to put it mildly, is something like a vast swamp full of enormous crocodiles.
Or, as Corynne McSherry of the Electronic Frontier Foundation (EFF) puts it, “content moderation is not a silver bullet”: “We should not expect moderators to fix a problem that, in fairness, lies in flaws in the electoral system. You can’t ask technology to fix something it didn’t create.”
What McSherry means by “silver bullet” is that, despite all the platforms’ efforts (well-intentioned or not), there is no single solution for regulating content online. There is even serious debate about whether these companies should act as judges on issues that touch directly on human rights, such as freedom of expression.
For example, at the beginning of the COVID-19 pandemic, many social media platforms changed their content moderation policies to rely more heavily on automation. Twitter, Facebook and YouTube expanded their use of techniques such as machine learning to identify problematic content, in an effort to protect both their moderation teams and their users’ privacy.
Algorithmic moderation has the enormous advantage of operating at scale, which matters given the sheer size of the platforms in question.
The problem is that automated moderation struggles to identify context and subtlety, which in many cases are exactly what separate free expression and the legitimate circulation of diverse political ideas from content that qualifies as terrorism or incitement to hatred.
The EFF uses the following example to make the point: “Often, evidence of human rights violations or war crimes gets caught in the net of automated content moderation, because algorithms have serious trouble reading context and thus distinguishing content directly associated with terrorism from efforts to document those events. This negative impact on content detection disproportionately affects Arab and Muslim communities.”
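To make the context problem concrete, here is a deliberately simplified sketch; it is not any platform’s real system, and the blocklist and posts are invented for illustration. A filter that matches keywords without understanding context flags documentation of abuses just as readily as the abusive content itself, which is the failure mode the EFF describes.

```python
# Toy illustration only: a context-blind keyword filter.
# Real platforms use far more sophisticated machine-learning systems,
# but those systems can exhibit the same underlying weakness.

BLOCKLIST = {"attack", "bomb", "execution"}  # hypothetical keywords


def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted keyword, ignoring context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)


incitement = "Join the attack tomorrow"
evidence = "Video of the attack on civilians, recorded for war-crimes investigators"

print(naive_flag(incitement))  # True: the intended catch
print(naive_flag(evidence))    # True: human-rights documentation swept up too
```

Both posts contain the word “attack,” so both are flagged, even though the second is evidence-gathering rather than incitement; distinguishing the two requires the context that keyword matching, and often automation generally, cannot supply.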