The EU has continued to pressure social media companies to monitor content that it deems illegal under the recently enacted Digital Services Act (DSA). There are serious questions about whether such efforts will result in the censorship of legitimate information and political discourse if social media platforms give in to the pressure being exerted on them.

EU Commissioner Thierry Breton has followed his letter to Elon Musk (discussed in my previous post) with a similar letter addressed to Sundar Pichai of Alphabet/Google/YouTube (https://x.com/ThierryBreton/status/1712866215591379259?s=20), seeking to enforce provisions of the DSA. Mr. Breton asserts that there has been a surge in illegal content and disinformation on social media. He gives very few specifics, but asserts that there has been posting of “violent content depicting hostage taking and other graphic videos.” He demands that “illegal” content be removed, and that mitigation measures be taken to reduce the “risks to public security and civil discourse stemming from disinformation.”

It certainly is appropriate for the EU to demand that social media platforms take down postings that advocate violence or assist terrorist organizations in recruiting or planning violence. However, Mr. Breton’s letter goes well beyond this, and is problematic for several reasons.

First, if the standard for demanding that videos be taken down is that they contain violent and graphic content, then legitimate news stories would appear to be subject to such censorship. Even CNN and the BBC have already posted quite graphic videos of the carnage in Israel and Gaza. Should social media posters be subject to a different standard? For those who care about free speech, it is important that these stories be told, whether by large media networks or by independent journalists and witnesses.

Second, Mr. Breton’s demand that there be “mitigation measures” to reduce risks sounds like a request that social media platforms filter information. If the platforms attempt to adhere to this demand, they will be left with the task of evaluating whether each and every post constitutes a “public security risk” or “disinformation,” terms that are not clearly defined by Mr. Breton or the DSA, and probably are not capable of being clearly defined. The platforms will be left with the daunting task of drawing the line between what is acceptable political rhetoric and what is content deemed illegal by the EU. This is further complicated by the fact that this line may fall differently in the US and other parts of the world than it does in Europe.

Moreover, given the volume of postings on platforms like YouTube, X (Twitter), and Facebook, any effective “mitigation measures” would likely require the use of filtering software. Such software is not particularly adept at discerning illegal speech from protected speech. Consider how often spam filters, which have a much easier task, catch wanted emails along with unwanted spam. Thus, if Mr. Breton were successful in imposing his vision of enforcement under the DSA, there is a real risk of over-filtering that would block content that should be protected by fundamental principles of freedom of expression.

The most troubling consequence of potential overreach by the EU on this subject is that it could muzzle the efforts of those who seek to advance resolution of the Israeli/Palestinian conflict through dialogue rather than violence. Even strident, angry, vitriolic rhetoric is preferable to war and violence. If those who wish to advance their political cause or seek redress of injustice by peaceful means are censored, that ultimately will leave the terrorists and warmongers as the only voices.