European Union’s recent steps to combat online extremism
Since the earliest days of the internet, the propagation of extremist content online has been one of the most challenging and dangerous misuses of online platforms. This circulation of hate has had an untold impact on the radicalisation, recruitment and training of terrorists across Europe and beyond.
Even though terrorist acts perpetrated by home-grown European radicals have repeatedly highlighted the role of online platforms, the process of online radicalisation has continued apace, unseen and arguably deliberately overlooked.
At long last, and perhaps provoked by the spate of devastating terrorist attacks in the latter half of 2020, Europe has taken two significant, tangible steps toward combating the spread of extremism online.
The Regulation on Preventing the Dissemination of Terrorist Content Online (TCO) is finally moving forward, after lying dormant at the trilogue stage for over a year. In December 2020, the European Parliament, the Commission and the Council reached a much-anticipated agreement on the proposal for the TCO. This week, the proposal was voted on and approved in full by the Committee on Civil Liberties, Justice and Home Affairs (LIBE).
Similarly, the centrepiece of the von der Leyen Commission, the Digital Services Act (DSA), was unveiled last December, and the consultation is ongoing. If all goes to plan, the Parliament will begin reviewing the proposal in the coming months.
We now have in place the beginnings of a continent-wide architecture for holding both individuals and big tech companies accountable for harmful content.
In my work with the Counter Extremism Project (CEP), I have closely followed the development of both proposals since their inception. While a much welcome step in the right direction, both initiatives have their flaws.
With regard to the TCO, its emphasis on intentionality in the production and dissemination of terrorist content sets an unduly high bar for enforcing accountability measures in all but the most cut-and-dried cases.
A wider definitional scope for ‘dissemination’ itself might also have proved more effective. The current definition is limited to content made available through hosting service providers; to close potential loopholes, it ought to cover extremist content made available to third parties online in general.
On the other hand, the DSA’s effective power is notably weakened, for example, by its failure to support the use of automated tools and filters to remove manifestly illegal content. In an age when tech companies are already using these tools independently, arguments that automated filtering measures somehow infringe on the freedom of the internet miss the point completely.
It’s not about freedom or its absence; it’s about who gets to determine the limitations we put in place. As the DSA moves through its next stages, we hope that the Parliament recognises this pivotal point.
Likewise, the DSA’s ban on general monitoring would incentivise already apathetic platforms not to uphold their terms of service and duty of care to protect users. As things stand under the legislation, platforms could choose to monitor actively, but doing so would make things needlessly difficult for them: not only would it require additional effort, it would also cost them their limited liability protections.
Despite these evident shortcomings, the legislators must also be commended for the more progressive aspects of these proposals.
The legislation has done well to make arrangements for a pan-European, content-specific notice-and-take-down system, forcing platforms to remove terrorist content within one hour of being notified of its existence.
This provision was included in the original TCO and DSA proposals and was retained despite some predictable pushback. It is widely known, and our own studies confirm, that harmful terrorist content causes the most damage within the first hours of its appearance, so the impact this provision is likely to have cannot be overstated.
Although CEP has produced research demonstrating the insufficiency of notice-and-take-down systems taken on their own, as in the case of NetzDG, the provision nonetheless represents an important step toward the creation of a safer online experience for European citizens.
The Member States will also now have the ability to impose sanctions for non-compliance with penalties proportionate to the size and nature of the platform. This means that at long last, tech companies are being held legally and financially responsible for the dangerous content that is spread across their platforms.
Lastly, the robust transparency requirements laid out in the legislation, such as those demanding annual transparency reporting from service providers, will also help to ensure accountability across platforms, something that CEP has long advocated.
The TCO and DSA thus represent a substantial improvement on the weak and outdated regulations previously in place to combat online extremism in Europe. There are a number of areas in which the legislation can and should be improved, and only time will tell how damaging the weak enforcement mechanisms identified above will prove to be in practice. Nonetheless, after many years of stagnation, both proposals are a positive step towards a safer, more secure Europe, online and off.
Source: New Europe