Banning ISIS propaganda online could rein in radicals
For much of the Obama presidency, the U.S. State Department waged a narrative war against the world’s jihadists. Through a small office inside Foggy Bottom known as the Center for Strategic Counterterrorism Communications, a team of experts deployed counter-messages to al Qaeda and later the Islamic State.
As 2017 comes to a close, many fighting the online war for the hearts and minds of would-be terrorists are rethinking their strategy. Hearts and minds, of course, still matter. But the efforts of Western governments, academics and the private sector to craft clickable content and stories to discredit terrorists are often clumsy and slow. It’s also difficult to measure just how effective these campaigns really are.
Just consider the State Department’s own initiative: “Think Again, Turn Away.” It’s now defunct, but when it launched in English in 2013, the program’s Twitter account regularly tried to engage with self-proclaimed members of the Islamic State. It got very weird, as government officials tried to zing terrorists on social media. Rita Katz, the director of the SITE Intelligence Group, called this outreach “ridiculous” in a scathing piece for Time magazine.
Now many experts in radicalization are asking a basic question: If you can’t beat ’em, why not ban ’em? Instead of crafting new content to discredit terrorist propaganda, why not make the radicals’ toxic material much harder to find on the internet? Better yet, why not remove it from the web altogether?
“I am unconvinced counter-narratives work,” Hany Farid, a computer science professor at Dartmouth College and a senior adviser to the nonprofit Counter Extremism Project, told me. “I don’t think they are harmful. I have not seen evidence they are actually effective.”
Farid is one of the leading voices urging online platforms to treat terrorist propaganda like child pornography and work to uproot the material from the internet altogether. He helped develop an algorithm that automates the identification of harmful videos online, even re-uploaded copies, making it far easier to comb the internet for toxic content so it can be removed.
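Farid’s own system is proprietary, so the sketch below is not his algorithm. It illustrates the general family of techniques, often called perceptual or robust hashing: shrink an image to a tiny grayscale fingerprint, then flag near-duplicates by Hamming distance, which is why re-encoded or lightly edited copies still match. It assumes Python with the Pillow imaging library installed.

```python
# Illustrative perceptual ("average") hash, not Farid's proprietary algorithm.
# Idea: collapse an image to a 64-bit fingerprint that survives re-encoding,
# then match candidates against a blocklist of known-bad fingerprints.

from PIL import Image  # assumes the Pillow library is installed

def average_hash(image: Image.Image, size: int = 8) -> int:
    """Collapse an image into a 64-bit fingerprint."""
    small = image.convert("L").resize((size, size))  # tiny grayscale thumbnail
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)  # one bit per pixel
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(h1 ^ h2).count("1")

def is_known_copy(candidate: int, blocklist: list[int], threshold: int = 5) -> bool:
    """Flag a candidate within `threshold` bits of any known-bad fingerprint."""
    return any(hamming_distance(candidate, h) <= threshold for h in blocklist)
```

For video, the same kind of fingerprint can be computed per keyframe and matched against a database of known extremist content, which is what makes sweeping a platform for copies tractable.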
Recently, Farid and the Counter Extremism Project won an important victory. As the New York Times reported this month, YouTube, which is owned by Google’s parent company, Alphabet, began removing the videos of the late al Qaeda theologian Anwar al-Awlaki from its platform. Awlaki was a particularly effective English-language voice for recruiting people online to join the global jihad, and many domestic terrorists cited his work as a kind of initial inspiration. Though he was killed in a drone strike in 2011, his videos outlived him in cyberspace. The Counter Extremism Project documented the pervasiveness of Awlaki videos in recent years, and finally persuaded YouTube to remove them.
That is an extreme case. But there are softer approaches as well. In 2015, Jigsaw, a national security incubator within Alphabet, tested a pilot program known as Redirect. It showed targeted Google ads to people whose search keywords suggested they were seeking Islamic State propaganda, pointing them to curated videos intended to discredit the group’s key messages. One such video showed an old woman yelling in Arabic at two Islamic State officials, shaming them for their wanton murder.
This doesn’t mean banning outright all videos that may be connected to an Islamic State account. Rather, it means that people seeking them out will be prodded to view videos that undermine the recruiters’ themes.
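The core of that redirect logic is simple to sketch. Everything below is hypothetical: the real campaign’s keyword lists, targeting rules and playlist URLs are not public.

```python
# Illustrative sketch of the Redirect idea: if a search query matches a
# curated list of propaganda-seeking keywords, serve an ad pointing to a
# counter-narrative playlist. The keywords and URL are placeholders.

RISK_KEYWORDS = {"join the caliphate", "hijrah to dawlah"}  # hypothetical terms
COUNTER_PLAYLIST = "https://example.com/counter-narrative-playlist"  # placeholder

def redirect_ad_for(query: str) -> str | None:
    """Return a counter-narrative ad target for propaganda-seeking queries."""
    normalized = query.lower()
    if any(keyword in normalized for keyword in RISK_KEYWORDS):
        return COUNTER_PLAYLIST
    return None  # ordinary queries see ordinary ads
```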
Yasmin Green, director of research and development at Jigsaw, Alphabet’s think-tank division, told me the company’s research “suggested that traditional counter-messaging efforts were not resonating with their intended audience. The material was either moralistic or mistrusted because of associations with government or mainstream media — institutions mistrusted by the people most vulnerable to violent Islamist recruitment.” She added: “The good news is that there exists content debunking ISIS mythology that already exists from credible voices. It just doesn’t always find the audience it deserves.”
Daveed Gartenstein-Ross, a senior fellow at the Foundation for Defense of Democracies and a consultant on Alphabet’s Redirect project, told me that this approach has promise. But he added that it’s important to develop a better way of measuring which online counter-narratives actually work. “You cannot measure the impact of a video by how many people are watching it,” he said. “A video with lots of views isn’t necessarily hitting the right audience. A video that doesn’t get many views could be hitting the right audience.”
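His point is easy to make concrete with a toy calculation. In the sketch below, each view is weighted by an assumed probability that the viewer belongs to the at-risk audience; the numbers are invented, but they show how a niche video can beat a viral one on the measure that matters.

```python
# Toy illustration: raw view counts can mislead. Weight each view by a
# (hypothetical) probability that the viewer is in the intended audience.

def audience_weighted_reach(view_scores: list[float]) -> float:
    """Sum of per-view probabilities of reaching the target audience."""
    return sum(view_scores)

viral = [0.001] * 1_000_000  # a million views, almost none on target: reach 1,000
niche = [0.8] * 5_000        # five thousand views, mostly on target: reach 4,000

assert audience_weighted_reach(niche) > audience_weighted_reach(viral)
```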
Alphabet’s executive chairman, Eric Schmidt, told an audience of top national security officials, journalists and experts earlier this month that the company is increasingly looking to enforce its terms of service in ways it has not in the past. He described this as a “maturation of what we do.” “We started with a position that is the American general view, that bad speech will be replaced by good speech in a crowded network,” he said. Alphabet is now aware that bad actors can flood the internet with fake news, propaganda and other toxic material. It can make some of that material less prominent by identifying it and moving it lower in search results.
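Mechanically, “moving it lower” is easy to sketch, though Google’s actual ranking system is not public; the flagging signal and penalty factor below are assumptions for illustration.

```python
# Illustrative demotion, not Google's ranking logic: flagged content keeps
# its relevance score but is pushed down by a multiplicative penalty rather
# than being removed outright.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float
    flagged: bool  # e.g., set upstream by a classifier or a hash match

DEMOTION_FACTOR = 0.1  # hypothetical penalty applied to flagged items

def rank(results: list[Result]) -> list[Result]:
    """Order results by relevance, demoting flagged items instead of deleting them."""
    def score(r: Result) -> float:
        return r.relevance * (DEMOTION_FACTOR if r.flagged else 1.0)
    return sorted(results, key=score, reverse=True)
```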
This approach may help solve the problems of foreign propaganda and online radicalization. But it poses new challenges. In effect, many in the West are now asking internet platforms to play a more active role in promoting and curating content, something a few government experts I spoke with on background said they worried about. As the old saw goes, one man’s terrorist is another man’s freedom fighter. Awlaki videos are an easy case: we can draw a direct line between his words and the actions of killers like Nidal Malik Hasan, the U.S. Army psychiatrist who murdered 13 people at Fort Hood in 2009.
But it can get much harder. What about groups that support violent insurrection in Syria? What about online clerics who profess a radical view of Islam without going so far as to call on adherents to take up arms? Should their content be removed from search results or merely de-emphasized? Who decides?
Farid acknowledged the tension. “I have concerns about these monopolies too,” he said. A big part of the problem today, he said, is that the major internet companies are not transparent about how they enforce their terms of service. “After Charlottesville, people just woke up and found white supremacists on their network, and all of the sudden Daily Stormer was being kicked off all these platforms,” he said. “If this one group is going to get kicked off, you have to apply terms of service — which are strict in these cases — consistently.”
Perhaps that inconsistency is because the internet giants are, as Schmidt said, in a process of maturation. It’s important to watch how they develop. The internet is not a public utility; private companies like Alphabet can remove or de-emphasize any content they choose. That is a power that can be used and abused in many directions, not just in the current war against fake news and online radicalization.
Source: Newsmax