Facebook takes action as terrorists found using platform to recruit and campaign
“When terrorists misuse the internet,” Facebook explained in a blog post on Wednesday, July 24, “they often upload the same piece of content to multiple platforms to maximize their reach.” This has allowed the social media giant—along with Google, Microsoft and Twitter under the Global Internet Forum to Counter Terrorism (GIFCT) alliance—to “more quickly identify and take action against potential terrorist content on our respective platforms.”
In 2018, the alliance reached its intended milestone of assembling 100,000 such hashes. In the first half of 2019, though, the number has already doubled. There are now “more than 200,000 unique hashes in the database,” each of which is described as a digital fingerprint pointing to a known piece of terrorist content.
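The mechanics of that shared database can be illustrated with a short sketch. The real GIFCT system reportedly uses perceptual hashes that also catch near-duplicates; this simplified illustration, with hypothetical content and function names, uses exact SHA-256 fingerprints to show the basic idea of matching re-uploads against a shared blocklist:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest acting as the content's digital fingerprint."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of known-content fingerprints,
# standing in for the alliance's 200,000-plus hash entries.
known_hashes = {
    fingerprint(b"example flagged propaganda file"),
}

def is_known_content(content: bytes) -> bool:
    """Check an upload's fingerprint against the shared database."""
    return fingerprint(content) in known_hashes

# The same bytes uploaded again match immediately, even on another
# platform that shares the database; new content does not.
print(is_known_content(b"example flagged propaganda file"))  # True
print(is_known_content(b"harmless holiday photo"))           # False
```

Because each platform only shares fingerprints rather than the content itself, a match can be flagged across companies without redistributing the underlying material.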
The scale speaks for itself, as does news from the frontline.
Israel’s internal security agency Shin Bet accused Iran this week of sponsoring a social media campaign to recruit individuals in Israel “for the benefit of Iranian intelligence.” According to reports, the would-be recruits were asked to gather information on the ground, on targets including military and civilian locations, to help prepare terrorist attacks. In a statement (also on Wednesday, July 24), Shin Bet said “the vast majority of Israeli citizens refused to cooperate, because they suspected it was a hostile element and cut off contact with them—security forces will continue to act with determination to thwart any activity by Iran and terrorist elements acting on its behalf against the security of the state.”
Shin Bet claimed the Iranian-backed campaign was orchestrated from Syria, under the guise of an individual known as Abu-Jihad, who used fake Facebook profiles and sought to entice target individuals onto messaging applications. There had already been arrests in Israel in recent months following the use of social media to recruit on behalf of Hezbollah and Hamas. “Using social networks,” Shin Bet said in a statement following this latest counter-terrorist operation, “is a method known to intelligence agencies as being used by terrorist organizations, including Hamas and Hezbollah, to recruit activists.”
The same day, Wednesday, July 24, a court in Abu Dhabi sentenced a Filipino man to 10 years imprisonment for “promoting the ideology of terrorist organizations” through social media channels—primarily Facebook. According to local media, the charges included “setting up and running accounts on social media, including a Twitter account called ‘Abu Zubair’, five accounts on Facebook with different names and a seventh account on Telegram—all urging young people to join terrorist groups and to provide financial support to their members and affiliates around the world.”
Although all the major social media platforms have come under increasing pressure in recent months for their inability to curtail content and engagement that promotes illegal terrorist activity, Facebook finds itself the leader of the pack. It is a role that has always been there, given its scale, reach and levels of engagement, but it reached new levels of public scrutiny after the attacks earlier this year in Christchurch.
At the G20 summit in Osaka last month, a statement was proposed by Australia and signed by all member-states, “urging online platforms to meet our citizens’ expectations that they must not allow the use of their platforms to facilitate terrorism and violent extremism conducive to terrorism.”
Unsurprisingly, Facebook was quick to welcome the statement, with a spokesperson saying at the time that “we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate.”
Back in March, Facebook was heavily criticized after two consecutive mosque attacks in Christchurch, in which 51 people were killed, were live-streamed on the platform, highlighting the inadequacies of the company’s systems. It became clear that the company relied on users to report a real-time incident, and none did. And as newer systems were put in place to use AI to detect harmful content, the need for large volumes of specific training data became clear: Facebook acknowledged at the time that there simply had not been enough such data to make the system work.
Two months later, in May, Facebook joined other representatives from Big Tech at a meeting in Paris, hosted by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, which launched the “Christchurch Call to Action,” an initiative to prevent repeat incidents by stopping the dissemination of terrorist content on social media.
Now, in addition to updating on the scale of content on its hash-based blacklist, Facebook has also announced joint protocols that will enable Big Tech to respond to crises like Christchurch, designed “to enable and empower companies to more quickly and effectively respond to emerging and active events.” In essence, this shifts the incident response balance from individual companies to a collective, in the hope of casting a wider net more quickly. “The protocol,” Facebook explained, “can be triggered by a real-world event involving murder or attempted murder of defenseless civilians or innocents—we will work together to categorize the type of incident and the anticipated level and degree of online impact.”
The news from the Middle East relates more to the use of social media as an engagement and messaging platform than for content distribution, but it is all clearly linked. Pulling on those threads leads to individuals and to networks of individuals. Scraping social media, both at home and in overseas locations, has become commonplace in intelligence gathering operations. The same platforms have also become a hotbed for recruitment and compromise—targeting military personnel on social media was one of the topics raised in the recent cyber escalation between the U.S. and Iran. There is clearly a physical element to nation-state offensive cyber capabilities that usually has local individuals somewhere in the chain.
So, a week of unrelated social media news linked to ongoing terrorist activities and propaganda—but, clearly, it’s not unrelated at all. While there is a public interest in cleaning harmful content from the platforms we use each day, there remains a greater public interest in denying terrorist organizations access to the platforms they use to facilitate their campaigns. And, of course, to use those same platforms to identify the terrorists and their networks, to stop them, and to bring them to justice.
Source: Forbes