Stopping the spread of online Islamic State propaganda
The terrorist organisation Daesh generates and shares large volumes of high-production-value propaganda videos online. Automated content moderation tools can detect and remove these videos, but in 2017 most video hosting platforms didn’t have the data science capability to build them. Daesh was therefore free to spread terrorist content online to recruit and radicalise people.
The Home Office asked Faculty to build a model for these video hosting platforms to automatically flag Daesh propaganda videos.
Faculty’s highly accurate model flags terrorist content for human review, ultimately working towards wiping Daesh propaganda from these platforms.
Video hosting platforms were unable to automatically detect and remove terrorist propaganda. Terrorist groups targeted small platforms in particular to spread their propaganda on the internet.
Terrorist organisations like Daesh share propaganda videos online to radicalise vulnerable people and inspire ‘lone wolf’ attacks. Following a rise in these murderous attacks in the UK, the Home Office encouraged online platforms to crack down on extremism and terrorism.
Even large video hosting platforms with advanced data science resources found it too difficult to build a tool that automatically removed terrorist propaganda with sufficiently high accuracy. They instead relied on video hashing to catch known material, with human moderators manually processing flagged videos and removing content that violated their terms of service.
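For context, video hashing works by fingerprinting each known propaganda video and checking every new upload against that blocklist. The sketch below illustrates the idea; the function names and the empty blocklist are illustrative assumptions, not any platform’s actual implementation, and production systems use perceptual hashes (which tolerate re-encoding and editing) rather than the exact-match cryptographic hash shown here.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known propaganda videos.
# In practice, platforms load these from shared industry hash databases.
KNOWN_BAD_HASHES: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Hash the raw file. A cryptographic hash only catches exact
    re-uploads; perceptual hashes are needed to also match re-encoded
    or lightly edited copies."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_flag(video_bytes: bytes) -> bool:
    """True if an upload matches a previously identified video."""
    return fingerprint(video_bytes) in KNOWN_BAD_HASHES
```

The weakness of this approach is clear from the sketch: it can only catch videos that have already been identified, hashed and shared, which is why a model that classifies previously unseen content was needed.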
Smaller platforms, however, didn’t have the manpower to moderate content in this way, so organisations like Daesh targeted them, knowing the risk of content being removed was far lower. With largely unchallenged freedom to circulate terrorist material, small platforms were “swamped”.
The Home Office brought in Faculty to build a cross-platform model to automatically detect Daesh propaganda videos, stemming the flow of terrorist content online.
Beyond cutting Daesh propaganda off at the source, UK police forces are using our technology to identify and analyse terrorist activity on the web, informing their counter-terrorism strategy.
Amber Rudd, former Home Secretary, said: “We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images. This government has been taking the lead worldwide in making sure that vile terrorist content is stamped out.”
As reported by BBC News, the model achieves a true negative rate of 99.995% and a true positive rate of 94%. In other words, it catches around 94% of Daesh propaganda videos while incorrectly flagging only 0.005% of legitimate content: on a platform receiving five million uploads per day, roughly 250 innocuous videos would be wrongly flagged for human review.
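The review-load arithmetic behind that figure is simple; the sketch below reproduces it using only the upload volume and rates quoted above (the variable names are illustrative):

```python
# Expected daily human-review load, from the figures quoted above.
uploads_per_day = 5_000_000
true_negative_rate = 0.99995  # share of legitimate videos correctly passed
true_positive_rate = 0.94     # share of propaganda videos correctly flagged

# Propaganda is a vanishingly small fraction of uploads, so false
# positives on legitimate content dominate the human-review queue.
false_positive_rate = 1 - true_negative_rate
expected_review_queue = uploads_per_day * false_positive_rate

print(round(expected_review_queue))  # -> 250 videos per day wrongly flagged
```

This is why the true negative rate matters so much at platform scale: even a seemingly tiny false positive rate of 0.1% would have swamped moderators with 5,000 wrongly flagged videos every day.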
Since the launch of Faculty’s propaganda detection tooling in 2018, all major video hosting platforms have implemented their own AI models to automate the detection of terrorist propaganda. Faculty’s own model has mainly been deployed by law enforcement to detect and analyse terrorist content online.
Faculty also made significant improvements to the model, extending it to detect al-Qaeda propaganda across video, audio, image and text. The model could even be used to eliminate other forms of illegal and harmful content, such as child sexual abuse material and hate speech, from online spaces.
Matthew Collins, Deputy National Security Advisor said: “Faculty’s work set a new standard for the automated detection of harmful and abhorrent terrorist video propaganda at a time when technology leaders across the world had said this wasn’t possible.
“Today, all the major communications service providers have developed and deployed their own AI models to automate the detection of terrorist propaganda and I’m extremely proud of our work with Faculty in pioneering this movement.”
Source: Tech UK