Facebook’s machine learning flags posts with terrorism content
Facebook has deleted 18 million pieces of terrorist content in the first three quarters of this year.
The social media company’s policy lead for counter-terrorism and dangerous organisations, Dr Erin Marie Saltman, revealed how ‘machine learning’ was helping the company to censor content from groups including Islamists and white supremacists.
In a speech yesterday at the Institute of International and European Affairs (IIEA) in Dublin, Dr Saltman explained how the company uses the United Nations list when identifying terrorist organisations.
“That is not just Daesh and al-Qa’ida; that does include our entire list of terrorist organisations, that include some of the white supremacy groups, that includes some of the more regional located groups,” she said.
Source: Independent