How do social media users talk about terrorism online?
Terrorism remains a persistent challenge worldwide. While mainstream media portray terrorism as a threat to society and human civilisation, terrorists sell it as freedom fighting via social networking sites and private messaging platforms. However, the actual workings of terrorism remain largely shrouded in secrecy.
For the curious, a convenient resource is the community question answering site (CQA), a type of social media platform where users ask questions, answer those submitted by others, and can evaluate responses using UpVotes and DownVotes.
Research conducted by this post’s authors, Dr. Alton Chua from Singapore’s Nanyang Technological University and Dr. Snehasish Banerjee from the University of York, UK, examined how the popular CQA site Yahoo! Answers is used to discuss terrorism.
Terrorism has entered the mainstream of discourse on CQAs. Some users inquire about the genesis of terrorist groups (e.g. “Who exactly created ISIS?”) while others are keen to understand terrorists’ underlying motives (e.g. “Why are terrorist’s [sic] trying to hurt the beautiful Paris, France?”). Yet this continually growing corpus has not been systematically studied.
Three research questions were investigated: (1) What are the dominant themes reflected in terrorism-related questions? (2) How do answer characteristics vary with question themes? (3) How does users’ anonymity relate to question themes and answer characteristics?
The questions reflected the online community’s information needs, ranging from the life of extremists to counter-terrorism policies. Some questions were innocuous: they were politically correct, maintained a civil tone, or sought to promote peace without inviting contentious responses (e.g. “Does terrorism lead to adolescent depression?”). Others were sensitive: they either had a radicalisation flavour or could trigger discrimination and/or malevolence (e.g. “What is the best way to slaughter a Muslim terrorist?”).
Answers to sensitive questions were laden with negative emotions, reflecting hate speech and Islamophobia, and made claims that were rarely verifiable. They also attracted significantly more UpVotes as well as DownVotes than answers to innocuous queries. Put differently, the online community engaged more with sensitive terrorism-related questions than with innocuous ones.
Users who posted sensitive content generally remained anonymous. Even when content was posted non-anonymously, however, the contributors’ actual identities remained veiled. They chose screen names of celebrities or inanimate objects, or gibberish that appeared to have been created through random keystrokes. Clearly, these users wanted to be recognised as distinct individuals in the community, but were unwilling to take responsibility for the content they created.
Overall, this research demonstrates that proselytisation is possible on CQAs through both questions and answers. Just as marketers rely on users’ online behaviour to find potential customers, terrorist groups can trawl the internet to fish for new recruits. Questions and answers can be used to paint terrorists as victims of circumstances rather than aggressors.
The research further shows that a culture of trolling exists on CQAs. The topic of terrorism seems to create a supercharged emotional cyber-atmosphere that breeds hate speech. This explains why sensitive terrorism-related questions attracted more responses than innocuous ones did, and why answers containing negative emotions gathered more UpVotes than those that were positive or neutral. Out of a warped sense of pride, CQA users play to the gallery through displays of virtual nastiness.
There is now significant public pressure on social media giants such as Facebook and Twitter to do their bit in curbing the spread of extremist content and terrorist ideologies. CQAs such as Yahoo! Answers can play a part too. Removing the anonymity option, for example, would make it harder for users to shirk responsibility for the content they create.
Additionally, filters could detect sensitive language at the point of submission, as sketched below. While casual trolling should be tolerated in keeping with the democratic ideals of free speech, filters could at a minimum stop overly nasty language and detect entries that promulgate political insurgency, breaches of national security, or acts of terror.
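To make this concrete, the snippet below is a minimal sketch of how such a submission-time filter might tier its decisions: block the worst language outright, route borderline entries to human moderators, and let everything else through. The term lists and the screen_submission function are illustrative assumptions rather than anything described in the research, and a production system would need far more sophisticated detection than keyword matching.

```python
import re

# Hypothetical term tiers (illustrative assumptions, not from the study):
# "block" terms halt submission outright, while "review" terms flag the
# entry for human moderation.
BLOCK_TERMS = {"slaughter", "behead", "bomb-making"}
REVIEW_TERMS = {"insurgency", "recruits", "martyrdom"}

def screen_submission(text: str) -> str:
    """Return 'block', 'review', or 'allow' for a draft question or answer."""
    # Lowercase the draft and split it into word-like tokens.
    tokens = set(re.findall(r"[a-z'-]+", text.lower()))
    if tokens & BLOCK_TERMS:
        return "block"    # overly nasty or violent language
    if tokens & REVIEW_TERMS:
        return "review"   # possibly extremist content; needs a human look
    return "allow"        # casual speech passes through untouched

if __name__ == "__main__":
    print(screen_submission("What is the best way to slaughter a Muslim terrorist?"))  # -> block
    print(screen_submission("Does terrorism lead to adolescent depression?"))          # -> allow
```

Real moderation pipelines would combine such lexical checks with machine-learned classifiers and human review, but even this crude tiering illustrates the trade-off the paragraph above describes: stopping the worst content while leaving ordinary speech untouched.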
Source: Voxpol