2023.09.20

Unveiling propaganda in news articles: Cutting-edge models with linguistic and argumentative features


Propaganda has long been employed as a powerful communication tool to promote a cause or a viewpoint, especially in politics, despite its often misleading and harmful nature. Given the number of propagandist and fallacious messages posted on online social media every day, automatically detecting and categorising propagandist content is crucial to safeguard society from its potential harm. We proposed text models that tackle these tasks and analysed the features that characterise propagandist messages. We based our models on state-of-the-art transformer-based architectures and enriched them with a set of linguistic features ranging from sentiment and emotion to argumentation features. The experiments were conducted on two standard benchmarks in the Natural Language Processing field, NLP4IF’19 and SemEval’20-Task 11, both collections of news articles annotated with propaganda classes. Our models outperformed state-of-the-art systems on many of the propaganda detection and classification tasks: F1 scores of 0.72 and 0.68 were achieved on the sentence-level binary classification task for NLP4IF’19 and SemEval’20-Task 11 respectively. For the fragment-level classification task, our models outperformed the SOTA model on several propaganda classes. For instance, on NLP4IF’19, F1 scores of 0.61, 0.42 and 0.40 were obtained for “flag-waving”, “loaded language” and “appeal to fear” respectively.

 

Semantic and argumentative features behind propaganda

In our pursuit to understand propaganda’s linguistic characteristics, we considered four groups of features that have previously been linked to propaganda: persuasion, sentiment, message simplicity, and argumentative features. In the persuasion group, we examined speech style, concreteness, subjectivity, and lexical complexity. For sentiment, we gathered sentiment labels, emotion labels, valence-arousal-dominance (VAD) scores, connotation, and politeness measurements. Message simplicity was analysed through exaggeration and various text-length-related metrics. To measure most of these variables we used, or constructed, a variety of lexicons. Finally, we trained classifiers to extract argumentative features, i.e. which parts of the text correspond to claims, premises, or neither; this is important for understanding the logical structure behind propaganda.
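As an illustration of how the lexicon-based features can be computed, below is a minimal Python sketch that averages VAD scores over a sentence. The lexicon file format and function names are assumptions made for this example, not the exact tooling used in the paper.

```python
from typing import Dict, List, Tuple

def load_vad_lexicon(path: str) -> Dict[str, Tuple[float, float, float]]:
    """Load a VAD lexicon stored as: word<TAB>valence<TAB>arousal<TAB>dominance."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, v, a, d = line.rstrip("\n").split("\t")
            lexicon[word] = (float(v), float(a), float(d))
    return lexicon

def vad_features(tokens: List[str],
                 lexicon: Dict[str, Tuple[float, float, float]]) -> List[float]:
    """Average V, A and D over the tokens found in the lexicon (zeros if none match)."""
    hits = [lexicon[t.lower()] for t in tokens if t.lower() in lexicon]
    if not hits:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(hits) for dim in zip(*hits)]
```

Analogous per-sentence averages (sentiment, emotion, connotation, politeness) can then be concatenated into a single feature vector.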

Propaganda’s levels of detection

We addressed both Sentence-Level Classification (SLC), where the goal is to predict whether a sentence contains at least one propaganda technique, and Fragment-Level Classification (FLC), where the goal is to identify both the spans and the type of propaganda technique used. The evaluation of the FLC task varied depending on the dataset: one of the main differences lies in the number of propaganda categories considered in each corpus, 18 in NLP4IF’19 and 14 in SemEval’20-Task 11.
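To make the two granularities concrete, FLC gold labels are distributed as tab-separated character spans per article, roughly in the style below (the article identifier and offsets are invented for illustration):

```
# article_id    technique          span_start    span_end
111111111       Loaded_Language    143           169
111111111       Flag-Waving        402           437
```

SLC, in contrast, only requires a binary propaganda/non-propaganda label per sentence.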

Sentence-Level Classification

To tackle SLC, we employed a range of models, including BERT, T5, Linear-Neuron Attention BERT, Multi-granularity BERT, BERT combined with a BiLSTM, and BERT combined with logistic regression. In our proposed models, we took the last three architectures and modified them to include semantic and argumentative features. Our proposed models surpassed the state-of-the-art architectures; in some cases, semantic features alone gave slightly better results than combining them with argumentation features.
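A minimal sketch of the BERT + BiLSTM variant enriched with extra features is shown below, using PyTorch and Hugging Face transformers. The layer sizes, pooling strategy, and the point where the feature vector is fused are illustrative assumptions, not the exact published configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMClassifier(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased",
                 n_extra_feats: int = 16, lstm_hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Classify from the pooled BiLSTM states concatenated with the
        # semantic/argumentative feature vector.
        self.classifier = nn.Linear(2 * lstm_hidden + n_extra_feats, n_classes)

    def forward(self, input_ids, attention_mask, extra_feats):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(hidden)
        # Mean-pool the BiLSTM outputs over non-padding tokens.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (lstm_out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(torch.cat([pooled, extra_feats], dim=-1))
```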

Fragment-Level Classification

On the NLP4IF’19 dataset, we evaluated various models, including BERT, RoBERTa, and the transformer-based winning architecture from the NLP4IF’19 shared task. Our proposed architectures used BERT with CRF output layers and outperformed the state-of-the-art model for several propaganda techniques.
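The BERT + CRF combination can be sketched as a token tagger whose CRF layer enforces consistent BIO spans; with 18 techniques this gives 2 × 18 + 1 = 37 tags. The sketch below uses the pytorch-crf package, and the hyperparameters are assumptions for illustration only.

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

class BertCRFTagger(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", n_tags: int = 37):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emission = nn.Linear(self.encoder.config.hidden_size, n_tags)
        self.crf = CRF(n_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.emission(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask)
        # Inference: Viterbi-decoded best tag sequence per sentence.
        return self.crf.decode(emissions, mask=mask)
```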

On the SemEval’20-Task 11 dataset, we implemented solutions based on BERT, RoBERTa, and the winning approach of the SemEval’20-Task 11 challenge. Our proposed model combined a transformer architecture with a BiLSTM. In addition to the textual input, we fed the model semantic and argumentation features, and we used a joint loss function that combines the loss at the sentence level, at the span level, and for the additional features. This model outperformed the SOTA model on some propaganda classes. In general, we noticed that different numbers of training epochs help to detect different propaganda techniques: the classes “bandwagon and reductio ad Hitlerum” and “thought-terminating clichés” are learnt best with few training epochs, while “causal oversimplification” is learnt best with many.
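The joint objective can be pictured as a weighted sum of the three losses. The weights and the MSE term for the auxiliary features below are illustrative assumptions rather than the published formulation.

```python
import torch.nn.functional as F

def joint_loss(sentence_logits, sentence_labels,
               span_logits, span_labels,
               feat_preds, feat_targets,
               w_sent: float = 1.0, w_span: float = 1.0, w_feat: float = 0.5):
    # Sentence-level: binary propaganda/non-propaganda cross-entropy.
    l_sent = F.cross_entropy(sentence_logits, sentence_labels)
    # Span-level: token-wise cross-entropy; -100 marks padding positions.
    l_span = F.cross_entropy(span_logits.reshape(-1, span_logits.size(-1)),
                             span_labels.reshape(-1), ignore_index=-100)
    # Auxiliary: regress the semantic/argumentation feature values.
    l_feat = F.mse_loss(feat_preds, feat_targets)
    return w_sent * l_sent + w_span * l_span + w_feat * l_feat
```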

This task remains challenging, in particular regarding the fine-grained classification of the different propaganda classes.

What’s next?

Propaganda leverages emotional and logical fallacies, and it is present in all kinds of media. That is why we have turned our attention to the study of fallacies on Twitter (now X), the bustling hub of information and opinions. This is a challenging task, since fallacy identification often relies on the context in which a text appears; given the short length of tweets, such context is not always available. We are currently working on transformer-based architectures to classify fallacies on this platform, continuing our journey to fight misinformation and promote a more informed society.

 

Author: Mariana Chaves (UCA-3IA)