003. Parler Analysis, Part III: Moral Framing: How Does It Work on Parler?
Why Analyze Morality?
In our last analysis, we probed the texts of Parler posts and found that Parler users focused on different topics when referencing external sources with different biases and levels of factuality. It is easy for anyone from a different ideology to refute their ideas, and for anyone consuming high-quality information to debunk the misinformation. However, a deeper question is more interesting: how did these ideas and opinions become valid in their heads, that is, how do they validate the ideas? To approach this question, the concept of framing (Entman, 1993) is useful in our context.
Framing, in layman’s terms, can be understood as how people foreground certain elements of an issue to make it compelling in communication (Mendelsohn et al., 2021). It is somewhat analogous to photography: you can cast light on the same object from different directions, and the resulting shades will vary depending on where the light comes from. Thus, if we can understand how Parler users chose to “cast their light” on the topics discussed on Parler, we can learn how they framed their arguments, and, further, tap into their psyche and the alt-right mentality.
On the surface level, people posted these extremely biased ideas and spread misinformation because of their political bias; on a deeper level, these deviations in political ideology can be attributed to differences in subjective judgment (Wang & Liu, 2021). That fundamental difference in how people judge issues gives rise to the flourishing of different ideologies (Graham et al., 2009). Of all the mechanisms and categorizations of human judgment, a salient and common one is moral judgment. People’s ideas of what is morally good and bad drove them toward different ideologies and grouped them differently, which in turn formed, structured, and perpetuated the digital ecology we observed online.
Deploying framing in the analysis of far-right online spaces is not new. A substantial body of research has shed light on communication patterns on far-right social media through the lenses of populism framing (Beland, 2020) and “us-vs-them” framing (Krug, 2020), among others. Moral framing, meanwhile, has been used to analyze controversial social issues and critical social movements, for example the killing of George Floyd (Priniski et al., 2021) and the Black Lives Matter movement (Rezapour et al., 2019). Converging on this middle ground, this post brings the framing analysis of far-right social media and the morality analysis of social movements together to investigate the moral framing on alt-right social media, especially around the Capitol riots.
It is hard not to talk about Moral Foundations Theory (MFT) (Graham et al., 2013) when it comes to morality; among theories of morality, it is a popular choice. Research has shown that collective action can be influenced by people’s social-psychological motivations (van Zomeren, 2013), so we can approach the collective action of the riot from the perspective of the participants’ moral judgments. MFT has five dimensions: Care, Fairness, Loyalty, Authority, and Sanctity. Each dimension is a diverging spectrum of virtue and vice; for the virtue of care, the vice is harm.
How To Analyze Morality?
Simple data preprocessing was performed on the dataset. I kept only the original posts, because reposts contain no user-generated content; they merely repeat the posts being reposted. This filtering thinned the data down to 230k posts. I then removed original posts with fewer than 10 words, since such short posts cannot convey enough information for analysis. Finally, posts containing the terms “DC/Washington DC/Capitol” were flagged. I ended up with two datasets: one with the 12.5k Capitol-related posts, and the other with the remaining 83.3k originals. The content of each post was analyzed with the extended Moral Foundations Dictionary (eMFD) (Hopp et al., 2021). The eMFD improves on the original Moral Foundations Dictionary: it was constructed from a large volume of crowd-sourced annotations and expands the lexicon for each moral foundation (Hopp et al., 2021). For every post, I obtained a probability score for each moral foundation (for example, the vice score of Care), as well as the ratio of moral to non-moral lexicon words.
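The filtering steps above can be sketched in a few lines of Python. This is a minimal illustration, not the actual pipeline: the field names (`body`, `is_repost`) are hypothetical, and the keyword check is a crude substring match.

```python
# A minimal sketch of the preprocessing described above.
# Field names ("body", "is_repost") are hypothetical; the leaked
# dataset's actual schema may differ.
CAPITOL_TERMS = ("dc", "washington dc", "capitol")

def preprocess(posts):
    """Keep originals with >= 10 words; split them by Capitol keywords."""
    capitol, other = [], []
    for post in posts:
        if post.get("is_repost"):        # drop reposts: no original content
            continue
        if len(post["body"].split()) < 10:  # too short to analyze
            continue
        text = post["body"].lower()
        # Crude substring match; word-boundary matching would be stricter.
        if any(term in text for term in CAPITOL_TERMS):
            capitol.append(post)
        else:
            other.append(post)
    return capitol, other
```

Each surviving post would then be scored with the eMFD to obtain the per-foundation probabilities and the moral-to-non-moral ratio.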
What’s Out There In the Wild Wild World of Parler
A Quick Look at the Morality Metrics
Before we try to understand moral framing on Parler and how Parler users used different aspects of morality to bolster their arguments, we need to know how prevalent moral rhetoric is. Overall, the use of moral words was common on Parler, as indicated by the ratio of moral to non-moral words (M = 1.43, SD = 1.44). A mean above one means that, on average, a post contained more moral words than non-moral words. Interestingly, though, the variance was huge: the standard deviation exceeded the mean. This might indicate that the use of moral rhetoric was not consistent across the platform; it was polarized, skewing the distribution of the values.
Figure 1 breaks down the distribution by whether the post was about the Capitol riots. For both Capitol and non-Capitol posts, the largest share of posts had an equal number of moral and non-moral words. The distribution of the non-Capitol posts (M = 1.47, Mdn = 1.04, SD = 1.5) was slightly more right-skewed than that of the Capitol posts (M = 1.17, Mdn = 1, SD = 0.96). A Mann-Whitney U test was conducted because of the unequal sample sizes and the non-normal distributions. With a p-value below 0.001, the observed difference in medians, with non-Capitol posts having a higher moral-to-non-moral ratio, was significant. In sum, moral words and moral rhetoric were popular on Parler in general, and non-Capitol posts were more morally charged.
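A Mann-Whitney U test like the one above can be run with SciPy. The data here are synthetic stand-ins; in the real analysis, the two arrays would hold the moral-to-non-moral ratios of the non-Capitol and Capitol posts.

```python
# Sketch of the Mann-Whitney U test described above, run on
# synthetic, right-skewed ratios (stand-ins for the real data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
non_capitol = rng.lognormal(mean=0.1, sigma=0.7, size=5000)
capitol = rng.lognormal(mean=0.0, sigma=0.6, size=1000)

# The U test compares ranks, so it tolerates skew and unequal n.
stat, p = mannwhitneyu(non_capitol, capitol, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3g}")
```

The rank-based test is a reasonable choice here because the ratio distributions are heavily right-skewed, which would violate the normality assumption of a t-test.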
Next, I looked into the distribution of each moral foundation. Figure 2 shows the distribution of the probability scores for all moral foundations in the Capitol posts. For all five dimensions, the vice scores were consistently higher than the virtue scores. The same pattern held for the non-Capitol posts, as shown in Figure 3.
Even though all vice foundations scored higher than their virtue counterparts, the size of the vice-virtue gap differs across dimensions. As Figure 4 shows, the most striking gap was in the care dimension, while the gap for loyalty was not nearly as large.
Also, comparing Capitol and non-Capitol posts, the Capitol posts had smaller median values across all dimensions, yet their virtue-vice gap was larger than that of the non-Capitol posts, especially for care, authority, and fairness. That is to say, the moral words in Capitol posts tended to focus on immoral, not moral, implications. Summing up these observations: when Parler users discussed the Capitol riots using moral rhetoric, they tended to weigh in on whether the riots brought care or harm, whether they subverted authority, and whether they showed loyalty or betrayal, and their opinions usually fell on the immoral side.
Combining the moral foundation scores with the moral-to-non-moral ratio, we found that heavier use of moral words correlated with higher vice probabilities. In Figure 5, we plotted each moral foundation score against the moral-to-non-moral ratio, color-coding virtue and vice. As the simple trend lines indicate, the more moral words in a post, the higher its vice scores and the lower its virtue scores tended to be. This might imply that a post with profuse moral rhetoric is most likely dwelling on the vice side of morality: basically, a tirade in which someone rants about an issue with abundant moral reasoning. However, this could also be an artifact of the dictionary: if it contains more vice words, matching a vice word is easier than matching a virtue word, amplifying the probability of the vice side.
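The direction of those trend lines can be sanity-checked with a least-squares fit. The data below are synthetic stand-ins constructed so that the vice score rises with the moral ratio; in the real analysis the inputs would be a post's eMFD vice score and its moral-to-non-moral ratio.

```python
# Quick check of a trend-line direction: regress a (synthetic)
# vice score on the moral-to-non-moral ratio.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
moral_ratio = rng.uniform(0.2, 5.0, size=2000)
# Simulate a vice probability that rises with the moral ratio.
vice_score = 0.05 * moral_ratio + rng.normal(0, 0.05, size=2000)

fit = linregress(moral_ratio, vice_score)
print(f"slope = {fit.slope:.3f}, p = {fit.pvalue:.3g}")
```

A positive, significant slope corresponds to the upward-sloping vice trend lines in Figure 5; the virtue lines would show the opposite sign.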
Next, we applied a more sophisticated analysis to see how moral rhetoric related to whether a post was about the Capitol riots. An Explainable Boosting Machine (EBM) model was built; EBMs are tree-based models with automatic interaction detection and summarized explainability (Lou et al., 2013).
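An EBM implementation is available in the InterpretML package; as a stand-in sketch using the more widely available scikit-learn, plain gradient boosting plus permutation importance illustrates the same workflow (fit on moral-foundation scores, report AUC, rank features). The feature names and data below are fabricated for illustration.

```python
# Stand-in sketch for the EBM workflow: gradient boosting plus
# permutation importance on synthetic moral-foundation scores.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 4000
features = ["authority_vice", "care_vice", "moral_ratio", "fairness_vice"]
X = rng.random((n, len(features)))
# Synthetic label: Capitol-relatedness driven by the first two columns.
logit = 3 * X[:, 0] + 2 * X[:, 1] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
print(f"AUC = {auc:.2f}")
```

Unlike this stand-in, an EBM fits one shape function per feature (plus detected pairwise interactions), so its importances come directly from the model's structure rather than from post-hoc permutation.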
Figure 6 shows the most important features in the model, which achieved an AUC of 0.73. The vice scores of authority and care, the moral-to-non-moral ratio, and both the vice and virtue of fairness were the strongest indicators for distinguishing Capitol from non-Capitol posts. Ranking in the top three, the moral-to-non-moral ratio again stood out as a distinguisher of Capitol posts, reinforcing the significance tests above. Authority and care had already been singled out as important areas of morality for these posts, and they remained influential in the model. Fairness was an important predictor of Capitol posts for both virtue and vice; the fairness scores of Capitol posts likely trend in the opposite direction from the non-Capitol ones, which is why both sides were enlisted.
Another simple logistic regression model was constructed, with Capitol-relatedness as the dependent variable and all moral foundation scores as independent variables. Table 1 shows the coefficients; all variables reached the 0.001 significance level. Capitol posts were most likely to be associated with the vice of authority (subversion) and the vice of care (harm), which obtained the largest coefficients. It is also worth noting that the vice of care and the virtue of authority had positive coefficients, implying these two dimensions are generally important for the online discussion of the Capitol riots on Parler. All virtue and vice scores of fairness and sanctity had negative coefficients, indicating that discussions of fairness and sanctity were especially unimportant. It is also interesting that the virtue of loyalty correlated positively with Capitol posts while the vice did not; this might be because patriotism and other loyalty keywords appeared significantly more often in Capitol posts.
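A regression of this shape can be sketched with scikit-learn. The data are synthetic; in the real analysis each row would be a post's eMFD probability scores, and the label would be Capitol-relatedness. The feature names are illustrative.

```python
# Sketch of the logistic regression described above, fit on
# synthetic moral-foundation scores (all data fabricated).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 3000
features = ["authority_vice", "care_vice", "loyalty_virtue", "fairness_vice"]
X = rng.random((n, len(features)))
# Synthetic outcome favoring authority_vice and care_vice.
logit = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 1.0 * X[:, 3] - 1.2
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Note that scikit-learn's `LogisticRegression` applies L2 regularization by default; for unpenalized coefficients with significance levels, as reported in Table 1, statsmodels' `Logit` is the usual choice.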
When Sentiments Meet Morality
In addition to the moral scores, we also had sentiment classifications for the original posts. The classification was based on VADER, a sentiment analyzer optimized for social media (Hutto & Gilbert, 2014). Figure 7 shows the distribution of sentiment among all originals with ten or more words: most were classified as negative, followed by positive, then neutral. This ranking held for both Capitol and non-Capitol posts. It might reflect the extremity of the platform: most posts were emotionally charged, leaving only a few neutral ones.
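VADER outputs a compound score in [-1, 1] for each text; turning that into the three-way labels used here follows the conventional cutoffs recommended by the VADER authors, which can be sketched as:

```python
# Conventional three-way labeling of VADER's compound score,
# using the +/-0.05 cutoffs recommended by the VADER authors.
def label_sentiment(compound: float) -> str:
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"
```

In the full pipeline, the compound score for each post would come from VADER's `SentimentIntensityAnalyzer` before being passed to this labeling step.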
Looking at the percentage of each sentiment within the two groups, Figure 8 shows that Capitol posts contained more negative posts and fewer positive ones than non-Capitol posts, with an equal share of neutral ones.
To further validate the relationship between sentiment and moral rhetoric, a logistic regression model was fitted on the Capitol posts only. Table 2 shows the coefficients. Among the significant independent variables, higher probabilities on the virtue of loyalty were associated with positive sentiment overall. Other influential factors included the virtues of care and authority. Lower scores on the vices of care, fairness, and sanctity were also significantly related to positivity. The results show that positive discussions of the Capitol riots eschewed talk of unholiness, harm, unfairness, or betrayal, and instead tended to use moral words associated with loyalty, authority, and care.
To close this analysis, I ran keyword extraction on posts grouped by sentiment and moral foundation scores, using the YAKE algorithm (Campos et al., 2020). For each sentiment category and each moral foundation, I subsetted the Capitol posts whose score on that foundation exceeded the median. All posts meeting the criteria were concatenated into one long string before being fed to YAKE. Surprisingly, the top keywords extracted from these subsets were largely homogeneous:
For negative Capitol posts, “trump supporters” was the most important keyword for posts with high virtue scores of care, loyalty, and authority, while “breach capitol building” topped the virtues of sanctity and fairness. “Capitol police” was the most important keyword for all vice dimensions, and “capitol building” and “capitol police” for posts with a higher-than-median moral-to-non-moral ratio.
For positive Capitol posts, “trump supporters” topped all vices and virtues of the five moral dimensions, except for fairness, where it was surpassed by “American trump supporters”.
Neutral posts differed from the positive and negative ones. For the virtues of care, loyalty, and sanctity and the vice of authority, the top keyword was “Romney boards flights”; the virtue of authority had “President Trump speech”. “Purdue votes” was the top keyword for the vices of care, fairness, and sanctity, and “Capitol building” for the vice of loyalty.
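YAKE itself is a separate library, but the subset-then-extract pipeline described above can be sketched with the standard library alone: split on the median score, concatenate the subset, and count frequent bigrams. The bigram counter is a far cruder scorer than YAKE's statistical features; it only illustrates the plumbing.

```python
# Rough stdlib stand-in for the pipeline above: subset posts whose
# foundation score exceeds the median, concatenate them, and count
# the most frequent bigrams (a much cruder scorer than YAKE).
import re
from collections import Counter
from statistics import median

def top_bigrams(posts, scores, k=3):
    """posts: list of texts; scores: one moral foundation score per post."""
    cutoff = median(scores)
    subset = [p for p, s in zip(posts, scores) if s > cutoff]
    text = " ".join(subset).lower()
    words = re.findall(r"[a-z']+", text)
    counts = Counter(zip(words, words[1:]))
    return [" ".join(bg) for bg, _ in counts.most_common(k)]
```

In the actual analysis, this function would be called once per (sentiment category, moral foundation) pair, with YAKE doing the scoring step.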
Epilogue
To wrap up, this post focused on applying moral foundations theory to the posts on Parler, with a specific interest in the posts related to the Capitol riots. The main takeaways are:
Moral framing and moral rhetoric were common on Parler, as implied by the distribution of the moral-to-non-moral word ratio. Specifically, we observed a significantly lower moral ratio for Capitol posts, indicating an intentional or unintentional avoidance of moral language in Parler users' discourse on the Capitol riots.
Vice, rather than virtue, was more prevalent among the moral words for all five moral foundations, a pattern consistent across both Capitol and non-Capitol posts. Care, authority, and loyalty were the most common moral foundations in the online discussions. These findings were bolstered by the distributions of the moral foundation probabilities.
Compared with non-Capitol posts, Capitol posts had a larger virtue/vice gap in care, authority, and fairness, as suggested by the median scores for each moral foundation. This might point to the dimensions of morality utilized most in the Parler discussion of the Capitol riots.
Capitol posts were closely associated with the vice of care and the virtues of authority and loyalty, and negatively associated with the vice of loyalty. Both sanctity and fairness were less likely to appear in Capitol posts and sat far from the moral framing. These findings were supported by the machine learning and statistical models.
The majority of posts on Parler, Capitol-related or not, carried negative sentiment, and fewer than 20% of posts were sentimentally neutral. Capitol posts had a slightly higher percentage of negative posts.
Capitol posts with positive sentiment also usually had higher scores on the virtues of loyalty, care, and authority, and lower scores on the vices of care, fairness, and sanctity. Positive discussions of the Capitol riots eschewed the morality of unholiness, harm, unfairness, or betrayal, and were more likely to talk about loyalty, authority, and care.
The most important keywords for negative and positive posts concerned Trump, Trump supporters, the Capitol building, and the Capitol police, while neutral posts mainly revolved around Romney.
With this post, I have exhausted the analyses I planned for the leaked Parler dataset. We scrutinized the sources used on Parler in terms of bias and factuality, we examined the topics that the biased and misinformation posts covered, and we explored the moral framing of the posts related to the Capitol riots. Whatever takeaways you draw from these analyses, and however much you buy these findings, I simply want to remind us all to reflect on the world we live in right now. Perhaps it makes you think about balkanization and tribalism, the collective and the individual, the public sphere and civic engagement, the liberal and the aggregative, freedom and responsibility, propaganda and misinformation, and human sovereignty, agency, and data. Peace.
References
Beland, D. (2020). Right-wing populism and the politics of insecurity: How president Trump frames migrants as collective threats. Political Studies Review, 18(2), 162–177.
Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C., & Jatowt, A. (2020). YAKE! Keyword extraction from single documents using multiple local features. Information Sciences, 509, 257–289. https://doi.org/10.1016/j.ins.2019.09.013
Entman, R. M. (1993). Framing: Toward Clarification of a Fractured Paradigm. Journal of Communication, 43(4), 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology (Vol. 47, pp. 55–130). Elsevier.
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046. https://doi.org/10.1037/a0015141
Hopp, F. R., Fisher, J. T., Cornell, D., Huskey, R., & Weber, R. (2021). The extended Moral Foundations Dictionary (eMFD): Development and applications of a crowd-sourced approach to extracting moral intuitions from text. Behavior Research Methods, 53(1), 232–246. https://doi.org/10.3758/s13428-020-01433-0
Hutto, C. J., & Gilbert, E. (2014). VADER: A parsimonious rule-based model for sentiment analysis of social media text. Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media.
Krug, A.-L. (2020). Framing of “Us-vs-Them” in right-wing communication: How tweets are used as forms of communication by a German right-wing party.
Lou, Y., Caruana, R., Gehrke, J., & Hooker, G. (2013). Accurate intelligible models with pairwise interactions. Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 623–631. https://doi.org/10.1145/2487575.2487579
Mendelsohn, J., Budak, C., & Jurgens, D. (2021). Modeling Framing in Immigration Discourse on Social Media. ArXiv Preprint ArXiv:2104.06443.
Priniski, J. H., Mokhberian, N., Harandizadeh, B., Morstatter, F., Lerman, K., Lu, H., & Brantingham, P. J. (2021). Mapping Moral Valence of Tweets Following the Killing of George Floyd. ArXiv:2104.09578 [Cs]. http://arxiv.org/abs/2104.09578
Rezapour, R., Ferronato, P., & Diesner, J. (2019). How do Moral Values Differ in Tweets on Social Movements? Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing, 347–351. https://doi.org/10.1145/3311957.3359496
van Zomeren, M. (2013). Four Core Social-Psychological Motivations to Undertake Collective Action. Social and Personality Psychology Compass, 7(6), 378–388. https://doi.org/10.1111/spc3.12031
Wang, R., & Liu, W. (2021). Moral framing and information virality in social movements: A case study of #HongKongPoliceBrutality. Communication Monographs, 1–21. https://doi.org/10.1080/03637751.2021.1918735