Chapter 7: Demagoguery, Democracy, and Digitality
by Nora Lee Heist, Misti Yang, Damien Smith Pfister, Edward Summers, Purdom Lindblad
The aftermath of the 2016 US presidential election surfaced the complexities of contemporary digitally mediated propaganda—especially in relation to teaching about demagoguery and rhetorical criticism. Traditional conceptions of demagoguery focus on a singular demagogue, and rhetorical criticism has historically been oriented toward dispelling the fallacious arguments of that demagogue. However, in Demagoguery and Democracy, Patricia Roberts-Miller argues that demagoguery is not driven by one bad actor, but flourishes “when demagoguery becomes the normal way of participating in public discourse” (2). While there are certainly still individual, recognizable, charismatic figures who leverage digital technologies in the service of “traditional” demagoguery (the ex-Twitterer-in-Chief, Donald Trump, is a prime example), contemporary digital demagoguery is orchestrated by organized collectives of anonymous actors, like Russia’s now infamous Internet Research Agency (IRA) or networked groups radicalized on platforms like 4chan and Reddit.
Roberts-Miller’s understanding of demagoguery as a cultural phenomenon helps explain how the networked public sphere allows demagogic rhetoric to flourish. Following Roberts-Miller, demagogic culture thrives when identities overwhelm arguments: “Group membership, then, can seem to serve as a kind of ‘proof’—without our noticing that our proof is actually our conclusion” (38-39). Since social media platforms are places where identities are salient means of creating community, they are potentially potent sites of demagoguery. Indeed, the IRA used identity-based appeals to create echo chambers and sow division, often appealing to variants of “if you belong in X group, you should vote in this way.” The IRA did not invent computational propaganda, but its intervention in the 2016 campaign offers a compelling case study in how the combination of algorithmically driven microtargeting and “dark posts” shape public opinion by emphasizing identities over arguments (Woolley and Howard; Jamieson).
Just as rhetoricians attended to the role of propaganda in the wake of World War II (Sproule), we felt called to analyze these propaganda advertisements from the 2016 election. However, the social media infrastructure behind microtargeting (targeting advertisements to very specific interests or demographic features) and dark posts (posts that are visible only to selected targets) makes the task of rhetorical analysis more difficult. How are we to decipher the potentially demagogic rhetoric of computational propaganda campaigns if the texts themselves are not public? In the wake of the election, most analyses of the IRA’s computational propaganda campaign looked at only a handful of the advertisements, as a larger pool of IRA ads had not been released. On May 10, 2018, Democrats on the House Intelligence Committee released over 3,500 advertisements that Facebook made available to them as part of their ongoing investigation into Russian interference in the US election. Compared to Twitter’s later release of nine million IRA tweets, these advertisements provided a manageable dataset to analyze in order to understand the rhetorical strategies employed by the IRA to influence the 2016 election.
Caption: One of the 3,500 advertisements promoted by the IRA’s campaign. IRADs image 8953.
The release of these advertisements spurred our decision to focus on demagogic rhetoric when planning for a large lecture course on rhetorical criticism at the University of Maryland in Fall 2018. Our goal was to equip students with a critical lexicon that would help them identify and resist demagogic and fascist rhetoric. Roberts-Miller’s Demagoguery and Democracy was our central text, supplemented by earlier analyses of fascist rhetoric from Kenneth Burke and other foundational works of rhetorical criticism. The central, collaborative project of the class was a large-scale rhetorical analysis of the IRA ads that had just been made publicly available by the House Intelligence Committee. Over the course of a semester, we built a publicly accessible website that hosted the IRA advertisements and students’ coding of them using tags. The site’s database allows users to click on a single tag, like “immigration” or “MAGA,” to see all the advertisements that share that tag. The dataset produced from the project allows for a more complex analysis of this multifaceted computational propaganda campaign. In this chapter’s reflection on teaching Demagoguery and Democracy in tandem with the large-scale analysis of a recent propaganda campaign, we elaborate on how we used Roberts-Miller’s text to orient our approach to building the database, some insights that were generated out of this project, and our reflections on the activity overall. We conclude with thoughts on the interface between rhetorical criticism and methods of computational analysis coming from the digital humanities.
From Concept to Database
The course description keyed into digitality’s influence on demagoguery by underlining how a “critical orientation is particularly important given our current political moment, shaped as it is by digital demagogues, assertions of ‘fake’ news, the power of bots, algorithms, and computational propaganda, and the rising tide of fascist rhetoric.” The semester was organized around different approaches to rhetorical criticism, putting each approach—from close textual analysis to algorithmic criticism—into conversation with questions about demagoguery and democracy in order to expand students’ critical repertoire. Kenneth Burke’s “Rhetoric of Hitler’s ‘Battle’” was the exemplary analysis of “classical fascism,” with Roberts-Miller’s Demagoguery and Democracy offering a more contemporary intervention.
The IRA Ads (IRADs) project was the centerpiece of the course as it bridged issues of criticism, digitality, and demagoguery. Miriam Posner explains that digital projects, like the IRADs project in the course, consist of sources that are processed and presented (“How Did They Make That?”). Thinking through the IRADs project’s sources, processing, and presentation provides a framework for understanding our plan of action. A number of factors made the Facebook ads released by the House Intelligence Committee an ideal source for conducting a large-scale distant reading of digital demagoguery with our students: (1) the number of ads, (2) the wide range of issues and themes addressed by the ads, and (3) the circulation of the ads ahead of the 2016 presidential election. Given the digital scaffolding required to facilitate the analysis, Pfister and his graduate teaching assistants, Heist and Yang, reached out to Lindblad and Summers in the Maryland Institute for Technology in the Humanities (MITH) for advice and expertise on processing the ads into a more usable format. They recommended, and later built, a database using Omeka (See Appendix 10: Resources for Launching Omeka Digital Collection), an open-source web-publishing platform that is primarily used for curating and exhibiting digital collections. The built-in tagging function of Omeka provided a good tool for coding, or processing, the ads, and because it was designed for exhibiting digital artifacts, Omeka also offered a solution for presenting the final findings in a searchable format.
Processing required more than a digital platform—it required the skills of rhetorical criticism. In order to utilize the tagging function in Omeka, our instructional posse first had to create a codebook inspired by the topoi of demagogic rhetoric that students learned in class. Created in Google Sheets and shared via Google Drive, the final codebook consisted of nearly 150 terms drawn organically from the themes of the ads, rhetoric’s critical lexicon, and Roberts-Miller’s insights (See Appendix 9: Complete IRADS Project Codebook). Because of time constraints, and the difficulties of crowdsourcing a codebook with 160 students, students were not able to create the codebook itself.1 They were, however, responsible for coding in groups of four to five over several weeks of discussion section meetings. We instructed them in the best practices of coding, including the importance of working in pairs, reviewing the codebook systematically, and checking other group members’ work.
While the results of the coding were presented as a standalone project on our Omeka site, students were also responsible for developing a final group paper analyzing the ads with the help of the aggregated findings (See Appendix 8: Computational Analysis Assignment). Our colleagues in MITH added the ability to download .csv files of search results, which became a key function for enabling students to develop greater insights. For example, they could search for ads that had co-occurring tags, download a .csv file, and then copy the ads’ text into Voyant Tools to create a word cloud. The files also provided students with details related to ad spends, targeting, and impressions (i.e., when an ad appears in a user’s feed). In analyzing the results, the students developed a fine-grained understanding of propaganda strategy and rhetorical criticism as they had to judge whether a particular advertisement fit a particular code. For example, a focus on the metaphor codes of “darkness and light,” “infestation,” and “invader” drew students’ attention to the ways verbal and visual metaphors “polarize[d] a complicated political situation into us (good) and them (some of whom are deliberately evil)” (Roberts-Miller 34). Examining keyword codes such as “freedom” and “justice” alongside “Republican” attuned students to the role of ideographs—“the basic structural elements, the building blocks, of ideology” (McGee 7)—in shaping “arguments from identity” (Roberts-Miller 49).
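The co-occurring-tag workflow described above can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the project: the column names (`tags`, `ad_text`) and the comma-separated tag format are our assumptions about what a .csv export might look like, not documented features of the Omeka site.

```python
import csv
from collections import Counter

def ads_with_tags(csv_path, required_tags):
    """Return ad rows whose comma-separated 'tags' field contains every required tag."""
    required = {t.lower() for t in required_tags}
    matches = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tags = {t.strip().lower() for t in row.get("tags", "").split(",")}
            if required <= tags:  # every required tag co-occurs on this ad
                matches.append(row)
    return matches

def word_frequencies(rows, text_field="ad_text"):
    """Rough word counts over the matched ads' text, akin to a Voyant word cloud."""
    counts = Counter()
    for row in rows:
        for word in row.get(text_field, "").split():
            word = word.strip(".,!?\"'").lower()
            if word:
                counts[word] += 1
    return counts
```

A student group could, for instance, call `ads_with_tags(path, ["illegal immigration", "invader"])` to assemble the same kind of narrowed subset described later in this chapter, then feed the resulting word counts into a visualization.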
Bridging Rhetorical Criticism and Computational Analysis
The IRADs project revealed the challenges and advantages of doing rhetorical criticism as part of a large-scale digital humanities project. For students, and arguably for rhetorical critics writ large, moving from computational data to what would be considered more “conventional” rhetorical insights is an ongoing process of negotiation. As rhetorical studies’ engagement with the digital humanities continues to grow and develop (see Ridolfo and Hart-Davidson), rhetorical critics must consider what their insights can offer computational datasets and how to make the move from “big data” to rhetorical insight. Nonetheless, the process of moving from a large dataset to a close textual analysis yielded rich interpretations of this complex computational propaganda campaign. Students developed six skills and sensibilities as a result of this project: they (1) crafted stimulating research questions using computational tools, (2) developed data-based arguments and analyzed the widespread reach of the IRA posts, (3) applied multiple methods of rhetorical criticism, (4) combined distant- and close-reading techniques, (5) recognized social media platforms as rhetorical artifacts and made arguments about their rhetorical significance, and (6) strengthened a critical sensibility towards digitally mediated spaces. We elaborate on each of these points in turn.
First, with the combined datasets of the codes and the tag matrix, students generated provocative research questions. As both Berry and Ramsay have observed (Berry; Ramsay, Reading), computational methods increase the kinds of noticing that are possible. Or, as Ramsay notes specifically, “this process of creation yields insights that are difficult to acquire otherwise” (“On Building”). The computational approach made trends across ads and codes apparent, and students’ research questions, which we quote here, reflected those insights:
“How are culture and gender constructed in the most popular ads based on impressions?” (Connolly et al. 2)
“How did the publication of the Russian propaganda create political drama and heighten tensions surrounding issues of immigration leading up to the 2016 election?” (Hart et al. 3)
“What does the use of the myth of darkness and light teach us about how [IRADs] wanted to persuade people to view existing demographics in the U.S.?” (Grant et al.)
“How do ads focused on smaller picture issues affect undecided voters compared to big picture political ads when it comes to persuading them to vote a certain way?”2 (Cooper et al. 2)
These questions demonstrate the types of observations computational methods make possible. Students came to these questions by looking at images that shared a set of tags, by identifying images with the most interactions and looking for trends in the tags, and by using other dataset-sorting processes made possible by the systematic coding of the IRA propaganda.
Second, students developed the ability to use data generated by distant reading to shape their arguments and refine their insights. For example, students made decisions about what ads to analyze based on which posts had the most impressions and clicks, commonly used words, or popular tags. Student group Grant et al.’s semiotic criticism of images portraying a clash between forces of darkness and light resulted from their discovery “that ‘darkness and light’ had been tagged 316 times making it one of the most popular tags and significantly above the overall average of a single tag use” (3). The tools of the dataset allowed these students to focus on one theme within the dataset and create a more manageable set of images for analysis.
Caption: Tags were selected based on IRA posts’ use of partisan labels, rhetorical concepts, compositional elements, and policy topics. For instance, students labeled this image with tags for economy, infrastructure, Liberal, meme, photograph, Republican, the people, Wall Street, white, and yellow. IRADs image 9342.
The coded dataset also enabled students to make insightful judgments about the reach of the IRA posts. For example, one final paper analyzed how “the IRA promoted their agenda by taking advantage of the pride and patriotism that the American people associate with the American flag” (Balzer et al. 1). Because of the coding efforts, the group was able to determine that the top ten posts that “[employed] the U.S. flag and the myth that American patriotism is under attack” had 156,555 impressions (Balzer et al. 2). This kind of granular analysis provides more empirical backing for claims about propagandistic topoi; more importantly, it activated a conversation about the kind of influence an “impression” might make. On the one hand, students recognized that an impression doesn’t necessarily persuade; on the other, they grew to appreciate the power of reiteration and repetition in propaganda campaigns. Students pondered: if just ten advertisements could secure over 100,000 impressions, and the entire dataset of 3,500 advertisements created over 37 million impressions, what kind of effect might the entire IRA campaign have had across Facebook, Twitter, Instagram, and other platforms?
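Aggregations like the impressions totals discussed above are straightforward to compute once the coded data is in tabular form. The sketch below is hypothetical rather than a record of the project's actual tooling: it assumes a .csv export with `tags` and `impressions` columns, which may not match the real export format.

```python
import csv
from collections import Counter

def impressions_by_tag(csv_path):
    """Sum impression counts per tag across the full ad export."""
    totals = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            impressions = int(row.get("impressions") or 0)
            for tag in row.get("tags", "").split(","):
                tag = tag.strip().lower()
                if tag:
                    totals[tag] += impressions  # an ad counts toward each of its tags
    return totals
```

Sorting the result (e.g., `totals.most_common(10)`) surfaces the tags whose ads accumulated the most impressions, the kind of figure students used to ground claims about a theme's reach.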
Third, not only did students identify trends across the dataset, but they also applied more traditional tools of rhetorical criticism to make sense of the images within those specific trends. The computational tools enabled students to create a coherent, organized text for analysis despite the overwhelming number of images circulated by the IRA. Typically, students engaged in the project used computational tools to narrow their analytical focus:
Thus in order to select the images for our analysis, we searched for images in the dataset that included the tags “illegal immigration,” as this was our general topic, and “invader,” as this metaphor has a blatantly negative connotation. The search resulted in a plethora of options, giving us the validation that there were, in fact, underlying methods to intensify political drama with the concept of immigration in the United States. (Hart et al. 2)
Students then applied rhetorical methods to analyze individual images within their narrowed dataset:
The first of these advertisements, published by “Stop Refugees,” uses the visual metaphor, comparing immigrants to crime and danger. The presence of this metaphor is signified through the presence of caution tape, connecting the ideas of danger and crime to immigrants. By doing so, the advertisement alludes to the need for separation to avoid the harm that immigrants and refugees are expected to bring. (Hart et al. 5-6)
In this example, students employed metaphoric criticism. In addition to metaphoric criticism, students used a variety of rhetorical methods covered over the course of the semester, including tetradic criticism, narrative criticism, mythic criticism, ideographic criticism, semiotics, and dramatism. Collectively, their rhetorical analyses of individual images supported their arguments derived from the computational analysis of the larger dataset; the meaning-making process was co-constituted by distant- and close-reading insights.
Fourth, the project made clear the benefits of pairing close reading with distant reading (Coles and Lein 3). As opposed to making these critical approaches an either/or, students drew insights from trends the computational tools made clear and employed rhetorical sensibilities to make more negotiated judgments about the propaganda work at play. This both/and perspective equipped students with the ability to make better sense of the digital landscapes they navigate and the demagogic rhetoric they encounter. We see this project as one model for incorporating rhetorical and computational approaches in the classroom, bringing these two skillsets together to explore the workings of computational propaganda and demagogic rhetoric in digital media ecologies.
As rhetorical criticism classrooms adapt to consider the digital spaces that students encounter every day, leaning into the tension between Leff and Sachs’ traditional conception of close textual analysis and Ullyot’s defense of distant reading can provide a generative middle ground that plays with these two modes of criticism. Students must know and understand how to engage both rhetorical stances if they are to navigate twenty-first-century rhetorical problems. The IRADs project demonstrated how processes of distant and close reading are implicated with one another. Teaching students to see these approaches as mutually exclusive would have been a disservice, failing to equip them with the skills and sensibilities needed to manage the information abundance they deal with in everyday digital spaces. Helping students make sense of a large dataset using both distant- and close-reading techniques provided them with a rhetorical sensibility for recognizing and critiquing strategies of digital demagoguery.
Fifth, the project helped students see social media platforms as rhetorical artifacts, grasping the centrality of digital platforms to this propaganda campaign. Students accounted for the impact of specific content as well as the rhetorical influence of Facebook’s algorithms and policies. By taking Facebook ads seriously as a site of rhetorical influence, students were able to more fully theorize the implications of targeted ads. Students recognized that “specific posts and ads were carefully designed and delivered to those people who already believed in or had the tendency of believing in the idea conveyed in these posts and ads” (Keating, Schaul, and Shapiro) and that targeting was often based on identities in order “to further polarize the opinions between U.S. citizens” (Connolly et al. 1). This conclusion reflects how the digital insights of the project amplified Roberts-Miller’s argument that demagoguery flourishes when identities trump argument. This connection was mirrored in students’ arguments that Facebook played an important role in the IRA’s campaign for disinformation because in “today’s digital world, individuals will accept information without any validation” (Balzer et al. 8). Another group suggested that the “ads were posted on Facebook as a strategic propaganda tool” because the “simple click of a button results in a domino effect” in a culture where “the trend is to follow what others do” (Cooper et al. 8).
Sixth, the IRADs project pushed students to attune themselves rhetorically, bringing a critical approach to mediated spaces. The project asked students to reveal the tactics of the IRA and uncover the strategies of an online propaganda campaign that sought to undermine US democracy. Students came to important conclusions about the consequences of subtle techniques employed by the IRA ads:
“Based on extensive analysis and use of both computational and semiotic criticism, we can conclude that IRADs perceived that there were underlying issues of racial tensions in the U.S. that could be exploited in order to manipulate political views. In recognition of current racial issues, IRADs attempted to uproot and magnify these issues. IRADs did this by rehashing historical events and hardships that involved race and depicting them in such a way that they relayed polarizing sides of good and evil” (Grant et al. 9-10).
“Through both myths and elements of demagoguery, it becomes apparent how serious the messaging through the IRA posts may be in American culture. Through the myth that “patriotism must be defended”, these posts encouraged the U.S. to use power against those threatening patriotism. This myth also worked to target specific groups of people … and frame them as a threat…. This, in turn, means that there are many people who either agree in the alienation of groups or were persuaded through these posts to believe that the U.S. would be better without these groups. This reveals not only that Americans are easily persuaded, but that this persuasion has the power to drive hatred” (Balzer et al. 10).
Through the trends in the coded dataset and close textual analysis of specific images, students made clear how the public was manipulated online in the lead-up to the 2016 election. And as Roberts-Miller asserts in her book, the process of countering and unmasking misinformation is a meaningful mode of combatting demagoguery (104). Ultimately, if we want the public to understand its own manipulation, we need students to be able to critically interrogate the posts and information they encounter online.
Lessons Learned
Throughout the project, we discovered and managed the challenges of facilitating the project's logistics and of helping students make connections for rich analyses. Logistically, the messiest element of the project was harmonizing the tags. We gave students time to carefully code the images during a series of in-class discussion sections, but the Omeka interface did not account for basic human error. If a student accidentally misspelled a word, added an extra space between words in the tag, forgot to add a comma between their tags, or made some other formatting error, they would generate a faulty tag. For example, incorrect tags included six different versions of “U.S. Flag,” more than one keyword, and common misspellings of words like “millennial.” Once faulty tags were created, the autocomplete feature of the site would cause other groups to add their image to the incorrect tag. To fix these errors, an individual with administrative status in Omeka had to go through and delete the incorrect tag and subsequently add the correct one—a process that required correcting the tags one image at a time. Students were able to help identify incorrect tags through the process of checking their peers’ tags, but ultimately the labor of editing these images’ codes fell to the instructional team.
Caption: One of many images tagged “U.S. Flag” in the IRADs database. IRADs image 7776.
To address this issue, we encouraged students to copy the tags directly from the codebook. We also provided them with a comprehensive list of the incorrect tags as a backend solution for identifying images whose tags needed to be edited. Individuals looking to implement a large-scale coding project in their course will want to anticipate these challenges, particularly in terms of knowing the affordances and constraints of the coding system they select. We did not expect the technical challenges we encountered because Omeka was a new system for the instructional team. Notably, we had to complete the work to reconcile the tags before our colleagues at MITH could sort the dataset. The process of coding the images, and subsequently checking and cleaning the tags, took more time than we originally anticipated. As a result, students had less time to work with the cleaned and sorted dataset generated through MITH’s tools for processing the tags. While we initially anticipated showing students a series of trends in the dataset, the timing required students to explore the dataset on their own.
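Much of this tag cleanup could, in principle, be partially automated before the data reaches the exhibit. The sketch below is a hypothetical illustration, not part of the project's actual workflow: it collapses stray whitespace and uses fuzzy matching to map near-miss tags (such as “millenial”) back onto an official codebook term, flagging anything unresolvable for manual review. The `CODEBOOK` list here is a stand-in for the project's roughly 150-term codebook.

```python
import re
from difflib import get_close_matches

# Stand-in for the full ~150-term codebook
CODEBOOK = ["U.S. Flag", "millennial", "immigration", "darkness and light"]

def normalize_tag(raw, codebook=CODEBOOK, cutoff=0.8):
    """Map a student-entered tag onto its official codebook term, or None for manual review."""
    cleaned = re.sub(r"\s+", " ", raw).strip()  # collapse extra spaces
    # Exact match, ignoring case
    for term in codebook:
        if cleaned.lower() == term.lower():
            return term
    # Fuzzy match to catch misspellings like 'millenial'
    match = get_close_matches(cleaned.lower(),
                              [t.lower() for t in codebook],
                              n=1, cutoff=cutoff)
    if match:
        for term in codebook:
            if term.lower() == match[0]:
                return term
    return None  # unresolvable: flag for a human coder
```

Running every submitted tag through a normalizer like this, before Omeka's autocomplete can propagate a faulty variant, would reduce (though not eliminate) the one-image-at-a-time corrections described above.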
The length of time it took to code the images also limited the instructional time spent helping students think about how to make the transition from the insights of the dataset to judgments made possible by a rhetorical sensibility. While the students asked insightful questions that yielded meaningful discussions, their criticism did not always produce the rich analyses we hoped for. Pedagogically, we could have spent more time modeling the kinds of criticism we hoped to see. Crafting exemplar thesis statements based on trends in the dataset could have helped students navigate the move from computational insights to a rhetorical orientation. Of course, now that we have a model database, dataset, and student papers, future students on allied projects will, in the spirit of imitatio, be able to draw on their predecessors’ work.
The Daunting Challenge of Digital Demagoguery
This project revealed possibilities for providing publics with the sensibility to critically evaluate digital demagoguery alongside the intensification of computational propaganda. However, we were left questioning how individuals could possibly be expected to notice, let alone rebut, the extent of digital demagoguery without the benefit of large-scale efforts like ours. Throughout this project, we were steeped in the messiness of sorting through the informational abundance of a targeted propaganda campaign. Even with a system for processing this information and 160 students working together to make sense of these ads, there is still much about the IRA campaign left unexamined. For individual people using digital platforms, the level of critical scrutiny that would be required to detect and interrogate all of the different propaganda efforts underway would be seemingly impossible—an unpropitious takeaway from this project. However, this takeaway also speaks to the importance of engaging in these types of projects in the classroom. If the lone Facebook user cannot make sense of all the propaganda they encounter, our classrooms have a responsibility to help students navigate the challenges that live between campus and planet. As Rebecca Alt and Rosa Eberly explain, a revised paideia appropriate to contemporary challenges like digital demagoguery and computational propaganda requires that students be able to “identify, explain, collaboratively discuss, and evaluate the communicative processes (including the material, digital, technological, and affective) that construct knowledge and shape social and political action” (100). This project allowed students to contribute to a public good and a wider conversation about the threats and manipulation we face in digital spaces.
This project—and the lessons we’ve learned—speak to the ongoing salience of rhetorical criticism of fascist rhetoric. Even as the medium changes, many of the same basic tropes and rhetorical strategies are at play in digital propaganda. As we have demonstrated with this project, large datasets (which are common when dealing with digital platforms) need human perspective at some level in order to organize abundant information (e.g., by identifying themes or negotiating ambiguities across tags). To make sense of massive campaigns like the IRA’s computational propaganda, students needed not only digital tools and coding processes for sorting data, but also the sensibilities of a rhetorical critic. As students put these two skills into conversation, they used the dataset to identify trends in the IRA campaign while employing close textual analysis to interpret the content those trends highlighted. Collaboration—a core value of the digital humanities (Spiro)—aligns with Roberts-Miller’s call for cultivating a culture of democracy (93). Because developing the database and coding the advertisements encouraged metareflection about propagandistic messaging, our project facilitated a microculture of democratic sensibilities that Roberts-Miller posits as necessary for resisting demagoguery.
As a final lesson, this project exhibits ways of moving the microculture of democracy from the classroom to wider sites of debate and deliberation. Situating the database’s creation as a public humanities project modeled how pedagogical activities can migrate to public venues. Creating a tool for public use helped heighten the stakes of the work students were doing in the classroom, encouraging more care and attention in the coding process. More importantly, it helped those involved see the work of rhetorical criticism as unbound by classroom doors. Public humanities work oriented them to the ways in which the sensibilities of a rhetorical critic can and should be taken out into the world. Empowering students to see themselves as critics of the rhetoric they encounter in everyday digital ecologies is an essential component of what Roberts-Miller advises for a culture that combats demagoguery. At each stage of the project, students were required to collaboratively grapple with the challenges of interpreting, categorizing, and analyzing their images—all with a sense of how a wider public might engage, scrutinize, or misapply their work. If, as Roberts-Miller argues, “democracy is about disagreement, uncertainty, complexity and making mistakes,” then our project modeled the kind of democratic processes necessary for resisting digital demagoguery (128).
Thus, we call for courses in rhetorical criticism to embrace public digital humanities work as an orientation to the fundamental communication challenges that lie ahead. Digital environments will only get more rhetorically complicated with the advent of deepfakes, artificial intelligence, and virtual reality, and the rhetorical criticism classroom has the capability to instill the skills and sensibilities students need to make sense of these complicated digital ecologies. Equipping our students to manage overabundant digital information is a vital educational outcome for a contemporary rhetorical criticism course. Classrooms that stay committed to the traditional bounds of rhetorical criticism cannot fully prepare students for the online environments in which they exist. Moreover, they ultimately fall short of showing students what rhetorical criticism can accomplish in contemporary public life. In the end, our project makes clear how demagoguery thrives in online spaces and why a combination of distant- and close-reading skills can help students navigate those spaces. It is our responsibility as educators and critics to help students develop the sensibilities they need to identify and stifle the circulation of demagogic discourses online.
Acknowledgement:
The authors would like to thank the students in COMM 401 for their patience as we collaboratively took on this project and for the critical insights they shared in their analyses.
Works Cited
Alt, Rebecca A., and Rosa A. Eberly. “Between Campus and Planet: Toward a Posthumanist Paideia.” Review of Communication, vol. 19, no. 2, 2019, pp. 94-110.
Balzer, Julia, Julia Carey, Albert Isenhour, and Nisha Seebachan. “The Persuasion of Patriotism.” 10 Dec. 2018. Communication 401, University of Maryland, College Park, student paper.
Berry, David M. “Introduction: Understanding the Digital Humanities.” Understanding Digital Humanities, edited by David M. Berry, Palgrave Macmillan, 2012, pp. 1-20.
Burke, Kenneth. “The Rhetoric of Hitler’s ‘Battle.’” Philosophy of Literary Form, Vintage Books, 1957, pp. 164-89.
Coles, Katherine, and Julie Gonnering Lein. “Solitary Mind, Collaborative Mind: Close Reading and Interdisciplinary Research.” Digital Humanities 2013: Conference Abstracts, Center for Digital Research in the Humanities, 2013, pp. 150-153.
Connolly, Olivia, Jennifer Serarols, Dongyan Tan, and Carly Tomes. “Computational Analysis: Culture and Gender in Facebook Propaganda Ads.” 10 Dec. 2018. Communication 401, University of Maryland, College Park, student paper.
Cooper, Renee, Carly Roesen, Madison Roshkoff, and Lindsay Shapiro. “2016 Election Swayed by Voters’ Emotions Linked to Political Issues.” 10 Dec. 2018. Communication 401, University of Maryland, College Park, student paper.
“For the first time in American history.” Internet Research Agency Ads, https://archive.mith.umd.edu/irads/items/show/7776.html. Accessed 10 Feb. 2021.
Grant, Mikaylah, Emily Hilliard, Allison Regan, and Sichen Yang. “Computational Analysis.” 10 Dec. 2018. Communication 401, University of Maryland, College Park, student paper.
Hart, Kristina, Zoë May, Regan Shanahan, and Mary-Kate Shirley. “Platform Analysis of Russian Propaganda.” 10 Dec. 2018. Communication 401, University of Maryland, College Park, student paper.
Hayles, N. Katherine. “How We Think: Transforming Power and Digital Technologies.” Understanding Digital Humanities, edited by David M. Berry, Palgrave Macmillan, 2012, pp. 42-66.
“In spite of the prevailing stereotypes.” Internet Research Agency Ads, https://mith.umd.edu/irads/items/show/9342.html. Accessed 10 Feb. 2021.
Jamieson, Kathleen Hall. Cyberwar: How Russian Hackers and Trolls Helped Elect a President. Oxford UP, 2018.
Leff, Michael, and Andrew Sachs. “Words the Most like Things: Iconicity and the Rhetorical Text.” Western Journal of Speech Communication, vol. 54, no. 3, 1990, pp. 252-73.
Lindblad, Purdom, Nora Murphy, Damien Smith Pfister, Meridith Styer, Ed Summers, and Misti Yang. “Internet Research Agency Ads Dataset.” Data file, https://mith.umd.edu/irads/data.zip. Accessed 10 Feb. 2021.
McGee, Michael Calvin. “The ‘Ideograph’: A Link Between Rhetoric and Ideology.” Quarterly Journal of Speech, vol. 66, no. 1, 1980, pp. 1-16.
Posner, Miriam. “How Did They Make That? Reverse-Engineering Digital Projects.” Internet Archive, 14 Apr. 2014, https://archive.org/details/howdidtheymakethat. Accessed 15 Dec. 2020.
Ramsay, Stephen. “On Building.” Stephen Ramsay’s Blog, 11 Jan. 2011, https://web.archive.org/web/20111016144401/http:/lenz.unl.edu/papers/2011/01/11/on-building.html. Accessed 15 Dec. 2020.
---. Reading Machines: Toward an Algorithmic Criticism. U of Illinois P, 2011.
Ridolfo, Jim, and William Hart-Davidson, editors. Rhetoric and the Digital Humanities. U of Chicago P, 2014.
Roberts-Miller, Patricia. Demagoguery and Democracy. The Experiment, 2017.
Spiro, Lisa. “‘This Is Why We Fight’: Defining the Values of the Digital Humanities.” Debates in the Digital Humanities, edited by Matthew K. Gold, U of Minnesota P, 2012, dhdebates.gc.cuny.edu/debates/text/13. Accessed 15 Dec. 2020.
Sproule, J. Michael. Propaganda and Democracy: The American Experience of Media and Mass Persuasion. Cambridge UP, 1997.
Ullyot, Michael. “TEI for Close-Readings.” Michael Ullyot, 10 Nov. 2017, http://ullyot.ucalgaryblogs.ca/2017/11/10/tei2017/. Accessed 15 Dec. 2020.
“United We Stand!” Internet Research Agency Ads, https://archive.mith.umd.edu/irads/items/show/8953.html. Accessed 10 Feb. 2021.
Woolley, Samuel C., and Philip N. Howard. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford UP, 2019.
Notes:
Based on conversations with colleagues, we decided to create the codebook ourselves rather than have students generate the codes, because that process requires more knowledge about qualitative coding than the students had. Students were nonetheless able to refine and contribute to the codebook by offering suggestions after reviewing the completed set of codes. ↩
Students made the distinction between small- and big-picture issues based on a poll about the top voting issues in the 2016 election. ↩