RECOMMENDED: What I Read This Summer: Caddie Alford

This is a guest post with Dr. Caddie Alford, who summarizes several readings that she encountered over the summer that explore current realities of techno-pessimism and technofascism in digital rhetoric studies. We hope you’ll appreciate her thoughtful insights!

– The dh+lib Review Editors

 

What I Read This Summer: Caddie Alford

I’m Associate Professor of Rhetoric and Writing at Virginia Commonwealth University, where I co-direct the Critical AI Futures Lab with Dr. Jennifer Rhee. I specialize in digital studies, rhetorical theory, and critical AI. I’m the author of Entitled Opinions: Doxa After Digitality (U of Alabama P, 2024), and I’m currently co-editing a collection on “post-truth” rhetorics. I’m in the early stages of conceptualizing a book project that remediates the 5Vs of big data for parsing how digitality affects information.

This particular chunk of summer reading was meant to animate my 5Vs project, but I’ve also been motivated to address my increasingly intense feelings of techno-pessimism. My research inquiries can broadly be described as inquiries into the power that Big Tech wields and how that power disenfranchises the most vulnerable, which is to say that I’ve been doing this work long enough that the shift to technofascism in the US felt right on schedule. Similarly, it’s made a sad sort of sense that “AI” companies are enacting Big Tech platformization 2.0, with all the attendant harms that only seem to get worse over time. I got a revise and resubmit back from a journal last semester, and one of the readers said in so many words that the article was a polemic. They were right. I’ve been doomscrolling as a hermeneutics.

Below are a diverse range of texts that have encouraged me to (doom)scroll otherwise. In fact, even before reading the DISCO author collective’s book Technoskepticism, I think I was reaching toward exactly what they were: a position vis-à-vis digitality that “mediates between the two poles of optimism and refusal” (8)—a position that “makes space for ambivalence, for the paradoxical cohabitation of joy and doubt, curiosity and caution” (8). Significantly, technoskepticism is built “as an ethic of care” (185), which centers the kind of ongoing, community-forward reparative approaches that digitality calls for right now.

 

Adelman, David et al. Technoskepticism: Between Possibility and Refusal. Stanford: Stanford UP, 2025.

Technoskepticism was published by a collective of fourteen authors—the Digital Inquiry Speculation Collaboration Optimism Network (DISCO)—composed of technologists, researchers, practitioners, artists, and policymakers. dh+lib readers will recognize names like Lisa Nakamura and André Brock, and as a rhetorical studies scholar I was excited to see that M. Remi Yergeau and Catherine Knight Steele were involved. The authors’ expertise ranges from disability studies to critical AI, the history of computing, critical race studies; the list goes on. This powerhouse of multivocality is what makes their framework of technoskepticism so salient for the current moment. Right when it feels like the book’s statements about digital nostalgia for pre-Web 2.0 or wellness begin to settle into claims, another voice takes the wheel and we’re on to thinking about Asian racial identity or the exigence of received eugenicist rhetorics. You don’t land often or easily in this book, but that’s what I wanted: an invitation to dwell in the intersectional complexities of digitality.

I was particularly struck by the “Desiring Diagnosis” and “Blackness and AI” chapters. “Desiring Diagnosis” models technoskepticism as a method, taking what could easily be (and often is!) judged or even moralized—self-diagnosis, especially through the lens of platform technics—and affirming its potential “as a nourishing refusal for the digital clinic” (28). After all, as the authors assert, “diagnosis isn’t permanent” (37)—diagnosis could be considered a temporary space for healing and growth. But this is technoskepticism, so even when we have fresh and inspiring modes of thinking, they are tempered with stark realities, including the awareness that “psychiatric diagnosis and big data mining share an epistemological orientation” that eschews “a search for ground truth in favor of reliable correlations” (19). That commitment to desire and belonging, on the one hand, and to indignities and suspicion, on the other, also runs through the “Blackness and AI” chapter, with its equally complex recognition that Black technology receptivity can lead to joy even as the machinery is an expression of colonial white supremacy. At the end, though, “Blackness and AI” offers refusal as a necessary response to the rhetorics of inevitability and hype cycles that package AI technologies, thereby joining a growing chorus of science and technology studies and critical humanities scholars who know too much about the history of how, say, conversational AIs reproduce white care (Rhee) or how algorithmic bias was always a feature (Hicks; Noble). In a fiery moment, the authors write, “When considering the possibilities for Blackness and AI, Afro-skepticism begins from refusal, arguing that neither the context nor the moment demonstrates a need for a technical solution to long-standing social inequities” (152).
I felt this sentence deep in my bones: finally, a firm and informed articulation that what we are told about these technologies has no real salience—no real problem that they’re solving so much as exacerbating. 

 

Rhetoric of/with AI, special issue of Rhetoric Society Quarterly, edited by S. Scott Graham and Zoltan P. Majdik, vol. 54, no. 3, 2024.

The “no real salience” point is a great transition to this next text—the Rhetoric of/with AI special issue of Rhetoric Society Quarterly, edited by S. Scott Graham and Zoltan P. Majdik—because if there are any scholars who know how to critique hype, it’s rhetoricians. Rhetorical studies—an undervalued branch of the humanities—has a special purchase on the question of AI and its technosocial interventions, replications, and inheritances. As Atilla Hallsby confirms, “it is because certain features of AI are so evidently rhetorical that rhetoricians are well-equipped to speak to its many exigencies” (233). Rhetoric has long been an audience-centered study, practice, and art of identifying the available means and contingencies of persuasion and identification in each situation, so our body of technical vocabulary and scholarship has been built over millennia on the tensions and overlaps between “truth” and opinion, artifice and authenticity, invention and instruction, and so on. The widespread AI hype and the subsequent (enforced) adoptions, buy-ins, fantasies, and effects have led to a strange moment for rhetoric, one in which, as a field, we’re “faced with technologies that model our central object of study—rhetoric—in ways that are simultaneously recognizable yet unfamiliar, potentially productive yet also deeply problematic” (229), and we know that we’re faced with these models precisely because of the power of rhetoric. I’m thinking about all the ad copy since 2022 urging us to “harness the power of AI.” Power was getting harnessed, alright.

The Hallsby article is incredibly smart and I would also recommend Kem-Laurin Lubin and Randy Allen Harris’ “Sex After Technology: The Rhetoric of Health Monitoring Apps and the Reversal of Roe v. Wade,” which interrogates what they call “algorithmic ethopoeia,” or “the mathematizing of human data for the digital representation of people in subjugation to algorithmic processes” (249)—in this case, through women’s health data—toward surveillance, control, and alignment with Christo-fascist misogyny. 

While I appreciate the editors’ emphasis on how computational methods can address larger-scale research inquiries, I personally don’t agree that academic disciplines must “advance through robust dissoi logoi” (229), exploring all dominant options that this technology presents. I don’t think it’s necessary or ethical to encourage integrating AI technologies alongside critiquing them. With this technology, a dissoi logoi approach is at odds with what rhetoric and rhetorical methods communicate about AI and demand from its practitioners. The editors admit that “AI represents a clear and present danger to society,” but in the same breath they write, “yet it may also catalyze exciting new ways of conducting research” (225). I’m reminded of art professor Sonja Drimmer’s scathing point: “Postmortems of biases and errors, recognition of the limitations and misalignment of CVs’ capabilities with matters of relevance to current art-historical research and teaching raise the very question that few people seem to be asking: What good is this? Or, more importantly, for whom is it good? Cui bono?” (“Machine”). That question doesn’t cozy up to “exciting new ways of conducting research.”

At this juncture, the refusal orientation that Technoskepticism extends feels like the only approach that my rhetorical training and years of examining Big Tech have prepared me to take with both earnestness and integrity. In Hallsby’s words, “studying rhetoric is like gaining a glimpse at the source code, helping us to understand what AI can, cannot, and must not do, even as it promises to make our lives better, easier, and less burdened by mechanical tasks” (242). Refusal has been generative for other scholars at the intersection of rhetoric and writing studies. Contrary to what some academics on LinkedIn would have you think, refusal is an incredibly productive place for research, teaching, and information sciences. There’s more than enough humanities inquiry to go around with secondhand outputs and studies, subreddit discourse communities, grounded theory collection from tech workers, affordance theory and AI-generated content, and so on. You don’t have to contribute to the noise or the metrics or the waste or the violence to critique and make meaning out of the noise, the metrics, the waste, and the violence. Remember, only companies like NVIDIA are benefiting here—only people like Peter Thiel want this “future.”

 

Wynn-Williams, Sarah. Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism. New York: Flatiron, 2025.

So if the goal was to stay with technoskepticism to avoid straight techno-pessimism, it might not make sense why I read Careless People, Sarah Wynn-Williams’ memoir of her time as Director of Public Policy at Facebook. It’s not as if evidence about the evils of Meta is hard to come by, from whistleblowers like Frances Haugen and Sophie Zhang to journalists’ accounts like Max Fisher’s The Chaos Machine (2022). Admittedly, the fact that Meta obtained an arbitrator’s ruling that prohibited Wynn-Williams from promoting the book herself piqued my interest. And, indeed, the book includes wild findings that range from explosive documents related to Facebook’s project to get into China, including detailed technical explanations—the likes of which were “things Facebook has said are simply impossible when Congress and its own government have asked” (313)—to predictable character profiles, such as when Zuckerberg asks if it’s a bad thing to be compared to “a modern-day William Randolph Hearst” (286) or when he gushes about his favorite president, Andrew Jackson (142). Overall, though, one consensus on tech memoirs is that they don’t “help us dismantle the Silicon Valley system” because they fail “to provide alternatives to the stagnant monopoly model driven by venture capital” (Lovink 20). There is definitely class solidarity on display in this memoir: more than a few times, Wynn-Williams justifies sticking with the company for the health insurance or her family. I’m a Jennifer Egan fan, so let’s just say these moments felt a little “Selling the General,” the story wherein a publicist takes a gig rebuilding the reputation of a genocidal dictator all so that her daughter can attend private school. And then it felt a lot like “Selling the General” when we get to the end and Wynn-Williams tells us she’s been drawn to AI and is now making a living working on AI policy (378).

Tech memoirs often include the motif of falling, hard, from the myths of techno-optimism, which taken together can reveal slight changes over time to internet and tech industry imaginaries, but those moments have almost become generic and are often the most simplistic attempts at resistance. As Tamara Kneese’s “Our Silicon Valley, Ourselves” echoes, it remains a question whether the “hidden stories of Silicon Valley” can “translate into changing collective class consciousness”—“Is an awareness of mutual fuckedness enough to form a coalition?” These falls from optimism more often than not operationalize lifting the veil to appeal to a broad readership. 

And still, these memoirs, particularly by women, do interject fascinating complexities that are in line with technoskepticism. These subtle choices and interjections throw the tech world into relief by presenting fuller depictions of humanity. Kneese writes, for example, that in “femme accounts of life in code, embodiment is inescapable.” Wynn-Williams’ visceral accounts, therefore, of being in places like Colombia and Turkey and desperately needing to pump breastmilk, or of the sexual harassment she endured from top executive Joel Kaplan, provide salient demonstrations that bodies cannot hew to tech’s ideology of immateriality, especially those bodies that were never meant to be incorporated in the first place.

 

Asparouhova, Nadia. Antimemetics: Why Some Ideas Resist Spreading. The Dark Forest Collective, 2025.

Hang tight: I would not necessarily say this is a recommendation.

Antimemetics promises to explore the circulation economy of ideas, especially as they spread online. In particular, Asparouhova is interested in “antimemetics,” or the economy and logics of ideas that resist being remembered, captured, or embraced. She maps the dynamic between more contagious ideas and quiet, untimely, perhaps suppressed ideas onto a retelling of the now-canonical distinction between the public adtech web and the “cozyweb,” Venkatesh Rao’s extension of Yancey Strickler’s “Dark Forest Theory of the Internet.” That theory has grown in application since its publication, partly because social media users are migrating more and more to private chats, niche hobbyist communities, and the fediverse. The theory and its spinoffs have also grown intentionally: Yancey Strickler and six others founded Metalabel in 2021 as a collaborative publishing hub, and in 2024 they published The Dark Forest Anthology of the Internet—a superb collection organized around the shift in what we want from the internet, from honing status to “seeking safety and context.” It gets even more complicated from there with Kristoffer Tjalve and the Naïve Weekly crowd, who believe in and practice a poetic phonebook of websites.

I had high hopes for Antimemetics. I’ve been loosely tracing platform migration patterns and user-generated interventions and demands, with an eye toward how they shape information circulation and scales of acceptability, so this book seemed right up my alley. I also wanted to reference it to nudge dh+lib readers interested in the internet and information to take seriously intermediary spaces of meaning-making, whether that’s independent publishing collectives and platforms or smart public cultural critics of internet and technology trends (I’m thinking of Aidan Walker or artists like Anna Zhang). I made note of some intriguing phrasing and potentially useful citations. I appreciated that toward the end she reminds us of our agency in how ideas circulate, offering a praxis of roles like idea “champions” and “truth-tellers.”

Ultimately, though, the book was not as generative as I hoped it would be. Or, I should say, it was generative all right, just in a completely unexpected way. Kevin Munger pointed out that “the value of Asparouhova’s thesis would be enhanced by engaging with media theory. This is generally true for the Rationalist/Bay Area/Tech writing scene of which she is a part. I engage with this scene a lot, I find them useful on many topics—but I find them extremely unappreciative of medium-is-the-message style arguments. Although self-critical to a fault, they are all too happy to retain the mistaken belief that information is virtual rather than physical.” My friend and colleague Collin Brooke was similarly surprised that twenty-five years of internet culture goes uncited in this book, referencing—at the very least—Richard Brodie’s Virus of the Mind (1996), Susan Blackmore’s The Meme Machine (1999), and Aaron Lynch’s Thought Contagion (1996). In the acknowledgments, Asparouhova shares that she didn’t tell many people about the book. And yet, in theorizing idea filtering, she claims that “group chats are a place to build trust with likeminded people, who eventually amplify each other’s ideas in public settings. Memetic and antimemetic cities depend on each other: the stronger memes become, the more we need private spaces to refine them” (46). The book feels, in a word, isolated.

I want to linger on Munger’s claim that this writer is part of the “Rationalist” writing scene. The main example Asparouhova gives of an antimeme is Curtis Yarvin, the founder of the neoreactionary “Dark Enlightenment” movement. At least three other citations are people connected to the TESCREAL bundle, which is Timnit Gebru and Émile P. Torres’ acronym for the collection of ideologies motivating the pursuit of AGI—ideologies that draw from and extend eugenics argumentation (some are invested in AI hybridity to optimize human “stock,” and others argue that humanitarian goals in the present, like disease prevention, are much less important than funding so-called “AI alignment”). Asparouhova cites the philosopher Nick Bostrom, whose incredibly racist email from the 90s is publicly available, and Eliezer Yudkowsky, once a proponent of singularitarianism and the person most responsible for the contemporary rationalist movement. The book ends with Robert Moses, “though his legacy is controversial” (158). These are not, in fact, the people who are saying what we’re all thinking before we’re ready to hear it; they’re not saying what I’m thinking, that’s for sure. These are not the best citations to tell the rich story of antimemetics. More than that, these citations all lit the way to a twisted network of Peter Thiel, Sam Altman, Curtis Yarvin, JD Vance… I could go on. And so a book supposedly about antimemetics turned into, for me, a really dark week spiraling into the sycophantic “LessWrong” blog and texts about the anti-democratic neoreactionaries, or the “new right.” In 2014, Thiel wrote to Yarvin: “one of our hidden advantages is that these people”—progressives—“wouldn’t believe in a conspiracy if it hit them over the head (this is perhaps the best measure of the decline of the Left). Linkages make them sound really crazy” (see Kofman, Ava, “Curtis Yarvin’s Plot Against America,” The New Yorker, June 2, 2025).

I’ll leave it at that.

dh+lib Review

This post was produced through a cooperation between the author, Caddie Alford, Nickoal Eichmann-Kalwara and Pamella Lach (Editors for the week), Ruth Carpenter, Caitlin Christian-Lamb, Molly McGuire, Christine Salek, and Rachel Starry (dh+lib Review Editors), and Tom Lee (Technical Editor).