Description
This study analyzes public discourse on conspiracy theories related to artificial intelligence (AI) by examining four Reddit threads from the r/conspiracy subreddit, posted between mid-2023 and mid-2024. The dataset comprises over 200 user comments discussing the role of AI in surveillance, media manipulation, labor displacement, and political influence. We apply a generative thematic analysis, using a large language model (LLM) to identify recurring themes, with human oversight to ensure accuracy and interpretive validity. Our approach combines the scalability of automated analysis with researchers' close reading and thematic refinement.
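As a rough illustration of how such a hybrid generative-human pipeline might be wired together (this is a minimal sketch, not the authors' implementation), the snippet below uses the OpenAI Python client to propose candidate theme labels for each comment and flags every record for subsequent human review. The model name, prompt wording, theme list, and sample data are all illustrative placeholders.

```python
# Hypothetical sketch of an LLM-assisted first coding pass with human review.
# Model name, prompt wording, and input data are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are assisting with thematic analysis of Reddit comments about AI.\n"
    "Assign one or more themes (e.g., surveillance, media manipulation, "
    "labor displacement, political influence) to the comment below and "
    'return JSON of the form {{"themes": [...], "rationale": "..."}}.\n\n'
    "Comment: {comment}"
)

def code_comment(comment: str) -> dict:
    """Ask the LLM for candidate theme labels for a single comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(comment=comment)}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

def run_pipeline(comments: list[str]) -> list[dict]:
    """First-pass generative coding; every record is queued for human review."""
    coded = []
    for text in comments:
        labels = code_comment(text)
        coded.append({"comment": text, "llm_themes": labels, "human_reviewed": False})
    return coded

if __name__ == "__main__":
    sample = ["AI is just a new way for corporations to watch everything we do."]
    for record in run_pipeline(sample):
        print(json.dumps(record, indent=2))
```

In a workflow like this, researchers would then inspect the LLM-proposed labels, merge or rename themes, and resolve disagreements before any counts or interpretations are reported.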
The analysis reveals a typology of perceived control mechanisms (cognitive, behavioral, discursive, emotional, epistemic, and symbolic) attributed to state agencies, technology corporations, and political elites. Reddit users describe AI as a tool for shaping truth, eroding autonomy, simulating public consensus, and restructuring societal norms. While some claims reflect extreme or speculative views, most participants articulate structural critiques of centralized AI deployment, data governance, and algorithmic influence.
This research highlights how public interpretations of AI, whether accurate or not, shape trust, adoption, and resistance. Our findings underscore the importance of transparency in AI design and deployment, and indicate that perceived opacity in AI development can amplify systemic mistrust. This hybrid generative-human content analysis method also illustrates the potential of LLMs in computational social science, for example in mapping complex public narratives about fast-changing technologies.