Södertörn University
Computational universalism, or, Attending to relationalities at scale
Feb 2025 | Keywords: algorithms, social theory, artificial intelligence
The social sciences and humanities have increasingly adopted computational terminology as the organizing categories for inquiry. By organizing research around vernacular computational objects (e.g., data, algorithms, or AI) and divided worldly domains (e.g., finance, health, and governance), scholars risk obscuring the universalizing practices and ambitions of computation. These practices seek to establish new relationalities at unprecedented scales, connecting disparate domains, circulating resources across boundaries, and positioning computational interventions as universally applicable. Drawing on intellectual traditions that inspect the fixity of universalizing claims, we problematize the easy adoption of computational categories and argue that they serve as epistemic traps that naturalize the expanding reach of computational universalism. Instead of accepting the hardened categories of our interlocutors, we propose attending to the partial, effortful, and often contested work of translation and commensuration that enables computational actors to position themselves as obligatory passage points across all domains.
Lee, Francis, and David Ribes. 2025. "Computational universalism, or, Attending to relationalities at scale." Social Studies of Science. https://doi.org/10.1177/03063127251345089
The ontological politics of synthetic data: Normalities, outliers, and intersectional hallucinations
Jan 2025 | Keywords: algorithms, social theory, artificial intelligence
Synthetic data is increasingly used as a substitute for real data on ethical, legal, and logistical grounds. However, the rise of synthetic data also raises critical questions about its entanglement with the politics of classification and the reproduction of social norms and categories. This paper problematizes the use of synthetic data by examining how its production is intertwined with the maintenance of certain worldviews and classifications. We argue that synthetic data, like real data, is embedded with societal biases and power structures, leading to the reproduction of existing social inequalities. Synthetic data can have crucial ontological consequences and contribute to the reproduction of social facts and categories, such as class, race, gender, or age. Through empirical examples, we demonstrate how synthetic data tends to highlight majority elements as the 'normal' and to minimize minority elements, and that changes to the data structures used to create synthetic data inevitably result in what we term 'intersectional hallucinations.'
Lee, Francis, Saghi Hajisharif, and Ericka Johnson. 2025. "The ontological politics of synthetic data: Normalities, outliers, and intersectional hallucinations." Big Data & Society 12(1). https://doi.org/10.1177/20539517251318289
Reassembling Agency: Epistemic Practices in the Age of Artificial Intelligence
Jan 2025 | Keywords: algorithms, social theory, artificial intelligence, actor-network theory
This article reflects on how sociology can analyse the role of artificial intelligence (AI) in scientific practice without buying into the current AI hype. Drawing on sensibilities developed in actor-network theory (ANT), it introduces the concept of 'agencing' (agency as a verb), which refers to how scientists debate and configure human and machine agency. It suggests that we can come to a more nuanced understanding of the effects of AI in science by attending to actors' agencing practices. By discussing three ideal types of agencing, the article argues that AI should not be regarded as a rupture in the tooling and practices of science, but rather as a continuation of long-standing patterns of practice.
Lee, Francis. 2025. "Reassembling Agency: Epistemic Practices in the Age of Artificial Intelligence." Sociologisk Forskning 62(1-2): 43-58. https://doi.org/10.37062/sf.62.27824
Caring for the Monstrous Algorithm: Attending to Wrinkly Worlds and Relationalities in an Algorithmic Society
Mar 2024 | Keywords: algorithms, social theory, artificial intelligence
This text proposes that we, social analysts of algorithms, need to develop a split vision for the algorithm-as-technological-object and the algorithm-as-assemblage in order to effectively attend to, analyze, and critique algorithms in society. The point of departure is that we need to distance ourselves from a simplified and reductive understanding of algorithms-as-objects, and care for them as part of a relational algorithmic assemblage. A simplified notion of algorithms is problematic for two reasons: First, it produces a reductive notion of the world where decision-makers point to algorithms-as-objects to simplify decisions about the world. Second, by taking a simplified and delineated object called 'algorithm' as the point of departure for analysis and critique in an algorithmic society, we risk producing technologically deterministic understandings of complex phenomena.
Lee, Francis. 2024. "Caring for the Monstrous Algorithm: Attending to Wrinkly Worlds and Relationalities in an Algorithmic Society." In Care in Times of (AI) Crisis, edited by Karl Palmås. Springer. https://doi.org/10.1007/978-3-031-52049-5_5
Publications updated in Jan 2026