About

Hi! I’m Stuart, an Assistant Professor at the University of California, San Diego with appointments in the Department of Communication and the Halıcıoğlu Data Science Institute. I am also affiliate faculty in Science Studies, Computer Science & Engineering, Computational Social Science, and the Institute for Practical Ethics, where I lead IPE’s Working Group on Data Governance & Accountability.

My work is on the social, cultural, political, and economic aspects of quantification, data science, automation, and AI. I have long focused on the role of AI in the governance and moderation of social media platforms and online communities, but I also care about issues like fairness, privacy, accountability, and labor in many application domains where AI is being deployed. Finally, I take the institutions of scientific and technological research as my object of study, asking how the disciplines and professions are changing around or by quantification, data science, automation, and AI.

New work in 2026: Most of my current research is around societal risks, impacts, alignment, and deployment of small and large pretrained language models, where I am working to imagine how AI could be otherwise. I am particularly interested in: 1) auditing for discrimination and bias, and 2) open-source/weight models and sovereign infrastructure. Much of this work is about building societal capacity for critical practice around AI, with tools and exhibitions that help everyday people without coding expertise interrogate models for their concerns, as well as experience how AIs can be trained and deployed in quite different ways than the dominant corporate, proprietary, centralized, opaque, extractive, surveilled, and sycophantic products on offer:

  • Sentimentomatic is a very quick, easy, and free way to work with small language models used in sentiment analysis and content moderation in your browser, no coding required.
  • Auditomatic is a free web & desktop app for designing and running single-task evaluations of large language models, no coding required. You can easily run the same prompt on dozens of different models, or design an experiment that varies some key text in a prompt (e.g. varying a name for a resume screener task to measure bias, varying instructions for prompt engineering). Auditomatic integrates with dozens of different model providers like OpenAI and Anthropic via API keys (most of which require payment).
  • bartlebyGPT is a critical making project to fine-tune pretrained LLMs to invert their usual sycophantic helpfulness by refusing to perform any task, with domain-specific critical reasoning about the harms of outsourcing that task to AI.
  • An ongoing project on distributed and sovereign compute infrastructure, from locally-hosted small LLMs that run on old consumer hardware to distributed supercomputing commons like the NSF-funded National Research Platform.
  • An ongoing project on more human- and community-centered approaches to AI evaluation, in which we work with people and communities to understand their concerns and uses of AI, then help co-design evaluations that respond to their specific concerns.

My work and research

I am an interpretive social scientist trained as an ethnographer, with a broad background in the humanities — but I have just enough expertise in computer science and data science to make trouble. I consider myself a methodological and disciplinary pluralist, as I draw from and contribute to many different academic disciplines. I use a broad range of qualitative, quantitative, and computational methods to holistically investigate the role of science and technology in our society, culture, politics, and economy. I have a particular focus on decentralized communities and institutions, such as open source software, scientific research, peer production platforms (like Wikipedia), and social media sites.

Most of my previous work has focused on Wikipedia, where I’ve studied the people and algorithms that produce and maintain an open encyclopedia. I’ve also studied scientific research networks and projects, including the Long-Term Ecological Research Network, the Open Science Grid, and the Moore-Sloan Data Science Environments. I study topics including newcomer socialization, cooperation and conflict, community governance, specialization and professionalization, information verification and quality control, hackathons and community workshops, the roles of support staff and technicians, bias and discrimination, and diversity and inclusion. I also often focus on how these issues all intersect with and are embedded in the design of software and automated systems.

My background and history

I received my Ph.D from the UC-Berkeley School of Information, my M.A. from the Communication, Culture, and Technology program at Georgetown University, and my B.A. in the Humanities program at the University of Texas at Austin. For just under five years after receiving my Ph.D, I was at the Berkeley Institute for Data Science as a staff ethnographer. At BIDS, I was first a postdoctoral scholar, then became a principal investigator and led several research and education efforts, including the institute’s Data Science Studies efforts and the Best Practices in Data Science series.

My intellectual communities

I’m a disciplinary nomad, integrating disciplines like computer science, information science, social psychology, and organization/management science with fields like philosophy, sociology, anthropology, and history of science and technology. In terms of academic specialties, I spend a lot of my time in the fields of Science and Technology Studies, Computer-Supported Cooperative Work, and new media / internet studies. Methodologically, while I am trained as a qualitative ethnographer, I also rely on other qualitative, quantitative, and computational methods. I often use more statistical forms of analysis to contextualize and further support more qualitative approaches, frequently collaborating with people from other disciplines. I frequently speak at conferences and events, and I also consult with various groups, organizations, and companies about a wide range of topics.