About

The Sloane Lab is an interdisciplinary research group led by sociologist Mona Sloane. It conducts empirical research on the implications of technology for the organization of social life. Its focus is on artificial intelligence (AI) as a social phenomenon that intersects with wider cultural, economic, material, and political conditions. The lab spearheads social science leadership in applied work on responsible AI, public scholarship, and technology policy. Dr. Sloane's and Sloane Lab's past and present supporters include the Association for Computing Machinery, Data & Society, Ford Foundation, German Federal Ministry of Education and Research, Google, Institute of Electrical and Electronics Engineers, National Science Foundation, New America, New_Public, NYU Center for Responsible AI, NYU Libraries, NYU Tandon School of Engineering, NYU Office of Sustainability, Patrick J. McGovern Foundation, Pivotal Ventures, Pulitzer Center, Technical University of Munich, Future Imagination Collaboratory Fund at NYU Tisch School of the Arts, University of Notre Dame, UVA Darden-Data Science Collaboratory for Applied Data Science, UVA Karsh Institute of Democracy, and the Weizenbaum Institute Berlin.

Contact

Email: monasloanelab [at] gmail [dot] com

Focus Areas

Sloane Lab currently focuses on four intersecting areas of research and engagement: AI and the professions; applied AI fairness, accountability, and transparency; AI policy and governance; and public scholarship.

AI Policy and Governance

The landscape of AI regulations, governance measures, and policies is rapidly expanding, as are government-led funding initiatives on AI. Sloane Lab tracks and analyzes the global landscape of national AI strategies and AI regulation, producing systematic reviews of governance mandates and compliance requirements. Related work focuses on developing new frameworks and tools for meaningful AI compliance. New work in the AI policy and governance area focuses on student-led AI governance on university campuses, specifically with regard to establishing student technology councils.

AI and the Professions

As AI systems become embedded in the organization of social life, they also affect how people do their jobs and derive meaning and identity from their work. Often described as "automated decision systems," AI systems tend to automate not only tasks but also discretionary decision-making in the professions, ranging from whether and how rideshare drivers can choose clients and routes, to how doctors arrive at a diagnosis or judges at a decision about bail. Sloane Lab's current research focuses on how discretionary decision-making, work processes, and professional identity are shaped by AI, particularly in HR and recruitment and in professional journalism. Past work examined ethics, AI, and professional practice through a three-year study of how AI start-ups in Europe's largest economy, Germany, conceive of and operationalize ethics.

Public Scholarship

Sloane Lab is dedicated to the advancement of public scholarship and engagement. Dr. Sloane is a regular advisor and public speaker on AI and society issues. She frequently works with practitioners, policymakers, civil servants, and collaborators from different fields to advance critical thinking and innovation in the technology space. Public scholarship projects include work with the UVA Karsh Institute of Democracy as faculty lead on issues related to technology and democracy, the Co-Opting AI public speaker series and the Co-Opting AI book series with the University of California Press, the editorship of the Technology section of Public Books, the A BETTER TECH project on public interest technology careers, and the *This Is Not A Drill* program on art, technology, equity, and the climate emergency.

Applied AI Accountability and Transparency

Mounting evidence that AI systems can exacerbate inequality and inflict harm is bringing questions of AI accountability to the fore. Approaches that address these questions through a purely technical lens often yield inadequate solutions. Sloane Lab tackles this problem by creating interdisciplinary frameworks for meaningful AI accountability and transparency. This work includes expanding AI audit frameworks to consider ground-truthing and conducting social science-driven AI audits, for example of AI-driven personality assessment tools, speech-to-text transcription tools, and motion capture technology. It also includes creating meaningful AI transparency, particularly through contextual transparency for specific professions and stakeholders, and considering the role of participation in AI. Collaboratively, Sloane Lab develops technical tools to enhance AI transparency. The Gumshoe tool, built with Hilke Schellmann (NYU) and Michael Morisy (MuckRock), uses NLP to help journalists analyze large FOIA datasets in the MuckRock database.

Team

Mona Sloane

Michael Amadi

Ava Birdwell

Mack Brumbaugh

Celia Calhoun

Ella Duus

Emma Harvey

Desiree Ho

Owen Kitzmann

Katelyn Mei

Ploy Pruekcharoen

Emilia Ruzicka

Hauke Sandhaus

Ellen Simpson

Megan Wiessner

Collaborators

David Danks

Ekkehard Ernst

Abigail Jacobs

Khari Johnson

Allison Koenecke

Antonios Mamalakis

Emanuel Moss

Charity Nyelele

Hilke Schellmann

Matt Statler

Theresa Veer

Stefaan Verhulst

Elena Wüllhorst

Alumni

Alina Constantin

Adio Tichafara Dinika

Tanvi Sharma

Janina Zakrzewski