Sessions
What’s Coming
Founder and Chief Data Scientist, Open Augments
Accelerating Rigorous Quantitative Research with AI Agents: An Introduction to the Data Analyst Augmentation Framework (DAAF)
AI agents can now autonomously plan, write, review, and execute analytic code, raising urgent questions about their role in research given known risks such as hallucinations and inaccuracies. This session introduces DAAF, an open-source framework for Claude Code that helps researchers leverage these tools for data analysis while maintaining transparency, reproducibility, and rigor.
Postdoctoral Researcher, National Center for AI Research (CENIA)
Principles of Agentic Research
This session aims to demystify working with AI agents. Drawing on six months of daily experience, Mitchell will walk through practical patterns for getting started, strategies for steering agents effectively, and the critical skill of stochastic verification: sampling from traces, outputs, and intermediate results to maintain quality without reviewing every line. The key takeaway: you are the creative force directing the work, and learning when and how to check what agents produce is the most important part of the enterprise.
Assistant Professor of Data Analytics / Political Science, Tulane University
Silicon Sampling Under Different Sources of Uncertainty
Most work using language models to test survey questions and treatments has relied on well-known, existing datasets. I test how well models replicate human samples against unpublished human data, varying three conditions: common versus new questions and treatments, different country contexts, and change over time. The results should indicate where models can help develop new surveys and experiments and where traditional pilots remain necessary.
Associate Professor, Keough School of Global Affairs, University of Notre Dame
TBD
TBD
Postdoctoral Researcher, Nonviolent Action Lab, Harvard Kennedy School
TBD
TBD
