I’m an assistant professor at the University of Utah’s Kahlert School of Computing, where I co-lead UtahNLP and run the ANANAS research group.
I study how to build AI technologies that support human decision-making, communication, and creativity. I examine where such assistance is valuable and develop benchmarks to test how reliably AI performs on related tasks involving language, images, and audio. A central thread of my work explores how to translate what AI models “know” into reasoning that people can follow and act on; my most notable contributions in this area examine whether a model’s verbal explanations are faithful to its internal computations. I’m increasingly focused on collaboration with agents and on applications in math research, coding, and education through my role as an RAI faculty fellow.
You can hear me talk about our work on the WiAIR podcast.
We’re grateful to Coefficient Giving, Martian, the One-U RAI Initiative, Google, and the Allen Institute for AI for supporting the group’s work. 💚
Recent Talks
- University of Arizona, Faithfulness of LLM Reasoning & Its Emerging Questions
- UCLA, If You Want Reasoning, Look Inside
- Workshop on the Application of LLM Explainability to Reasoning & Planning @ CoLM 2025, If You Want Reasoning, Look Inside
- RepL4NLP @ NAACL 2025, If You Want Reasoning, Look Inside
Paper Awards & Honors
- EMNLP 2025 Outstanding Paper, FUR
- CoLM 2025 Spotlight, MixAssist
- ACL 2023 Best Paper Award, Do Androids Laugh at Electric Sheep?
- SoCal NLP 2022 Best Paper Award, CondaQA
- ACL 2020 Best Paper Honorable Mention, Don't Stop Pretraining