I’m an assistant professor at the University of Utah’s Kahlert School of Computing, where I co-lead UtahNLP and run the ANANAS research group.
I study how to build AI technologies that support human decision-making, communication, and creativity — identifying where such assistance is valuable and developing benchmarks to test how reliably AI performs on related tasks involving language, images, and audio. A central thread of my work explores how to translate what AI models “know” into reasoning that people can follow and act on. My most notable contributions in this area examine whether a model’s verbal explanations are faithful to its internal computations. I’m increasingly focused on collaboration with agents and on applications in higher education through my role as an RAI faculty fellow.
News
| Oct 2025 | I was a guest on the WiAIR podcast! Here's the episode if you'd like to hear me talk about what we've been studying lately. |
| Oct 2025 | 🎵 MixAssist was selected as a CoLM spotlight! 🎵 |
| Oct 2025 | I'm speaking at the CoLM Workshop on the Application of LLM Explainability to Reasoning and Planning, where I'll give a talk: If You Want Reasoning, Look Inside: CoT vs Internal Reasoning with Unlearning and Feature Circuits. |
| Oct 2025 | Our paper on measuring parametric faithfulness with FUR will be presented at the CoLM Workshop on the Interplay of Model Behavior and Model Internals. |
| Aug 2025 | Our work on measuring CoT parametric faithfulness was accepted to EMNLP, and our work analyzing synthetic benchmarks to EMNLP-Findings! |