Please check out Contact/FAQ before reaching out, especially if you're a Utah student looking for research opportunities or TA-ing. If your inquiry is addressed there, I might prioritize other emails in my ever-growing inbox.
Fall'25 prospective PhD students: If you're interested in doing a PhD with me, please apply and explicitly mention your interest in working with me. The committee carefully considers every application. Please note that I don't respond to long emails sharing your application profile.

I’m an assistant professor in the Kahlert School of Computing at the University of Utah. My research interests broadly fall into NLP, human-centered AI, and interpretability. The problems I’m currently most excited about are:

  • Empowering individuals and groups by improving AI-assisted decision making, communication, and creativity.
  • Radically changing evaluation protocols for studying human behavior under AI assistance.
  • Creating new high-quality training and evaluation data and understanding the role of professional dataset creators, domain experts, and annotators in this process in the era of generative AI.
  • Connecting explanations generated in plain language with internal computations.

Previously, I was a Young Investigator at the Allen Institute for AI (2019–2022), working with Noah A. Smith and Yejin Choi, and held a courtesy appointment in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I earned my PhD from the Heidelberg University NLP Group, where I was advised by Anette Frank. Prior to receiving my PhD in 2019, I completed a B.Sc. (2013) and an M.Sc. (2015) in Mathematics at the University of Zagreb.

I grew up in Omiš, Croatia. One would think that, living in a place like that, I always appreciated the mountains, but it took moving to Seattle to realize that free time is best spent outdoors.


News

Jun 2024 Our work on measuring chain-of-thought faithfulness is accepted to TMLR.
Jun 2024 I prepared a session on data influence for NAACL Tutorial: Explanations in the Era of Large Language Models.
Apr 2024 I'm leading a session on trust and explainability at the Bellairs Invitational Workshop on Contemporary, Foreseeable and Catastrophic Risks of Large Language Models 🌴
Mar 2024 Our paper where we reflect on the longstanding robustness issues in NLP is accepted to NAACL!
Feb 2024 I'm serving as a senior area chair again for ACL'24.

Older news.