Please read the Contact/FAQ before reaching out. If your inquiry is addressed there, I may prioritize other emails in my ever-growing inbox. Utah MS/BS students interested in research opportunities: please review the Contact/FAQ section before visiting my office without an appointment.


I am an Assistant Professor in the Kahlert School of Computing at the University of Utah. My primary research interests are at the confluence of natural language processing (NLP), explainable artificial intelligence (XAI), and multimodality. I am interested in projects that (1) rigorously validate AI technologies, and (2) make human interaction with AI more intuitive.

For an example of robust validation, check out our work on carefully designing benchmarks to test the robustness of QA models in the presence of common linguistic phenomena such as negation and coreference. On the other hand, to help people build a mental model of how to interact with AI, I have contributed to building models that self-explain their predictions in ways that people can easily understand, e.g., by saying why the model gave one answer instead of another (contrastive explanations) or by conveying the gist of its reasoning in plain language (free-text explanations). Moving forward, I am excited to evaluate and improve such models with application-grounded, human-subject evaluations.

Previously, I was a Young Investigator at the Allen Institute for AI (2019–2022), where I worked with Noah A. Smith and Yejin Choi. During that time, I also held a courtesy appointment in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I did my PhD in the Heidelberg University NLP Group, advised by Anette Frank. Prior to receiving my PhD in 2019, I completed a B.Sc. (2013) and an M.Sc. (2015) in Mathematics at the University of Zagreb.

I grew up in Omiš, Croatia. One would think that living in a place like that would have made me appreciate mountains all along, but it took moving to Seattle to realize that free time is best spent outdoors. You can see some of my outings here.


News

Apr 2024 I'm leading a session on trust and explainability at the Bellairs Invitational Workshop on Contemporary, Foreseeable and Catastrophic Risks of Large Language Models 🌴
Mar 2024 Our paper reflecting on longstanding robustness issues in NLP was accepted to NAACL!
Feb 2024 I am senior area chairing again for ACL'24.
Jul 2023 Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest received a Best Paper Award at ACL'23. 🎉
Jul 2023 I was named a Best Senior Area Chair at ACL'23. 😊

Older news.