News

Jul 2023 Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest received a Best Paper Award at ACL'23. 🎉
Jul 2023 I was recognized as a Best Senior Area Chair at ACL'23. 😊
May 2023 Our work on humor "understanding" benchmarks is accepted to ACL'23.
May 2023 Attending EACL.
Apr 2023 I gave a talk at the University of Maryland on what it means to increase users' trust in AI through explainability. Thanks to Marine Carpuat for the invitation!
Dec 2022 I am teaching Data Mining this spring.
Dec 2022 I am a Senior Area Chair for the ACL'23 interpretability track.
Nov 2022 CondaQA was presented at the SoCal NLP Symposium, where it won a Best Paper Award.
Oct 2022 Three papers accepted to EMNLP. Kudos to Abhilasha, Alexis, and Shruti! 🙌
Sept 2022 Excited to serve as a panelist following the screening of After Yang, part of the STEM Education Film Series organized by Park City Film.
Aug 2022 I designed a course on local explainability that I'm thrilled to teach this semester.
July 2022 I am a Senior Area Chair for the EMNLP'22 interpretability track.
May 2022 Excited to give a talk at NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES).
April 2022 Our work on few-shot self-rationalization is accepted to Findings of NAACL 2022.
April 2022 Talk at CMU.
April 2022 Talk at NEC Labs Europe.
April 2022 Talk at the University of Lisbon / Unbabel Seminar.
March 2022 Guest lecture at UW.
March 2022 Guest Lecture at the University of British Columbia.
March 2022 Talk at UC Irvine.
Feb 2022 Talk at Johns Hopkins University.
Feb 2022 Talk at Brown University.
Feb 2022 Talk at the University of Chicago.
Feb 2022 Talk at Ohio State University.
Feb 2022 Talk at Purdue University.
Feb 2022 Talk at the University of Utah.
Dec 2021 Invited talk at the USC Machine Learning Symposium: Explanation Selection Through The Lens of Free-Text and Contrastive Explanations (Slides).
Dec 2021 Invited talk at UCL NLP Meetup: Self-Explainability for Intuitive and Controllable Interaction: On Reducing Human-Authored Free-Text Explanations for Training (Slides).
Nov 2021 New preprint: Few-Shot Self-Rationalization with Natural Language Prompts.
Nov 2021 Keynote at BlackboxNLP @ EMNLP'21: Contrastive Explanations of NLP Models (Slides).
Nov 2021 Invited talk at the Georgia Tech NLP Seminar: Explanation Selection Through The Lens of Free-Text and Contrastive Explanations (Slides).
Nov 2021 Invited talk at the University of Sheffield Seminar: Explanation Selection Through The Lens of Free-Text and Contrastive Explanations (Slides).
Sep 2021 You can find me at AllenNLP Hacks!
Aug 2021 Two papers accepted to EMNLP'21: one on evaluating free-text explanations & one on documenting enormous webtext corpora.
Aug 2021 NLP Highlights Podcast: I talked with Nanna Inie and Leon Derczynski about opportunities and barriers between HCI and NLP.
Aug 2021 Our review of datasets for explainable NLP is accepted to NeurIPS.
Jun 2021 NLP Highlights Podcast: I talked with Tosin Adewumi and Perez Ogayo about "lowresourcedness" and how Masakhane is using participatory research to spur NLP in African languages.
May 2021 NLP Highlights Podcast: A new episode with Lisa Li about her work on prefix-tuning.
May 2021 Three papers accepted to Findings of ACL, about effective attention, linearized graph-to-text generation, and contrastive explanations. Kudos to Kaiser, Alexander, and Alexis! 🙌
May 2021 I joined NLP Highlights Podcast as a co-host! My first episode is with Danna Gurari who joined Pradeep and me to talk about VQA that supports real users. Listen to this episode here.
April 2021 We released documentation of C4, accompanied by an interactive demo and a GitHub repository.
Mar 2021 I'm a panelist at the SoCal ML/NLP Symposium.
Feb 2021 We released a review of datasets for explainable NLP, accompanied by a website.
Dec 2020 A new preprint on graph awareness in linearized graph-to-text generation is available on arXiv.
Dec 2020 A new preprint on generating contrastive explanations is available on arXiv.
Dec 2020 I'm one of the outstanding reviewers for EMNLP 2020.
Dec 2020 A paper on formalization of human-AI trust is accepted to FAccT 2021.
Dec 2020 Updated my website, woo.