I am a final-year Ph.D. student in the CopeNLU group at the University of Copenhagen, supervised by Isabelle Augenstein and co-supervised by Christina Lioma and Jakob Grue Simonsen. My research focuses on explainability for machine learning models, encompassing natural language explanations, post-hoc explainability methods, and adversarial attacks, as well as the principled evaluation of existing explainability techniques. My work is currently centered on knowledge-intensive and complex reasoning natural language tasks, such as fact checking and question answering.


September 2021: New pre-print on extractive explanations for complex reasoning tasks guided by diagnostic properties!
September 2021: I gave an invited talk at FAIR's AI and Society talk series about explaining automated fact checking predictions and current vulnerabilities of such models.
August 2021: A co-supervised (with Isabelle Augenstein) Bachelor's student successfully defended his thesis on evaluating the robustness of explainability techniques.
June 2021: Paper on joint emotion label space modelling for affect lexica accepted at the Computer Speech & Language journal.
May 2021: A paper on a semi-supervised dataset for offensive language identification accepted at Findings of ACL 2021!
April 2021: The thesis work of a co-supervised (with Isabelle Augenstein) Master's student on multi-hop fact checking of political claims accepted as a long paper at IJCAI 2021!
January 2021: Excited to organise and present the lab on explainable AI at the ALPS 2021 winter school!
December 2020: Co-organised a shared task at SemEval 2020 on multilingual offensive language identification in social media (OffensEval 2020).
November 2020: Presenting two papers at EMNLP 2020! The first is a diagnostic study of post-hoc explainability techniques for text classification tasks; the second studies the generation of well-formed and label-cohesive adversarial attacks for fact checking.
September 2020: A co-supervised (with Isabelle Augenstein) Master's student successfully defended his thesis on multi-hop fact checking of political claims!
July 2020: Excited to present the first paper of my Ph.D. on generating fact checking explanations at ACL 2020!
March 2020: Excited to start my research internship at Google Research working on adversarial fact checking evidence extraction!