I am a final-year Ph.D. student at the University of Copenhagen, CopeNLU group, supervised by Isabelle Augenstein and co-supervised by Christina Lioma and Jakob Grue Simonsen. My current research focus is explainability for machine learning models, encompassing natural language explanations, post-hoc explainability methods, and adversarial attacks, as well as the principled evaluation of existing explainability techniques. My work is currently centered on knowledge-intensive natural language tasks that require complex reasoning, such as fact checking and question answering.
News
April 2022: Our work "Fact Checking with Insufficient Evidence" has been accepted to TACL! We propose a new diagnostic dataset, SufficientFacts, and a novel data augmentation strategy for contrastive self-learning of missing evidence.
April 2022: I have been invited to give a talk at the "Responsible Data Science and AI Speaker Series" at the University of Illinois at Urbana-Champaign on October 7th, 2022!
March 2022: I took part in the panel "When Research Goes Wrong: Deepfakes!", part of the Legal Tech Research Talks at the University of Copenhagen's Faculty of Law.
February 2022: I gave an invited talk on "Explainable and Accountable Automatic Fact Checking" for the NLP group at Oxford.
December 2021: Our paper on extractive explanations for complex reasoning tasks guided by diagnostic properties was accepted to AAAI-2022 (acceptance rate of 15%)!
October 2021: I'll be visiting FAIR for a research internship starting in January 2022!
September 2021: New pre-print on extractive explanations for complex reasoning tasks guided by diagnostic properties!
September 2021: I gave an invited talk at FAIR's AI and Society talk series about explaining automated fact checking predictions and current vulnerabilities of such models.
August 2021: A co-supervised (with Isabelle Augenstein) Bachelor's student successfully defended his thesis on evaluating the robustness of explainability techniques.
June 2021: Paper on joint emotion label space modelling for affect lexica accepted to the Computer Speech & Language journal.
May 2021: A paper accepted at the Findings of ACL'2021 on a semi-supervised dataset for offensive language identification! [Dataset]
April 2021: The thesis of a co-supervised (with Isabelle Augenstein) Master's student on multi-hop fact checking of political claims was accepted as a long paper at IJCAI 2021! [Dataset]
January 2021: Excited to organise and present the lab on explainable AI at the ALPS 2021 winter school!
December 2020: Co-organising a shared task at SemEval'2020 on multilingual offensive language identification in social media (OffensEval 2020).
November 2020: Presenting two papers at EMNLP'2020! The first paper is a diagnostic study of post-hoc explainability techniques for text classification tasks. The second paper studies the generation of well-formed and label-cohesive adversarial attacks for fact checking.
September 2020: A co-supervised (with Isabelle Augenstein) Master's student successfully defended his thesis on multi-hop fact checking of political claims!
July 2020: Excited to present the first paper of my Ph.D. on generating fact checking explanations at ACL'2020!
March 2020: Excited to start my research internship at Google Research working on adversarial fact checking evidence extraction!