I am a postdoctoral researcher in the CopeNLU group at the University of Copenhagen, supervised by Isabelle Augenstein. My current research focus is explainability for machine learning models, encompassing natural language explanations, post-hoc explainability methods, and adversarial attacks, as well as the principled evaluation of existing explainability techniques. My work centers on knowledge-intensive and complex reasoning natural language tasks, such as fact checking and question answering.

News

January 2024: I gave an oral talk titled "Exploring the Explainability Landscape: Testing and Enhancing Explainability Techniques" at Chalmers University.

December 2023: I was honored to receive one of the two ELLIS best PhD thesis awards!

November 2023: I'm excited to have a paper, "Explaining Interactions Between Text Spans", accepted at the EMNLP 2023 main conference!

November 2023: I was honored to give an invited talk at MISDOOM 2023 titled "From Opacity to Clarity: Embracing Transparent and Accountable Fact Verification".

October 2023: I am happy to share that I will be on the organising team of the Repl4NLP workshop, which will be co-located with ACL'2024.

October 2023: I was honored to receive the Informatics Europe (IE) best dissertation award for my dissertation "Accountable and Explainable Methods for Complex Reasoning over Text". The award was sponsored by Springer, which also provided the opportunity to publish my dissertation in a dedicated Springer series!

September 2023: I was happy to give an invited guest lecture on the topic of "Generating and Evaluating Explainability Techniques" at the "Seminars in Data Science" Master's course at ITU.

September 2023: On 1 September, the ExplainYourself project on "Explainable and Robust Automatic Fact Checking" officially started. The project is supported by an ERC Starting Grant awarded to Isabelle Augenstein. I am excited to be part of the project as a co-supervisor of two PhD students.

July 2023: I am serving as an Area Chair for the "Interpretability, Interactivity, and Analysis of Models for NLP" track at EMNLP 2023!

July 2023: I am looking forward to giving an invited talk on the accountability and explainability of machine learning models for mis- and disinformation at MISDOOM 2023!

July 2023: I will give an oral presentation of our paper "Faithfulness Tests for Natural Language Explanations" at ACL 2023! The paper can be found here. I will also have the opportunity to give a talk on the paper at the Nordic AI Meet 2023.

May 2023: I'm excited to have two papers accepted at ACL 2023's main conference! The titles of the papers are: "Faithfulness Tests for Natural Language Explanations" (first author) and "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark".

April 2023: I gave an invited talk at the University of Massachusetts' NLP group on explainable and accountable fact checking.

January 2023: CopeNLU has new open PhD and postdoc positions. Among them is a PhD position on explainable fact checking, which will be co-supervised by me. See [link] for more details!

...

November 2022: My PhD thesis is now online!

October 2022: Our paper "Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing" was accepted to the Information journal, for the special issue "Advances in Explainable Artificial Intelligence".

October 2022: I gave a talk at the "Responsible Data Science and AI Speaker Series" at the University of Illinois at Urbana-Champaign on the topic of "Methods for Accountable and Explainable Complex Reasoning Tasks"!

September 2022: I am starting a new position as a postdoctoral researcher at CopeNLU! I'll be working with Isabelle Augenstein on a project titled "Understanding the Effects of Natural Language Processing-Based Trading Algorithms", funded by a Villum Synergy Initiator Grant!

September 2022: I submitted my Ph.D. thesis titled "Accountable and Explainable Methods for Complex Reasoning over Text"!

August 2022: I will be serving as a website chair for the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL'2023).

April 2022: Our work "Fact Checking with Insufficient Evidence" has been accepted to TACL! We propose a new diagnostic dataset, SufficientFacts, and a novel data augmentation strategy for contrastive self-learning of missing evidence.

March 2022: I took part in the panel "When Research Goes Wrong: Deepfakes!", part of the Legal Tech Research Talks at the University of Copenhagen's Faculty of Law.

February 2022: I gave an invited talk on "Explainable and Accountable Automatic Fact Checking" for the NLP group at Oxford.

December 2021: Our paper on extractive explanations for complex reasoning tasks guided by diagnostic properties was accepted to AAAI-2022 (acceptance rate of 15%)!

October 2021: I'll be visiting FAIR for a research internship starting from January 2022!

September 2021: New pre-print on extractive explanations for complex reasoning tasks guided by diagnostic properties!

September 2021: I gave an invited talk at FAIR's AI and Society talk series about explaining automated fact checking predictions and current vulnerabilities of such models.

August 2021: A co-supervised (with Isabelle Augenstein) Bachelor's student successfully defended his thesis on evaluating the robustness of explainability techniques.

June 2021: A paper on joint emotion label space modelling for affect lexica was accepted to the Computer Speech & Language journal.

May 2021: A paper on a semi-supervised dataset for offensive language identification was accepted at the Findings of ACL'2021! [Dataset].

April 2021: The thesis of a Master's student co-supervised with Isabelle Augenstein on multi-hop fact checking of political claims was accepted as a long paper to IJCAI 2021! [Dataset]

January 2021: Excited to organise and present the lab on explainable AI at the ALPS 2021 winter school!

December 2020: Co-organising a shared task at SemEval'2020 on multilingual offensive language identification in social media (OffensEval 2020).

November 2020: Presenting two papers at EMNLP'2020! The first paper is a diagnostic study of post-hoc explainability techniques for text classification tasks. The second paper studies the generation of well-formed and label cohesive adversarial attacks for fact checking.

September 2020: A co-supervised (with Isabelle Augenstein) Master's student successfully defended his thesis on multi-hop fact checking of political claims!

July 2020: Excited to present the first paper of my PhD, on generating fact checking explanations, at ACL'2020!

March 2020: Excited to start my research internship at Google Research working on adversarial fact checking evidence extraction!