I am an Assistant Professor (Tenure Track) in the Department of Computer Science, NLP Section, at the University of Copenhagen. My position includes specific teaching duties dedicated to courses for industry practitioners. I am also co-leading the CopeNLU group with Isabelle Augenstein. My research centers on advancing the interpretability of language models (LMs), with key interests in:
- Interpreting the parametric and contextual knowledge mechanisms of LLMs, e.g., how LMs encode, retrieve, and utilize knowledge to make predictions.
- Factuality in LMs -- addressing the challenge of maintaining factual accuracy in LMs to improve reliability in practical applications.
- Explainability method development -- designing robust and user-aligned explainability techniques that enhance the interpretability of complex models.
- Analysis and evaluation of explainability techniques -- developing metrics to rigorously assess the effectiveness of existing explainability methods.
For more details and a list of recent publications, see Publications.
I completed my PhD at the University of Copenhagen under the supervision of Isabelle Augenstein, Jakob Grue Simonsen, and Christina Lioma. My PhD was funded by a Marie Skłodowska-Curie Fellowship, and my thesis, "Accountable and Explainable Methods for Complex Reasoning over Text", received best PhD thesis awards from both ELLIS and Informatics Europe. I also gained industry experience during my PhD through internships at Meta and Google. I then completed an interdisciplinary postdoctoral fellowship at the University of Copenhagen with Isabelle Augenstein and Christian Borch, where we employed LM explanations to simulate trading behaviors.
For more details, see my publications.
If you are interested in collaborating or being supervised by me, please see Contact.
I am looking for a PhD student in Interpretable NLP with a start date in September 2025! The successful candidate will be co-supervised by Prof. Isabelle Augenstein and will join our CopeNLU group. For more details see the APPLICATION LINK. See reasons to apply here.
News
November 2024: I gave a talk for the Sheffield NLP Group (Slides)
October 2024: Our paper "DYNAMICQA: Tracing Internal Knowledge Conflicts in Language Models" was accepted at Findings of EMNLP 2024!
September 2024: I gave an invited talk on "Facts Unveiled: Navigating Factuality in the Era of Generative Models" at Romanian AI days 2024!
September 2024: I gave an opening keynote on "Facts Unveiled: Navigating Factuality in the Era of Generative Models" at iGeLU 2024!
September 2024: I am excited to start a new position as a Tenure-Track Assistant Professor at the University of Copenhagen, Department of Computer Science. See the news piece here!
May 2024: Our paper "Revealing the Parametric Knowledge of Language Models: A Unified Framework for Attribution Methods" was accepted to ACL'2024!
January 2024: I gave an oral talk titled "Exploring the Explainability Landscape: Testing and Enhancing Explainability Techniques" at Chalmers University.
December 2023: I was honored to receive one of the two ELLIS best PhD thesis awards!
November 2023: I'm excited to have a paper accepted at EMNLP 2023's main conference! The title of the paper is: "Explaining Interactions Between Text Spans".
November 2023: I was honored to give an invited talk at MISDOOM 2023 titled "From Opacity to Clarity: Embracing Transparent and Accountable Fact Verification".
October 2023: I am happy to share that I will be on the organising team of the Repl4NLP workshop, which will be co-located with ACL'2024.
October 2023: I was honored to receive the Informatics Europe (IE) best dissertation award for my dissertation "Accountable and Explainable Methods for Complex Reasoning over Text". The award was sponsored by Springer, who also provided the opportunity to publish my dissertation in a dedicated Springer series!
September 2023: I was happy to give an invited guest lecture on the topic of "Generating and Evaluating Explainability Techniques" at the "Seminars in Data Science" Master's course at ITU.
September 2023: On 1 September, the project ExplainYourself on "Explainable and Robust Automatic Fact Checking" officially started. The project is funded by an ERC Starting Grant awarded to Isabelle Augenstein, and I am excited to be part of it as a co-supervisor of two PhD students.
July 2023: I am serving as an Area Chair for the "Interpretability, Interactivity, and Analysis of Models for NLP" track at EMNLP 2023!
July 2023: I am looking forward to giving an invited talk on the accountability and explainability of machine learning models for mis- and disinformation at MISDOOM 2023!
July 2023: I will give an oral talk on our paper "Faithfulness Tests for Natural Language Explanations" at ACL 2023! It can be found here. I will also have the opportunity to present the paper at the Nordic AI Meet 2023.
May 2023: I'm excited to have two papers accepted at ACL 2023's main conference! The titles of the papers are: "Faithfulness Tests for Natural Language Explanations" (first author) and "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark".
April 2023: I gave an invited talk at The University of Massachusetts' NLP group on explainable and accountable fact checking.
January 2023: CopeNLU has new open PhD and postdoc positions. Among them is a PhD position on explainable fact checking, which will be co-supervised by me. See [link] for more details!
November 2022: My PhD thesis is now online!
October 2022: Our paper "Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing" was accepted to the Information journal for the special issue Advances in Explainable Artificial Intelligence.
October 2022: I gave a talk at the "Responsible Data Science and AI Speaker Series" at the University of Illinois at Urbana-Champaign on the topic of "Methods for Accountable and Explainable Complex Reasoning Tasks"!
September 2022: I am starting a new position as a postdoctoral researcher at CopeNLU! I'll be working with Isabelle Augenstein on a project titled "Understanding the Effects of Natural Language Processing-Based Trading Algorithms", which is funded by a Villum Synergy Initiator Grant!
September 2022: I submitted my Ph.D. thesis titled "Accountable and Explainable Methods for Complex Reasoning over Text"!
August 2022: I will be serving as a website chair for the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL'2023).
April 2022: Our work "Fact Checking with Insufficient Evidence" has been accepted to TACL! We propose a new diagnostic dataset, SufficientFacts, and a novel data augmentation strategy for contrastive self-learning of missing evidence.
March 2022: I took part in the panel "When Research Goes Wrong: Deepfakes!", part of the Legal Tech Research Talks at the University of Copenhagen's Faculty of Law.
February 2022: I gave an invited talk on "Explainable and Accountable Automatic Fact Checking" for the NLP group at Oxford.
December 2021: Our paper on extractive explanations for complex reasoning tasks guided by diagnostic properties was accepted to AAAI-2022 (acceptance rate of 15%)!
October 2021: I'll be visiting FAIR for a research internship starting from January 2022!
September 2021: New pre-print on extractive explanations for complex reasoning tasks guided by diagnostic properties!
September 2021: I gave an invited talk at FAIR's AI and Society talk series about explaining automated fact checking predictions and current vulnerabilities of such models.
August 2021: A co-supervised (with Isabelle Augenstein) Bachelor's student successfully defended his thesis on evaluating the robustness of explainability techniques.
June 2021: Paper on joint emotion label space modelling for affect lexica accepted at the Computer Speech & Language journal.
May 2021: A paper on a semi-supervised dataset for offensive language identification was accepted at Findings of ACL'2021! [Dataset].
April 2021: A paper based on the thesis of a co-supervised (with Isabelle Augenstein) Master's student on multi-hop fact checking of political claims was accepted as a long paper at IJCAI 2021! [Dataset]
January 2021: Excited to organise and present the lab on explainable AI at the ALPS 2021 winter school!
December 2020: Co-organising a shared task at SemEval'2020 on multilingual offensive language identification in social media (OffensEval 2020).
November 2020: Presenting two papers at EMNLP'2020! The first is a diagnostic study of post-hoc explainability techniques for text classification tasks. The second studies the generation of well-formed and label-cohesive adversarial attacks for fact checking.
September 2020: A co-supervised (with Isabelle Augenstein) Master's student successfully defended his thesis on multi-hop fact checking of political claims!
July 2020: Excited to present the first paper of my PhD program on generating fact checking explanations at ACL'2020!
March 2020: Excited to start my research internship at Google Research working on adversarial fact checking evidence extraction!