Presentation

Implicit arguments are omnipresent in everyday communication, across private, professional, and political domains. However, current methods for processing arguments do not handle this implicitness effectively.

Argument mining (Lawrence & Reed, 2020) is a subfield of natural language processing aimed at:

  • Identifying the premises and claims of arguments;
  • Detecting the relationships between arguments, such as support and attack links, i.e., building argumentation graphs from texts (see the sketch after this list).
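
To illustrate the kind of output argument mining targets, the following minimal Python sketch shows one possible in-memory representation of an argumentation graph with support and attack links. The class and field names are illustrative assumptions, not part of any existing argument-mining toolkit.

```python
# A minimal, hypothetical representation of a mined argumentation graph.
from dataclasses import dataclass, field
from enum import Enum


class Relation(Enum):
    SUPPORT = "support"
    ATTACK = "attack"


@dataclass
class Argument:
    premises: list[str]   # mined premise sentences
    claim: str            # mined claim sentence


@dataclass
class ArgumentationGraph:
    arguments: dict[str, Argument] = field(default_factory=dict)
    edges: list[tuple[str, str, Relation]] = field(default_factory=list)

    def add_argument(self, name: str, premises: list[str], claim: str) -> None:
        self.arguments[name] = Argument(premises, claim)

    def relate(self, source: str, target: str, relation: Relation) -> None:
        self.edges.append((source, target, relation))


# Usage: two mined arguments and an attack link between them.
graph = ArgumentationGraph()
graph.add_argument("A1", ["CO2 emissions cause warming."], "We should tax carbon.")
graph.add_argument("A2", ["Carbon taxes burden low-income households."], "We should not tax carbon.")
graph.relate("A2", "A1", Relation.ATTACK)
```

In practice, the nodes and edges of such a graph would be produced by NLP models rather than written by hand.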

Yet, learning-based methods have limited capabilities for reasoning and explainability (Barocas, Hardt & Narayanan, 2023). One promising solution is to instantiate extracted argumentation graphs with logical representations.

Once arguments are represented logically, automated reasoning can be applied to:
  • Check the consistency of premises,
  • Assess the validity of claims,
  • Verify the correct labelling of argumentative relations (Besnard & Hunter, 2008); a minimal sketch of the first two checks is given below.
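
As a concrete illustration of the first two checks, the following sketch assumes a simple propositional encoding in which formulas are Python predicates over truth-value assignments, and tests premise consistency and claim entailment by brute-force truth-table enumeration. It is only an illustration of the general idea, not the framework of Besnard & Hunter (2008).

```python
from itertools import product


def models(atoms):
    """Enumerate every truth-value assignment over the given atomic propositions."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))


def consistent(premises, atoms):
    """Premises are consistent iff at least one assignment satisfies all of them."""
    return any(all(p(v) for p in premises) for v in models(atoms))


def entails(premises, claim, atoms):
    """The claim follows iff it holds in every assignment satisfying the premises."""
    return all(claim(v) for v in models(atoms) if all(p(v) for p in premises))


# Example: premises "it rains" and "if it rains, the ground is wet";
# the claim "the ground is wet" should follow.
atoms = ["rain", "wet"]
premises = [lambda v: v["rain"], lambda v: (not v["rain"]) or v["wet"]]
claim = lambda v: v["wet"]
print(consistent(premises, atoms))      # True: the premises have a model
print(entails(premises, claim, atoms))  # True: the claim holds in every such model
```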

However, as previously mentioned, arguments exchanged by humans are often incomplete: some premises or claims are left implicit. Such arguments are known as enthymemes. For example, "Socrates is a man, so Socrates is mortal" leaves the premise "all men are mortal" implicit.

Enthymemes may arise due to:
  • Imprecision or lack of knowledge — the speaker argues without knowing all relevant facts;
  • Intentional omission — assuming some information is commonly known and need not be stated;
  • Rhetorical strategy — enthymemes have been known since Aristotle (Faure, 2010) as a powerful tool in persuasion and audience engagement.

Therefore, to properly instantiate, understand, and analyse arguments, it is crucial to decode enthymemes.

This is a highly challenging task, as it involves dealing with implicit content. This project will be the first systematic attempt to tackle this problem in the context of argumentation.

The ultimate benefit is to enhance the explainability of textual arguments by reconstructing the implicit knowledge they rely on.
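
To make the idea of reconstructing implicit knowledge concrete, here is a deliberately simplified sketch that treats enthymeme decoding as premise completion: it searches a small, hypothetical background knowledge base for an implicit premise that makes the stated argument valid, using the same propositional encoding as the sketch above. It illustrates the general idea only and is not the method developed in the papers cited below.

```python
from itertools import product


def models(atoms):
    """Enumerate every truth-value assignment over the given atomic propositions."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))


def entails(premises, claim, atoms):
    """True iff the claim holds in every assignment that satisfies all premises."""
    return all(claim(v) for v in models(atoms) if all(p(v) for p in premises))


def decode_enthymeme(stated, claim, background, atoms):
    """Return names of background formulas that, added to the stated premises,
    keep them satisfiable and make the claim follow."""
    decodings = []
    for name, formula in background.items():
        completed = stated + [formula]
        satisfiable = any(all(p(v) for p in completed) for v in models(atoms))
        if satisfiable and entails(completed, claim, atoms):
            decodings.append(name)
    return decodings


# Classic enthymeme: "Socrates is a man, so Socrates is mortal".
# The stated premise alone does not entail the claim; the decoder recovers
# the implicit rule "all men are mortal" from the background knowledge.
atoms = ["man", "mortal"]
stated = [lambda v: v["man"]]
claim = lambda v: v["mortal"]
background = {
    "all men are mortal": lambda v: (not v["man"]) or v["mortal"],
    "no man is mortal": lambda v: (not v["man"]) or not v["mortal"],
}
print(decode_enthymeme(stated, claim, background, atoms))  # ['all men are mortal']
```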

Victor David and Anthony Hunter co-lead the EXPLAINER Inria International Associated Team, in collaboration with University College London (UCL). The aim of this team is to analyse, understand, explain, and decode implicit textual arguments, bridging symbolic reasoning with natural language understanding.

List of members involved in the EXPLAINER team:
Victor David (ISFP researcher at Inria, Inria team representative),
Anthony Hunter (Professor at UCL, UCL team representative),
Serena Villata (Research Director, CNRS),
Elena Cabrio (Professor, Université Côte d’Azur),
Pierre Monnin (Research Scientist, Inria),
Ameer Saadat-Yazdi (Postdoctoral researcher, Inria),
Elfia Bezou Vrakatseli (Postdoctoral researcher, Université Côte d’Azur),
Nino Pireaud (Research Assistant Engineer, Université Côte d’Azur).

Activity of the Team

  • The EXPLAINER team was officially validated in April 2025. However, due to complex administrative procedures at UCL, the legal agreement is still awaiting signature.
  • April to September 2025 (6 months): Internship of Nino Pireaud (funded by a personal project of Victor David), supervised by Victor David and Anthony Hunter. The internship focuses on building a dataset and a baseline system that integrates LLMs and argumentation theory to predict the persuasiveness of arguments that make frequent use of enthymemes.
  • January to June 2026 (6 months): Research Assistant Engineer position of Nino Pireaud (also funded by a personal project of Victor David), supervised by Victor David, Anthony Hunter, Pierre Monnin, and Elena Cabrio. The aim is to continue and expand on the internship work, in preparation for starting a PhD in October 2026.
  • June–July 2025: One-month visit by Anthony Hunter (UCL) to the MARIANNE group (Inria / Université Côte d’Azur), funded by the I3S Laboratory. Objectives include:
    • Collaborating with Victor David and Nino Pireaud on the internship project;
    • Working with Victor David on three papers related to enthymemes and argument quality (including collaborations with Francesco Santini, University of Perugia, and Nico Potyka, Cardiff University);
    • Meeting with Serena Villata, Elena Cabrio, and their PhD students to discuss NLP and neuro-symbolic methods for enthymeme understanding.
  • January 2026: Recruitment of Ameer Saadat-Yazdi for a 2-year postdoctoral position, supervised by Victor David, Anthony Hunter, and Serena Villata. The project focuses on using LLMs and argumentation schemes to decode enthymemes. This postdoc is funded by Inria in support of the Inria–UCL EXPLAINER associated team and will be based at Inria / Université Côte d’Azur.
  • February 2026: Recruitment of Elfia Bezou Vrakatseli for a 1-year postdoctoral position, supervised by Victor David, Anthony Hunter, and Serena Villata. The objective is to define a dataset for evaluating the quality of decoded enthymemes. This postdoc is funded by the Idex project AMI Idées (Université Côte d’Azur), submitted by Victor David. The position will be based at Inria / Université Côte d’Azur.

Scientific Production

The paper "A Logic-based Framework for Decoding Enthymemes in Argument Maps involving Implicitness in Premises and Claims" by Victor David and Anthony Hunter has been accepted at IJCAI 2025 (a leading international conference in Artificial Intelligence). This work presents a theoretical framework for representing natural language arguments extracted via NLP as logical arguments.
Link: https://www.ijcai.org/proceedings/2025/0495.pdf

The paper "An Axiomatic Study of a Modular Evaluation of Enthymeme Decoding in Weighted Structured Argumentation" by Jonathan Ben-Naim, Victor David, and Anthony Hunter has been accepted at KR 2025 (a leading international conference on Knowledge Representation and Reasoning). This work proposes a theoretical framework for evaluating the quality of enthymeme decodings in structured argumentation.
Link: https://proceedings.kr.org/2025/11/kr2025-0011-ben-naim-et-al.pdf