Explainable NLLP: Advancements in Explainable AI for Natural Legal Language Processing

Figure: Works categorized by the taxonomy proposed in Section 3.2: explanation type, explainability technique, and NLP task. Works that mention ethics are in italics, and works that mention ethics in XAI are in bold.
Publication Details
- Venue: Automated Semantic Analysis of Information in Legal Text
- Year: 2025
- Publication Date: May 16, 2025
Abstract
Despite the increasing application of machine learning and NLP methods in the legal domain, there has been limited effort to enhance the understanding and transparency of these algorithms. This paper addresses this gap by presenting a survey on Explainable AI (XAI) applied to Natural Legal Language Processing (NLLP). To our knowledge, this survey represents the first comprehensive examination of the intersection of XAI, Law, and NLP. Building upon prior surveys focused on partial intersections of these domains, we propose a taxonomy for classifying papers based on the NLLP task, explanation type, and technique employed. Additionally, we delve into discussions surrounding Explainable NLLP, considering perspectives related to ethics, current open issues, and future work. Our analysis reveals that the categorized papers generally do not thoroughly examine the ethical implications of the explainability principle in NLP within the legal field. Furthermore, they neither discuss the role and value of explanations nor effectively utilize their respective XAI techniques to offer insights into the limitations of NLP systems.
Cite this publication (BIBTEX)
@article{2025-ExplainableNLLP,
  title={Explainable NLLP: Advancements in Explainable AI for Natural Legal Language Processing},
  author={Lucas Resck and Felipe A. Moreno and Tobias Veiga and Gerardo Paucar and Ezequiel Fajreldines and Guilherme Klafke and Luis Gustavo Nonato and Jorge Poco},
  journal={Automated Semantic Analysis of Information in Legal Text},
  year={2025},
  url={},
  date={2025-05-16}
}