2023
Conference article (Open Access)

Fairness auditing, explanation and debiasing in linguistic data and language models

Marchiori Manerba M.

Keywords: Responsible NLP, Explainability, Interpretability, Fairness

This research proposal is framed within the interdisciplinary exploration of the socio-cultural implications that AI exerts on individuals and groups. The focus is on contexts where models can amplify discrimination through algorithmic biases, e.g., in recommendation and ranking systems or abusive language detection classifiers, and on debiasing their automated decisions so that they become beneficial and just for everyone. To address these issues, the main objective of the proposed research project is to develop a framework for performing fairness auditing and debiasing of both classifiers and datasets, starting with, but not limited to, abusive language detection, and then broadening the approach to other NLP tasks. Ultimately, by questioning the effectiveness of adjusting and debiasing existing resources, the project aims to develop models that are truly inclusive, fair, and explainable by design.
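To make the notion of fairness auditing concrete, the sketch below shows one common auditing pattern for abusive language detection: scoring identity-term substitutions in neutral template sentences and comparing flag rates across groups. It is a minimal illustration only, not the framework proposed in the paper; the templates, identity terms, and the toxicity_score stub are hypothetical stand-ins for a trained classifier.

# Minimal counterfactual fairness-audit sketch (illustrative only, not the
# framework proposed in the paper). A real audit would wrap a trained
# abusive-language classifier; `toxicity_score` is a hypothetical stand-in.

TEMPLATES = [
    "I am a {} person.",
    "My neighbour is {}.",
    "Many {} people live in this city.",
]
IDENTITY_TERMS = ["christian", "muslim", "jewish", "gay", "straight", "black", "white"]

def toxicity_score(text: str) -> float:
    """Stand-in for a real model's probability that `text` is abusive."""
    # Toy heuristic mimicking a known failure mode: spuriously high
    # scores for sentences that merely mention certain identity terms.
    return 0.9 if any(t in text.lower() for t in ("muslim", "gay")) else 0.1

def audit(threshold: float = 0.5) -> dict[str, float]:
    """Flag rate per identity term on neutral templates.

    Every template is non-abusive by construction, so any term whose
    flag rate exceeds zero exposes a false-positive bias against the
    corresponding identity group.
    """
    rates = {}
    for term in IDENTITY_TERMS:
        flags = [toxicity_score(t.format(term)) >= threshold for t in TEMPLATES]
        rates[term] = sum(flags) / len(flags)
    return rates

if __name__ == "__main__":
    for term, rate in sorted(audit().items(), key=lambda kv: -kv[1]):
        print(f"{term:>10}: flag rate on neutral sentences = {rate:.2f}")

Disparities surfaced this way localize which identity mentions drive false positives, which is the kind of evidence a subsequent debiasing step (of the dataset or the classifier) would target.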

Source: xAI-2023 - 1st World Conference on eXplainable Artificial Intelligence, pp. 241–248, Lisbon, Portugal, 26-28/07/2023



BibTeX entry
@inproceedings{oai:it.cnr:prodotti:490206,
	title = {Fairness auditing, explanation and debiasing in linguistic data and language models},
	author = {Marchiori Manerba M.},
	booktitle = {xAI-2023 - 1st World Conference on eXplainable Artificial Intelligence},
	pages = {241--248},
	address = {Lisbon, Portugal},
	year = {2023}
}

Published version: Open Access, also available from ceur-ws.org.
Project: XAI - Science and technology for the explanation of AI decision making

