Manerba M. M., Morini V.
Algorithmic bias, NLP, Fairness in ML, Discrimination, Interpretability, Bias discovery, ML Evaluation, Data awareness, Algorithmic auditing, Explainability ML
Biases can arise and be introduced during each phase of a supervised learning pipeline, eventually leading to harm. Within the task of automatic abusive language detection, this issue becomes particularly severe, since unintended bias towards sensitive topics such as gender, sexual orientation, or ethnicity can harm underrepresented groups. The datasets used to train these models play a crucial role in addressing these challenges. In this contribution, we investigate whether explainability methods can expose the racial dialect bias attested in a popular dataset for abusive language detection. Through preliminary experiments, we found that pure explainability techniques cannot effectively uncover biases in the dataset under analysis: the stereotypes rooted in the data are often too implicit and complex to retrieve.
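As a rough illustration of the kind of analysis the abstract refers to (not the authors' actual pipeline, dataset, or explainability technique), the following minimal Python sketch trains a toy text classifier and inspects token-level weights to check whether group-related terms receive disproportionate importance. The tiny corpus, labels, and watch_terms list are hypothetical placeholders; the paper itself evaluates dedicated explainability methods on a real abusive language detection dataset.

# Minimal, illustrative sketch (assumed setup, not from the paper):
# train a toy abusive-language classifier and inspect per-token weights
# as a crude global "explanation" of what drives the abusive label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy corpus; 1 = abusive, 0 = non-abusive.
texts = [
    "you are a wonderful person",
    "have a great day friend",
    "you people are disgusting",
    "get lost you idiot",
]
labels = [0, 0, 1, 1]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Map each vocabulary token to its learned weight towards the abusive class.
weights = dict(zip(vectorizer.get_feature_names_out(), clf.coef_[0]))

# Hypothetical watch-list of group/dialect markers one might audit.
watch_terms = ["people", "friend"]
for term in watch_terms:
    if term in weights:
        print(f"{term}: weight towards 'abusive' = {weights[term]:+.3f}")

A real audit of the kind discussed in the paper would replace the linear weights with instance-level attributions from a proper explainability method and aggregate them over examples associated with a given dialect or group.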
Source: ECML PKDD 2022 - Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 483–497, Grenoble, France, 19-23/09/2022
Publisher: Springer, Heidelberg, Germany
@inproceedings{oai:it.cnr:prodotti:479349,
  title = {Exposing racial dialect bias in abusive language detection: can explainability play a role?},
  author = {Manerba M. M. and Morini V.},
  publisher = {Springer, Heidelberg, Germany},
  doi = {10.1007/978-3-031-23618-1_32},
  booktitle = {ECML PKDD 2022 - Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 483--497, Grenoble, France, 19-23/09/2022},
  year = {2023}
}
TAILOR
Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization
HumanE-AI-Net
HumanE AI Network
SoBigData-PlusPlus
SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics