2022
Conference article · Open Access

AIMH Lab for Trustworthy AI

Messina N., Carrara F., Coccomini D., Falchi F., Gennaro C., Amato G.

Keywords: Artificial Intelligence, Deep Learning, Adversarial Machine Learning, Attention, Transformer, Computer vision

In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of ISTI-CNR related to Trustworthy AI. Artificial Intelligence is becoming more and more pervasive in our society, controlling recommendation systems in social platforms as well as safety-critical systems like autonomous vehicles. To be safe and trustworthy, these systems need to be easily interpretable and transparent. On the other hand, it is important to spot fake examples forged by malicious AI generative models to fool humans (through fake news or deepfakes) or other AI systems (through adversarial examples); this is required to enforce an ethical use of these powerful new technologies. Driven by these concerns, this paper presents three crucial research directions contributing to the study and development of techniques for reliable, resilient, and explainable deep learning methods. Namely, we report the laboratory's activities on the detection of adversarial examples, the use of attentive models as a way towards explainable deep learning, and the detection of deepfakes in social platforms.

Source: Ital-IA 2022 - Workshop su AI Responsabile ed Affidabile, Online conference, 10/02/2022



BibTeX entry
@inproceedings{oai:it.cnr:prodotti:463969,
	title = {AIMH Lab for Trustworthy AI},
	author = {Messina N. and Carrara F. and Coccomini D. and Falchi F. and Gennaro C. and Amato G.},
	booktitle = {Ital-IA 2022 - Workshop su AI Responsabile ed Affidabile, Online conference, 10/02/2022},
	year = {2022}
}
Published version (Open Access) also available from: www.ital-ia2022.it