Messina N., Carrara F., Coccomini D., Falchi F., Gennaro C., Amato G.
Artificial Intelligence, Deep Learning, Adversarial Machine Learning, Attention, Transformer, Computer Vision
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of ISTI-CNR related to Trustworthy AI. Artificial Intelligence is becoming increasingly pervasive in our society, controlling recommendation systems on social platforms as well as safety-critical systems such as autonomous vehicles. To be safe and trustworthy, these systems must be easily interpretable and transparent. At the same time, it is important to spot fake examples forged by malicious AI generative models to fool humans (through fake news or deepfakes) or other AI systems (through adversarial examples), which is essential to enforce an ethical use of these powerful new technologies. Driven by these concerns, this paper presents three crucial research directions contributing to the study and development of techniques for reliable, resilient, and explainable deep learning methods. Namely, we report the laboratory's activities on the detection of adversarial examples, the use of attentive models as a way towards explainable deep learning, and the detection of deepfakes on social platforms.
Source: Ital-IA 2020 - Workshop su AI Responsabile ed Affidabile, Online conference, 10/02/2022
@inproceedings{oai:it.cnr:prodotti:463969,
  title     = {AIMH Lab for Trustworthy AI},
  author    = {Messina N. and Carrara F. and Coccomini D. and Falchi F. and Gennaro C. and Amato G.},
  booktitle = {Ital-IA 2020 - Workshop su AI Responsabile ed Affidabile, Online conference, 10/02/2022},
  year      = {2022}
}
Amato, Giuseppe (ORCID: 0000-0003-0171-4315)
Carrara, Fabio (ORCID: 0000-0001-5014-5089)
Coccomini, Davide Alessandro (ORCID: 0000-0002-0755-6154)
Falchi, Fabrizio (ORCID: 0000-0001-6258-5313)
Gennaro, Claudio (ORCID: 0000-0002-3715-149X)
Messina, Nicola (ORCID: 0000-0003-3011-2487)
Project: Artificial Intelligence for Media and Humanities (2021-ongoing)