2023
Conference article, Open Access

Explain and interpret few-shot learning

Fedele A.

Keywords: Few-shot learning, Explainable Artificial Intelligence, Interpretable Machine Learning, Siamese networks

Recent advances in Artificial Intelligence have been fueled by vast datasets, powerful computing resources, and sophisticated algorithms. Traditional Machine Learning models, however, struggle when data are scarce. Few-Shot Learning (FSL) offers a promising alternative by training models on only a handful of examples per class. This manuscript introduces FXI-FSL, a framework for eXplainability and Interpretability in FSL, which aims to develop post-hoc explainability algorithms and interpretable-by-design alternatives. A noteworthy contribution is the SIamese Network EXplainer (SINEX), a post-hoc approach that sheds light on the behavior of Siamese Networks. The proposed framework seeks to unveil the rationale behind FSL models, instilling trust in their real-world applications. Moreover, it serves as a safeguard for developers, facilitating model fine-tuning prior to deployment, and as a guide for end users navigating these models' decisions.
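As background for the abstract above: a Siamese network scores the similarity of two inputs by passing both through a shared encoder and comparing the resulting embeddings, which is what allows one-shot classification from a single labeled example per class. The sketch below is purely illustrative and is not the paper's method; it substitutes a trivial normalizing function for the learned encoder, and all names (`embed`, `one_shot_classify`, the class labels) are hypothetical.

```python
import math

def embed(x):
    # Stand-in embedding: in a real Siamese network, a learned encoder
    # with shared weights maps both inputs into a common feature space.
    norm = math.sqrt(sum(v * v for v in x)) or 1.0
    return [v / norm for v in x]

def similarity(a, b):
    # Cosine similarity between the two embedded inputs.
    return sum(p * q for p, q in zip(embed(a), embed(b)))

def one_shot_classify(query, support):
    # support: {class label: single labeled example}.
    # Predict the label whose lone example is most similar to the query.
    return max(support, key=lambda label: similarity(query, support[label]))

support = {
    "class_a": [1.0, 0.0, 0.0],
    "class_b": [0.0, 1.0, 0.0],
}
print(one_shot_classify([0.9, 0.1, 0.0], support))  # -> class_a
```

A post-hoc explainer in the spirit of SINEX would then probe how perturbations of the query change these pairwise similarity scores, rather than inspecting the encoder's weights directly.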

Source: xAI-2023 - 1st World Conference on eXplainable Artificial Intelligence, pp. 233–240, Lisbon, Portugal, 26–28/06/2023



BibTeX entry
@inproceedings{oai:it.cnr:prodotti:490202,
	title = {Explain and interpret few-shot learning},
	author = {Fedele A.},
	booktitle = {xAI-2023 - 1st World Conference on eXplainable Artificial Intelligence},
	pages = {233--240},
	address = {Lisbon, Portugal},
	year = {2023}
}

ISTI Repository: published version, Open Access

Also available from: ceur-ws.org (Open Access)

Projects:
HumanE-AI-Net (HumanE AI Network)
XAI (Science and technology for the explanation of AI decision making)
SoBigData-PlusPlus (SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics)

