2023
Conference article  Open Access

Revisiting ensembling for improving the performance of deep learning models

Bruno A, Moroni D, Martinelli M

Keywords: Ensembling, Bagging, Machine learning, Deep learning, Image classification, Convolutional neural networks

Ensembling is a well-known strategy that fuses several different models into a new model for classification or regression tasks. Over the years, ensembling has been shown to provide superior performance in various contexts related to pattern recognition and artificial intelligence. Moreover, the ideas underlying ensembling have inspired the design of the most recent deep learning architectures: a close analysis of these architectures shows that some connections among layers, and among groups of layers, achieve effects similar to those obtained by bagging, boosting and stacking, the three well-known basic approaches to ensembling. However, we argue that research has not fully leveraged the potential offered by ensembling. This paper therefore investigates possible approaches to combining weak learners, or sub-components of weak learners, in the context of bagging. Building on previous results obtained in specific domains, we extend the approach to a reference dataset and obtain encouraging results.
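As a point of reference for the bagging approach the abstract discusses, the sketch below illustrates the classical recipe: train several weak learners on bootstrap resamples of the data, then fuse their predictions by majority vote. This is a minimal toy illustration of the general technique, not the authors' method; the dataset, the threshold-stump learner, and all function names are invented for the example.

```python
import random

def bootstrap_sample(data, rng):
    # Draw a sample of the same size as `data`, with replacement.
    return [rng.choice(data) for _ in data]

def train_threshold_stump(sample):
    # Toy "weak learner": predict class 1 when x exceeds the sample mean.
    xs = [x for x, _ in sample]
    threshold = sum(xs) / len(xs)
    return lambda x: 1 if x > threshold else 0

def bagged_predict(models, x):
    # Fuse the weak learners by majority vote (the bagging step).
    votes = sum(m(x) for m in models)
    return 1 if 2 * votes >= len(models) else 0

rng = random.Random(0)
# Toy 1-D dataset: points below 5 belong to class 0, the rest to class 1.
data = [(x, 0) for x in range(5)] + [(x, 1) for x in range(5, 10)]
models = [train_threshold_stump(bootstrap_sample(data, rng))
          for _ in range(25)]
print(bagged_predict(models, 8))  # → 1
print(bagged_predict(models, 1))  # → 0
```

In deep learning, the same idea is typically applied by averaging the softmax outputs of several independently trained networks rather than hard-voting over stumps.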

Publisher: Springer



BibTeX entry
@inproceedings{oai:it.cnr:prodotti:471427,
	title = {Revisiting ensembling for improving the performance of deep learning models},
	author = {Bruno, A. and Moroni, D. and Martinelli, M.},
	publisher = {Springer},
	year = {2023}
}