2023
Report · Open Access

Are we using autoencoders in a wrong way?

Martino G., Moroni D., Martinelli M.

Artificial Intelligence · Computer Vision · Pattern Recognition

Autoencoders are certainly among the most studied and widely used Deep Learning models: the idea behind them is to train a model to reconstruct its own input. The peculiarity of these models is that they compress the information through a bottleneck, creating what is called the Latent Space. Autoencoders are generally used for dimensionality reduction, anomaly detection and feature extraction. Given their simplicity and power, these models have been extensively studied and extended. Examples are (i) the Denoising Autoencoder, where the model is trained to reconstruct an image from a noisy version of it; (ii) the Sparse Autoencoder, where the bottleneck is enforced by a regularization term in the loss function; (iii) the Variational Autoencoder, where the latent space is used to generate new, consistent data. In this article, we revisited the standard training of the undercomplete Autoencoder, modifying the shape of the latent space without using any explicit regularization term in the loss function. We forced the model to reconstruct not the observation given in input, but another one sampled from the same class distribution. We also explored the behaviour of the latent space when the model is trained to reconstruct a random sample drawn from the whole dataset.
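
To make the modified training target concrete, below is a minimal PyTorch sketch. It is not the authors' implementation: the architecture sizes, the toy data, and the same_class_targets helper are illustrative assumptions. It trains an undercomplete autoencoder with a plain reconstruction loss, but pairs each input with a different sample drawn from the same class as the reconstruction target.

	# Minimal sketch of the class-conditional reconstruction target
	# described in the abstract. NOT the authors' code: architecture,
	# toy data, and the pairing helper are illustrative assumptions.
	import torch
	import torch.nn as nn
	from torch.utils.data import DataLoader, TensorDataset

	class Autoencoder(nn.Module):
	    def __init__(self, input_dim=784, latent_dim=32):
	        super().__init__()
	        # Undercomplete: the bottleneck (latent_dim) is much smaller
	        # than the input; no explicit regularization term in the loss.
	        self.encoder = nn.Sequential(
	            nn.Linear(input_dim, 256), nn.ReLU(),
	            nn.Linear(256, latent_dim),
	        )
	        self.decoder = nn.Sequential(
	            nn.Linear(latent_dim, 256), nn.ReLU(),
	            nn.Linear(256, input_dim), nn.Sigmoid(),
	        )

	    def forward(self, x):
	        return self.decoder(self.encoder(x))

	def same_class_targets(x, y):
	    """For each input x[i], pick another batch sample with the same
	    label y[i] as the reconstruction target (may fall back to x[i]
	    itself if it is the only sample of its class in the batch)."""
	    target = torch.empty_like(x)
	    for i in range(len(x)):
	        candidates = (y == y[i]).nonzero(as_tuple=True)[0]
	        j = candidates[torch.randint(len(candidates), (1,)).item()]
	        target[i] = x[j]
	    return target

	# Toy data standing in for a real dataset (e.g. MNIST): random
	# "images" with 10 classes, purely to make the sketch executable.
	x_all = torch.rand(512, 784)
	y_all = torch.randint(0, 10, (512,))
	data_loader = DataLoader(TensorDataset(x_all, y_all), batch_size=64)

	model = Autoencoder()
	optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
	loss_fn = nn.MSELoss()

	for x, y in data_loader:
	    target = same_class_targets(x, y)  # swap the usual target (x itself)
	    loss = loss_fn(model(x), target)   # plain reconstruction loss
	    optimizer.zero_grad()
	    loss.backward()
	    optimizer.step()

Sampling the target uniformly from the whole batch, irrespective of the label, would give the second setting mentioned in the abstract, where the model reconstructs a random sample from the whole dataset.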

Source: ISTI Working papers, 2023


BibTeX entry
@techreport{oai:it.cnr:prodotti:486160,
	title = {Are we using autoencoders in a wrong way?},
	author = {Martino G. and Moroni D. and Martinelli M.},
	doi = {10.48550/arXiv.2309.01532},
	institution = {ISTI Working papers},
	year = {2023}
}