18 result(s)
2022 Conference article Open Access
Recent advancements on bio-inspired Hebbian learning for deep neural networks
Lagani G
Deep learning is becoming increasingly popular for extracting information from multimedia data, for indexing and query processing. In recent contributions, we have explored a biologically inspired strategy for Deep Neural Network (DNN) training, based on the Hebbian principle in neuroscience. We studied hybrid approaches in which unsupervised Hebbian learning was used for a pre-training stage, followed by supervised fine-tuning based on Stochastic Gradient Descent (SGD). The resulting semi-supervised strategy exhibited encouraging results on computer vision datasets, motivating further interest towards applications in the domain of large-scale multimedia content-based retrieval.
Source: CEUR WORKSHOP PROCEEDINGS, pp. 610-615. Pisa, Italy, 2022
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Software Metadata Only Access
Hebbian Learning GitHub repository
Lagani G
PyTorch implementation of Hebbian learning algorithms to train deep convolutional neural networks.
Project(s): AI4Media via OpenAIRE

See at: github.com Restricted | CNR IRIS Restricted
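The repository itself is restricted, but the class of rules it implements can be sketched independently. Below is Oja's stabilized variant of the plain Hebbian rule, a standard formulation in this literature; the function name, learning rate, and toy data are illustrative and not taken from the repository.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Oja-rule step: dw = lr * y * (x - y * w).

    The (x - y * w) term keeps the weight norm bounded, unlike
    the plain Hebbian rule dw = lr * y * x, which diverges.
    """
    y = float(np.dot(w, x))          # post-synaptic activation
    return w + lr * y * (x - y * w)  # stabilized Hebbian step

# Repeated presentations of inputs from a fixed distribution drive
# w toward the leading principal direction of the data.
rng = np.random.default_rng(0)
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(2000):
    x = np.array([1.0, 0.2]) * rng.normal()  # 1-D data along (1, 0.2)
    w = oja_update(w, x, lr=0.01)
```

After training, `w` ends up (up to sign) close to the unit vector along the data direction, with its norm self-normalized near 1.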


2022 Other Embargo
Bio-inspired approaches for Deep Learning: from spiking neural networks to Hebbian plasticity
Lagani G
In the past few years, Deep Neural Network (DNN) architectures have achieved outstanding results in several Artificial Intelligence (AI) domains. Even though DNNs draw inspiration from biology, the training methods based on the backpropagation algorithm (backprop) lack neuroscientific plausibility. The goal of this dissertation is to explore biologically-inspired solutions for the learning task. These are interesting because they can help to reproduce features of the human brain, for example, the ability to learn from little experience. The investigation is divided into three phases: first, I explore a novel AI solution based on simulating biological neuronal cultures with a high level of detail, using biologically faithful Spiking Neural Network (SNN) models; second, I investigate neuroscientifically grounded Hebbian learning rules, applied to traditional DNNs in combination with backprop, using computer vision as a case study; third, I consider a more applicative perspective, using neural features derived from Hebbian learning for multimedia content retrieval tasks. I validate the proposed methods on different benchmarks, including MNIST, CIFAR, and ImageNet, obtaining promising results, especially in learning scenarios with scarce data. Moreover, to the best of my knowledge, for the first time, I am able to bring bio-inspired Hebbian methods to ImageNet scale, consisting of over 1 million images.
Project(s): AI4Media via OpenAIRE

See at: etd.adm.unipi.it Restricted | CNR IRIS Restricted | CNR IRIS Restricted


2021 Journal article Open Access
Hebbian semi-supervised learning in a sample efficiency setting
Lagani G, Falchi F, Gennaro C, Amato G
We propose to address the issue of sample efficiency in Deep Convolutional Neural Networks (DCNN) with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, and the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD). In fact, as Hebbian learning is an unsupervised learning method, its potential lies in the possibility of training the internal layers of a DCNN without labels. Only the final fully connected layer has to be trained with labeled examples. We performed experiments on various object recognition datasets, in different regimes of sample efficiency, comparing our semi-supervised (Hebbian for internal layers + SGD for the final fully connected layer) approach with end-to-end supervised backprop training, and with semi-supervised learning based on Variational Auto-Encoders (VAE). The results show that, in regimes where the number of available labeled samples is low, our semi-supervised approach outperforms the other approaches in almost all cases.
Source: NEURAL NETWORKS, vol. 143, pp. 719-731
DOI: 10.1016/j.neunet.2021.08.003
DOI: 10.48550/arxiv.2103.09002
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | Neural Networks Open Access | CNR IRIS Open Access | ISTI Repository Open Access | www.sciencedirect.com Open Access | ZENODO Open Access | Neural Networks Restricted | doi.org Restricted | CNR IRIS Restricted | CNR IRIS Restricted | CNR IRIS Restricted
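The two-stage strategy described in the abstract, unsupervised Hebbian pre-training of internal layers followed by SGD on the classification layer only, can be illustrated on toy data. This is a minimal sketch under simplifying assumptions (a winner-takes-all Hebbian variant, one hidden layer, logistic regression as the final classifier); all names and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: two well-separated Gaussian blobs, few labeled samples.
n = 200
X = np.vstack([rng.normal(-3, 0.7, (n, 2)), rng.normal(3, 0.7, (n, 2))])
y = np.array([0] * n + [1] * n)
labeled = list(range(5)) + list(range(n, n + 5))  # only 10 labels

# Stage 1: unsupervised Hebbian pre-training of the internal layer.
# Winner-takes-all variant: only the most active unit adapts.
W = rng.normal(size=(4, 2)) * 0.1
for _ in range(5):
    for x in X:
        k = np.argmax(W @ x)          # winner unit
        W[k] += 0.05 * (x - W[k])     # move its weights toward the input

def features(x):
    return np.maximum(W @ x, 0.0)     # ReLU output of pre-trained layer

# Stage 2: supervised SGD, but only on the final classification layer
# and only on the small labeled subset.
V, b = np.zeros(4), 0.0
for _ in range(200):
    for i in labeled:
        h = features(X[i])
        p = 1.0 / (1.0 + np.exp(-(V @ h + b)))
        V -= 0.1 * (p - y[i]) * h     # logistic-loss gradient step
        b -= 0.1 * (p - y[i])

acc = np.mean([(1.0 / (1.0 + np.exp(-(V @ features(x) + b))) > 0.5) == t
               for x, t in zip(X, y)])
```

Although only 10 of the 400 samples carry labels, the unsupervised stage gives the classifier separable features, which is the sample-efficiency effect the paper studies at scale.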


2022 Journal article Open Access
Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks
Lagani G, Falchi F, Gennaro C, Amato G
In this paper, we investigate Hebbian learning strategies applied to Convolutional Neural Network (CNN) training. We consider two unsupervised learning approaches: Hebbian Winner-Takes-All (HWTA) and Hebbian Principal Component Analysis (HPCA). The Hebbian learning rules are used to train the layers of a CNN in order to extract features that are then used for classification, without requiring backpropagation (backprop). Experimental comparisons are made with state-of-the-art unsupervised (but backprop-based) Variational Auto-Encoder (VAE) training. For completeness, we consider two supervised Hebbian learning variants (Supervised Hebbian Classifiers--SHC, and Contrastive Hebbian Learning--CHL) for training the final classification layer, which are compared to Stochastic Gradient Descent training. We also investigate hybrid learning methodologies, where some network layers are trained following the Hebbian approach, and others are trained by backprop. We tested our approaches on the MNIST, CIFAR10, and CIFAR100 datasets. Our results suggest that Hebbian learning is generally suitable for training early feature extraction layers, or for retraining higher network layers in fewer training epochs than backprop. Moreover, our experiments show that Hebbian learning outperforms VAE training, with HPCA performing generally better than HWTA.
Source: NEURAL COMPUTING & APPLICATIONS
DOI: 10.1007/s00521-021-06701-4
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE


See at: CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | ISTI Repository Open Access | CNR IRIS Restricted | CNR IRIS Restricted | CNR IRIS Restricted
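Of the two rules compared in the paper, HPCA is closely related to the classical Generalized Hebbian Algorithm (Sanger's rule). The sketch below shows the linear multi-unit form on toy 2-D data; the paper's variant is nonlinear and convolutional, so this is an assumption-laden illustration of the underlying principle, not the authors' exact rule.

```python
import numpy as np

def sanger_step(W, x, lr=0.005):
    """Generalized Hebbian Algorithm (Sanger, 1989):
    dW[i] = lr * y[i] * (x - sum_{j <= i} y[j] * W[j]).
    Rows of W converge to the leading principal components."""
    y = W @ x
    recon = np.zeros_like(x)
    for i in range(W.shape[0]):
        recon = recon + y[i] * W[i]           # cumulative reconstruction
        W[i] = W[i] + lr * y[i] * (x - recon) # Hebbian step on residual
    return W

# Data with variance 9 along axis 0 and 1 along axis 1: the first
# row of W should align with axis 0, the second with axis 1.
rng = np.random.default_rng(7)
W = rng.normal(size=(2, 2)) * 0.1
for _ in range(5000):
    x = rng.normal(size=2) * np.array([3.0, 1.0])
    W = sanger_step(W, x)
```

Subtracting the cumulative reconstruction is what decorrelates the units; without it, every row would collapse onto the same first principal component.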


2022 Conference article Open Access
FastHebb: scaling hebbian training of deep neural networks to ImageNet level
Lagani G, Gennaro C, Fassold H, Amato G
Learning algorithms for Deep Neural Networks are typically based on supervised end-to-end Stochastic Gradient Descent (SGD) training with error backpropagation (backprop). Backprop algorithms require a large number of labelled training samples to achieve high performance. However, in many realistic applications, even if there are plenty of image samples, very few of them are labelled, and semi-supervised sample-efficient training strategies have to be used. Hebbian learning represents a possible approach towards sample-efficient training; however, in current solutions, it does not scale well to large datasets. In this paper, we present FastHebb, an efficient and scalable solution for Hebbian learning which achieves higher efficiency by 1) merging together update computation and aggregation over a batch of inputs, and 2) leveraging efficient matrix multiplication algorithms on GPU. We validate our approach on different computer vision benchmarks, in a semi-supervised learning scenario. FastHebb outperforms previous solutions by up to 50 times in terms of training speed, and notably, for the first time, we are able to bring Hebbian algorithms to ImageNet scale.
DOI: 10.1007/978-3-031-17849-8_20
Project(s): AI4Media via OpenAIRE


See at: CNR IRIS Open Access | link.springer.com Open Access | doi.org Restricted | CNR IRIS Restricted | CNR IRIS Restricted
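The first optimization named in the abstract, merging update computation with aggregation over a batch, can be demonstrated for a plain Hebbian rule dW = lr * sum_b y_b x_b^T: the per-sample loop collapses into a single matrix product, which is what GPUs execute efficiently. This is a schematic reconstruction, not the paper's implementation, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d_in, d_out = 64, 32, 16
X = rng.normal(size=(B, d_in))            # batch of inputs
W = rng.normal(size=(d_out, d_in)) * 0.1  # layer weights
Y = X @ W.T                               # batch of activations
lr = 0.01

# Naive: compute each sample's update, then aggregate over the batch.
dW_loop = np.zeros_like(W)
for b in range(B):
    dW_loop += lr * np.outer(Y[b], X[b])

# Merged: one matrix product performs computation and aggregation.
dW_matmul = lr * Y.T @ X
```

The two updates are numerically identical; the speedup comes from replacing B rank-1 accumulations with one dense GEMM, which highly optimized GPU kernels handle in a single call.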


2023 Other Restricted
THE D.8.8.1 - State of the art for digital models of cultured neural networks
Lagani G, Falchi F, Amato G
THE deliverable 8.8.1 is a technical report about current state-of-the-art approaches in the field of bio-inspired technologies for Artificial Intelligence (AI).

See at: CNR IRIS Restricted | CNR IRIS Restricted


2023 Conference article Open Access
AIMH Lab for a sustainable bio-inspired AI
Lagani G, Falchi F, Gennaro C, Amato G
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of ISTI-CNR related to Sustainable AI. In particular, we discuss the problem of the environmental impact of AI research, and we present a research direction aimed at creating effective intelligent systems with a reduced ecological footprint. The proposal is based on bio-inspired learning, which takes inspiration from the biological processes underlying human intelligence in order to produce more energy-efficient AI systems. In fact, biological brains are able to perform complex computations with a power consumption that is orders of magnitude smaller than that of traditional AI. The ability to control and replicate these biological processes reveals promising results towards the realization of sustainable AI.
Source: CEUR WORKSHOP PROCEEDINGS, pp. 575-584. Pisa, Italy, 29-30/05/2023

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Conference article Open Access
Assessing pattern recognition performance of neuronal cultures through accurate simulation
Lagani G, Mazziotti R, Falchi F, Gennaro C, Cicchini Gm, Pizzorusso T, Cremisi F, Amato G
Previous work has shown that it is possible to train neuronal cultures on Multi-Electrode Arrays (MEAs) to recognize very simple patterns. However, this work was mainly focused on demonstrating that it is possible to induce plasticity in cultures, rather than on performing a rigorous assessment of their pattern recognition performance. In this paper, we address this gap by developing a methodology that allows us to assess the performance of neuronal cultures on a learning task. Specifically, we propose a digital model of the real cultured neuronal networks; we identify biologically plausible simulation parameters that allow us to reliably reproduce the behavior of real cultures; we use the simulated culture to perform handwritten digit recognition and rigorously evaluate its performance; we also show that it is possible to find improved simulation parameters for the specific task, which can guide the creation of real cultures.
Source: INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING, pp. 726-729. Online, 4-6/05/2021
DOI: 10.1109/ner49283.2021.9441166
DOI: 10.48550/arxiv.2012.10355
Project(s): AI4Media via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | arxiv.org Open Access | ZENODO Open Access | IRIS Cnr Open Access | IRIS Cnr Open Access | Software Heritage Restricted | Software Heritage Restricted | dblp.uni-trier.de Restricted | doi.org Restricted | doi.org Restricted | GitHub Restricted | GitHub Restricted | Flore (Florence Research Repository) Restricted | CNR IRIS Restricted | CNR IRIS Restricted
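The digital model in the paper is a detailed, biologically plausible simulation; its basic ingredient, a spiking neuron, can be sketched with a simple leaky integrate-and-fire unit. All parameters below are illustrative defaults, not the calibrated values identified in the paper.

```python
def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire: dv/dt = (v_rest - v + I) / tau.
    Emits a spike and resets the membrane when v crosses threshold."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt * (v_rest - v + i_t) / tau  # Euler integration step
        if v >= v_thresh:
            spikes.append(t)                # record spike time
            v = v_reset                     # reset after firing
    return spikes

# A constant 20 mV drive pushes the steady state to -45 mV, above
# threshold, so the neuron fires periodically; zero drive never fires.
spikes = lif_simulate([20.0] * 200)
```

A full culture model wires many such units together with synaptic weights and plasticity rules, which is where the simulation parameters the paper tunes come in.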


2022 Conference article Open Access
Deep features for CBIR with scarce data using Hebbian learning
Lagani G., Bacciu D., Gallicchio C., Falchi F., Gennaro C., Amato G.
Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content-Based Image Retrieval (CBIR). Recently, biologically inspired Hebbian learning algorithms have shown promise for DNN training. In this contribution, we study the performance of such algorithms in the development of feature extractors for CBIR tasks. Specifically, we consider a semi-supervised learning strategy in two steps: first, an unsupervised pre-training stage is performed using Hebbian learning on the image dataset; second, the network is fine-tuned using supervised Stochastic Gradient Descent (SGD) training. For the unsupervised pre-training stage, we explore the nonlinear Hebbian Principal Component Analysis (HPCA) learning rule. For the supervised fine-tuning stage, we assume sample efficiency scenarios, in which the amount of labeled samples is just a small fraction of the whole dataset. Our experimental analysis, conducted on the CIFAR10 and CIFAR100 datasets, shows that, when few labeled samples are available, our Hebbian approach provides relevant improvements compared to various alternative methods.
DOI: 10.1145/3549555.3549587
DOI: 10.48550/arxiv.2205.08935
Project(s): AI4Media via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | dl.acm.org Open Access | ZENODO Open Access | CNR IRIS Open Access | IRIS Cnr Restricted | doi.org Restricted | doi.org Restricted | Archivio della Ricerca - Università di Pisa Restricted | IRIS Cnr Restricted | CNR IRIS Restricted | CNR IRIS Restricted
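Once a descriptor has been extracted for each image, CBIR itself reduces to nearest-neighbor search in feature space. The sketch below ranks a database by cosine similarity; the random vectors are stand-ins for deep features, which in the paper would come from the Hebbian-trained network.

```python
import numpy as np

def cosine_retrieval(query_feat, db_feats, topk=3):
    """Rank database items by cosine similarity to the query descriptor."""
    q = query_feat / np.linalg.norm(query_feat)
    D = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = D @ q                      # cosine similarity to every item
    return np.argsort(-sims)[:topk]   # indices of the best matches

rng = np.random.default_rng(3)
db = rng.normal(size=(100, 16))               # stand-in feature database
query = db[42] + 0.01 * rng.normal(size=16)   # near-duplicate of item 42
ranked = cosine_retrieval(query, db)
```

A near-duplicate query retrieves its source item first, which is the basic sanity check for any CBIR feature extractor.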


2024 Journal article Open Access
Scalable bio-inspired training of Deep Neural Networks with FastHebb
Lagani G., Falchi F., Gennaro C., Fassold H., Amato G.
Recent work on sample-efficient training of Deep Neural Networks (DNNs) proposed a semi-supervised methodology based on biologically inspired Hebbian learning, combined with traditional backprop-based training. Promising results were achieved on various computer vision benchmarks, in scenarios of scarce labeled data availability. However, current Hebbian learning solutions can hardly address large-scale scenarios due to their demanding computational cost. In order to tackle this limitation, in this contribution, we investigate a novel solution, named FastHebb (FH), based on the reformulation of Hebbian learning rules in terms of matrix multiplications, which can be executed more efficiently on GPU. Starting from the Soft-Winner-Takes-All (SWTA) and Hebbian Principal Component Analysis (HPCA) learning rules, we formulate their improved FH versions: SWTA-FH and HPCA-FH. We experimentally show that the proposed approach accelerates training speed by up to 70 times, allowing us to gracefully scale Hebbian learning experiments to large datasets such as ImageNet and network architectures such as VGG.
Source: NEUROCOMPUTING, vol. 595
DOI: 10.1016/j.neucom.2024.127867


See at: CNR IRIS Open Access | www.sciencedirect.com Open Access | CNR IRIS Restricted | CNR IRIS Restricted


2019 Conference article Open Access
Hebbian learning meets deep convolutional neural networks
Amato G, Carrara F, Falchi F, Gennaro C, Lagani G
Neural networks are said to be biologically inspired since they mimic the behavior of real neurons. However, several processes in state-of-the-art neural networks, including Deep Convolutional Neural Networks (DCNN), are far from the ones found in animal brains. One relevant difference is the training process. In state-of-the-art artificial neural networks, the training process is based on backpropagation and Stochastic Gradient Descent (SGD) optimization. However, studies in neuroscience strongly suggest that this kind of process does not occur in the biological brain. Rather, learning methods based on Spike-Timing-Dependent Plasticity (STDP) or the Hebbian learning rule seem to be more plausible, according to neuroscientists. In this paper, we investigate the use of the Hebbian learning rule when training Deep Neural Networks for image classification, by proposing a novel weight update rule for shared kernels in DCNNs. We perform experiments using the CIFAR-10 dataset, in which we employ Hebbian learning, along with SGD, to train parts of the model or whole networks for the task of image classification, and we discuss their performance thoroughly, considering both effectiveness and efficiency aspects.
DOI: 10.1007/978-3-030-30642-7_29
Project(s): AI4EU via OpenAIRE


See at: CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | Lecture Notes in Computer Science Restricted | CNR IRIS Restricted | CNR IRIS Restricted
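The difficulty the paper addresses is that a convolutional kernel is shared across spatial positions, so a Hebbian update must combine contributions from all of them. One common way to realize this (sketched here as an assumption, not the authors' exact rule) is to unfold the image into patches, so that convolution becomes a matrix product, and sum a plain Hebbian contribution over every position that shares the kernel.

```python
import numpy as np

def extract_patches(img, k):
    """All k x k patches of a 2-D image, flattened to rows (im2col)."""
    H, W = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1)
                     for j in range(W - k + 1)])

def hebbian_conv_update(kernels, img, lr=0.01):
    """Plain Hebbian step for shared kernels: every spatial position
    contributes y * patch, and contributions are summed because the
    same kernel is applied at every position."""
    k = int(np.sqrt(kernels.shape[1]))
    P = extract_patches(img, k)        # (positions, k*k)
    Y = P @ kernels.T                  # (positions, n_kernels) activations
    return kernels + lr * Y.T @ P      # aggregate update over positions

rng = np.random.default_rng(1)
kernels = rng.normal(size=(4, 9)) * 0.1   # four 3x3 kernels, flattened
img = rng.normal(size=(8, 8))
new_kernels = hebbian_conv_update(kernels, img)
```

The matrix product `Y.T @ P` is exactly the sum of the per-position rank-1 Hebbian updates, so weight sharing is respected by construction.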


2019 Other Open Access
AIMIR 2019 Research Activities
Amato G, Bolettieri P, Carrara F, Ciampi L, Di Benedetto M, Debole F, Falchi F, Gennaro C, Lagani G, Massoli Fv, Messina N, Rabitti F, Savino P, Vadicamo L, Vairo C
The Artificial Intelligence for Multimedia Information Retrieval (AIMIR) research group is part of the NeMIS laboratory of the Information Science and Technologies Institute "A. Faedo" (ISTI) of the Italian National Research Council (CNR). The AIMIR group has long experience in topics related to: Artificial Intelligence, Multimedia Information Retrieval, Computer Vision, and similarity search on a large scale. We aim at investigating the use of Artificial Intelligence and Deep Learning for Multimedia Information Retrieval, addressing both effectiveness and efficiency. Multimedia information retrieval techniques should be able to provide users with pertinent results, fast, on huge amounts of multimedia data. Application areas of our research results range from cultural heritage to smart tourism, from security to smart cities, from mobile visual search to augmented reality. This report summarizes the 2019 activities of the research group.

See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Other Restricted
THE D.3.2.1 - AA@THE User needs, technical requirements and specifications
Pratali L, Campana M G, Delmastro F, Di Martino F, Pescosolido L, Barsocchi P, Broccia G, Ciancia V, Gennaro C, Girolami M, Lagani G, La Rosa D, Latella D, Magrini M, Manca M, Massink M, Mattioli A, Moroni D, Palumbo F, Paradisi P, Paternò F, Santoro C, Sebastiani L, Vairo C
Deliverable D3.2.1 of the PNRR project "Ecosistemi ed innovazione - THE".

See at: CNR IRIS Restricted | CNR IRIS Restricted


2020 Other Open Access
AIMH research activities 2020
Aloia N., Amato G., Bartalesi Lenzi V., Benedetti F., Bolettieri P., Carrara F., Casarosa V., Ciampi L., Concordia C., Corbara S., Esuli A., Falchi F., Gennaro C., Lagani G., Massoli F. V., Meghini C., Messina N., Metilli D., Molinari A., Moreo Fernandez A., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Thanos C., Trupiano L., Vadicamo L., Vairo C.
Annual Report of the Artificial Intelligence for Media and Humanities laboratory (AIMH) research activities in 2020.
DOI: 10.32079/isti-ar-2020/001


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Other Open Access
AIMH research activities 2021
Aloia N., Amato G., Bartalesi Lenzi V., Benedetti F., Bolettieri P., Cafarelli D., Carrara F., Casarosa V., Coccomini D., Ciampi L., Concordia C., Corbara S., Di Benedetto M., Esuli A., Falchi F., Gennaro C., Lagani G., Massoli F. V., Meghini C., Messina N., Metilli D., Molinari A., Moreo Fernandez A., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C.
The Artificial Intelligence for Media and Humanities laboratory (AIMH) has the mission to investigate and advance the state of the art in the Artificial Intelligence field, specifically addressing applications to digital media and digital humanities, while also taking into account issues related to scalability. This report summarizes the 2021 activities of the research group.
DOI: 10.32079/isti-ar-2021/003


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2022 Other Open Access
AIMH research activities 2022
Aloia N., Amato G., Bartalesi Lenzi V., Benedetti F., Bolettieri P., Cafarelli D., Carrara F., Casarosa V., Ciampi L., Coccomini D. A., Concordia C., Corbara S., Di Benedetto M., Esuli A., Falchi F., Gennaro C., Lagani G., Lenzi E., Meghini C., Messina N., Metilli D., Molinari A., Moreo Fernandez A. D., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C.
The Artificial Intelligence for Media and Humanities laboratory (AIMH) has the mission to investigate and advance the state of the art in the Artificial Intelligence field, specifically addressing applications to digital media and digital humanities, while also taking into account issues related to scalability. This report summarizes the 2022 activities of the research group.
DOI: 10.32079/isti-ar-2022/002


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Other Open Access
AIMH Research Activities 2023
Aloia N., Amato G., Bartalesi Lenzi V., Bianchi L., Bolettieri P., Bosio C., Carraglia M., Carrara F., Casarosa V., Ciampi L., Coccomini D. A., Concordia C., Corbara S., De Martino C., Di Benedetto M., Esuli A., Falchi F., Fazzari E., Gennaro C., Lagani G., Lenzi E., Meghini C., Messina N., Molinari A., Moreo Fernandez A., Nardi A., Pedrotti A., Pratelli N., Puccetti G., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C., Versienti L.
The AIMH (Artificial Intelligence for Media and Humanities) laboratory is dedicated to exploring and pushing the boundaries in the field of Artificial Intelligence, with a particular focus on its application in digital media and humanities. The lab's objective is to advance the current state of AI technology, particularly in deep learning, text analysis, computer vision, multimedia information retrieval, multimedia content analysis, recognition, and retrieval. This report encapsulates the laboratory's progress and activities throughout the year 2023.
DOI: 10.32079/isti-ar-2023/001


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted