22 result(s)
2025 Conference article Restricted
A biologically-inspired approach to biomedical image segmentation
Ciampi L., Lagani G., Amato G., Falchi F.
We present a novel bio-inspired semi-supervised learning strategy for semantic segmentation architectures. It is based on the so-called Hebbian principle “neurons that fire together wire together”, which closely mimics brain synaptic adaptations and provides a promising, biologically plausible local learning rule for updating neural network weights without supervision. Our approach includes two stages. In the first stage, we exploit the Hebbian principle for unsupervised weight updating of both convolutional and, for the first time, transpose-convolutional layers characterizing downsampling-upsampling semantic segmentation architectures. In the second stage, we fine-tune the model on a few labeled data samples. We assess our methodology through an experimental evaluation involving several collections of biomedical images, a context of outstanding importance in computer vision that is particularly affected by data scarcity. Preliminary results demonstrate the effectiveness of our proposed method compared with state-of-the-art approaches under various labeled training data regimes. The code to reproduce our experiments is available at: https://tinyurl.com/ycywfjc2.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 15636, pp. 158-171. Milan, Italy, 29/09-04/10/2024
DOI: 10.1007/978-3-031-91578-9_10
Project(s): SUN via OpenAIRE, Tuscany Health Ecosystem


See at: doi.org Restricted | CNR IRIS Restricted | link.springer.com Restricted
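The “fire together, wire together” principle mentioned in the abstract above admits a very compact formulation: the weight change is proportional to the product of pre- and post-synaptic activity, so no label or error signal is needed. A minimal illustrative sketch (names and shapes are assumptions, not the authors' code):

```python
import numpy as np

def hebbian_update(w, x, lr=0.01):
    """One unsupervised Hebbian step for a linear layer.

    w : (out_dim, in_dim) weight matrix
    x : (in_dim,) input activation
    """
    y = w @ x                 # post-synaptic response
    dw = lr * np.outer(y, x)  # correlation of pre- and post-synaptic activity
    return w + dw

# Example: a single update from one input pattern
w = hebbian_update(np.ones((4, 3)), np.array([1.0, 0.0, 1.0]))
```

In practice, plain Hebbian rules like this are paired with a normalization or competition mechanism (as in the SWTA and HPCA variants cited elsewhere on this page) to keep weights bounded.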


2025 Conference article Open Access OPEN
CA3D: Convolutional-Attentional 3D nets for efficient video activity recognition on the edge
Lagani G., Falchi F., Gennaro C., Amato G.
In this paper, we introduce a deep learning solution for video activity recognition that leverages an innovative combination of convolutional layers with a linear-complexity attention mechanism. Moreover, we introduce a novel quantization mechanism to further improve the efficiency of our model during both training and inference. Our model maintains a reduced computational cost, while preserving robust learning and generalization capabilities. Our approach addresses the issues related to the high computing requirements of current models, with the goal of achieving competitive accuracy on consumer and edge devices, enabling smart home and smart healthcare applications where efficiency and privacy issues are of concern. We experimentally validate our model on different established and publicly available video activity recognition benchmarks, improving accuracy over alternative models at a competitive computing cost.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 15633, pp. 235-251. Milan, Italy, 29/09/2024
DOI: 10.1007/978-3-031-91979-4_18
DOI: 10.48550/arxiv.2505.19928
Project(s): AI4Media via OpenAIRE, SUN via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | CNR IRIS Open Access | link.springer.com Open Access | doi.org Restricted | CNR IRIS Restricted
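The abstract above mentions a linear-complexity attention mechanism. As a point of reference only (the paper's exact formulation may differ), generic kernelized linear attention replaces softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), reducing the cost from quadratic to linear in sequence length. A hedged sketch with an illustrative feature map:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized attention: O(n) in sequence length instead of O(n^2).

    Q : (n_q, d) queries, K : (n_k, d) keys, V : (n_k, d_v) values
    """
    phi = lambda x: np.maximum(x, 0) + 1.0   # simple positive feature map (assumption)
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                            # (d, d_v): fixed-size summary of all keys/values
    z = Qp @ Kp.sum(axis=0)                  # per-query normalizer
    return (Qp @ kv) / (z[:, None] + eps)
```

Because the key-value summary `kv` has a fixed size independent of the sequence length, memory and compute grow linearly with the number of frames, which is what makes such mechanisms attractive on edge devices.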


2024 Other Open Access OPEN
AIMH Research Activities 2024
Aloia N., Amato G., Bartalesi Lenzi V., Bianchi L., Bolettieri P., Bosio C., Carraglia M., Carrara F., Casarosa V., Cassese M., Ciampi L., Coccomini D. A., Concordia C., Connor R., Corbara S., De Martino C., Di Benedetto M., Esuli A., Falchi F., Fazzari E., Gennaro C., Iannello L., Negi K., Lagani G., Lenzi E., Leocata M., Malvaldi M., Meghini C., Messina N., Moreo Fernandez A., Nardi A., Pacini G., Pedrotti A., Pratelli N., Puccetti G., Rabitti F., Savino P., Scotti F., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C., Versienti L., Volpi L.
The AIMH (Artificial Intelligence for Media and Humanities) laboratory is committed to advancing the field of Artificial Intelligence, with a special emphasis on its applications in digital media and the humanities. The lab aims to improve AI technologies, particularly in areas such as deep learning, text analysis, computer vision, multimedia information retrieval, content analysis, recognition, and retrieval. This report summarizes the laboratory’s achievements and activities over the course of 2024.
DOI: 10.32079/isti-ar-2024/001


See at: CNR IRIS Open Access | CNR IRIS Restricted


2024 Journal article Open Access OPEN
Scalable bio-inspired training of Deep Neural Networks with FastHebb
Lagani G., Falchi F., Gennaro C., Fassold H., Amato G.
Recent work on sample-efficient training of Deep Neural Networks (DNNs) proposed a semi-supervised methodology based on biologically inspired Hebbian learning, combined with traditional backprop-based training. Promising results were achieved on various computer vision benchmarks, in scenarios of scarce labeled data availability. However, current Hebbian learning solutions can hardly address large-scale scenarios due to their demanding computational cost. In order to tackle this limitation, in this contribution, we investigate a novel solution, named FastHebb (FH), based on the reformulation of Hebbian learning rules in terms of matrix multiplications, which can be executed more efficiently on GPU. Starting from the Soft-Winner-Takes-All (SWTA) and Hebbian Principal Component Analysis (HPCA) learning rules, we formulate their improved FH versions: SWTA-FH and HPCA-FH. We experimentally show that the proposed approach accelerates training speed by up to 70 times, allowing us to gracefully scale Hebbian learning experiments to large datasets and network architectures, such as ImageNet and VGG.
Source: NEUROCOMPUTING, vol. 595
DOI: 10.1016/j.neucom.2024.127867


See at: CNR IRIS Open Access | www.sciencedirect.com Open Access | CNR IRIS Restricted
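The core reformulation behind FastHebb, as described in the abstract above, can be illustrated with a toy example: summing per-sample Hebbian outer products over a batch is algebraically a single matrix multiplication, which GPUs execute far more efficiently. A sketch under that reading (illustrative code, not the released implementation):

```python
import numpy as np

def hebb_batch_naive(w, X, lr=0.01):
    """Per-sample Hebbian updates, aggregated in a Python loop."""
    dw = np.zeros_like(w)
    for x in X:
        y = w @ x
        dw += np.outer(y, x)          # one outer product per sample
    return w + lr * dw / len(X)

def hebb_batch_fast(w, X, lr=0.01):
    """Same aggregate update as one GEMM: sum_i outer(W x_i, x_i) = W XᵀX."""
    Y = X @ w.T                       # (batch, out): all responses at once
    dw = Y.T @ X                      # a single matrix product covers the batch
    return w + lr * dw / len(X)
```

The two functions compute identical updates; only the fast version maps the batch aggregation onto one GPU-friendly matrix product, which is the kind of rewriting the reported speedups rely on.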


2023 Other Restricted
THE D.3.2.1 - AA@THE User needs, technical requirements and specifications
Pratali L, Campana M G, Delmastro F, Di Martino F, Pescosolido L, Barsocchi P, Broccia G, Ciancia V, Gennaro C, Girolami M, Lagani G, La Rosa D, Latella D, Magrini M, Manca M, Massink M, Mattioli A, Moroni D, Palumbo F, Paradisi P, Paternò F, Santoro C, Sebastiani L, Vairo C
Deliverable D3.2.1 of the PNRR project “Ecosistemi ed innovazione” - THE

See at: CNR IRIS Restricted


2023 Other Restricted
THE D.8.8.1 - State of the art for digital models of cultured neural networks
Lagani G, Falchi F, Amato G
THE deliverable 8.8.1 is a technical report about current state-of-the-art approaches in the field of bio-inspired technologies for Artificial Intelligence (AI).

See at: CNR IRIS Restricted


2023 Conference article Open Access OPEN
AIMH Lab for a sustainable bio-inspired AI
Lagani G, Falchi F, Gennaro C, Amato G
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Sustainable AI. In particular, we discuss the problem of the environmental impact of AI research, and we discuss a research direction aimed at creating effective intelligent systems with a reduced ecological footprint. The proposal is based on bio-inspired learning, which takes inspiration from the biological processes underlying human intelligence in order to produce more energy-efficient AI systems. In fact, biological brains are able to perform complex computations, with a power consumption which is orders of magnitude smaller than that of traditional AI. The ability to control and replicate these biological processes reveals promising results towards the realization of sustainable AI.
Source: CEUR WORKSHOP PROCEEDINGS, pp. 575-584. Pisa, Italy, 29-30/05/2023

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Other Open Access OPEN
AIMH Research Activities 2023
Aloia N., Amato G., Bartalesi Lenzi V., Bianchi L., Bolettieri P., Bosio C., Carraglia M., Carrara F., Casarosa V., Ciampi L., Coccomini D. A., Concordia C., Corbara S., De Martino C., Di Benedetto M., Esuli A., Falchi F., Fazzari E., Gennaro C., Lagani G., Lenzi E., Meghini C., Messina N., Molinari A., Moreo Fernandez A., Nardi A., Pedrotti A., Pratelli N., Puccetti G., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C., Versienti L.
The AIMH (Artificial Intelligence for Media and Humanities) laboratory is dedicated to exploring and pushing the boundaries in the field of Artificial Intelligence, with a particular focus on its application in digital media and the humanities. The lab's objective is to enhance the current state of AI technology, particularly in deep learning, text analysis, computer vision, multimedia information retrieval, multimedia content analysis, recognition, and retrieval. This report encapsulates the laboratory's progress and activities throughout the year 2023.
DOI: 10.32079/isti-ar-2023/001


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Other Open Access OPEN
D3.2.1: AA@THE User needs, technical requirements and specifications
Lorenza Pratali, Franca Delmastro, Mattia Campana, Flavio Di Martino, Loreto Pescosolido, Paolo Barsocchi, Giovanna Broccia, Vincenzo Ciancia, Claudio Gennaro, Michele Girolami, Gabriele Lagani, Diego Latella, Massimo Magrini, Marco Manca, Mieke Massink, Andrea Mattioli, Davide Moroni, Filippo Palumbo, Paolo Paradisi, Fabio Paternò, Laura Sebastiani, Claudio Vairo, Carmelina Santoro, Davide La Rosa
The objective of this deliverable is to compile a comprehensive report that describes the user needs, requirements, and technical specifications necessary to successfully implement the pilot study. To achieve this, it is crucial to establish contacts with specific associations and medical experts, which, collaboratively, will help to establish exclusion and inclusion criteria for the target population of healthy adults. Furthermore, another related objective is to define the different categories of users that will interact with the system and their specific needs. This holistic approach will ensure that the system is designed and developed to satisfy the diverse needs of the users and be aligned with the goals of the project. To achieve the milestone M3.2.1, we made significant progress in the definition of the pilot study for the AA@THE subproject. One of our key achievements is the successful description of users’ needs, requirements, and technical specifications necessary for the study. We worked closely with both a specialized association of personal trainers for Adapted Physical Activity (APA) for older adults, already active in the area of Pisa, and the medical partner, who played a crucial role in providing valuable insights and expertise to establish exclusion and inclusion criteria for the target population of healthy adults. In this milestone, we also defined the activities and services that we intend to offer. Specifically, we plan to provide technological systems aimed at monitoring physical and cognitive training processes, as well as stability evaluations, by instrumenting a gym dedicated to active and healthy ageing, which is located within the CNR research area in Pisa. Additionally, we will conduct sleep, nutrition, and sedentary assessments at the volunteers' homes. Furthermore, we successfully defined the different user categories involved in the study.
To facilitate the recruitment process and people engagement, on January 17th 2023, we organized an open day in collaboration with the gym association, where we presented the overall objectives of the project and collected feedback from a group of healthy adults over 65 already involved in APA training. This allowed us to gain a comprehensive understanding of the users' specific needs in terms of system interactions, thus establishing the system requirements and technical specifications of the AA@THE ecosystem. In parallel, a specific action on “Automatic Support of Medical Image Analysis” has been initiated by members of the “Formal Methods and Tools” group at CNR-ISTI. This action aims at leveraging Formal Methods in Computer Science, Logic, and Model Checking to augment state-of-the-art machine learning techniques for automatic medical image analysis, enabling end-users to make specific assumptions on the level of accountability and affordability of the system. The methodology is based on a strict intertwining between theory and experimentation, with the development of new theoretical foundations for model reduction and efficient model checking, and the experimentation and finalization of a graphical user interface that is being evaluated from the points of view of usability and cognitive load. Moreover, the design and implementation of a suitable GUI able to support the analysis of medical images has been conducted and tested with small groups of people from the hospital in Lucca. The proposed GUI prototype has been evaluated from a cognitive point of view in order to allow easy employment, with little training, by general practitioners and caregivers who may lack the technical skills required to use fully-fledged medical imaging programs.
Project(s): Tuscany Health Ecosystem

See at: CNR IRIS Open Access | CNR IRIS Restricted


2022 Journal article Open Access OPEN
Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks
Lagani G, Falchi F, Gennaro C, Amato G
In this paper, we investigate Hebbian learning strategies applied to Convolutional Neural Network (CNN) training. We consider two unsupervised learning approaches, Hebbian Winner-Takes-All (HWTA) and Hebbian Principal Component Analysis (HPCA). The Hebbian learning rules are used to train the layers of a CNN in order to extract features that are then used for classification, without requiring backpropagation (backprop). Experimental comparisons are made with state-of-the-art unsupervised (but backprop-based) Variational Auto-Encoder (VAE) training. For completeness, we consider two supervised Hebbian learning variants (Supervised Hebbian Classifiers--SHC, and Contrastive Hebbian Learning--CHL) for training the final classification layer, which are compared to Stochastic Gradient Descent training. We also investigate hybrid learning methodologies, where some network layers are trained following the Hebbian approach, and others are trained by backprop. We tested our approaches on the MNIST, CIFAR10, and CIFAR100 datasets. Our results suggest that Hebbian learning is generally suitable for training early feature extraction layers, or for retraining higher network layers in fewer training epochs than backprop. Moreover, our experiments show that Hebbian learning outperforms VAE training, with HPCA performing generally better than HWTA.
Source: NEURAL COMPUTING & APPLICATIONS
DOI: 10.1007/s00521-021-06701-4
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE


See at: CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2022 Other Open Access OPEN
AIMH research activities 2022
Aloia N., Amato G., Bartalesi Lenzi V., Benedetti F., Bolettieri P., Cafarelli D., Carrara F., Casarosa V., Ciampi L., Coccomini D. A., Concordia C., Corbara S., Di Benedetto M., Esuli A., Falchi F., Gennaro C., Lagani G., Lenzi E., Meghini C., Messina N., Metilli D., Molinari A., Moreo Fernandez A. D., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C.
The Artificial Intelligence for Media and Humanities laboratory (AIMH) has the mission to investigate and advance the state of the art in the Artificial Intelligence field, specifically addressing applications to digital media and digital humanities, and also taking into account issues related to scalability. This report summarizes the 2022 activities of the research group.
DOI: 10.32079/isti-ar-2022/002


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2022 Conference article Open Access OPEN
FastHebb: scaling hebbian training of deep neural networks to ImageNet level
Lagani G, Gennaro C, Fassold H, Amato G
Learning algorithms for Deep Neural Networks are typically based on supervised end-to-end Stochastic Gradient Descent (SGD) training with error backpropagation (backprop). Backprop algorithms require a large number of labelled training samples to achieve high performance. However, in many realistic applications, even if there are plenty of image samples, very few of them are labelled, and semi-supervised sample-efficient training strategies have to be used. Hebbian learning represents a possible approach towards sample-efficient training; however, in current solutions, it does not scale well to large datasets. In this paper, we present FastHebb, an efficient and scalable solution for Hebbian learning which achieves higher efficiency by 1) merging together update computation and aggregation over a batch of inputs, and 2) leveraging efficient matrix multiplication algorithms on GPU. We validate our approach on different computer vision benchmarks, in a semi-supervised learning scenario. FastHebb outperforms previous solutions by up to 50 times in terms of training speed, and notably, for the first time, we are able to bring Hebbian algorithms to ImageNet scale.
DOI: 10.1007/978-3-031-17849-8_20
Project(s): AI4Media via OpenAIRE


See at: CNR IRIS Open Access | link.springer.com Open Access | doi.org Restricted | CNR IRIS Restricted


2022 Conference article Open Access OPEN
Recent advancements on bio-inspired Hebbian learning for deep neural networks
Lagani G
Deep learning is becoming more and more popular to extract information from multimedia data for indexing and query processing. In recent contributions, we have explored a biologically inspired strategy for Deep Neural Network (DNN) training, based on the Hebbian principle in neuroscience. We studied hybrid approaches in which unsupervised Hebbian learning was used for a pre-training stage, followed by supervised fine-tuning based on Stochastic Gradient Descent (SGD). The resulting semi-supervised strategy exhibited encouraging results on computer vision datasets, motivating further interest towards applications in the domain of large-scale multimedia content-based retrieval.
Source: CEUR WORKSHOP PROCEEDINGS, pp. 610-615. Pisa, Italy, 2022
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2022 Other Embargo
Bio-inspired approaches for Deep Learning: from spiking neural networks to Hebbian plasticity
Lagani G
In the past few years, Deep Neural Network (DNN) architectures have achieved outstanding results in several Artificial Intelligence (AI) domains. Even though DNNs draw inspiration from biology, the training methods based on the backpropagation algorithm (backprop) lack neuroscientific plausibility. The goal of this dissertation is to explore biologically-inspired solutions for the learning task. These are interesting because they can help to reproduce features of the human brain, for example, the ability to learn from a little experience. The investigation is divided into three phases: first, I explore a novel AI solution based on simulating neuronal biological cultures with a high level of detail, using biologically faithful Spiking Neural Network (SNN) models; second, I investigate neuroscientifically grounded Hebbian learning rules, applied to traditional DNNs in combination with backprop, using computer vision as a case study; third, I consider a more applicative perspective, using neural features derived from Hebbian learning for multimedia content retrieval tasks. I validate the proposed methods on different benchmarks, including MNIST, CIFAR, and ImageNet, obtaining promising results, especially in learning scenarios with scarce data. Moreover, to the best of my knowledge, for the first time, I am able to bring bio-inspired Hebbian methods to ImageNet scale, consisting of over 1 million images.
Project(s): AI4Media via OpenAIRE

See at: etd.adm.unipi.it Restricted | CNR IRIS Restricted


2022 Conference article Open Access OPEN
Deep features for CBIR with scarce data using Hebbian learning
Lagani G., Bacciu D., Gallicchio C., Falchi F., Gennaro C., Amato G.
Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content Based Image Retrieval (CBIR). Recently, biologically inspired Hebbian learning algorithms have shown promise for DNN training. In this contribution, we study the performance of such algorithms in the development of feature extractors for CBIR tasks. Specifically, we consider a semi-supervised learning strategy in two steps: first, an unsupervised pre-training stage is performed using Hebbian learning on the image dataset; second, the network is fine-tuned using supervised Stochastic Gradient Descent (SGD) training. For the unsupervised pre-training stage, we explore the nonlinear Hebbian Principal Component Analysis (HPCA) learning rule. For the supervised fine-tuning stage, we assume sample efficiency scenarios, in which the amount of labeled samples is just a small fraction of the whole dataset. Our experimental analysis, conducted on the CIFAR10 and CIFAR100 datasets, shows that, when few labeled samples are available, our Hebbian approach provides relevant improvements compared to various alternative methods.
DOI: 10.1145/3549555.3549587
DOI: 10.48550/arxiv.2205.08935
Project(s): AI4Media via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | dl.acm.org Open Access | ZENODO Open Access | CNR IRIS Open Access | IRIS Cnr Restricted | doi.org Restricted | Archivio della Ricerca - Università di Pisa Restricted | CNR IRIS Restricted


2021 Journal article Open Access OPEN
Hebbian semi-supervised learning in a sample efficiency setting
Lagani G, Falchi F, Gennaro C, Amato G
We propose to address the issue of sample efficiency, in Deep Convolutional Neural Networks (DCNN), with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, and the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD). In fact, as Hebbian learning is an unsupervised learning method, its potential lies in the possibility of training the internal layers of a DCNN without labels. Only the final fully connected layer has to be trained with labeled examples. We performed experiments on various object recognition datasets, in different regimes of sample efficiency, comparing our semi-supervised (Hebbian for internal layers + SGD for the final fully connected layer) approach with end-to-end supervised backprop training, and with semi-supervised learning based on Variational Auto-Encoder (VAE). The results show that, in regimes where the number of available labeled samples is low, our semi-supervised approach outperforms the other approaches in almost all the cases.
Source: NEURAL NETWORKS, vol. 143, pp. 719-731
DOI: 10.1016/j.neunet.2021.08.003
DOI: 10.48550/arxiv.2103.09002
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | Neural Networks Open Access | CNR IRIS Open Access | ISTI Repository Open Access | www.sciencedirect.com Open Access | ZENODO Open Access | Neural Networks Restricted | doi.org Restricted | CNR IRIS Restricted
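The two-stage recipe described above (unsupervised Hebbian pre-training of internal layers, then supervised SGD on only the final classification layer using the few available labels) can be sketched end to end on toy data; all names, sizes, and the row-normalization step are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_pretrain(X, out_dim, lr=0.01, epochs=5):
    """Stage 1: label-free Hebbian training of an internal layer."""
    w = rng.normal(scale=0.1, size=(out_dim, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * np.outer(y, x)
            # keep weight rows bounded (an assumed stand-in for competition)
            w /= np.linalg.norm(w, axis=1, keepdims=True)
    return w

def train_classifier(H, labels, n_classes, lr=0.1, epochs=20):
    """Stage 2: plain SGD on a softmax layer over frozen Hebbian features."""
    wc = np.zeros((n_classes, H.shape[1]))
    for _ in range(epochs):
        for h, t in zip(H, labels):
            logits = wc @ h
            p = np.exp(logits - logits.max()); p /= p.sum()
            p[t] -= 1.0                    # softmax cross-entropy gradient
            wc -= lr * np.outer(p, h)
    return wc

X = rng.normal(size=(40, 8))                        # unlabeled pool
w = hebbian_pretrain(X, out_dim=6)                  # stage 1: no labels used
X_lab, y_lab = X[:10], rng.integers(0, 3, size=10)  # few labeled samples
wc = train_classifier(np.maximum(X_lab @ w.T, 0), y_lab, n_classes=3)
```

Only stage 2 touches labels, which is the property the abstract exploits in low-label regimes.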


2021 Other Open Access OPEN
AIMH research activities 2021
Aloia N., Amato G., Bartalesi Lenzi V., Benedetti F., Bolettieri P., Cafarelli D., Carrara F., Casarosa V., Coccomini D., Ciampi L., Concordia C., Corbara S., Di Benedetto M., Esuli A., Falchi F., Gennaro C., Lagani G., Massoli F. V., Meghini C., Messina N., Metilli D., Molinari A., Moreo Fernandez A., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C.
The Artificial Intelligence for Media and Humanities laboratory (AIMH) has the mission to investigate and advance the state of the art in the Artificial Intelligence field, specifically addressing applications to digital media and digital humanities, and also taking into account issues related to scalability. This report summarizes the 2021 activities of the research group.
DOI: 10.32079/isti-ar-2021/003


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Software Metadata Only Access
Hebbian Learning GitHub repository
Lagani G
PyTorch implementation of Hebbian learning algorithms to train deep convolutional neural networks.
Project(s): AI4Media via OpenAIRE

See at: github.com Restricted | CNR IRIS Restricted


2021 Conference article Open Access OPEN
Assessing pattern recognition performance of neuronal cultures through accurate simulation
Lagani G, Mazziotti R, Falchi F, Gennaro C, Cicchini Gm, Pizzorusso T, Cremisi F, Amato G
Previous work has shown that it is possible to train neuronal cultures on Multi-Electrode Arrays (MEAs) to recognize very simple patterns. However, this work was mainly focused on demonstrating that it is possible to induce plasticity in cultures, rather than on performing a rigorous assessment of their pattern recognition performance. In this paper, we address this gap by developing a methodology that allows us to assess the performance of neuronal cultures on a learning task. Specifically, we propose a digital model of the real cultured neuronal networks; we identify biologically plausible simulation parameters that allow us to reliably reproduce the behavior of real cultures; we use the simulated culture to perform handwritten digit recognition and rigorously evaluate its performance; we also show that it is possible to find improved simulation parameters for the specific task, which can guide the creation of real cultures.
Source: INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING, pp. 726-729. Online, 4-6/05/2021
DOI: 10.1109/ner49283.2021.9441166
DOI: 10.48550/arxiv.2012.10355
Project(s): AI4Media via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | arxiv.org Open Access | ZENODO Open Access | IRIS Cnr Open Access | Software Heritage Restricted | dblp.uni-trier.de Restricted | doi.org Restricted | GitHub Restricted | Flore (Florence Research Repository) Restricted | CNR IRIS Restricted


2020 Other Open Access OPEN
AIMH research activities 2020
Aloia N., Amato G., Bartalesi Lenzi V., Benedetti F., Bolettieri P., Carrara F., Casarosa V., Ciampi L., Concordia C., Corbara S., Esuli A., Falchi F., Gennaro C., Lagani G., Massoli F. V., Meghini C., Messina N., Metilli D., Molinari A., Moreo Fernandez A., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Thanos C., Trupiano L., Vadicamo L., Vairo C.
Annual Report of the Artificial Intelligence for Media and Humanities laboratory (AIMH) research activities in 2020.
DOI: 10.32079/isti-ar-2020/001


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted