2023 Conference article Open Access OPEN
SegmentCodeList: unsupervised representation learning for human skeleton data retrieval
Sedmidubsky J., Carrara F., Amato G.
Recent progress in pose-estimation methods enables the extraction of sufficiently precise 3D human skeleton data from ordinary videos, which offers great opportunities for a wide range of applications. However, such spatio-temporal data are typically extracted in the form of a continuous skeleton sequence without any information about semantic segmentation or annotation. To make the extracted data reusable for further processing, there is a need to access them based on their content. In this paper, we introduce a universal retrieval approach that compares any two skeleton sequences based on temporal order and similarities of their underlying segments. The similarity of segments is determined by their content-preserving low-dimensional code representation that is learned using the Variational AutoEncoder principle in an unsupervised way. The quality of the proposed representation is validated in retrieval and classification scenarios; our proposal outperforms the state-of-the-art approaches in effectiveness and reaches speed-ups of up to 64x on common skeleton sequence datasets.
Source: ECIR 2023 - 45th European Conference on Information Retrieval, pp. 110–124, Dublin, Ireland, 2-6/4/2023
DOI: 10.1007/978-3-031-28238-6_8
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | link.springer.com Restricted | CNR ExploRA
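
A minimal sketch of the unsupervised code-learning idea described above, in PyTorch: a Variational AutoEncoder compresses a flattened skeleton segment into a low-dimensional code usable for similarity search. All layer sizes and the input dimensionality are illustrative assumptions, not the paper's actual architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SegmentVAE(nn.Module):
        # Encodes a flattened skeleton segment (frames x joints x 3D coords)
        # into a code vector; decodes it back for the reconstruction loss.
        def __init__(self, in_dim=4096, code_dim=256):
            super().__init__()
            self.enc = nn.Linear(in_dim, 1024)
            self.mu = nn.Linear(1024, code_dim)
            self.logvar = nn.Linear(1024, code_dim)
            self.dec = nn.Sequential(
                nn.Linear(code_dim, 1024), nn.ReLU(), nn.Linear(1024, in_dim))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.dec(z), mu, logvar

    def vae_loss(recon, x, mu, logvar):
        # reconstruction error plus KL divergence to the unit-Gaussian prior
        rec = F.mse_loss(recon, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kld

At retrieval time the posterior mean mu would serve as the segment code, so two sequences can be compared through the ordered similarities of their segment codes.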


2023 Conference article Open Access OPEN
Social and hUman ceNtered XR
Vairo C., Callieri M., Carrara F., Cignoni P., Di Benedetto M., Gennaro C., Giorgi D., Palma G., Vadicamo L., Amato G.
The Social and hUman ceNtered XR (SUN) project is focused on developing eXtended Reality (XR) solutions that integrate the physical and virtual world in a way that is convincing from a human and social perspective. In this paper, we outline the limitations that the SUN project aims to overcome, including the lack of scalable and cost-effective solutions for developing XR applications, limited solutions for mixing the virtual and physical environment, and barriers related to resource limitations of end-user devices. We also propose solutions to these limitations, including using artificial intelligence, computer vision, and sensor analysis to incrementally learn the visual and physical properties of real objects and generate convincing digital twins in the virtual environment. Additionally, the SUN project aims to provide wearable sensors and haptic interfaces to enhance natural interaction with the virtual environment and advanced solutions for user interaction. Finally, we describe three real-life scenarios in which we aim to demonstrate the proposed solutions.
Source: Ital-IA 2023 - Workshop su AI per l'industria, Pisa, Italy, 29-31/05/2023

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2023 Report Unknown
SUN D1.1 - Management Website
Amato G., Bolettieri P., Gennaro C., Vadicamo L., Vairo C.
Report describing the online, web-accessible repository for all project-related documentation, which serves as the primary means for project partners to manage and share project documents. https://wiki.sun-xr-project.eu
Source: ISTI Project Report, SUN, D1.1, 2023

See at: CNR ExploRA


2023 Conference article Open Access OPEN
Unsupervised domain adaptation for video violence detection in the wild
Ciampi L., Santiago C., Costeira J. P., Falchi F., Gennaro C., Amato G.
Video violence detection is a subset of human action recognition aiming to detect violent behaviors in trimmed video clips. Current Computer Vision solutions based on Deep Learning approaches provide astonishing results. However, their success relies on large collections of labeled datasets for supervised learning to guarantee that they generalize well to diverse testing scenarios. Although plentiful annotated data may be available for some pre-specified domains, manual annotation is unfeasible for every ad-hoc target domain or task. As a result, in many real-world applications, there is a domain shift between the distributions of the train (source) and test (target) domains, causing a significant drop in performance at inference time. To tackle this problem, we propose an Unsupervised Domain Adaptation scheme for video violence detection based on single-image classification that mitigates the gap between the two domains. We conduct experiments considering, as the labeled source domain, datasets containing violent/non-violent clips in general contexts and, as the target domain, a collection of videos specific to detecting violent actions in public transport, showing that our proposed solution can improve the performance of the considered models.
Source: IMPROVE 2023 - 3rd International Conference on Image Processing and Vision Engineering, pp. 37–46, Prague, Czech Republic, 21-23/04/2023
DOI: 10.5220/0011965300003497
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | www.scitepress.org Restricted | CNR ExploRA
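
The paper's exact adaptation scheme is not reproduced here; as one hedged illustration of how a single-image classifier can be made domain-invariant, the sketch below uses gradient reversal (DANN-style) in PyTorch. The backbone, feature size, and head layout are assumptions.

    import torch
    from torch import nn
    from torch.autograd import Function

    class GradReverse(Function):
        # identity in the forward pass, negated (scaled) gradient in the backward pass
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    class DomainAdaptiveClassifier(nn.Module):
        def __init__(self, backbone, feat_dim=512):
            super().__init__()
            self.backbone = backbone              # any frame-level feature extractor
            self.cls = nn.Linear(feat_dim, 2)     # violent / non-violent head
            self.dom = nn.Linear(feat_dim, 2)     # source / target domain head

        def forward(self, x, lam=1.0):
            f = self.backbone(x)
            return self.cls(f), self.dom(GradReverse.apply(f, lam))

Labeled source frames train the classification head, while frames from both domains train the domain head through the reversed gradient, pushing the backbone toward features that do not separate the domains.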


2023 Report Unknown
SUN D1.3 - Data Management and IPR issues
Boi S., Amato G., Vairo C., Casarosa V.
This document presents the Data Management Plan (DMP) for the SUN project, outlining the methodology adopted to effectively manage all data collected, generated, or acquired during the project's lifecycle. The DMP encompasses the management of research and non-research data, covering aspects such as collection, storage, sharing, preservation, privacy, ethics, and data interoperability. The DMP also defines rules on intellectual property ownership, access rights to background and results, and the protection of intellectual property rights (IPRs).
Source: ISTI Project Report, SUN, D1.3, 2023

See at: CNR ExploRA


2023 Journal article Open Access OPEN
A comprehensive atlas of perineuronal net distribution and colocalization with parvalbumin in the adult mouse brain
Lupori L., Totaro V., Cornuti S., Ciampi L., Carrara F., Grilli E., Viglione A., Tozzi F., Putignano E., Mazziotti R., Amato G., Gennaro C., Tognini P., Pizzorusso T.
Perineuronal nets (PNNs) surround specific neurons in the brain and are involved in various forms of plasticity and clinical conditions. However, our understanding of the PNN role in these phenomena is limited by the lack of highly quantitative maps of PNN distribution and association with specific cell types. Here, we present a comprehensive atlas of Wisteria floribunda agglutinin (WFA)-positive PNNs and colocalization with parvalbumin (PV) cells for over 600 regions of the adult mouse brain. Data analysis shows that PV expression is a good predictor of PNN aggregation. In the cortex, PNNs are dramatically enriched in layer 4 of all primary sensory areas in correlation with thalamocortical input density, and their distribution mirrors intracortical connectivity patterns. Gene expression analysis identifies many PNN-correlated genes. Strikingly, PNN-anticorrelated transcripts are enriched in synaptic plasticity genes, generalizing PNNs' role as circuit stability factors.
Source: Cell reports 42 (2023). doi:10.1016/j.celrep.2023.112788
DOI: 10.1016/j.celrep.2023.112788
Project(s): AI4Media via OpenAIRE

See at: www.cell.com Open Access | CNR ExploRA


2023 Conference article Open Access OPEN
VISIONE: a large-scale video retrieval system with advanced search functionalities
Amato G., Bolettieri P., Carrara F., Falchi F., Gennaro C., Messina N., Vadicamo L., Vairo C.
VISIONE is a large-scale video retrieval system that integrates multiple search functionalities, including free text search, spatial color and object search, visual and semantic similarity search, and temporal search. The system leverages cutting-edge AI technology for visual analysis and advanced indexing techniques to ensure scalability. As demonstrated by its runner-up position in the 2023 Video Browser Showdown competition, VISIONE effectively integrates these capabilities to provide a comprehensive video retrieval solution. A system demo is available online, showcasing its capabilities on over 2300 hours of diverse video content (V3C1+V3C2 dataset) and 12 hours of highly redundant content (Marine dataset). The demo can be accessed at https://visione.isti.cnr.it
Source: ICMR '23: International Conference on Multimedia Retrieval, pp. 649–653, Thessaloniki, Greece, 12-15/06/2023
DOI: 10.1145/3591106.3592226
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA


2023 Conference article Open Access OPEN
VISIONE at Video Browser Showdown 2023
Amato G., Bolettieri P., Carrara F., Falchi F., Gennaro C., Messina N., Vadicamo L., Vairo C.
In this paper, we present the fourth release of VISIONE, a tool for fast and effective video search on a large-scale dataset. It includes several search functionalities like text search, object and color-based search, semantic and visual similarity search, and temporal search. VISIONE uses ad-hoc textual encoding for indexing and searching video content, and it exploits a full-text search engine as search backend. In this new version of the system, we introduced some changes both to the current search techniques and to the user interface.
Source: MMM 2023 - 29th International Conference on Multi Media Modeling, pp. 615–621, Bergen, Norway, 9-12/01/2023
DOI: 10.1007/978-3-031-27077-2_48
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | ZENODO Open Access | CNR ExploRA
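
The "ad-hoc textual encoding" mentioned above follows the surrogate-text idea: dense visual features are turned into synthetic text so that an off-the-shelf full-text engine can index them. The sketch below is a rough, hypothetical variant; the token names, top-k cutoff, and scaling factor are assumptions, not VISIONE's actual encoder.

    import numpy as np

    def surrogate_text(feature: np.ndarray, n_terms: int = 64, scale: float = 30.0) -> str:
        # Keep the strongest non-negative components and emit a synthetic term
        # "f<i>" repeated proportionally to the component magnitude, so the
        # engine's term-frequency scoring approximates the dot product.
        v = np.maximum(feature, 0.0)
        top = np.argsort(-v)[:n_terms]
        tokens = []
        for i in top:
            tf = max(int(round(v[i] * scale)), 1)
            tokens.extend([f"f{i}"] * tf)
        return " ".join(tokens)

Query vectors are encoded the same way at search time, so ranking reduces to a standard full-text query against the indexed surrogate documents.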


2023 Report Open Access OPEN
CNR activity in the ESA Extension project
Vairo C., Bolettieri P., Gennaro C., Amato G.
The CNR activity within the ESA "EXTENSION" project aims to develop an advanced visual recognition system for cultural heritage objects in L'Aquila, using AI techniques such as classifiers. However, this task requires substantial computational resources due to the large amount of data and deep learning-based AI techniques involved. To overcome these challenges, a centralized approach has been adopted, with a central server providing the necessary computational power and storage capacity.
Source: ISTI Technical Report, ISTI-TR-2023/010, pp. 1–10, 2023
DOI: 10.32079/isti-tr-2023/010

See at: ISTI Repository Open Access | CNR ExploRA


2023 Journal article Open Access OPEN
Learning-based traffic scheduling in non-stationary multipath 5G non-terrestrial networks
Machumilane A., Gotta A., Cassarà P., Amato G., Gennaro C.
In non-terrestrial networks, where low Earth orbit satellites and user equipment move relative to each other, line-of-sight tracking and adapting to channel state variations due to endpoint movements are a major challenge. Therefore, continuous line-of-sight estimation and channel impairment compensation are crucial for user equipment to access a satellite and maintain connectivity. In this paper, we propose a framework based on actor-critic reinforcement learning for traffic scheduling in a non-terrestrial network scenario where the channel state is non-stationary due to the variability of the line of sight, which depends on the current satellite elevation. We deploy the framework as an agent in a multipath routing scheme where the user equipment can access more than one satellite simultaneously to improve link reliability and throughput. We investigate how the agent schedules traffic over multiple satellite links by adopting policies that are evaluated by an actor-critic reinforcement learning approach. The agent continuously trains its model based on variations in satellite elevation angles, handovers, and relative line-of-sight probabilities. We compare the agent's retraining time with the satellite visibility intervals to investigate the effectiveness of the agent's learning rate. We carry out a performance analysis considering the dense urban area of Paris, where high-rise buildings significantly affect the line of sight. The simulation results show how the learning agent selects the scheduling policy when it is connected to a pair of satellites. The results also show that the retraining time of the learning agent is up to 0.1 times the satellite visibility time at given elevations, which guarantees efficient use of satellite visibility.
Source: Remote sensing (Basel) 15 (2023). doi:10.3390/rs15071842
DOI: 10.3390/rs15071842

See at: Remote Sensing Open Access | ISTI Repository Open Access | www.mdpi.com Open Access | ZENODO Open Access | CNR ExploRA
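
A compact sketch of the actor-critic machinery the paper builds on, in PyTorch: the policy (actor) picks which satellite link carries the next traffic share, and the critic estimates the state value used in the one-step update. State contents, layer sizes, and reward design are illustrative assumptions.

    import torch
    from torch import nn
    from torch.distributions import Categorical

    class ActorCritic(nn.Module):
        def __init__(self, state_dim=8, n_links=2):
            super().__init__()
            # the state could hold per-link line-of-sight probabilities and elevations
            self.shared = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh())
            self.actor = nn.Linear(64, n_links)   # scheduling policy over links
            self.critic = nn.Linear(64, 1)        # state-value estimate

        def forward(self, s):
            h = self.shared(s)
            return Categorical(logits=self.actor(h)), self.critic(h)

    def update(model, opt, state, action, reward, next_state, gamma=0.99):
        # one-step actor-critic update after observing (state, action, reward)
        dist, value = model(state)
        with torch.no_grad():
            _, next_value = model(next_state)
        advantage = reward + gamma * next_value - value
        loss = (-dist.log_prob(action) * advantage.detach() + advantage.pow(2)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Continuous retraining against elevation and handover changes would then amount to running such updates online as the channel statistics drift.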


2023 Conference article Open Access OPEN
AIMH Lab 2022 activities for Healthcare
Carrara F., Ciampi L., Di Benedetto M., Falchi F., Gennaro C., Amato G.
The application of Artificial Intelligence technologies in healthcare can enhance and optimize medical diagnosis, treatment, and patient care. Medical imaging, which involves Computer Vision to interpret and understand visual data, is one area of healthcare that shows great promise for AI, and it can lead to faster and more accurate diagnoses, such as detecting early signs of cancer or identifying abnormalities in the brain. This short paper provides an introduction to some of the activities of the Artificial Intelligence for Media and Humanities Laboratory of the ISTI-CNR that integrate AI and medical image analysis in healthcare. Specifically, the paper presents approaches that utilize 3D medical images to detect the behavior-variant of frontotemporal dementia, a neurodegenerative syndrome that can be diagnosed by analyzing brain scans. Furthermore, it illustrates some Deep Learning-based techniques for localizing and counting biological structures in microscopy images, such as cells and perineuronal nets. Lastly, the paper presents a practical and cost-effective AI-based tool for multi-species pupillometry (mice and humans), which has been validated in various scenarios.
Source: Ital-IA 2023, pp. 128–133, Pisa, Italy, 29-31/05/2023

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2023 Conference article Open Access OPEN
MC-GTA: a synthetic benchmark for multi-camera vehicle tracking
Ciampi L., Messina N., Valenti G. E., Amato G., Falchi F., Gennaro C.
Multi-camera vehicle tracking (MCVT) aims to trace multiple vehicles among videos gathered from overlapping and non-overlapping city cameras. It is beneficial for city-scale traffic analysis and management as well as for security. However, developing MCVT systems is tricky, and their real-world applicability is dampened by the lack of data for training and testing computer vision deep learning-based solutions. Indeed, creating new annotated datasets is cumbersome as it requires great human effort and often has to face privacy concerns. To alleviate this problem, we introduce MC-GTA - Multi Camera Grand Tracking Auto, a synthetic collection of images gathered from the virtual world provided by the highly realistic Grand Theft Auto 5 (GTA) video game. Our dataset has been recorded from several cameras capturing urban scenes at various crossroads. The annotations, consisting of bounding boxes localizing the vehicles with associated unique IDs consistent across the video sources, have been automatically generated by interacting with the game engine. To assess this simulated scenario, we conduct a performance evaluation using an MCVT SOTA approach, showing that it can be a valuable benchmark that mitigates the need for real-world data. The MC-GTA dataset and the code for creating new ad-hoc custom scenarios are available at https://github.com/GaetanoV10/GT5-Vehicle-BB.
Source: ICIAP 2023 - 22nd International Conference on Image Analysis and Processing, pp. 316–327, Udine, Italy, 11-15/09/2023
DOI: 10.1007/978-3-031-43148-7_27
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | link.springer.com Restricted | CNR ExploRA
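
The summary above does not give the annotation format; purely as a hypothetical illustration of what multi-camera tracking ground truth looks like, each record below ties a bounding box to a camera, a frame, and a vehicle ID that stays consistent across cameras.

    from dataclasses import dataclass

    @dataclass
    class VehicleAnnotation:
        camera_id: int      # which of the recording cameras
        frame: int          # frame index within that camera's video
        vehicle_id: int     # unique ID, consistent across all cameras
        x: float            # bounding box: top-left corner and size, in pixels
        y: float
        w: float
        h: float

    def cross_camera_track(annotations, vehicle_id):
        # gather one vehicle's detections from every camera, in temporal order
        hits = [a for a in annotations if a.vehicle_id == vehicle_id]
        return sorted(hits, key=lambda a: (a.camera_id, a.frame))

Grouping boxes by vehicle_id across camera_id values is exactly what an MCVT system must reconstruct, which is why consistent IDs are the key piece of ground truth.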


2023 Conference article Open Access OPEN
AIMH Lab 2022 activities for Vision
Ciampi L., Amato G., Bolettieri P., Carrara F., Di Benedetto M., Falchi F., Gennaro C., Messina N., Vadicamo L., Vairo C.
The explosion of smartphones and cameras has led to a vast production of multimedia data. Consequently, Artificial Intelligence-based tools for automatically understanding and exploring these data have recently gained much attention. In this short paper, we report some activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR, tackling some challenges in the field of Computer Vision for the automatic understanding of visual data and for novel interactive tools aimed at multimedia data exploration. Specifically, we provide innovative solutions based on Deep Learning techniques carrying out typical vision tasks such as object detection and visual counting, with particular emphasis on scenarios characterized by scarcity of the labeled data needed for supervised training and on environments with limited power resources imposing miniaturization of the models. Furthermore, we describe VISIONE, our large-scale video search system designed to search extensive multimedia databases in an interactive and user-friendly manner.
Source: Ital-IA 2023, pp. 538–543, Pisa, Italy, 29-31/05/2023
Project(s): AI4Media via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2023 Conference article Open Access OPEN
Vec2Doc: transforming dense vectors into sparse representations for efficient information retrieval
Carrara F., Gennaro C., Vadicamo L., Amato G.
Source: SISAP 2023 - 16th International Conference on Similarity Search and Applications, pp. 215–222, A Coruña, Spain, 9-11/10/2023
DOI: 10.1007/978-3-031-46994-7_18
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA
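
The record above carries no abstract; as a hedged illustration of the general dense-to-sparse idea named in the title (not necessarily the paper's method), the sketch below truncates a dense vector to its top-k components and scores documents through an inverted index over those components.

    import numpy as np
    from collections import defaultdict

    def sparsify(vec: np.ndarray, k: int = 32):
        # keep only the k largest non-negative components as (term, weight) pairs
        v = np.maximum(vec, 0.0)
        top = np.argsort(-v)[:k]
        return [(int(i), float(v[i])) for i in top if v[i] > 0]

    def build_index(vectors):
        # inverted index: component id -> list of (doc id, weight) postings
        index = defaultdict(list)
        for doc_id, vec in enumerate(vectors):
            for term, w in sparsify(vec):
                index[term].append((doc_id, w))
        return index

    def search(index, query: np.ndarray):
        # accumulate dot-product contributions only over shared components
        scores = defaultdict(float)
        for term, qw in sparsify(query):
            for doc_id, dw in index[term]:
                scores[doc_id] += qw * dw
        return sorted(scores.items(), key=lambda s: -s[1])

Only postings for components shared between query and documents are touched, which is where the efficiency gain over exhaustive dense scoring comes from.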


2023 Report Unknown
THE D.8.8.1 - State of the art for digital models of cultured neural networks
Lagani G., Falchi F., Amato G.
THE deliverable 8.8.1 is a technical report about current state-of-the-art approaches in the field of bio-inspired technologies for Artificial Intelligence (AI).
Source: ISTI Project Report, THE, D.8.8.1, 2023

See at: CNR ExploRA


2022 Journal article Open Access OPEN
Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks
Lagani G., Falchi F., Gennaro C., Amato G.
In this paper, we investigate Hebbian learning strategies applied to Convolutional Neural Network (CNN) training. We consider two unsupervised learning approaches, Hebbian Winner-Takes-All (HWTA) and Hebbian Principal Component Analysis (HPCA). The Hebbian learning rules are used to train the layers of a CNN in order to extract features that are then used for classification, without requiring backpropagation (backprop). Experimental comparisons are made with state-of-the-art unsupervised (but backprop-based) Variational Auto-Encoder (VAE) training. For completeness, we consider two supervised Hebbian learning variants (Supervised Hebbian Classifiers--SHC, and Contrastive Hebbian Learning--CHL) for training the final classification layer, which are compared to Stochastic Gradient Descent training. We also investigate hybrid learning methodologies, where some network layers are trained following the Hebbian approach, and others are trained by backprop. We tested our approaches on the MNIST, CIFAR10, and CIFAR100 datasets. Our results suggest that Hebbian learning is generally suitable for training early feature extraction layers, or for retraining higher network layers in fewer training epochs than backprop. Moreover, our experiments show that Hebbian learning outperforms VAE training, with HPCA performing generally better than HWTA.
Source: Neural computing & applications (Print) (2022). doi:10.1007/s00521-021-06701-4
DOI: 10.1007/s00521-021-06701-4
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ISTI Repository Open Access | link.springer.com Restricted | CNR ExploRA
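
A minimal sketch of the Hebbian Winner-Takes-All family of rules discussed above: for each input, only the most responsive unit adapts, moving its weight vector toward the input. This instar-style formulation is an illustrative assumption; the paper's exact HWTA and HPCA rules differ in detail.

    import torch

    def hwta_step(weights: torch.Tensor, x: torch.Tensor, lr: float = 0.01):
        # weights: (n_units, n_inputs); x: (n_inputs,)
        y = weights @ x                                  # unit activations
        winner = torch.argmax(y)                         # competition: one winner
        weights[winner] += lr * (x - weights[winner])    # pull winner toward input
        return weights

No gradient or backpropagated error is involved: the update is local to the winning unit, which is what makes such rules attractive for training early feature-extraction layers.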


2022 Conference article Open Access OPEN
AIMH Lab for Trustworthy AI
Messina N., Carrara F., Coccomini D., Falchi F., Gennaro C., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Trustworthy AI. Artificial Intelligence is becoming more and more pervasive in our society, controlling recommendation systems in social platforms as well as safety-critical systems like autonomous vehicles. In order to be safe and trustworthy, these systems must be easily interpretable and transparent. On the other hand, it is important to spot fake examples forged by malicious AI generative models to fool humans (through fake news or deep-fakes) or other AI systems (through adversarial examples). This is required to enforce an ethical use of these powerful new technologies. Driven by these concerns, this paper presents three crucial research directions contributing to the study and the development of techniques for reliable, resilient, and explainable deep learning methods. Namely, we report the laboratory activities on the detection of adversarial examples, the use of attentive models as a way towards explainable deep learning, and the detection of deepfakes in social platforms.
Source: Ital-IA 2022 - Workshop su AI Responsabile ed Affidabile, Online conference, 10/02/2022

See at: ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR ExploRA


2022 Conference article Open Access OPEN
AIMH Lab for Cybersecurity
Vairo C., Coccomini D. A., Falchi F., Gennaro C., Massoli F. V., Messina N., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Cybersecurity. We discuss our active research fields, their applications, and challenges. We focus on face recognition and on the detection of adversarial examples and deep fakes. We also present our activities on the detection of persuasion techniques combining image and text analysis.
Source: Ital-IA 2022 - Workshop su AI per Cybersecurity, 10/02/2022

See at: ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR ExploRA


2022 Conference article Open Access OPEN
AIMH Lab for Healthcare and Wellbeing
Di Benedetto M., Carrara F., Ciampi L., Falchi F., Gennaro C., Amato G.
In this work we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Healthcare and Wellbeing. By exploiting the advances of recent machine learning methods and the compute power of desktop and mobile platforms, we show how artificial intelligence tools can be used to improve healthcare systems at various stages of disease treatment. In particular, we show how deep neural networks can assist doctors from diagnosis (e.g., cell counting, pupil and brain analysis) to communication with patients through Augmented Reality.
Source: Ital-IA 2022 - Workshop AI per la Medicina e la Salute, Online conference, 10/02/2022

See at: ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR ExploRA


2022 Conference article Open Access OPEN
AIMH Lab for the Industry
Carrara F., Ciampi L., Di Benedetto M., Falchi F., Gennaro C., Massoli F. V., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Industry. The massive digitalization affecting all the stages of product design, production, and control calls for data-driven algorithms helping in the coordination of humans, machines, and digital resources in Industry 4.0. In this context, we developed AI-based Computer Vision technologies of general interest in the emergent digital paradigm of the fourth industrial revolution, focusing on anomaly detection and object counting for computer-assisted testing and quality control. Moreover, in the automotive sector, we explore the use of virtual worlds to develop AI systems in otherwise practically unfeasible scenarios, showing an application for accident avoidance in self-driving car AI agents.
Source: Ital-IA 2022 - Workshop su AI per l'Industria, Online conference, 10/02/2022

See at: ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR ExploRA