36 result(s)
2023 Conference article Open Access OPEN
CrowdSim2: an open synthetic benchmark for object detectors
Foszner P., Szczesna A., Ciampi L., Messina N., Cygan A., Bizon B., Cogiel M., Golba D., Macioszek E., Staniszewski M.
Data scarcity has become one of the main obstacles to developing supervised models based on Artificial Intelligence in Computer Vision. Indeed, Deep Learning-based models systematically struggle when applied in new scenarios never seen during training and may not be adequately tested in non-ordinary yet crucial real-world situations. This paper presents and publicly releases CrowdSim2, a new synthetic collection of images suitable for people and vehicle detection gathered from a simulator based on the Unity graphical engine. It consists of thousands of images gathered from various synthetic scenarios resembling the real world, where we varied some factors of interest, such as the weather conditions and the number of objects in the scenes. The labels are automatically collected and consist of bounding boxes that precisely localize objects belonging to the two object classes, leaving out humans from the annotation pipeline. We exploited this new benchmark as a testing ground for some state-of-the-art detectors, showing that our simulated scenarios can be a valuable tool for measuring their performance in a controlled environment.
Source: VISIGRAPP 2023 - 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp. 676–683, Lisbon, Portugal, 19-21/02/2023
DOI: 10.5220/0011692500003417
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Restricted | www.scitepress.org Restricted
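
To illustrate how a synthetic benchmark of this kind is typically used as a testing ground for off-the-shelf detectors, here is a minimal, hedged sketch based on the standard COCO evaluation protocol via pycocotools; the file names and the COCO-format assumption are hypothetical and not details taken from the paper.

```python
# Hedged sketch: evaluating a pre-trained detector on a COCO-format benchmark.
# The file paths below are hypothetical; this is not the authors' evaluation code.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("crowdsim2_annotations.json")        # ground-truth bounding boxes (hypothetical path)
dt = gt.loadRes("detector_predictions.json")   # detector outputs in COCO results format

evaluator = COCOeval(gt, dt, iouType="bbox")
evaluator.evaluate()     # per-image, per-category matching at several IoU thresholds
evaluator.accumulate()   # aggregate precision/recall curves
evaluator.summarize()    # prints AP/AR summary metrics (e.g., AP@[.50:.95])
```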


2023 Conference article Open Access OPEN
Development of a realistic crowd simulation environment for fine-grained validation of people tracking methods
Foszner P., Szczesna A., Ciampi L., Messina N., Cygan A., Bizon B., Cogiel M., Golba D., Macioszek E., Staniszewski M.
Generally, crowd datasets can be collected or generated from real or synthetic sources. Real data is generated by using infrastructure-based sensors (such as static cameras or other sensors). The use of simulation tools can significantly reduce the time required to generate scenario-specific crowd datasets, facilitate data-driven research, and in turn support the building of functional machine learning models. The main goal of this work was to develop an extension of a crowd simulator (named CrowdSim2) and prove its usability for the application of people-tracking algorithms. The simulator is developed using the very popular Unity 3D engine, with particular emphasis on realism in the environment, weather conditions, traffic, and the movement and models of individual agents. Finally, three tracking methods were used to validate the generated dataset: IOU-Tracker, Deep-Sort, and Deep-TAMA.
Source: VISIGRAPP 2023 - 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp. 222–229, Lisbon, Portugal, 19-21/02/2023
DOI: 10.5220/0011691500003417
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Restricted | www.scitepress.org Restricted
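
Since one of the validation baselines in the record above is an IoU-based tracker, the following minimal sketch illustrates the general frame-to-frame association idea behind such trackers (greedily extending each active track with the best-overlapping detection). It is an illustration of the technique only, not the IOU-Tracker reference implementation, and the overlap threshold is an assumption.

```python
# Illustrative IoU-based tracking sketch. Boxes are (x1, y1, x2, y2) per frame.

def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track(frames, sigma_iou=0.5):
    """Greedily extend each active track with the best-overlapping detection."""
    active, finished = [], []
    for dets in frames:                       # dets: list of boxes for one frame
        dets = list(dets)
        still_active = []
        for t in active:
            best = max(dets, key=lambda d: iou(t[-1], d), default=None)
            if best is not None and iou(t[-1], best) >= sigma_iou:
                t.append(best)                # extend the track and consume the detection
                dets.remove(best)
                still_active.append(t)
            else:
                finished.append(t)            # track ends when nothing overlaps enough
        active = still_active + [[d] for d in dets]   # unmatched detections start new tracks
    return active + finished

frames = [[(0, 0, 10, 10)], [(1, 0, 11, 10)], [(2, 0, 12, 10), (50, 50, 60, 60)]]
print(len(track(frames)))  # -> 2 tracks: one followed across frames, one newly started
```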


2022 Conference article Open Access OPEN
AIMH Lab for Healthcare and Wellbeing
Di Benedetto M., Carrara F., Ciampi L., Falchi F., Gennaro C., Amato G.
In this work we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Healthcare and Wellbeing. By exploiting the advances of recent machine learning methods and the compute power of desktop and mobile platforms, we show how artificial intelligence tools can be used to improve healthcare systems in various stages of disease treatment. In particular, we show how deep neural networks can assist doctors from diagnosis (e.g., cell counting, pupil and brain analysis) to communication with patients through Augmented Reality.
Source: Ital-IA 2022 - Workshop AI per la Medicina e la Salute, Online conference, 10/02/2022

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.ital-ia2022.it Open Access


2022 Conference article Open Access OPEN
AIMH Lab for the Industry
Carrara F., Ciampi L., Di Benedetto M., Falchi F., Gennaro C., Massoli F. V., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Industry. The massive digitalization affecting all the stages of product design, production, and control calls for data-driven algorithms helping in the coordination of humans, machines, and digital resources in Industry 4.0. In this context, we developed AI-based Computer Vision technologies of general interest in the emergent digital paradigm of the fourth industrial revolution, focusing on anomaly detection and object counting for computer-assisted testing and quality control. Moreover, in the automotive sector, we explore the use of virtual worlds to develop AI systems in otherwise practically unfeasible scenarios, showing an application for accident avoidance in self-driving car AI agents.
Source: Ital-IA 2022 - Workshop su AI per l'Industria, Online conference, 10/02/2022

See at: CNR ExploRA Open Access | www.ital-ia2022.it Open Access


2022 Conference article Open Access OPEN
AIMH Lab: Smart Cameras for Public Administration
Ciampi L., Cafarelli D., Carrara F., Di Benedetto M., Falchi F., Gennaro C., Massoli F. V., Messina N., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Public Administration. In particular, we present some AI-based public services for citizens that help achieve common goals beneficial to society, putting humans at the center. Through the automatic analysis of images gathered from city cameras, we provide AI applications ranging from smart parking and smart mobility to human activity monitoring.
Source: Ital-IA 2022 - Workshop su AI per la Pubblica Amministrazione, Online conference, 10/02/2022

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.ital-ia2022.it Open Access


2022 Conference article Open Access OPEN
Counting or localizing? Evaluating cell counting and detection in microscopy images
Ciampi L., Carrara F., Amato G., Gennaro C.
Image-based automatic cell counting is an essential yet challenging task, crucial for diagnosing many diseases. Current solutions rely on Convolutional Neural Networks and provide astonishing results. However, their performance is often measured only in terms of counting error, which can mask erroneous estimations: a low counting error can be obtained with a high but equal number of false positives and false negatives. Consequently, it is hard to determine which solution truly performs best. In this work, we investigate three general counting approaches that have been successfully adopted in the literature for counting several different categories of objects. Through an experimental evaluation over three public collections of microscopy images containing marked cells, we assess not only their counting performance compared to several state-of-the-art methods but also their ability to correctly localize the counted cells. We show that commonly adopted counting metrics do not always agree with the localization performance of the tested models, and thus we suggest integrating the proposed evaluation protocol when developing novel cell counting solutions.
Source: VISIGRAPP 2022 - 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp. 887–897, Online conference, 6-8/2/2022
DOI: 10.5220/0010923000003124
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Restricted | www.scitepress.org Restricted
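
The point made in the abstract above, namely that a low counting error can hide compensating false positives and false negatives, can be illustrated with a small sketch that reports both a counting metric and localization metrics obtained by greedily matching predicted points to ground-truth points within a fixed radius. The point-annotation format and the matching radius are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch: the same (perfect) count can correspond to very different localization quality.
import numpy as np

def count_and_localization_metrics(pred_pts, gt_pts, radius=10.0):
    """Greedy one-to-one matching of predicted to ground-truth points within `radius`."""
    count_error = abs(len(pred_pts) - len(gt_pts))            # counting metric (absolute error)
    unmatched_gt = [tuple(g) for g in gt_pts]
    tp = 0
    for p in pred_pts:
        if not unmatched_gt:
            break
        dists = [np.hypot(p[0] - g[0], p[1] - g[1]) for g in unmatched_gt]
        j = int(np.argmin(dists))
        if dists[j] <= radius:                                # close enough -> true positive
            tp += 1
            unmatched_gt.pop(j)
    precision = tp / max(len(pred_pts), 1)
    recall = tp / max(len(gt_pts), 1)
    return count_error, precision, recall

gt = [(10, 10), (50, 50)]
print(count_and_localization_metrics([(11, 9), (49, 52)], gt))       # (0, 1.0, 1.0)
print(count_and_localization_metrics([(200, 200), (300, 300)], gt))  # (0, 0.0, 0.0)
```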


2022 Journal article Open Access OPEN
An embedded toolset for human activity monitoring in critical environments
Di Benedetto M., Carrara F., Ciampi L., Falchi F., Gennaro C., Amato G.
In many working and recreational activities, there are scenarios where both individual and collective safety have to be constantly checked and properly signaled, as occurs in dangerous workplaces or during pandemic events like the recent COVID-19 disease. From wearing personal protective equipment to filling physical spaces with an adequate number of people, it is clear that a possibly automatic solution would help to check compliance with the established rules. Based on compact and low-cost off-the-shelf hardware, we present a deployed, real use-case embedded system capable of perceiving people's behavior and aggregations and of supervising the application of a set of rules through a configurable plug-in framework. Working in indoor and outdoor environments, we show that our implementation of counting people aggregations, measuring their reciprocal physical distances, and checking the proper usage of protective equipment is an effective yet open framework for monitoring human activities in critical conditions.
Source: Expert systems with applications 199 (2022). doi:10.1016/j.eswa.2022.117125
DOI: 10.1016/j.eswa.2022.117125
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Restricted
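
As a rough illustration of the distance-checking functionality mentioned in the record above (not the deployed system's code), the sketch below projects detected people onto a metric ground plane through a calibration homography, which is assumed to be available, and flags pairs closer than a configurable threshold.

```python
# Sketch of a pairwise physical-distance check on ground-plane coordinates.
# Assumptions: people detections reduced to image points; H is a 3x3 homography
# from image coordinates to metric ground-plane coordinates (from calibration).
import itertools
import numpy as np

def to_ground_plane(points, H):
    """Project image points (x, y) to ground-plane coordinates via homography H."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]

def too_close_pairs(image_points, H, min_distance_m=1.0):
    """Return index pairs of people closer than `min_distance_m` on the ground plane."""
    ground = to_ground_plane(image_points, H)
    return [(i, j) for i, j in itertools.combinations(range(len(ground)), 2)
            if np.linalg.norm(ground[i] - ground[j]) < min_distance_m]

# Toy usage with an identity homography (i.e., image coordinates already in meters):
H = np.eye(3)
print(too_close_pairs([(0.0, 0.0), (0.5, 0.2), (5.0, 5.0)], H))  # -> [(0, 1)]
```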


2022 Doctoral thesis Open Access OPEN
Deep Learning techniques for visual counting
Ciampi L.
In this thesis, I investigated and enhanced Deep Learning (DL)-based techniques for the visual counting task, which automatically estimates the number of objects, such as people or vehicles, present in images and videos. Specifically, I tackled the problem related to the lack of data needed for training current DL-based solutions by exploiting synthetic data gathered from video games, employing Domain Adaptation strategies between different data distributions, and taking advantage of the redundant information characterizing datasets labeled by multiple annotators. Furthermore, I addressed the engineering challenges coming out of the adoption of DL-based techniques in environments with limited power resources, mainly due to the high computational budget the AI-based algorithms require.

See at: etd.adm.unipi.it Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2022 Dataset Open Access OPEN
Night and day instance segmented park (NDISPark) dataset: a collection of images taken by day and by night for vehicle detection, segmentation and counting in parking areas
Ciampi L., Santiago C., Costeira J. P., Gennaro C., Amato G.
NDIS Park is a collection of images of parking lots for vehicle detection, segmentation, and counting. Each image is manually labeled with pixel-wise masks and bounding boxes localizing vehicle instances. The dataset includes 259 images depicting several parking areas describing most of the problematic situations that we can find in a real scenario: seven different cameras capture the images under various weather conditions and viewing angles. Another challenging aspect is the presence of partial occlusion patterns in many scenes such as obstacles (trees, lampposts, other cars) and shadowed cars. The main peculiarity is that images are taken during the day and the night, showing utterly different lighting conditions.
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA | zenodo.org


2022 Conference article Open Access OPEN
MOBDrone: a drone video dataset for Man OverBoard Rescue
Cafarelli D., Ciampi L., Vadicamo L., Gennaro C., Berton A., Paterni M., Benvenuti C., Passera M., Falchi F.
Modern Unmanned Aerial Vehicles (UAV) equipped with cameras can play an essential role in speeding up the identification and rescue of people who have fallen overboard, i.e., man overboard (MOB). To this end, Artificial Intelligence techniques can be leveraged for the automatic understanding of visual data acquired from drones. However, detecting people at sea in aerial imagery is challenging primarily due to the lack of specialized annotated datasets for training and testing detectors for this task. To fill this gap, we introduce and publicly release the MOBDrone benchmark, a collection of more than 125K drone-view images in a marine environment under several conditions, such as different altitudes, camera shooting angles, and illumination. We manually annotated more than 180K objects, of which about 113K are man overboard instances, precisely localizing them with bounding boxes. Moreover, we conduct a thorough performance analysis of several state-of-the-art object detectors on the MOBDrone data, serving as baselines for further research.
Source: ICIAP 2022 - 21st International Conference on Image Analysis and Processing, pp. 633–644, Lecce, Italia, 23-27/05/2022
DOI: 10.1007/978-3-031-06430-2_53

See at: ISTI Repository Open Access | link.springer.com Restricted | CNR ExploRA Restricted


2022 Dataset Open Access OPEN
MOBDrone: a large-scale drone-view dataset for man overboard detection
Cafarelli D., Ciampi L., Vadicamo L., Gennaro C., Berton A., Paterni M., Benvenuti C., Passera M., Falchi F.
The Man OverBoard Drone (MOBDrone) dataset is a large-scale collection of aerial footage images. It contains 126,170 frames extracted from 66 video clips gathered from one UAV flying at an altitude of 10 to 60 meters above the mean sea level. Images are manually annotated with more than 180K bounding boxes localizing objects belonging to 5 categories: person, boat, lifebuoy, surfboard, wood. More than 113K of these bounding boxes belong to the person category and localize people in the water simulating the need to be rescued.

See at: ISTI Repository Open Access | CNR ExploRA | zenodo.org


2022 Journal article Open Access OPEN
Learning to count biological structures with raters' uncertainty
Ciampi L., Carrara F., Totaro V., Mazziotti R., Lupori L., Santiago C., Amato G., Pizzorusso T., Gennaro C.
Exploiting well-labeled training sets has led deep learning models to astonishing results for counting biological structures in microscopy images. However, dealing with weak multi-rater annotations, i.e., when multiple human raters disagree due to non-trivial patterns, remains a relatively unexplored problem. More reliable labels can be obtained by aggregating and averaging the decisions given by several raters on the same data. Still, the scale of the counting task and the limited budget for labeling prohibit this. As a result, making the most of small quantities of multi-rater data is crucial. To this end, we propose a two-stage counting strategy in a weakly labeled data scenario. First, we detect and count the biological structures; then, in the second step, we refine the predictions, increasing the correlation between the scores assigned to the samples and the raters' agreement on the annotations. We assess our methodology on a novel dataset comprising fluorescence microscopy images of mice brains containing extracellular matrix aggregates named perineuronal nets. We demonstrate that we significantly enhance counting performance, improving confidence calibration by taking advantage of the redundant information characterizing the small sets of available multi-rater data.
Source: Medical image analysis (Print) 80 (2022). doi:10.1016/j.media.2022.102500
DOI: 10.1016/j.media.2022.102500
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Restricted | www.sciencedirect.com Restricted


2022 Journal article Open Access OPEN
Multi-camera vehicle counting using edge-AI
Ciampi L., Gennaro C., Carrara F., Falchi F., Vairo C., Amato G.
This paper presents a novel solution to automatically count vehicles in a parking lot using images captured by smart cameras. Unlike most of the literature on this task, which focuses on the analysis of single images, this paper proposes the use of multiple visual sources to monitor a wider parking area from different perspectives. The proposed multi-camera system is capable of automatically estimating the number of cars present in the entire parking lot directly on board the edge devices. It comprises an on-device deep learning-based detector that locates and counts the vehicles from the captured images and a decentralized geometric-based approach that can analyze the inter-camera shared areas and merge the data acquired by all the devices. We conducted the experimental evaluation on an extended version of the CNRPark-EXT dataset, a collection of images taken from the parking lot on the campus of the National Research Council (CNR) in Pisa, Italy. We show that our system is robust and takes advantage of the redundant information deriving from the different cameras, improving the overall performance without requiring any extra geometric information about the monitored scene.
Source: Expert systems with applications (2022). doi:10.1016/j.eswa.2022.117929
DOI: 10.1016/j.eswa.2022.117929
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Restricted | www.sciencedirect.com Restricted
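
A minimal sketch of the general merging idea described in the record above: each camera contributes vehicle detections, and only those falling in an inter-camera shared area are deduplicated so the same car is not counted twice. The rectangular shared area, the common ground-plane projection, and the deduplication radius are assumptions for illustration, not the geometric procedure used in the paper.

```python
# Sketch: merging per-camera vehicle detections over a shared area.
import numpy as np

def merge_counts(per_camera_ground_points, shared_area, dedup_radius_m=1.5):
    """per_camera_ground_points: per-camera (N_i, 2) arrays already projected to a common plane.
    shared_area: (xmin, ymin, xmax, ymax) rectangle seen by more than one camera (assumption)."""
    xmin, ymin, xmax, ymax = shared_area
    outside, inside = 0, []
    for pts in per_camera_ground_points:
        for p in np.asarray(pts, float).reshape(-1, 2):
            if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax:
                inside.append(p)      # candidate duplicate: visible from several cameras
            else:
                outside += 1          # exclusive area: trust the single camera's detection
    merged = []
    for p in inside:                  # greedy deduplication of nearby points in the shared area
        if all(np.linalg.norm(p - q) > dedup_radius_m for q in merged):
            merged.append(p)
    return outside + len(merged)

# Toy usage: two cameras, one car seen by both inside the shared area (x in 10..20, y in 0..10).
cam_a = [(2.0, 2.0), (12.0, 5.0)]
cam_b = [(12.5, 5.3), (25.0, 3.0)]
print(merge_counts([cam_a, cam_b], shared_area=(10, 0, 20, 10)))  # -> 3 vehicles
```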


2022 Conference article Open Access OPEN
A spatio-temporal attentive network for video-based crowd counting
Avvenuti M., Bongiovanni M., Ciampi L., Falchi F., Gennaro C., Messina N.
Automatic people counting from images has recently drawn attention for urban monitoring in modern Smart Cities due to the ubiquity of surveillance camera networks. Current computer vision techniques rely on deep learning-based algorithms that estimate pedestrian densities in still, individual images. Only a handful of works take advantage of temporal consistency in video sequences. In this work, we propose a spatio-temporal attentive neural network to estimate the number of pedestrians from surveillance videos. By taking advantage of the temporal correlation between consecutive frames, we lowered the state-of-the-art count error by 5% and the localization error by 7.5% on the widely used FDST benchmark.
Source: ISCC 2022 - 27th IEEE Symposium on Computers and Communications, Rhodes Island, Greece, 30/06/2022-03/07/2022
DOI: 10.1109/iscc55528.2022.9913019
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted
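
For illustration only, the sketch below shows one generic way of exploiting temporal correlation between consecutive frames: per-frame density maps are fused with learned temporal attention weights before the count is read out. The layer sizes and the attention design are assumptions and do not reproduce the network proposed in the paper above.

```python
# Generic spatio-temporal fusion sketch (illustrative, not the paper's architecture).
import torch
import torch.nn as nn

class TemporalAttentionCounter(nn.Module):
    """Fuse per-frame density maps with learned temporal attention weights."""
    def __init__(self, t=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(8, 1, 3, padding=1), nn.ReLU())
        self.attn = nn.Conv2d(t, t, kernel_size=1)   # attention logits, one channel per frame

    def forward(self, clip):                              # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        maps = self.encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, h, w)
        weights = torch.softmax(self.attn(maps), dim=1)   # per-pixel weights over the T frames
        fused = (weights * maps).sum(dim=1, keepdim=True) # temporally fused density map
        return fused.sum(dim=(1, 2, 3))                   # estimated count per clip

model = TemporalAttentionCounter(t=5)
print(model(torch.rand(2, 5, 3, 64, 64)).shape)  # -> torch.Size([2])
```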


2022 Dataset Unknown
Bus Violence: a large-scale benchmark for video violence detection in public transport
Foszner P., Staniszewski M., Szczesna A., Cogiel M., Golba D., Ciampi L., Messina N., Gennaro C., Falchi F., Amato G., Serao G.
The Bus Violence dataset is a large-scale collection of videos depicting violent and non-violent situations in public transport environments. This benchmark was gathered from multiple cameras located inside a moving bus where several people simulated violent actions, such as stealing an object from another person, fighting between passengers, etc. It contains 1,400 video clips manually annotated as containing violent scenes or not, making it one of the biggest benchmarks for video violence detection in the literature.
Project(s): AI4Media via OpenAIRE

See at: CNR ExploRA | zenodo.org


2022 Contribution to conference Open Access OPEN
AI and computer vision for smart cities
Amato G., Carrara F., Ciampi L., Di Benedetto M., Gennaro C., Falchi F., Messina N., Vairo C.
Artificial Intelligence (AI) is increasingly employed to develop public services that make life easier for citizens. In this abstract, we present some research topics and applications carried out by the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR of Pisa about the study and development of AI-based services for Smart Cities dedicated to the interaction with the physical world through the analysis of images gathered from city cameras. Like no other sensing mechanism, networks of city cameras can 'observe' the world and simultaneously provide visual data to AI systems to extract relevant information and make/suggest decisions helping to solve many real-world problems. Specifically, we discuss some solutions in the context of smart mobility, parking monitoring, infrastructure management, and surveillance systems.
Source: I-CiTies 2022 - 8th Italian Conference on ICT for Smart Cities And Communities, Ascoli Piceno, Italy, 14-16/09/2022
Project(s): AI4Media via OpenAIRE

See at: icities2022.unicam.it Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2022 Contribution to conference Open Access OPEN
CrowdVisor: an embedded toolset for human activity monitoring in critical environments
Di Benedetto M., Carrara F., Ciampi L., Falchi F., Gennaro C., Amato G.
As evidenced during the recent COVID-19 pandemic, there are scenarios in which ensuring compliance with a set of guidelines (such as wearing medical masks and keeping a certain physical distance among people) becomes crucial to secure a safe living environment. However, human supervision cannot always guarantee this task, especially in crowded scenes. This abstract presents CrowdVisor, an embedded modular Computer Vision-based and AI-assisted system that can carry out several tasks to help monitor individual and collective human safety rules. We strive for a real-time but low-cost system, thus complying with the limited compute and storage resources typical of off-the-shelf embedded devices, where images are captured and processed directly onboard. Our solution consists of multiple modules relying on well-researched neural network components, each responsible for specific functionalities that the user can easily enable and configure. In particular, by exploiting one of these modules or combining some of them, our framework makes available many capabilities. They range from the ability to estimate the so-called social distance to the estimation of the number of people present in the monitored scene, as well as the possibility to localize and classify Personal Protective Equipment (PPE) worn by people (such as helmets and face masks). To validate our solution, we test all the functionalities that our framework makes available over two novel datasets that we collected and annotated on purpose. Experiments show that our system provides a valuable asset to monitor compliance with safety rules automatically.
Source: I-CiTies 2022 - 8th Italian Conference on ICT for Smart Cities And Communities, Ascoli Piceno, Italy, 14-16/09/2022
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: icities2022.unicam.it Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2022 Journal article Open Access OPEN
Bus violence: an open benchmark for video violence detection on public transport
Ciampi L., Foszner P., Messina N., Staniszewski M., Gennaro C., Falchi F., Serao G., Cogiel M., Golba D., Szczesna A., Amato G.
Automatic detection of violent actions in public places through video analysis is difficult because the employed Artificial Intelligence-based techniques often suffer from generalization problems. Indeed, these algorithms hinge on large quantities of annotated data and usually experience a drastic drop in performance when used in scenarios never seen during the supervised learning phase. In this paper, we introduce and publicly release the Bus Violence benchmark, the first large-scale collection of video clips for violence detection in public transport, where some actors simulated violent actions inside a moving bus under changing conditions, such as background or lighting. Moreover, we conduct a performance analysis of several state-of-the-art video violence detectors pre-trained with general violence detection databases on this newly established use case. The moderate performance achieved reveals the difficulty these popular methods have in generalizing, indicating the need for this new collection of labeled data to specialize them for this new scenario.
Source: Sensors (Basel) 22 (2022). doi:10.3390/s22218345
DOI: 10.3390/s22218345
Project(s): AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2021 Conference article Open Access OPEN
Domain adaptation for traffic density estimation
Ciampi L., Santiago C., Costeira J. P., Gennaro C., Amato G.
Convolutional Neural Networks have produced state-of-the-art results for a multitude of computer vision tasks under supervised learning. However, the crux of these methods is the need for a massive amount of labeled data to guarantee that they generalize well to diverse testing scenarios. In many real-world applications, there is indeed a large domain shift between the distributions of the train (source) and test (target) domains, leading to a significant drop in performance at inference time. Unsupervised Domain Adaptation (UDA) is a class of techniques that aims to mitigate this drawback without the need for labeled data in the target domain. This makes it particularly useful for tasks in which acquiring new labeled data is very expensive, such as semantic and instance segmentation. In this work, we propose an end-to-end CNN-based UDA algorithm for traffic density estimation and counting, based on adversarial learning in the output space. Density estimation is one of those tasks requiring per-pixel annotated labels and, therefore, a lot of human effort. We conduct experiments considering different types of domain shifts, and we make publicly available two new datasets for the vehicle counting task that were also used for our tests. One of them, the Grand Traffic Auto dataset, is a synthetic collection of images, obtained using the graphical engine of the Grand Theft Auto video game, automatically annotated with precise per-pixel labels. Experiments show a significant improvement using our UDA algorithm compared to the model's performance without domain adaptation.
Source: VISIGRAPP 2021 - 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp. 185–195, Online Conference, 08-10 February, 2021
DOI: 10.5220/0010303401850195
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.scitepress.org Open Access
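
The training loop below is a hedged sketch of adversarial adaptation in the output space as described in the abstract above: a discriminator tries to tell whether a predicted density map comes from the labeled source or the unlabeled target domain, while the counting network is trained both to fit the source labels and to fool the discriminator. The tiny networks, optimizers, and loss weight are illustrative assumptions, not the paper's configuration.

```python
# Sketch of output-space adversarial UDA for density estimation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

density_net = nn.Sequential(              # stand-in counting network: image -> density map
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.ReLU(),
)
discriminator = nn.Sequential(            # guesses whether a density map is source or target
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 3, stride=2, padding=1),
)
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(density_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(src_img, src_density, tgt_img, lambda_adv=0.001):
    # 1) supervised counting loss on labeled source images
    pred_src = density_net(src_img)
    pred_tgt = density_net(tgt_img)
    loss_count = F.mse_loss(pred_src, src_density)
    # 2) adversarial term: push target-domain predictions to look like source-domain ones
    d_tgt_for_g = discriminator(pred_tgt)
    loss_adv = bce(d_tgt_for_g, torch.ones_like(d_tgt_for_g))
    opt_g.zero_grad()
    (loss_count + lambda_adv * loss_adv).backward()
    opt_g.step()
    # 3) discriminator update: source maps labeled 1, target maps labeled 0
    d_src = discriminator(pred_src.detach())
    d_tgt = discriminator(pred_tgt.detach())
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_count.item(), loss_adv.item(), loss_d.item()

# Toy usage with random tensors standing in for real source/target batches:
losses = train_step(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64), torch.rand(2, 3, 64, 64))
```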


2021 Conference article Open Access OPEN
Traffic density estimation via unsupervised domain adaptation
Ciampi L., Santiago C., Costeira J. P., Gennaro C., Amato G.
Monitoring traffic flows in cities is crucial to improve urban mobility, and images are the best sensing modality to perceive and assess the flow of vehicles in large areas. However, current machine learning-based technologies using images hinge on large quantities of annotated data, preventing their scalability to city-scale as new cameras are added to the system. We propose a new methodology to design image-based vehicle density estimators with few labeled data via an unsupervised domain adaptation technique.
Source: SEBD 2021 - Italian Symposium on Advanced Database Systems, pp. 442–449, Pizzo Calabro, Italy, 05-09/09/2021
Project(s): AI4EU via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA Open Access