52 result(s)
2025 Conference article Open Access
From neurons to computation: biological reservoir computing for pattern recognition
Iannello Ludovico, Ciampi Luca, Lagani Gabriele, Tonelli Fabrizio, Crocco Eleonora, Calcagnile Lucio Maria, Di Garbo Angelo, Cremisi Federico, Amato Giuseppe
In this paper, we introduce a paradigm for reservoir computing (RC) that leverages a pool of cultured biological neurons as the reservoir substrate, creating a biological reservoir computing (BRC) system. This system operates similarly to an echo state network (ESN), with the key distinction that the neural activity is generated by a network of cultured neurons rather than being modeled by traditional artificial computational units. The neuronal activity is recorded using a multi-electrode array (MEA), which enables high-throughput recording of neural signals. In our approach, inputs are introduced into the network through a subset of the MEA electrodes, while the remaining electrodes capture the resulting neural activity. This generates a nonlinear mapping of the input data to a high-dimensional biological feature space, where distinguishing between data becomes more efficient and straightforward, allowing a simple linear classifier to perform pattern recognition tasks effectively. To evaluate the performance of our proposed system, we present an experimental study that includes various input patterns, such as positional codes, bars with different orientations, and a digit recognition task. The results demonstrate the feasibility of using biological neural networks to perform tasks traditionally handled by artificial neural networks, paving the way for further exploration of biologically-inspired computing systems, with potential applications in neuromorphic engineering and bio-hybrid computing.
Source: COMMUNICATIONS IN COMPUTER AND INFORMATION SCIENCE, vol. 2757, pp. 114-131. Okinawa, Japan, 20-24 November 2025
DOI: 10.1007/978-981-95-4100-3_9
DOI: 10.48550/arxiv.2505.03510
Project(s): AICult, Tuscany Health Ecosystem
See at: arXiv.org e-Print Archive Open Access | CNR IRIS Open Access | link.springer.com Open Access | doi.org Restricted | CNR IRIS Restricted
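The ESN-style pipeline described in the abstract above — a fixed nonlinear projection into a high-dimensional feature space followed by a simple linear readout — can be sketched in a few lines. This is an illustrative toy, not the authors' code: a seeded random matrix stands in for the physical cultured-neuron reservoir, and a nearest-centroid rule stands in for the linear classifier; the sizes and names (`N_IN`, `N_RES`, `reservoir_features`) are assumptions.

```python
import math
import random

random.seed(0)
N_IN, N_RES = 4, 32  # input electrodes / readout electrodes (illustrative sizes)

# A fixed random projection stands in for the unknown input-to-activity mapping
# that the cultured network performs physically through the MEA.
W_in = [[random.uniform(-1.0, 1.0) for _ in range(N_IN)] for _ in range(N_RES)]

def reservoir_features(x):
    """Nonlinear mapping of an input pattern into a high-dimensional space."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_in]

def train_readout(samples):
    """Fit a nearest-centroid linear readout on reservoir features.

    samples: list of (input_pattern, label) pairs.
    """
    sums, counts = {}, {}
    for x, label in samples:
        feats = reservoir_features(x)
        acc = sums.setdefault(label, [0.0] * N_RES)
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest in feature space."""
    feats = reservoir_features(x)
    return min(centroids, key=lambda lab: sum((f - c) ** 2
                                              for f, c in zip(feats, centroids[lab])))
```

Presenting a few positional-code patterns per class is enough for the centroid readout to separate them, mirroring how the paper trains a simple linear classifier on the recorded neural responses.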


2025 Conference article Restricted
A biologically-inspired approach to biomedical image segmentation
Ciampi L., Lagani G., Amato G., Falchi F.
We present a novel bio-inspired semi-supervised learning strategy for semantic segmentation architectures. It is based on the so-called Hebbian principle “neurons that fire together wire together”, which closely mimics brain synaptic adaptations and provides a promising biologically-plausible local learning rule for updating neural network weights without needing supervision. Our approach includes two stages. In the first stage, we exploit the Hebbian principle for unsupervised weight updating of both convolutional and, for the first time, transpose-convolutional layers characterizing downsampling-upsampling semantic segmentation architectures. Then, in the second stage, we fine-tune the model on a few labeled data samples. We assess our methodology through an experimental evaluation involving several collections of biomedical images, a context of outstanding importance in computer vision that is particularly affected by data scarcity. Preliminary results demonstrate the effectiveness of our proposed method compared with SOTA under various labeled training data regimes. The code to reproduce our experiments is available at: https://tinyurl.com/ycywfjc2.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 15636, pp. 158-171. Milan, Italy, 29 September-4 October 2024
DOI: 10.1007/978-3-031-91578-9_10
Project(s): SUN via OpenAIRE, Tuscany Health Ecosystem
See at: doi.org Restricted | CNR IRIS Restricted | link.springer.com Restricted
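The Hebbian “fire together wire together” principle mentioned above can be illustrated with a single linear neuron. The sketch below uses Oja's normalized variant of the Hebbian update — one plausible local rule, not necessarily the exact variant used in the paper — where the weight change depends only on pre- and post-synaptic activity, with no supervision signal.

```python
def hebbian_update(w, x, eta=0.1):
    """One Hebbian step with Oja's normalization to keep weights bounded.

    w: weight vector of a single linear neuron; x: pre-synaptic input.
    The post-synaptic response is y = w . x; weights whose inputs co-fire
    with the response are strengthened, and the -y*y*w term prevents
    unbounded growth.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
```

Repeated presentation of the same input drives the weight vector toward that input's direction without any labels — the property such approaches exploit for unsupervised pre-training before fine-tuning on a few labeled samples.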


2025 Conference article Open Access
Mind the prompt: a novel benchmark for prompt-based class-agnostic counting
Ciampi L., Messina N., Pierucci M., Amato G., Avvenuti M., Falchi F.
Recently, object counting has shifted towards class-agnostic counting (CAC), which counts instances of arbitrary object classes never seen during model training. With advancements in robust vision-and-language foundation models, there is a growing interest in prompt-based CAC, where object categories are specified using natural language. However, we identify significant limitations in current benchmarks for evaluating this task, which hinder both accurate assessment and the development of more effective solutions. Specifically, we argue that the current evaluation protocols do not measure the ability of the model to understand which object has to be counted. This is due to two main factors: (i) the shortcomings of CAC datasets, which primarily consist of images containing objects from a single class, and (ii) the limitations of current counting performance evaluators, which are based on traditional class-specific counting and focus solely on counting errors. To fill this gap, we introduce the Prompt-Aware Counting (PrACo) benchmark. It comprises two targeted tests coupled with evaluation metrics specifically designed to quantitatively measure the robustness and trustworthiness of existing prompt-based CAC models. We evaluate state-of-the-art methods and demonstrate that, although some achieve impressive results on standard class-specific counting metrics, they exhibit a significant deficiency in understanding the input prompt, indicating the need for more careful training procedures or revised designs. The code for reproducing our results is available at https://github.com/ciampluca/PrACo.
DOI: 10.1109/wacv61041.2025.00774
DOI: 10.48550/arxiv.2409.15953
Project(s): SUN via OpenAIRE
See at: arXiv.org e-Print Archive Open Access | CNR IRIS Open Access | ieeexplore.ieee.org Open Access | doi.org Restricted | Archivio della Ricerca - Università di Pisa Restricted | CNR IRIS Restricted
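One way to see why single-class datasets hide prompt misunderstanding is an “absent class” check: prompt the model for a class that does not appear in the image and verify that it predicts a near-zero count. The sketch below is a hypothetical illustration of that idea, not the actual PrACo metrics; `count_fn`, the tolerance `tol`, and the data layout are all assumptions.

```python
def mae(preds, gts):
    """Standard class-specific counting metric: mean absolute error."""
    return sum(abs(p - g) for p, g in zip(preds, gts)) / len(preds)

def absent_class_pass_rate(count_fn, dataset, vocabulary, tol=0.5):
    """Prompt the model with every class NOT present in each image and
    check that it predicts a (near-)zero count. Returns the pass rate.

    count_fn(image, class_name) -> predicted count (any prompt-based model)
    dataset: list of (image, ground_truth_counts) pairs
    """
    trials = passed = 0
    for image, gt_counts in dataset:
        for cls in vocabulary:
            if gt_counts.get(cls, 0) == 0:  # class absent from this image
                trials += 1
                if count_fn(image, cls) <= tol:
                    passed += 1
    return passed / trials if trials else 1.0
```

An oracle that reads the ground truth passes every trial, while a “blind” counter that always returns the total object count fails them all — even though both can score identically on a counting-error metric computed over single-class images.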


2025 Conference article Open Access
From neural activity to computation: biological reservoirs for pattern recognition in digit classification
Iannello Ludovico, Ciampi Luca, Tonelli Fabrizio, Lagani Gabriele, Calcagnile Lucio Maria, Cremisi Federico, Di Garbo Angelo, Amato Giuseppe
In this paper, we present a biologically grounded approach to reservoir computing (RC), in which a network of cultured biological neurons serves as the reservoir substrate. This system, referred to as biological reservoir computing (BRC), replaces artificial recurrent units with the spontaneous and evoked activity of living neurons. A multi-electrode array (MEA) enables simultaneous stimulation and readout across multiple sites: inputs are delivered through a subset of electrodes, while the remaining ones capture the resulting neural responses, mapping input patterns into a high-dimensional biological feature space. We evaluate the system through a case study on digit classification using a custom dataset. Input images are encoded and delivered to the biological reservoir via electrical stimulation, and the corresponding neural activity is used to train a simple linear classifier. To contextualize the performance of the biological system, we also include a comparison with a standard artificial reservoir trained on the same task. The results indicate that the biological reservoir can effectively support classification, highlighting its potential as a viable and interpretable computational substrate. We believe this work contributes to the broader effort of integrating biological principles into machine learning and aligns with the goals of human-inspired vision by exploring how living neural systems can inform the design of efficient and biologically plausible models.
Project(s): AICult, Tuscany Health Ecosystem

See at: CNR IRIS Open Access | openaccess.thecvf.com Open Access | CNR IRIS Restricted


2024 Conference article Open Access
Teacher-student models for AI vision at the edge: a car parking case study
Molo M. J., Carlini E., Ciampi L., Gennaro C., Vadicamo L.
The surge of the Internet of Things has sparked a multitude of deep learning-based computer vision applications that extract relevant information from the deluge of data coming from Edge devices, such as smart cameras. Nevertheless, this promising approach introduces new obstacles, including the constraints posed by the limited computational resources on these devices and the challenges associated with the generalization capabilities of the AI-based models against novel scenarios never seen during the supervised training, a situation frequently encountered in this context. This work proposes an efficient approach for detecting vehicles in parking lot scenarios monitored by multiple smart cameras that train their underlying AI-based models by exploiting knowledge distillation. Specifically, we consider an architectural scheme comprising a powerful and large detector used as a teacher and several shallow models acting as students, more appropriate for computational-bounded devices and designed to run onboard the smart cameras. The teacher is pre-trained over general-context data and behaves like an oracle, transferring its knowledge to the smaller nodes; on the other hand, the students learn to localize cars in new specific scenarios without using further labeled data, relying solely on the distilled loss coming from the oracle. Preliminary results show that student models trained only with distillation loss increase their performances, sometimes even outperforming the results achieved by the same models supervised with the ground truth.
DOI: 10.5220/0012376900003660
Project(s): AI4Media via OpenAIRE, National Centre for HPC, Big Data and Quantum Computing, Sustainable Mobility Center
See at: CNR IRIS Open Access | www.scitepress.org Open Access | CNR IRIS Restricted
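The teacher-student scheme above — a student trained only on the distilled loss from an oracle teacher, with no ground-truth labels — can be reduced to a minimal regression sketch. This is an assumption-laden toy (a one-parameter linear student and a black-box teacher function), not the detection architecture of the paper.

```python
def distillation_loss(student_out, teacher_out):
    """Squared-error distilled loss: the student matches the teacher's
    outputs; no ground-truth labels are involved."""
    return sum((s - t) ** 2 for s, t in zip(student_out, teacher_out)) / len(student_out)

def train_student(xs, teacher_fn, epochs=100, lr=0.01):
    """Fit a one-parameter linear student y = w * x to a black-box teacher
    by gradient descent on the distilled loss alone."""
    w = 0.0
    for _ in range(epochs):
        for x in xs:
            err = w * x - teacher_fn(x)  # residual w.r.t. the teacher, not labels
            w -= lr * 2 * err * x        # gradient of the squared error
    return w
```

The student never sees a label: it converges toward whatever function the teacher encodes, which is exactly why the scheme needs no further annotation in new deployment scenarios.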


2024 Other Open Access
AIMH Research Activities 2024
Aloia N., Amato G., Bartalesi Lenzi V., Bianchi L., Bolettieri P., Bosio C., Carraglia M., Carrara F., Casarosa V., Cassese M., Ciampi L., Coccomini D. A., Concordia C., Connor R., Corbara S., De Martino C., Di Benedetto M., Esuli A., Falchi F., Fazzari E., Gennaro C., Iannello L., Negi K., Lagani G., Lenzi E., Leocata M., Malvaldi M., Meghini C., Messina N., Moreo Fernandez A., Nardi A., Pacini G., Pedrotti A., Pratelli N., Puccetti G., Rabitti F., Savino P., Scotti F., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C., Versienti L., Volpi L.
The AIMH (Artificial Intelligence for Media and Humanities) laboratory is committed to advancing the field of Artificial Intelligence, with a special emphasis on its applications in digital media and the humanities. The lab aims to improve AI technologies, particularly in areas such as deep learning, text analysis, computer vision, multimedia information retrieval, content analysis, recognition, and retrieval. This report summarizes the laboratory’s achievements and activities over the course of 2024.
DOI: 10.32079/isti-ar-2024/001
See at: CNR IRIS Open Access | CNR IRIS Restricted


2024 Journal article Open Access
In the wild video violence detection: an unsupervised domain adaptation approach
Ciampi L., Santiago C., Falchi F., Gennaro C., Amato G.
This work addresses the challenge of video violence detection in data-scarce scenarios, focusing on bridging the domain gap that often hinders the performance of deep learning models when applied to unseen domains. We present a novel unsupervised domain adaptation (UDA) scheme designed to effectively mitigate this gap by combining supervised learning in the train (source) domain with unlabeled test (target) data. We employ single-image classification and multiple instance learning (MIL) to select frames with the highest classification scores, and, upon this, we exploit UDA techniques to adapt the model to unlabeled target domains. We perform an extensive experimental evaluation, using general-context data as the source domain and target domain datasets collected in specific environments, such as violent/non-violent actions in hockey matches and public transport. The results demonstrate that our UDA pipeline substantially enhances model performances, improving their generalization capabilities in novel scenarios without requiring additional labeled data.
Source: SN COMPUTER SCIENCE, vol. 5 (issue 7)
DOI: 10.1007/s42979-024-03126-3
Project(s): "FAIR - Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI", AI4Media via OpenAIRE, SUN via OpenAIRE
See at: CNR IRIS Open Access | link.springer.com Open Access | CNR IRIS Restricted
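The MIL step described above — representing a clip by its highest-scoring frames under a single-image classifier — can be sketched as top-k pooling over per-frame scores. The function names, `k`, and the decision threshold are illustrative assumptions, not the paper's exact configuration.

```python
def mil_clip_score(frame_scores, k=3):
    """Multiple-instance step: represent a clip by the mean of its k
    highest per-frame violence scores."""
    top = sorted(frame_scores, reverse=True)[:k]
    return sum(top) / len(top)

def classify_clip(frame_scores, k=3, threshold=0.5):
    """A clip is flagged violent if its top-k frames look violent."""
    return mil_clip_score(frame_scores, k) > threshold
```

Top-k pooling makes the clip label depend on the few most confident frames, which is what lets a single-image classifier drive both clip-level decisions and the selection of frames fed to the domain-adaptation stage.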


2023 Conference article Open Access
CrowdSim2: an open synthetic benchmark for object detectors
Foszner P, Szczesna A, Ciampi L, Messina N, Cygan A, Bizon B, Cogiel M, Golba D, Macioszek E, Staniszewski M
Data scarcity has become one of the main obstacles to developing supervised models based on Artificial Intelligence in Computer Vision. Indeed, Deep Learning-based models systematically struggle when applied in new scenarios never seen during training and may not be adequately tested in non-ordinary yet crucial real-world situations. This paper presents and publicly releases CrowdSim2, a new synthetic collection of images suitable for people and vehicle detection gathered from a simulator based on the Unity graphical engine. It consists of thousands of images gathered from various synthetic scenarios resembling the real world, where we varied some factors of interest, such as the weather conditions and the number of objects in the scenes. The labels are automatically collected and consist of bounding boxes that precisely localize objects belonging to the two object classes, leaving out humans from the annotation pipeline. We exploited this new benchmark as a testing ground for some state-of-the-art detectors, showing that our simulated scenarios can be a valuable tool for measuring their performances in a controlled environment.
DOI: 10.5220/0011692500003417
Project(s): AI4Media via OpenAIRE
See at: CNR IRIS Open Access | ISTI Repository Open Access | www.scitepress.org Open Access | CNR IRIS Restricted


2023 Conference article Open Access
Development of a realistic crowd simulation environment for fine-grained validation of people tracking methods
Foszner P, Szczesna A, Ciampi L, Messina N, Cygan A, Bizon B, Cogiel M, Golba D, Macioszek E, Staniszewski M
Generally, crowd datasets can be collected or generated from real or synthetic sources. Real data is generated by using infrastructure-based sensors (such as static cameras or other sensors). The use of simulation tools can significantly reduce the time required to generate scenario-specific crowd datasets, facilitate data-driven research, and support the development of functional machine learning models. The main goal of this work was to develop an extension of crowd simulation (named CrowdSim2) and prove its usability in the application of people-tracking algorithms. The simulator is developed using the very popular Unity 3D engine, with particular emphasis on the aspects of realism in the environment, weather conditions, traffic, and the movement and models of individual agents. Finally, three tracking methods were used to validate the generated dataset: IOU-Tracker, Deep-Sort, and Deep-TAMA.
DOI: 10.5220/0011691500003417
Project(s): AI4Media via OpenAIRE
See at: CNR IRIS Open Access | ISTI Repository Open Access | www.scitepress.org Open Access | CNR IRIS Restricted


2023 Conference article Open Access
Unsupervised domain adaptation for video violence detection in the wild
Ciampi L, Santiago C, Costeira Jp, Falchi F, Gennaro C, Amato G
Video violence detection is a subset of human action recognition aiming to detect violent behaviors in trimmed video clips. Current Computer Vision solutions based on Deep Learning approaches provide astonishing results. However, their success relies on large collections of labeled datasets for supervised learning to guarantee that they generalize well to diverse testing scenarios. Although plentiful annotated data may be available for some pre-specified domains, manual annotation is unfeasible for every ad-hoc target domain or task. As a result, in many real-world applications, there is a domain shift between the distributions of the train (source) and test (target) domains, causing a significant drop in performance at inference time. To tackle this problem, we propose an Unsupervised Domain Adaptation scheme for video violence detection based on single image classification that mitigates the domain gap between the two domains. We conduct experiments considering as the source labeled domain some datasets containing violent/non-violent clips in general contexts and, as the target domain, a collection of videos specific for detecting violent actions in public transport, showing that our proposed solution can improve the performance of the considered models.
DOI: 10.5220/0011965300003497
Project(s): AI4Media via OpenAIRE
See at: CNR IRIS Open Access | ISTI Repository Open Access | www.scitepress.org Open Access | CNR IRIS Restricted


2023 Journal article Open Access
A comprehensive atlas of perineuronal net distribution and colocalization with parvalbumin in the adult mouse brain
Lupori L, Totaro V, Cornuti S, Ciampi L, Carrara F, Grilli E, Viglione A, Tozzi F, Putignano E, Mazziotti R, Amato G, Gennaro G, Tognini P, Pizzorusso T
Perineuronal nets (PNNs) surround specific neurons in the brain and are involved in various forms of plasticity and clinical conditions. However, our understanding of the PNN role in these phenomena is limited by the lack of highly quantitative maps of PNN distribution and association with specific cell types. Here, we present a comprehensive atlas of Wisteria floribunda agglutinin (WFA)-positive PNNs and colocalization with parvalbumin (PV) cells for over 600 regions of the adult mouse brain. Data analysis shows that PV expression is a good predictor of PNN aggregation. In the cortex, PNNs are dramatically enriched in layer 4 of all primary sensory areas in correlation with thalamocortical input density, and their distribution mirrors intracortical connectivity patterns. Gene expression analysis identifies many PNN-correlated genes. Strikingly, PNN-anticorrelated transcripts are enriched in synaptic plasticity genes, generalizing PNNs' role as circuit stability factors.
Source: CELL REPORTS, vol. 42 (issue 7)
DOI: 10.1016/j.celrep.2023.112788
Project(s): AI4Media via OpenAIRE
See at: CNR IRIS Open Access | www.cell.com Open Access | CNR IRIS Restricted


2023 Conference article Open Access
AIMH Lab 2022 activities for Healthcare
Carrara F., Ciampi L., Di Benedetto M., Falchi F., Gennaro C., Amato G.
The application of Artificial Intelligence technologies in healthcare can enhance and optimize medical diagnosis, treatment, and patient care. Medical imaging, which involves Computer Vision to interpret and understand visual data, is one area of healthcare that shows great promise for AI, and it can lead to faster and more accurate diagnoses, such as detecting early signs of cancer or identifying abnormalities in the brain. This short paper provides an introduction to some of the activities of the Artificial Intelligence for Media and Humanities Laboratory of the ISTI-CNR that integrate AI and medical image analysis in healthcare. Specifically, the paper presents approaches that utilize 3D medical images to detect the behavior-variant of frontotemporal dementia, a neurodegenerative syndrome that can be diagnosed by analyzing brain scans. Furthermore, it illustrates some Deep Learning-based techniques for localizing and counting biological structures in microscopy images, such as cells and perineuronal nets. Lastly, the paper presents a practical and cost-effective AI-based tool for multi-species pupillometry (mice and humans), which has been validated in various scenarios.
Source: Ital-IA 2023, pp. 128–133, Pisa, Italy, 29-31/05/2023

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2023 Conference article Open Access
MC-GTA: a synthetic benchmark for multi-camera vehicle tracking
Ciampi L, Messina N, Valenti Ge, Amato G, Falchi F, Gennaro C
Multi-camera vehicle tracking (MCVT) aims to trace multiple vehicles among videos gathered from overlapping and non-overlapping city cameras. It is beneficial for city-scale traffic analysis and management as well as for security. However, developing MCVT systems is tricky, and their real-world applicability is dampened by the lack of data for training and testing computer vision deep learning-based solutions. Indeed, creating new annotated datasets is cumbersome as it requires great human effort and often has to face privacy concerns. To alleviate this problem, we introduce MC-GTA - Multi Camera Grand Tracking Auto, a synthetic collection of images gathered from the virtual world provided by the highly-realistic Grand Theft Auto 5 (GTA) video game. Our dataset has been recorded from several cameras recording urban scenes at various crossroads. The annotations, consisting of bounding boxes localizing the vehicles with associated unique IDs consistent across the video sources, have been automatically generated by interacting with the game engine. To assess this simulated scenario, we conduct a performance evaluation using an MCVT SOTA approach, showing that it can be a valuable benchmark that mitigates the need for real-world data. The MC-GTA dataset and the code for creating new ad-hoc custom scenarios are available at https://github.com/GaetanoV10/GT5-Vehicle-BB.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 14233, pp. 316-327. Udine, Italy, 11-15/09/2023
DOI: 10.1007/978-3-031-43148-7_27
Project(s): AI4Media via OpenAIRE
See at: CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Conference article Open Access
AIMH Lab 2022 activities for Vision
Ciampi L, Amato G, Bolettieri P, Carrara F, Di Benedetto M, Falchi F, Gennaro C, Messina N, Vadicamo L, Vairo C
The explosion of smartphones and cameras has led to a vast production of multimedia data. Consequently, Artificial Intelligence-based tools for automatically understanding and exploring these data have recently gained much attention. In this short paper, we report some activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR, tackling some challenges in the field of Computer Vision for the automatic understanding of visual data and for novel interactive tools aimed at multimedia data exploration. Specifically, we provide innovative solutions based on Deep Learning techniques carrying out typical vision tasks such as object detection and visual counting, with particular emphasis on scenarios characterized by scarcity of labeled data needed for the supervised training and on environments with limited power resources imposing miniaturization of the models. Furthermore, we describe VISIONE, our large-scale video search system designed to search extensive multimedia databases in an interactive and user-friendly manner.
Source: CEUR WORKSHOP PROCEEDINGS, pp. 538-543. Pisa, Italy, 29-31/05/2023
Project(s): AI4Media via OpenAIRE, Future Artificial Intelligence Research

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Journal article Open Access
A deep learning-based pipeline for whitefly pest abundance estimation on chromotropic sticky traps
Ciampi L, Zeni V, Incrocci L, Canale A, Benelli G, Falchi F, Amato G, Chessa S
Integrated Pest Management (IPM) is an essential approach used in smart agriculture to manage pest populations and sustainably optimize crop production. One of the cornerstones underlying IPM solutions is pest monitoring, a practice often performed by farm owners by using chromotropic sticky traps placed on insect hot spots to gauge pest population densities. In this paper, we propose a modular, model-agnostic deep learning-based counting pipeline for estimating the number of insects present in pictures of chromotropic sticky traps, thus reducing the need for manual trap inspections and minimizing human effort. Additionally, our solution generates a set of raw positions of the counted insects and confidence scores expressing their reliability, allowing practitioners to filter out unreliable predictions. We train and assess our technique by exploiting PST - Pest Sticky Traps, a new collection of dot-annotated images we created on purpose and publicly release, suitable for counting whiteflies. Experimental evaluation shows that our proposed counting strategy can be a valuable Artificial Intelligence-based tool to help farm owners control pest outbreaks and prevent crop damage effectively. Specifically, our solution achieves an average counting error of approximately 9% compared to human capabilities while requiring only a matter of seconds, a large improvement over the time-intensive process of manual inspections, which often take hours or even days.
Source: ECOLOGICAL INFORMATICS, vol. 78
DOI: 10.1016/j.ecoinf.2023.102384
Project(s): AI4Media via OpenAIRE
See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted
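The abstract above notes that the pipeline outputs raw insect positions with confidence scores so that unreliable predictions can be filtered out. A minimal sketch of that filtering step, with hypothetical names and a placeholder threshold:

```python
def count_insects(detections, min_conf=0.5):
    """Keep only detections whose confidence clears the threshold.

    detections: list of (x, y, confidence) raw predictions.
    Returns the estimated count and the kept positions for auditing.
    """
    kept = [(x, y) for x, y, conf in detections if conf >= min_conf]
    return len(kept), kept
```

Returning the surviving positions alongside the count is what allows a practitioner to inspect, and if needed re-threshold, the model's output rather than trusting a bare number.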


2023 Other Open Access
AIMH Research Activities 2023
Aloia N., Amato G., Bartalesi Lenzi V., Bianchi L., Bolettieri P., Bosio C., Carraglia M., Carrara F., Casarosa V., Ciampi L., Coccomini D. A., Concordia C., Corbara S., De Martino C., Di Benedetto M., Esuli A., Falchi F., Fazzari E., Gennaro C., Lagani G., Lenzi E., Meghini C., Messina N., Molinari A., Moreo Fernandez A., Nardi A., Pedrotti A., Pratelli N., Puccetti G., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C., Versienti L.
The AIMH (Artificial Intelligence for Media and Humanities) laboratory is dedicated to exploring and pushing the boundaries in the field of Artificial Intelligence, with a particular focus on its application in digital media and humanities. This lab's objective is to enhance the current state of AI technology, particularly on deep learning, text analysis, computer vision, multimedia information retrieval, multimedia content analysis, recognition, and retrieval. This report encapsulates the laboratory's progress and activities throughout the year 2023.
DOI: 10.32079/isti-ar-2023/001
See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2022 Conference article Open Access
AIMH Lab for Healthcare and Wellbeing
Di Benedetto M, Carrara F, Ciampi L, Falchi F, Gennaro C, Amato G
In this work we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Healthcare and Wellbeing. By exploiting the advances of recent machine learning methods and the compute power of desktop and mobile platforms, we will show how artificial intelligence tools can be used to improve healthcare systems at various stages of disease treatment. In particular, we will see how deep neural networks can assist doctors from diagnosis (e.g., cell counting, pupil and brain analysis) to communication with patients through Augmented Reality.

See at: CNR IRIS Open Access | ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR IRIS Restricted


2022 Conference article Open Access
AIMH Lab for the Industry
Carrara F, Ciampi L, Di Benedetto M, Falchi F, Gennaro C, Massoli Fv, Amato G
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Industry. The massive digitalization affecting all the stages of product design, production, and control calls for data-driven algorithms helping in the coordination of humans, machines, and digital resources in Industry 4.0. In this context, we developed AI-based Computer Vision technologies of general interest in the emergent digital paradigm of the fourth industrial revolution, focusing on anomaly detection and object counting for computer-assisted testing and quality control. Moreover, in the automotive sector, we explore the use of virtual worlds to develop AI systems in otherwise practically unfeasible scenarios, showing an application for accident avoidance in self-driving car AI agents.

See at: CNR IRIS Open Access | ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR IRIS Restricted


2022 Conference article Open Access
AIMH Lab: Smart Cameras for Public Administration
Ciampi L, Cafarelli D, Carrara F, Di Benedetto M, Falchi F, Gennaro C, Massoli Fv, Messina N, Amato G
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Public Administration. In particular, we present some AI-based public services serving the citizens that help achieve common goals beneficial to society, putting humans at the center. Through the automatic analysis of images gathered from city cameras, we provide AI applications ranging from smart parking and smart mobility to human activity monitoring.

See at: CNR IRIS Open Access | ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR IRIS Restricted


2022 Conference article Open Access
Counting or localizing? Evaluating cell counting and detection in microscopy images
Ciampi L, Carrara F, Amato G, Gennaro C
Image-based automatic cell counting is an essential yet challenging task, crucial for diagnosing many diseases. Current solutions rely on Convolutional Neural Networks and provide astonishing results. However, their performance is often measured only by considering counting errors, which can mask mistaken estimations; a low counting error can be obtained with a high but equal number of false positives and false negatives. Consequently, it is hard to determine which solution truly performs best. In this work, we investigate three general counting approaches that have been successfully adopted in the literature for counting several different categories of objects. Through an experimental evaluation over three public collections of microscopy images containing marked cells, we assess not only their counting performance compared to several state-of-the-art methods but also their ability to correctly localize the counted cells. We show that commonly adopted counting metrics do not always agree with the localization performance of the tested models, and thus we suggest integrating the proposed evaluation protocol when developing novel cell counting solutions.
DOI: 10.5220/0010923000003124
Project(s): AI4Media via OpenAIRE
See at: CNR IRIS Open Access | ISTI Repository Open Access | www.scitepress.org Open Access | CNR IRIS Restricted
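The paper's central observation — that a low counting error can hide a high but equal number of false positives and false negatives — is easy to demonstrate by comparing a counting-only metric with a localization-aware one that matches predicted dots to ground-truth dots within a distance threshold. The sketch below uses a simplified greedy matcher, not the paper's exact evaluation protocol; the `radius` value is an arbitrary assumption.

```python
def counting_error(pred_pts, gt_pts):
    """Counting-only metric: equal numbers of false positives and false
    negatives cancel out and go unnoticed here."""
    return abs(len(pred_pts) - len(gt_pts))

def localization_scores(pred_pts, gt_pts, radius=5.0):
    """Greedily match predicted dots to ground-truth dots within `radius`.

    Returns (true_positives, false_positives, false_negatives).
    """
    unmatched = list(gt_pts)
    tp = 0
    for px, py in pred_pts:
        best = None
        for gx, gy in unmatched:
            d = ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
            if d <= radius and (best is None or d < best[0]):
                best = (d, (gx, gy))
        if best is not None:
            unmatched.remove(best[1])
            tp += 1
    fp = len(pred_pts) - tp
    fn = len(unmatched)
    return tp, fp, fn
```

For example, two predicted dots against two ground-truth cells can yield a counting error of zero even when only one prediction actually lands on a cell — precisely the masking effect the evaluation protocol is designed to expose.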