2024 Journal article Open Access OPEN
Optimizing radiomics for prostate cancer diagnosis: feature selection strategies, machine learning classifiers, and MRI sequences
Mylona E., Zaridis D. I., Kalantzopoulos C., Tachos N. S., Regge D., Papanikolaou N., Tsiknakis M., Marias K., Marquez R., Henne T., Saillant C., Mora J. M., Pastor A. J., Agraniotis D., Pollalis C., Giavri Z., Hernandez W., Correia J., Bridge C., Kalpathy-Cramer J., Carloni G., Berti A., Germanese D., Del Corso G., Pachetti E., Pascali M. A., Colantonio S., Napolitano V., Maimone G., Cappello G., Mazzetti S., Giannini V., García-Martí G., Jacobs T., Doran S., Ribeiro A., Vit S., Emsley R., Koh D. M., Georgios G., Vasilis K., Slidevska K., Untanas A., Briediene R., Usinskiene J., Vilanova J. C., Karcaaltincaba M., Atak F., Karaosmanoglu A. D., Özmen M., Akata D., Nan, Mendola V., Tumminello L., Aringhieri G., Neri E., Marfil M., Navarro S., Ribas G., Cerdá-Alberich L., Martí-Bonmatí L., Futterer J., Twilt J. J., Saha A., De Rooij M., Huisman H., Chambel M., Rodrigues N., Rodrigues A. C., Verde A. C., De Almeida J. G., Dimitriadis A., Kalliatakis G., Trivizakis E., Kalokyri V., Sfakianakis S., Fotiadis D. I.
Objectives: Radiomics-based analyses encompass multiple steps, leading to ambiguity regarding the optimal approaches for enhancing model performance. This study compares the effect of several feature selection methods, machine learning (ML) classifiers, and sources of radiomic features, on models' performance for the diagnosis of clinically significant prostate cancer (csPCa) from bi-parametric MRI. Methods: Two multi-centric datasets, with 465 and 204 patients each, were used to extract 1246 radiomic features per patient and MRI sequence. Ten feature selection methods, such as Boruta, mRMRe, ReliefF, recursive feature elimination (RFE), random forest (RF) variable importance, L1-lasso, etc., four ML classifiers, namely SVM, RF, LASSO, and boosted generalized linear model (GLM), and three sets of radiomics features, derived from T2w images, ADC maps, and their combination, were used to develop predictive models of csPCa. Their performance was evaluated in a nested cross-validation and externally, using seven performance metrics. Results: In total, 480 models were developed. In nested cross-validation, the best model combined Boruta with Boosted GLM (AUC = 0.71, F1 = 0.76). In external validation, the best model combined L1-lasso with boosted GLM (AUC = 0.71, F1 = 0.47). Overall, Boruta, RFE, L1-lasso, and RF variable importance were the top-performing feature selection methods, while the choice of ML classifier didn't significantly affect the results. The ADC-derived features showed the highest discriminatory power with T2w-derived features being less informative, while their combination did not lead to improved performance. Conclusion: The choice of feature selection method and the source of radiomic features have a profound effect on the models' performance for csPCa diagnosis. Critical relevance statement: This work may guide future radiomic research, paving the way for the development of more effective and reliable radiomic models; not only for advancing prostate cancer diagnostic strategies, but also for informing broader applications of radiomics in different medical contexts. Key points: Radiomics is a growing field that can still be optimized. Feature selection method impacts radiomics models' performance more than ML algorithms. Best feature selection methods: RFE, LASSO, RF, and Boruta. ADC-derived radiomic features yield more robust models compared to T2w-derived radiomic features.
Source: INSIGHTS INTO IMAGING, vol. 15 (issue 1)
DOI: 10.1186/s13244-024-01783-9
Project(s): ProCAncer-I via OpenAIRE


See at: Insights into Imaging Open Access | IRIS Cnr Open Access | CNR IRIS Open Access | insightsimaging.springeropen.com Open Access | IRIS Cnr Restricted | CNR IRIS Restricted
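
The evaluation protocol described in the record above (feature selection nested inside cross-validation, with AUC and F1 scoring) can be reproduced in outline with scikit-learn. The following is a minimal sketch for one feature-selection/classifier pair; the RFE + random forest pairing, the grid of selected-feature counts, and the toy data are illustrative assumptions, not the study's exact configuration.

    # Minimal sketch of nested cross-validation for one feature-selection /
    # classifier pair (here RFE + random forest).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_validate
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.random.rand(100, 300)            # placeholder for the radiomic matrix (1246 features in the study)
    y = np.random.randint(0, 2, size=100)   # placeholder csPCa labels

    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("select", RFE(LogisticRegression(max_iter=1000), n_features_to_select=20, step=50)),
        ("clf", RandomForestClassifier(random_state=0)),
    ])

    # Inner loop tunes the number of retained features; outer loop estimates performance.
    inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    search = GridSearchCV(pipe, {"select__n_features_to_select": [10, 20, 50]},
                          scoring="roc_auc", cv=inner)
    scores = cross_validate(search, X, y, cv=outer, scoring=["roc_auc", "f1"])
    print(scores["test_roc_auc"].mean(), scores["test_f1"].mean())

Swapping the "select" and "clf" steps for other selectors and classifiers would reproduce the grid of 480 models at the level of this sketch.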


2024 Other Embargo
Monitoraggio della termoregolazione neonatale in contesto ospedaliero: verso un approccio integrato e non invasivo
Cancello Tortora C., Del Corso G., Germanese D., Positano V., Vozzi G.
Thermoregulation, i.e. the ability to maintain an adequate body temperature, is a topic of considerable interest and complexity for the neonatology community. In the first moments of life, both preterm and term newborns have immature temperature-regulation systems that make them vulnerable to sub-optimal extra-uterine conditions. This thesis, carried out at the Signals and Images Lab of the Institute of Information Science and Technologies of the CNR in Pisa, proposes to monitor, in a hospital setting and by means of an integrated, non-invasive system, the newborn's temperature variations during the first hours of life. The aim is to lay the groundwork for a larger study that, by acquiring and evaluating thermal patterns on the newborn, will be able to assess the newborn's pathological state. It will also be assessed whether thermal stabilisation can be improved through the practice known as skin-to-skin contact (SSC) between mother and newborn, or possibly between father and newborn. The hardware of the device was built by the Centro di Formazione e Simulazione Neonatale (NINA centre) of the Azienda Ospedaliero-Universitaria Pisana. The idea of monitoring a newborn's thermal patterns non-invasively translated into an extremely compact and portable device consisting of: (i) a thermal camera for acquiring thermal images of the newborn; (ii) an RGB camera for acquiring images of the newborn in the visible spectrum and extracting the body skeleton in order to automatically define the anatomical districts of interest; (iii) a sensor for spot temperature measurement; (iv) an ambient humidity and temperature sensor for monitoring the conditions of the room in which the newborn lies; (v) a Raspberry Pi managing and integrating these components as well as extracting and pre-processing the data. The control and processing software developed in this thesis was written in Python (v. 3.11) and manages the system states, in particular the synchronous acquisition of thermal and RGB images, data extraction, and the anonymisation of the newborns' RGB images. RGB image processing is performed locally on the Raspberry Pi and includes the automatic extraction of the anatomical regions of interest (ROIs) using state-of-the-art techniques (i.e., the MediaPipe library). These ROIs are then transferred onto the corresponding thermal images through a homographic transformation matrix, calibrated by taking into account the rigid constraint between the two cameras and their respective focal lengths. Each ROI takes the extracted landmark as its centre and derives its radius from the proportions between two neighbouring landmarks and the estimated size of the anatomical district of interest. These ROIs are the starting point of the thermal image processing. After an initial pre-processing phase, in which the background noise was removed with various filtering techniques, the contrast between the different regions was increased. This step was preparatory to the extraction of histograms, whose shape indicates whether background is present.
If background is present, the FastSAM segmenter, based on a state-of-the-art convolutional neural network (CNN), is launched to segment the anatomical district so that the background is not included in the processing. A user-friendly interface made it possible to manage the landmarks derived from the skeleton and to generate, fully automatically, adaptive regions of interest (ROIs) on the thermal image. From each ROI, thermal patterns and features were extracted that extend traditional ones, such as the median and the interquartile range, through a texture matrix derived from quantitative mathematical texture descriptors (of the GLSZM, Gray Level Size Zone Matrix, family), which provide information on the thermal heterogeneity of the ROIs. This matrix was used to compute a score for each ROI, showing that a patient with large areas of acceptable temperature receives a higher score than a patient with very cold areas and high temperature variability. Finally, global features were also defined that relate the measurements obtained from the face ROI (the neonatal clinical reference) to those on the chest and limbs. The system was first validated in a controlled experimental setting; the final validation and the subsequent data acquisition took place in a hospital environment, in the neonatology ward of the Azienda Ospedaliero-Universitaria Pisana, using a phantom simulating the thermal behaviour of a newborn.

See at: CNR IRIS Restricted
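
The RGB-to-thermal ROI transfer described in the abstract (MediaPipe landmarks mapped onto the thermal frame through a calibrated homography) can be sketched with OpenCV. One common way to obtain such a mapping is from calibration point pairs seen by both cameras; the point coordinates below are placeholders and the helper map_roi_to_thermal is hypothetical, not the calibration procedure used in the thesis.

    # Sketch: map an anatomical ROI detected on the RGB image onto the thermal image
    # using a homography estimated from corresponding calibration points.
    import cv2
    import numpy as np

    pts_rgb = np.array([[100, 80], [540, 90], [520, 400], [110, 390]], dtype=np.float32)
    pts_thermal = np.array([[30, 25], [290, 28], [280, 230], [35, 225]], dtype=np.float32)
    H, _ = cv2.findHomography(pts_rgb, pts_thermal)

    def map_roi_to_thermal(center_rgb, radius_rgb, H):
        """Project an ROI centre and radius from RGB to thermal coordinates (hypothetical helper)."""
        pt = np.array([[center_rgb]], dtype=np.float32)        # shape (1, 1, 2)
        center_t = cv2.perspectiveTransform(pt, H)[0, 0]
        # Approximate the radius rescaling with the scale factor of the homography.
        scale = np.sqrt(abs(np.linalg.det(H[:2, :2])))
        return center_t, radius_rgb * scale

    center_t, radius_t = map_roi_to_thermal((320, 240), 40, H)
    print(center_t, radius_t)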


2024 Conference article Restricted
Radiomics-based reliable predictions of side effects after radiotherapy for prostate cancer
Del Corso G., Pachetti E., Buongiorno R., Rodrigues A. C., Germanese D., Pascali M. A., Almeida J., Rodrigues N., Tsiknakis M., Papanikolaou N., Regge D., Marias K., Consortium Procancer-I, Colantonio S.
This work offers insight into the effectiveness of probabilistic models, specifically those based on ensemble approximations, in predicting adverse side effects following radiotherapy for prostate cancer. We trained a random forest model on radiomic features from 134 T2-weighted Magnetic Resonance (MRI) images of the prostate gland to identify patients experiencing acute or chronic rectal and urinary toxicity (AUC-ROC ranging from 61.4% for endorectal coil acquisitions to 70.8% for the full dataset). We evaluated the reliability of the predictions using an ensemble approximation of simplified random forests obtained by an adaptive procedure of random subsampling of the training data. We used this reliability score to define a not-confident class and then recompute performance metrics more in accordance with a probabilistic approach. The outcomes we obtained (up to 7.9% increase in accuracy) indicate the approximated probabilistic models pledge more reliable predictions, thus being suitable for further investigation.
DOI: 10.1109/isbi56570.2024.10635233
Project(s): ProCAncer-I via OpenAIRE


See at: doi.org Restricted | IRIS Cnr Restricted | CNR IRIS Restricted
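
The reliability mechanism outlined above (an ensemble of simplified random forests fitted on random subsamples of the training data, whose disagreement flags not-confident predictions) can be sketched as follows. The subsample fraction, the ensemble size, and the reliability threshold are illustrative choices, not the values used in the paper.

    # Sketch: estimate prediction reliability from the spread of an ensemble of
    # random forests trained on random subsamples of the training set.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_train = rng.random((134, 100))                 # placeholder radiomic features
    y_train = rng.integers(0, 2, size=134)           # placeholder toxicity labels
    X_test = rng.random((30, 100))

    probas = []
    for seed in range(20):                           # ensemble of simplified forests
        idx = rng.choice(len(X_train), size=int(0.7 * len(X_train)), replace=False)
        rf = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=seed)
        rf.fit(X_train[idx], y_train[idx])
        probas.append(rf.predict_proba(X_test)[:, 1])

    probas = np.stack(probas)                        # shape (n_models, n_test)
    mean_p, spread = probas.mean(axis=0), probas.std(axis=0)
    not_confident = spread > 0.15                    # illustrative reliability threshold
    prediction = np.where(not_confident, -1, (mean_p > 0.5).astype(int))
    print(prediction)                                # -1 marks the not-confident class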


2024 Conference article Open Access OPEN
Facial landmark identification and data preparation can significantly improve the extraction of newborns' facial features
Del Corso G., Germanese D., Pascali M. A., Bardelli S., Cuttano A., Festante F., Guzzetta A., Rocchitelli L., Colantonio S.
Automatic extraction of facial features can provide valuable information on the health of newborns. However, determining an optimal facial features extraction strategy, especially for preterm infants, is a challenging task due to significant differences in facial morphology and frequent pose changes. In this work, we collected video data from 10 newborns (8 preterm, 2 at term, ≤ 4 weeks post term equivalent age), obtaining a novel dataset of over 41,000 labeled frames (Open Mouth, Closed Mouth, Tongue Protrusion). On the collected images, we applied a strong data preparation procedure (including mouth localization, cropping, and reorientation with models trained on adults), an adaptive image normalization strategy, and a proper data augmentation scheme. Thus, we trained a convolutional classifier with a large number of trainable parameters (i.e., ~1.2 million), coupled with multiple criteria to avoid overspecialization and consequent loss of generalization capability. This approach allows for highly reliable results (accuracy, precision, and recall over 92% on unseen data) and generalizes well to newborns with significantly different characteristics, even without including time-dependent information in the analysis. Therefore, these results prove that proper data preparation can narrow the gap between the classification of neonatal and adult facial features, allowing the integration of methods originally developed for adults into the complex setting of preterm infant analysis.
DOI: 10.1109/fg59268.2024.10581971


See at: IRIS Cnr Open Access | doi.org Restricted | CNR IRIS Restricted
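
The data-preparation pipeline summarised above (mouth localisation and cropping, adaptive per-image normalisation, and light augmentation before a compact convolutional classifier) can be sketched with torchvision transforms. The crop box, normalisation rule, and augmentation parameters below are assumptions for illustration; crop_mouth is a hypothetical helper, not the paper's code.

    # Sketch: crop the mouth region, apply light augmentation and adaptive
    # per-image normalisation before a small convolutional classifier.
    from torchvision import transforms
    import torchvision.transforms.functional as TF

    def crop_mouth(img, box):
        """Crop a (top, left, height, width) box around the detected mouth (hypothetical helper)."""
        top, left, h, w = box
        return TF.crop(img, top, left, h, w)

    augment = transforms.Compose([
        transforms.Resize((96, 96)),
        transforms.RandomRotation(10),                        # tolerate small pose changes
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    def adaptive_normalise(t):
        """Per-image normalisation: zero mean, unit variance over the crop."""
        return (t - t.mean()) / (t.std() + 1e-6)

    # Usage on a PIL image `img`, with a mouth box coming from a landmark detector:
    # x = adaptive_normalise(augment(crop_mouth(img, (60, 40, 80, 80)))).unsqueeze(0)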


2024 Conference article Open Access OPEN
From Covid-19 detection to cancer grading: how medical-AI is boosting clinical diagnostics and may improve treatment
Berti A., Buongiorno R., Carloni G., Caudai C., Conti F., Del Corso G., Germanese D., Moroni D., Pachetti E., Pascali M. A., Colantonio S.
The integration of artificial intelligence (AI) into medical imaging has guided an era of transformation in healthcare. This paper presents the research activities that a multidisciplinary research group within the Signals and Images Lab of the Institute of Information Science and Technologies of the National Research Council of Italy is carrying out to explore the great potential of AI in medical imaging. From the convolutional neural network-based segmentation of Covid-19 lung patterns to the radiomic signature for benign/malignant breast nodule discrimination, to the automatic grading of prostate cancer, this work highlights the paradigm shift that AI has brought to medical imaging, revolutionizing diagnosis and patient care.
Source: CEUR WORKSHOP PROCEEDINGS, vol. 3762, pp. 336-341. Naples, Italy, 29-30/05/2024

See at: ceur-ws.org Open Access | CNR IRIS Open Access | CNR IRIS Restricted


2024 Other Embargo
Optimizing medical image segmentation using a priori knowledge in attention mechanism-enriched convolutional neural networks
Buongiorno Rossana, Colantonio Sara, Germanese Danila, Ducange Pietro
In recent years, there has been a remarkable shift in medical image segmentation, driven by the intersection of Deep Learning (DL) and medical imaging technologies. This convergence has led to significant progress, fundamentally altering how medical image analysis is approached. DL methods, notably Convolutional Neural Networks (CNNs), have played a pivotal role in this transformation by revolutionizing the field of medical image segmentation. They facilitate the automatic extraction of features from raw image data, achieving unparalleled levels of accuracy and sensitivity. However, despite these advances, persistent challenges such as computational demands, data quality and availability, interpretability, and model generalization hinder the broad adoption of DL models in clinical environments. Moreover, while CNNs manage to autonomously extract and analyze image features with a good level of detail, they often struggle to identify regions in images that exhibit complexities that are challenging even to the human eye. To address these issues, attention and recurrence mechanisms have been introduced. The former enhances the network's ability to focus on relevant regions in the image while ignoring irrelevant background, whereas the latter studies long-range dependencies between different areas of the image to obtain broader contextual information. The first part of this doctoral thesis thoroughly examines and analyzes attention and recurrence mechanisms to determine their efficacy in binary medical image segmentation. Specifically, the objective was to identify the mechanism that strikes the optimal balance between resource utilization, data availability, and accurate segmentation outcomes for the given problem statement. The results of this analysis have shown that attention mechanisms improve segmentation accuracy by dynamically adjusting weights assigned to different image regions, and optimizing data requirements. However, effectively directing CNN's attention remained challenging in scenarios requiring a clear and precise differentiation between subtle variations crucial for accurate diagnoses. These challenges formed the basis for the second part of the thesis, which explores the integration of spatial priors into CNN architectures, specifically within a UNet-based framework enriched with the attention mechanism, namely the Attention UNet. More precisely, by incorporating prior knowledge about the spatial location of objects to be segmented, the proposed approach aims to enhance CNN effectiveness in the segmentation task. A new framework, called SPI-net, was designed for this purpose. SPI-net features an Attention-UNet as a backbone, an upstream block aimed at obtaining spatial prior, and an additional novel branch featuring long skip connections to inject nuanced context-aware information into the decoding pathway of the network. This improves its understanding of underlying structures and enhances segmentation accuracy. The experimental application and evaluation of SPI-net focused on the segmentation of COVID-19 infections, leveraging prior knowledge of disease spatial location to guide CNN attention. The results demonstrate the efficacy of SPI-net in accurately delineating disease patterns, outperforming traditional segmentation approaches. The comparative analysis highlights the limitations of conventional pre-processing operations, emphasizing the importance of integrating spatial priors into CNN architectures. 
Overall, this research contributes to the advancement of medical image segmentation by implicitly incorporating prior knowledge into CNNs, offering insights and empirical evidence to enhance segmentation accuracy and interpretability. The findings extend beyond COVID-19 segmentation, offering a promising framework for various medical imaging applications and contributing to the evolution of CNNs as reliable tools in healthcare diagnostics.

See at: CNR IRIS Restricted
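
The attention mechanism discussed in this thesis follows the additive attention-gate design popularised by Attention U-Net, in which a gating signal from the decoder re-weights the encoder skip-connection features. A minimal PyTorch sketch of such a gate is given below; the channel sizes are illustrative and this is not the SPI-net code.

    # Sketch of an additive attention gate in the Attention U-Net style.
    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        def __init__(self, ch_skip, ch_gate, ch_inter):
            super().__init__()
            self.w_x = nn.Conv2d(ch_skip, ch_inter, kernel_size=1)   # skip features
            self.w_g = nn.Conv2d(ch_gate, ch_inter, kernel_size=1)   # gating signal
            self.psi = nn.Conv2d(ch_inter, 1, kernel_size=1)

        def forward(self, x, g):
            # x: encoder skip features, g: decoder gating signal (same spatial size here)
            att = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
            return x * att                                            # re-weighted skip features

    x = torch.randn(1, 64, 32, 32)
    g = torch.randn(1, 128, 32, 32)
    print(AttentionGate(64, 128, 32)(x, g).shape)   # torch.Size([1, 64, 32, 32])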


2024 Journal article Open Access OPEN
Adaptive machine learning approach for importance evaluation of multimodal breast cancer radiomic features
Del Corso G., Germanese D., Caudai C., Anastasi G., Belli P., Formica A., Nicolucci A., Palma S., Pascali M. A., Pieroni S., Trombadori C., Colantonio S., Franchini M., Molinaro S.
Breast cancer holds the highest diagnosis rate among female tumors and is the leading cause of death among women. Quantitative analysis of radiological images shows the potential to address several medical challenges, including the early detection and classification of breast tumors. In the P.I.N.K study, 66 women were enrolled. Their paired Automated Breast Volume Scanner (ABVS) and Digital Breast Tomosynthesis (DBT) images, annotated with cancerous lesions, populated the first ABVS+DBT dataset. This enabled not only a radiomic analysis for the malignant vs. benign breast cancer classification, but also the comparison of the two modalities. For this purpose, the models were trained using a leave-one-out nested cross-validation strategy combined with a proper threshold selection approach. This approach provides statistically significant results even with medium-sized data sets. Additionally, it provides distributional variables of importance, thus identifying the most informative radiomic features. The analysis proved the predictive capacity of radiomic models even using a reduced number of features. Indeed, from tomography we achieved AUC-ROC 89.9% using 19 features and 92.1% using 7 of them; while from ABVS we attained an AUC-ROC of 72.3% using 22 features and 85.8% using only 3 features. Although the predictive power of DBT outperforms ABVS, when comparing the predictions at the patient level, only 8.7% of lesions are misclassified by both methods, suggesting a partial complementarity. Notably, promising results (AUC-ROC ABVS-DBT 71.8%-74.1%) were achieved using non-geometric features, thus opening the way to the integration of virtual biopsy in medical routine.
Source: JOURNAL OF IMAGING INFORMATICS IN MEDICINE, vol. 37 (issue 4), pp. 1642-1651
DOI: 10.1007/s10278-024-01064-3
Project(s): "Mortalità Zero - verso la personalizzazione degli interventi diagnostici"


See at: Journal of Imaging Informatics in Medicine Open Access | IRIS Cnr Open Access | CNR IRIS Restricted
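
A rough outline of the leave-one-out evaluation with threshold selection and distributional feature importances mentioned above is sketched here. The random-forest classifier, the Youden-style threshold rule, and the toy data are assumptions for illustration; the paper's exact nested procedure is not reproduced.

    # Sketch: leave-one-out cross-validation with out-of-fold probabilities,
    # ROC-based threshold selection, and a distribution of feature importances.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_curve, roc_auc_score
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.random((66, 30))                 # placeholder radiomic features
    y = rng.integers(0, 2, size=66)          # placeholder benign/malignant labels

    oof = np.zeros(len(y))
    importances = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        oof[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
        importances.append(clf.feature_importances_)

    fpr, tpr, thr = roc_curve(y, oof)
    best_thr = thr[np.argmax(tpr - fpr)]     # Youden-style threshold selection (assumption)
    print("AUC-ROC:", roc_auc_score(y, oof), "threshold:", best_thr)
    print("importance spread per feature:", np.std(importances, axis=0))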


2023 Contribution to book Open Access OPEN
Introduction to machine learning in medicine
Buongiorno R, Caudai C, Colantonio S, Germanese D
This chapter aimed to describe, as simply as possible, what Machine Learning is and how it can be used fruitfully in the medical field.
DOI: 10.1007/978-3-031-25928-9_3


See at: CNR IRIS Open Access | ISTI Repository Open Access | doi.org Restricted | CNR IRIS Restricted


2023 Conference article Open Access OPEN
Exploring the potentials and challenges of AI in supporting clinical diagnostics and remote assistance for the health and well-being of individuals
Berti A, Buongiorno R, Carloni G, Caudai C, Del Corso G, Germanese D, Pachetti E, Pascali Ma, Colantonio S
Innovative technologies powered by Artificial Intelligence have the big potential to support new models of care delivery, disease prevention and quality of life promotion. The ultimate goal is a paradigm shift towards more personalized, accessible, effective, and sustainable care and health systems. Nevertheless, despite the advances in the field over the last years, the adoption and deployment of AI technologies remains limited in clinical practice and real-world settings. This paper summarizes the activities that a multidisciplinary research group within the Signals and Images Lab of the Institute of Information Science and Technologies of the National Research Council of Italy is carrying out for exploring both the potential of AI in health and well-being as well as the challenges to their uptake in real-world settings.
Source: CEUR WORKSHOP PROCEEDINGS. Pisa, Italy, 29-30/05/2023
Project(s): ProCAncer-I via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2023 Journal article Open Access OPEN
Computer vision tasks for ambient intelligence in children's health
Germanese D, Colantonio S, Del Coco M, Carcagni P, Leo M
Computer vision is a powerful tool for healthcare applications since it can provide objective diagnosis and assessment of pathologies, not depending on clinicians' skills and experiences. It can also help speed-up population screening, reducing health care costs and improving the quality of service. Several works summarise applications and systems in medical imaging, whereas less work is devoted to surveying approaches for healthcare goals using ambient intelligence, i.e., observing individuals in natural settings. Even more, there is a lack of papers providing a survey of works exhaustively covering computer vision applications for children's health, which is a particularly challenging research area considering that most existing computer vision technologies have been trained and tested only on adults. The aim of this paper is then to survey, for the first time in the literature, the papers covering children's health-related issues by ambient intelligence methods and systems relying on computer vision.
Source: INFORMATION, vol. 14 (issue 10)
DOI: 10.3390/info14100548


See at: Information Open Access | CNR IRIS Open Access | www.mdpi.com Open Access | CNR IRIS Restricted


2023 Journal article Open Access OPEN
Enhancing COVID-19 CT image segmentation: a comparative study of attention and recurrence in UNet models
Buongiorno R, Del Corso G, Germanese D, Colligiani L, Python L, Romei C, Colantonio S
Imaging plays a key role in the clinical management of Coronavirus disease 2019 (COVID-19) as the imaging findings reflect the pathological process in the lungs. The visual analysis of High-Resolution Computed Tomography of the chest allows for the differentiation of parenchymal abnormalities of COVID-19, which are crucial to be detected and quantified in order to obtain an accurate disease stratification and prognosis. However, visual assessment and quantification represent a time-consuming task for radiologists. In this regard, tools for semi-automatic segmentation, such as those based on Convolutional Neural Networks, can facilitate the detection of pathological lesions by delineating their contour. In this work, we compared four state-of-the-art Convolutional Neural Networks based on the encoder-decoder paradigm for the binary segmentation of COVID-19 infections after training and testing them on 90 HRCT volumetric scans of patients diagnosed with COVID-19 collected from the database of the Pisa University Hospital. More precisely, we started from a basic model, the well-known UNet, then we added an attention mechanism to obtain an Attention-UNet, and finally we employed a recurrence paradigm to create a Recurrent-Residual UNet (R2-UNet). In the latter case, we also added attention gates to the decoding path of an R2-UNet, thus designing an R2-Attention UNet so as to make the feature representation and accumulation more effective. We compared them to gain understanding of both the cognitive mechanism that can lead a neural model to the best performance for this task and the good compromise between the amount of data, time, and computational resources required. We set up a five-fold cross-validation and assessed the strengths and limitations of these models by evaluating the performances in terms of Dice score, Precision, and Recall defined both on 2D images and on the entire 3D volume. From the results of the analysis, it can be concluded that Attention-UNet outperforms the other models by achieving the best performance of 81.93%, in terms of 2D Dice score, on the test set. Additionally, we conducted statistical analysis to assess the performance differences among the models. Our findings suggest that integrating the recurrence mechanism within the UNet architecture leads to a decline in the model's effectiveness for our particular application.
Source: JOURNAL OF IMAGING, vol. 9 (issue 12)
DOI: 10.3390/jimaging9120283


See at: CNR IRIS Open Access | ISTI Repository Open Access | www.mdpi.com Open Access | CNR IRIS Restricted
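
The evaluation described in the record above relies on the Dice score computed both per 2D slice and over the whole 3D volume. A small sketch of both variants on binary masks follows, with random NumPy arrays standing in for the predicted and ground-truth segmentations.

    # Sketch: Dice score on binary masks, per 2D slice and over the full 3D volume.
    import numpy as np

    def dice(pred, gt, eps=1e-7):
        """Dice = 2*|A and B| / (|A| + |B|) for binary masks of any shape."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)

    rng = np.random.default_rng(0)
    pred_vol = rng.random((90, 256, 256)) > 0.5     # placeholder predicted volume (slices, H, W)
    gt_vol = rng.random((90, 256, 256)) > 0.5       # placeholder ground-truth volume

    dice_2d = np.mean([dice(p, g) for p, g in zip(pred_vol, gt_vol)])   # averaged per slice
    dice_3d = dice(pred_vol, gt_vol)                                    # whole volume at once
    print(dice_2d, dice_3d)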


2023 Conference article Open Access OPEN
Exploring the potentials and challenges of Artificial Intelligence in supporting clinical diagnostics and remote assistance for the health and well-being of individuals
Berti A., Buongiorno R., Carloni G., Caudai C., Del Corso G., Germanese D., Pachetti E., Pascali M. A., Colantonio S.
Innovative technologies powered by Artificial Intelligence have the big potential to support new models of care delivery, disease prevention and quality of life promotion. The ultimate goal is a paradigm shift towards more personalized, accessible, effective, and sustainable care and health systems. Nevertheless, despite the advances in the field over the last years, the adoption and deployment of AI technologies remains limited in clinical practice and real-world settings. This paper summarizes the activities that a multidisciplinary research group within the Signals and Images Lab of the Institute of Information Science and Technologies of the National Research Council of Italy is carrying out for exploring both the potential of AI in health and well-being as well as the challenges to their uptake in real-world settings.
Source: CEUR WORKSHOP PROCEEDINGS, vol. 3486. Pisa, Italy, 29-30/05/2023

See at: ceur-ws.org Open Access | CNR IRIS Open Access | CNR IRIS Restricted


2022 Conference article Open Access OPEN
Augmented reality, artificial intelligence and machine learning in Industry 4.0: case studies at SI-Lab
Bruno A, Coscetti S, Leone Gr, Germanese D, Magrini M, Martinelli M, Moroni D, Pascali Ma, Pieri G, Reggiannini M, Tampucci M
In recent years, the impressive advances in artificial intelligence, computer vision, pervasive computing, and augmented reality made them rise to pillars of the fourth industrial revolution. This short paper aims to provide a brief survey of current use cases in factory applications and industrial inspection under active development at the Signals and Images Lab, ISTI-CNR, Pisa.
DOI: 10.5281/zenodo.6322733


See at: CNR IRIS Open Access | ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR IRIS Restricted


2022 Conference article Open Access OPEN
Exploring UAVs for structural health monitoring
Germanese D, Moroni D, Pascali Ma, Tampucci M, Berton A
The preservation and maintenance of architectural heritage on a large scale deserve the design, development, and exploitation of innovative methodologies and tools for sustainable Structural Heritage Monitoring (SHM). In the framework of the Moscardo Project (https://www.moscardo.it/), the role of Unmanned Aerial Vehicles (UAVs) in conjunction with a broader IoT platform for SHM has been investigated. UAVs resulted in significant aid for a safe, fast and routinely operated inspection of buildings in synergy with data collected in situ thanks to a network of pervasive wireless sensors (Bacco et al. 2020). The main idea has been to deploy an acquisition layer made of a network of low power sensors capable of collecting environmental parameters and building vibration modes. This layer has been connected to a service layer through gateways capable of performing data analysis and presenting aggregated results thanks to an integrated dashboard. In this architecture, the UAV has emerged as a particular network node for extending the acquisition layer by adding several imaging capabilities.

See at: CNR IRIS Open Access | ISTI Repository Open Access | www.dsiteconference.com Open Access | CNR IRIS Restricted


2022 Contribution to book Open Access OPEN
Artificial Intelligence for chest imaging against COVID-19: an insight into image segmentation methods
Buongiorno R, Germanese D, Colligiani L, Fanni Sc, Romei C, Colantonio S
The coronavirus disease 2019 (COVID-19), caused by the Severe Acute Respiratory Syndrome Coronavirus 2, emerged in late 2019 and soon developed as a pandemic leading to a world health crisis. Chest imaging examination plays a vital role in the clinical management and prognostic evaluation of COVID-19 since the imaging pathological findings reflect the inflammatory process of the lungs. Particularly, thanks to its highest sensitivity and resolution, Computer Tomography chest imaging serves well in the distinction of the different parenchymal patterns and manifestations of COVID-19. It is worth noting that detecting and quantifying such manifestations is a key step in evaluating disease impact and tracking its progression or regression over time. Nevertheless, the visual inspection or, even worse, the manual delimitation of such manifestations may be greatly time-consuming and overwhelming for radiologists, especially when pressed by the urgent needs of patient care. Image segmentation tools, powered by Artificial Intelligence, may sensibly reduce radiologists' workload as they may automate or, at least, facilitate the delineation of the pathological lesions and the other regions of interest for disease assessment. This delineation lays the basis for further diagnostic and prognostic analyses based on quantitative information extracted from the segmented lesions. This chapter overviews the Artificial Intelligence methods for the segmentation of chest Computed Tomography images. The focus is in particular on Deep Learning approaches, as these have lately become the mainstream approach to image segmentation. A novel method, leveraging attention-based learning, is presented and evaluated. Finally, a discussion of the potential, limitations, and still open challenges of the field concludes the chapter.
DOI: 10.1016/b978-0-323-90531-2.00008-4


See at: CNR IRIS Open Access | www.sciencedirect.com Open Access | doi.org Restricted | IRIS Cnr Restricted | CNR IRIS Restricted


2022 Other Open Access OPEN
SI-Lab annual research report 2021
Righi M, Leone G R, Carboni A, Caudai C, Colantonio S, Kuruoglu E E, Leporini B, Magrini M, Paradisi P, Pascali M A, Pieri G, Reggiannini M, Salerno E, Scozzari A, Tonazzini A, Fusco G, Galesi G, Martinelli M, Pardini F, Tampucci M, Berti A, Bruno A, Buongiorno R, Carloni G, Conti F, Germanese D, Ignesti G, Matarese F, Omrani A, Pachetti E, Papini O, Benassi A, Bertini G, Coltelli P, Tarabella L, Straface S, Salvetti O, Moroni D
The Signal & Images Laboratory is an interdisciplinary research group in computer vision, signal analysis, intelligent vision systems and multimedia data understanding. It is part of the Institute of Information Science and Technologies (ISTI) of the National Research Council of Italy (CNR). This report accounts for the research activities of the Signal and Images Laboratory of the Institute of Information Science and Technologies during the year 2021.
DOI: 10.32079/isti-ar-2022/003


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Conference article Open Access OPEN
UIP-net: a decoder-encoder CNN for the detection and quantification of usual interstitial pneumoniae pattern in lung CT scan images
Buongiorno R, Germanese D, Romei C, Tavanti L, De Liperi A, Colantonio S
A key step of the diagnosis of Idiopathic Pulmonary Fibrosis (IPF) is the examination of high-resolution computed tomography images (HRCT). IPF exhibits a typical radiological pattern, named Usual Interstitial Pneumoniae (UIP) pattern, which can be detected in non-invasive HRCT investigations, thus avoiding surgical lung biopsy. Unfortunately, the visual recognition and quantification of UIP pattern can be challenging even for experienced radiologists due to the poor inter and intra-reader agreement. This study aimed to develop a tool for the semantic segmentation and the quantification of UIP pattern in patients with IPF using a deep-learning method based on a Convolutional Neural Network (CNN), called UIP-net. The proposed CNN, based on an encoder-decoder architecture, takes as input a thoracic HRCT image and outputs a binary mask for the automatic discrimination between UIP pattern and healthy lung parenchyma. To train and evaluate the CNN, a dataset of 5000 images, derived by 20 CT scans of different patients, was used. The network performance yielded 96.7% BF-score and 85.9% sensitivity. Once trained and tested, the UIP-net was used to obtain the segmentations of other 60 CT scans of different patients to estimate the volume of lungs affected by the UIP pattern. The measurements were compared with those obtained using the reference software for the automatic detection of UIP pattern, named Computer Aided Lungs Informatics for Pathology Evaluation and Rating (CALIPER), through the Bland-Altman plot. The network performance assessed in terms of both BF-score and sensitivity on the test-set and resulting from the comparison with CALIPER demonstrated that CNNs have the potential to reliably detect and quantify pulmonary disease in order to evaluate its progression and become a supportive tool for radiologists.
DOI: 10.1007/978-3-030-68763-2_30


See at: CNR IRIS Open Access | link.springer.com Open Access | ISTI Repository Open Access | CNR IRIS Restricted | link.springer.com Restricted
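
The comparison with CALIPER mentioned in the record above uses a Bland-Altman analysis of the UIP-affected volumes. A minimal sketch of the bias and limits-of-agreement computation is shown below, with made-up volume measurements standing in for the outputs of the two tools.

    # Sketch: Bland-Altman agreement between two volume measurements of the same scans.
    import numpy as np

    rng = np.random.default_rng(0)
    vol_cnn = rng.normal(1200, 300, size=60)             # placeholder UIP volumes (ml) from UIP-net
    vol_caliper = vol_cnn + rng.normal(0, 80, size=60)   # placeholder volumes from CALIPER

    mean_pair = (vol_cnn + vol_caliper) / 2
    diff = vol_cnn - vol_caliper
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                        # 95% limits of agreement
    print(f"bias = {bias:.1f} ml, limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}] ml")
    # A Bland-Altman plot would scatter `diff` against `mean_pair` with these three reference lines.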


2021 Other Open Access OPEN
SI-Lab Annual Research Report 2020
Leone Gr, Righi M, Carboni A, Caudai C, Colantonio S, Kuruoglu Ee, Leporini B, Magrini M, Paradisi P, Pascali Ma, Pieri G, Reggiannini M, Salerno E, Scozzari A, Tonazzini A, Fusco G, Galesi G, Martinelli M, Pardini F, Tampucci M, Buongiorno R, Bruno A, Germanese D, Matarese F, Coscetti S, Coltelli P, Jalil B, Benassi A, Bertini G, Salvetti O, Moroni D
The Signal & Images Laboratory (http://si.isti.cnr.it/) is an interdisciplinary research group in computer vision, signal analysis, smart vision systems and multimedia data understanding. It is part of the Institute for Information Science and Technologies of the National Research Council of Italy. This report accounts for the research activities of the Signal and Images Laboratory of the Institute of Information Science and Technologies during the year 2020.
DOI: 10.32079/isti-ar-2021/001


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Conference article Open Access OPEN
A deep learning approach for hepatic steatosis estimation from ultrasound imaging
Colantonio S, Salvati A, Caudai C, Bonino F, De Rosa L, Pascali Ma, Germanese D, Brunetto Mr, Faita F
This paper proposes a simple convolutional neural model as a novel method to predict the level of hepatic steatosis from ultrasound data. Hepatic steatosis is the major histologic feature of non-alcoholic fatty liver disease (NAFLD), which has become a major global health challenge. Recently a new definition for FLD, that takes into account the risk factors and clinical characteristics of subjects, has been suggested; the proposed criteria for Metabolic Dysfunction-Associated Fatty Liver Disease (MAFLD) are based on histological (biopsy), imaging or blood biomarker evidence of fat accumulation in the liver (hepatic steatosis), in subjects with overweight/obesity or presence of type 2 diabetes mellitus. In lean or normal weight, non-diabetic individuals with steatosis, MAFLD is diagnosed when at least two metabolic abnormalities are present. Ultrasound examinations are the most used technique to non-invasively identify liver steatosis in a screening setting. However, the diagnosis is operator dependent, as accurate image processing techniques have not yet entered the diagnostic routine. In this paper, we discuss the adoption of simple convolutional neural models to estimate the degree of steatosis from echographic images in accordance with the state-of-the-art magnetic resonance spectroscopy measurements (expressed as percentage of the estimated liver fat). More than 22,000 ultrasound images were used to train three networks, and results show promising performances in our study (150 subjects).
Source: COMMUNICATIONS IN COMPUTER AND INFORMATION SCIENCE (PRINT), pp. 703-714. Rhodes, Greece, 29/09/2021-01/10/2021
DOI: 10.1007/978-3-030-88113-9_57


See at: CNR IRIS Open Access | link.springer.com Open Access | CNR IRIS Restricted
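
The "simple convolutional neural model" referred to above maps an ultrasound image to an estimated liver-fat percentage. A generic regression CNN of that kind is sketched in PyTorch below; the layer sizes and input resolution are assumptions, not the architecture used in the paper.

    # Sketch: a small CNN regressing the liver-fat percentage from a B-mode ultrasound image.
    import torch
    import torch.nn as nn

    class SteatosisRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)    # single output: estimated fat fraction (%)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = SteatosisRegressor()
    x = torch.randn(4, 1, 224, 224)         # batch of grayscale ultrasound crops
    print(model(x).shape)                   # torch.Size([4, 1])
    # Training would minimise an MSE loss against spectroscopy-derived fat percentages.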


2020 Other Open Access OPEN
Augmented reality and intelligent systems in Industry 4.0
Benassi A, Carboni A, Colantonio S, Coscetti S, Germanese D, Jalil B, Leone R, Magnavacca J, Magrini M, Martinelli M, Matarese F, Moroni D, Paradisi P, Pardini F, Pascali M, Pieri G, Reggiannini M, Righi M, Salvetti O, Tampucci M
Augmented reality and intelligent systems in Industry 4.0 - ARTES presentation
DOI: 10.5281/zenodo.4277713
DOI: 10.5281/zenodo.4277712


See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted