23 result(s)
2021 Journal article Open Access
Psycho-acoustics inspired automatic speech recognition
Coro G., Massoli F. V., Origlia A., Cutugno F.
Understanding the human spoken language recognition process is still a distant scientific goal. Nowadays, commercial automatic speech recognisers (ASRs) achieve high performance at recognising clean speech, but their approaches are poorly related to human speech recognition. They commonly process the phonetic structure of speech while neglecting the supra-segmental and syllabic tracts integral to human speech recognition. As a result, these ASRs achieve low performance on spontaneous speech and require enormous costs to build up phonetic and pronunciation models and capture the large variability of human speech. This paper presents a novel ASR that addresses these issues and questions conventional ASR approaches. It uses alternative acoustic models and an exhaustive decoding algorithm to process speech at a syllabic temporal scale (100-250 ms) through a multi-temporal approach inspired by psycho-acoustic studies. Performance comparison on the recognition of spoken Italian numbers (from 0 to 1 million) demonstrates that our approach is cost-effective, outperforms standard phonetic models, and reaches state-of-the-art performance.
Source: Computers & electrical engineering (Print) 93 (2021). doi:10.1016/j.compeleceng.2021.107238
DOI: 10.1016/j.compeleceng.2021.107238


See at: ISTI Repository Open Access | www.sciencedirect.com Restricted | CNR ExploRA


2020 Journal article Open Access
Cross-resolution learning for face recognition
Massoli F. V., Amato G., Falchi F.
Convolutional Neural Network models have reached extremely high performance on the Face Recognition task. Mostly used datasets, such as VGGFace2, focus on gender, pose, and age variations in the attempt of balancing them to empower models to better generalize to unseen data. Nevertheless, image resolution variability is not usually discussed, which may lead to a resizing of 256 pixels. While specific datasets for very low-resolution faces have been proposed, less attention has been paid to the task of cross-resolution matching. Hence, the discrimination power of a neural network might seriously degrade in such a scenario. Surveillance systems and forensic applications are particularly susceptible to this problem since, in these cases, it is common that a low-resolution query has to be matched against higher-resolution galleries. Although it is always possible to either increase the resolution of the query image or to reduce the size of the gallery (less frequently), to the best of our knowledge, extensive experimentation of cross-resolution matching was missing in the recent deep learning-based literature. In the context of low- and cross-resolution Face Recognition, the contribution of our work is fourfold: i) we proposed a training procedure to fine-tune a state-of-the-art model to empower it to extract resolution-robust deep features; ii) we conducted an extensive test campaign by using high-resolution datasets (IJB-B and IJB-C) and surveillance-camera-quality datasets (QMUL-SurvFace, TinyFace, and SCface) showing the effectiveness of our algorithm to train a resolution-robust model; iii) even though our main focus was cross-resolution Face Recognition, by using our training algorithm we also improved upon state-of-the-art model performances considering low-resolution matches; iv) we showed that our approach could be more effective than preprocessing faces with super-resolution techniques. The Python code of the proposed method will be available at https://github.com/fvmassoli/cross-resolution-face-recognition. An illustrative code sketch of the resolution-augmentation idea appears after this entry.
Source: Image and vision computing 99 (2020). doi:10.1016/j.imavis.2020.103927
DOI: 10.1016/j.imavis.2020.103927
DOI: 10.48550/arxiv.1912.02851
Project(s): AI4EU via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | Image and Vision Computing Open Access | ISTI Repository Open Access | Image and Vision Computing Restricted | doi.org Restricted | www.sciencedirect.com Restricted | CNR ExploRA
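The fine-tuning procedure referenced in the entry above is detailed in the paper and in the linked repository. Purely as an illustration of the general idea of exposing a network to many resolutions during fine-tuning, the following minimal PyTorch sketch down-samples a batch to a random side length and rescales it back to the model's input size; the function name, probability, and size range are assumptions, not the authors' settings.

```python
import random
import torch
import torch.nn.functional as F

def random_resolution(batch: torch.Tensor,
                      min_side: int = 8,
                      max_side: int = 224,
                      p: float = 0.5) -> torch.Tensor:
    """Illustrative augmentation: with probability p, down-sample a batch of
    face crops to a random resolution and scale it back to the original size,
    simulating low-resolution probes (values are assumptions)."""
    if random.random() > p:
        return batch
    side = random.randint(min_side, max_side)
    small = F.interpolate(batch, size=(side, side),
                          mode="bilinear", align_corners=False)
    return F.interpolate(small, size=batch.shape[-2:],
                         mode="bilinear", align_corners=False)

# Hypothetical usage inside a standard fine-tuning loop:
#   embeddings = model(random_resolution(images))
```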


2020 Journal article Embargo
Cross-resolution face recognition adversarial attacks
Massoli F. V., Falchi F., Amato G.
Face Recognition is among the best examples of computer vision problems where the supremacy of deep learning techniques compared to standard ones is undeniable. Unfortunately, it has been shown that they are vulnerable to adversarial examples - input images to which a human-imperceptible perturbation is added to lead a learning model to output a wrong prediction. Moreover, in applications such as biometric systems and forensics, cross-resolution scenarios are easily met, with a non-negligible impact on the recognition performance and the adversary's success. Although the existence of such vulnerabilities sets a harsh limit to the spread of deep learning-based face recognition systems to real-world applications, a comprehensive analysis of their behavior when threatened in a cross-resolution setting is missing in the literature. In this context, we posit our study, where we harness several of the strongest adversarial attacks against deep learning-based face recognition systems considering the cross-resolution domain. To craft adversarial instances, we exploit attacks based on three different Lp metrics, and we study the resilience of the models across resolutions. We then evaluate the performance of the systems against the face identification protocol, both open- and closed-set. In our study, we find that the deep representation attacks represent a much more dangerous menace to a face recognition system than the ones based on the classification output, independently of the metric used. Furthermore, we notice that the input image's resolution has a non-negligible impact on an adversary's success in deceiving a learning model. Finally, by comparing the performance of the threatened networks under analysis, we show how they can benefit from a cross-resolution training approach in terms of resilience to adversarial attacks. An illustrative sketch of a deep-representation attack appears after this entry.
Source: Pattern recognition letters 140 (2020): 222–229. doi:10.1016/j.patrec.2020.10.008
DOI: 10.1016/j.patrec.2020.10.008
Project(s): AI4EU via OpenAIRE


See at: Pattern Recognition Letters Restricted | www.sciencedirect.com Restricted | CNR ExploRA
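As a generic, hedged illustration of the deep-representation family of attacks mentioned in the entry above (not the specific attacks used in the study), the following PyTorch sketch applies an L-infinity projected-gradient perturbation that pushes a face embedding toward a chosen target embedding. The model interface, step size, and budget are assumptions.

```python
import torch
import torch.nn.functional as F

def deep_feature_pgd(model, image, target_feat,
                     eps=8 / 255, alpha=2 / 255, steps=10):
    """Illustrative L-infinity PGD on deep representations: perturb `image`
    so that the extracted feature moves toward `target_feat`. Parameter
    values and the loss choice are assumptions, not the paper's attack."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.mse_loss(model(adv), target_feat)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()               # move the feature toward the target
            adv = image + (adv - image).clamp(-eps, eps)  # project onto the L-infinity ball
            adv = adv.clamp(0.0, 1.0)                     # keep a valid image
        adv = adv.detach()
    return adv
```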


2020 Journal article Open Access
Detection of Face Recognition Adversarial Attacks
Massoli F. V., Carrara F., Amato G., Falchi F.
Deep Learning methods have become state-of-the-art for solving tasks such as Face Recognition (FR). Unfortunately, despite their success, it has been pointed out that these learning models are exposed to adversarial inputs - images to which an amount of noise imperceptible to humans is added to maliciously fool a neural network - thus limiting their adoption in sensitive real-world applications. While it is true that an enormous effort has been spent to train robust models against this type of threat, adversarial detection techniques have recently started to draw attention within the scientific community. The advantage of using a detection approach is that it does not require re-training any model; thus, it can be added to any system. In this context, we present our work on adversarial detection in forensics, mainly focused on detecting attacks against FR systems in which the learning model is typically used only as a features extractor. Thus, training a more robust classifier might not be enough to counteract the adversarial threats. In this frame, the contribution of our work is four-fold: (i) we test our proposed adversarial detection approach against classification attacks, i.e., adversarial samples crafted to fool an FR neural network acting as a classifier; (ii) using a k-Nearest Neighbor (k-NN) algorithm as a guide, we generate deep features attacks against an FR system based on a neural network acting as a features extractor, followed by a similarity-based procedure which returns the query identity; (iii) we use the deep features attacks to fool an FR system on the 1:1 face verification task, and we show their superior effectiveness with respect to classification attacks in evading such type of system; (iv) we use the detectors trained on the classification attacks to detect the deep features attacks, thus showing that such an approach is generalizable to different classes of offensives.
Source: Computer vision and image understanding (Print) 202 (2020). doi:10.1016/j.cviu.2020.103103
DOI: 10.1016/j.cviu.2020.103103
DOI: 10.48550/arxiv.1912.02918
Project(s): AI4EU via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | Computer Vision and Image Understanding Open Access | ISTI Repository Open Access | ZENODO Open Access | Computer Vision and Image Understanding Restricted | doi.org Restricted | www.sciencedirect.com Restricted | CNR ExploRA


2020 Conference article Open Access
Cross-resolution deep features based image search
Massoli F. V., Falchi F., Gennaro C., Amato G.
Deep Learning models proved to be able to generate highly discriminative image descriptors, named deep features, suitable for similarity search tasks such as Person Re-Identification and Image Retrieval. Typically, these models are trained by employing high-resolution datasets, therefore reducing the reliability of the produced representations when low-resolution images are involved. The similarity search task becomes even more challenging in cross-resolution scenarios, i.e., when a low-resolution query image has to be matched against a database containing descriptors generated from images at different, and usually high, resolutions. To solve this issue, we proposed a deep learning-based approach by which we empowered a ResNet-like architecture to generate resolution-robust deep features. Once trained, our models were able to generate image descriptors less brittle to resolution variations, thus being useful to fulfill a similarity search task in cross-resolution scenarios. To assess their performance, we used synthetic as well as natural low-resolution images. An immediate advantage of our approach is that there is no need for Super-Resolution techniques, thus avoiding the need to synthesize queries at higher resolutions.
Source: Similarity Search and Applications, pp. 352–360, Copenhagen, Denmark, 20/09/2020-2/10/2020
DOI: 10.1007/978-3-030-60936-8_27


See at: link.springer.com Open Access | ISTI Repository Open Access | doi.org Restricted | CNR ExploRA


2020 Conference article Open Access
KNN-guided Adversarial Attacks
Massoli F. V., Falchi F., Amato G.
In the last decade, we have witnessed a renaissance of Deep Learning models. Nowadays, they are widely used in industrial as well as scientific fields, and noticeably, these models reached super-human performances on specific tasks such as image classification. Unfortunately, despite their great success, it has been shown that they are vulnerable to adversarial attacks - images to which a specific amount of noise imperceptible to human eyes has been added to lead the model to a wrong decision. Typically, these malicious images are forged pursuing a misclassification goal. However, when considering the task of Face Recognition (FR), this principle might not be enough to fool the system. Indeed, in the context of FR, the deep models are generally used merely as features extractors while the final task of recognition is accomplished, for example, by similarity measurements. Thus, crafting adversarials to fool the classifier might not be sufficient to fool the overall FR pipeline. Starting from this observation, we proposed to use a k-Nearest Neighbour algorithm as guidance to craft adversarial attacks against an FR system. In our study, we showed how this kind of attack could be more threatening for an FR system than misclassification-based ones, considering both the targeted and untargeted attack strategies. An illustrative sketch of the guidance idea appears after this entry.
Source: SEBD 2020. Italian Symposium on Advanced Database Systems, pp. 302–309, Villasimius, Sud Sardegna, Italy, 21-24/6/2020

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA
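The core idea summarized above is to let a k-NN over gallery descriptors guide the attack instead of a classifier's output. Purely as an illustration of that guidance step (not the authors' procedure), the sketch below picks, among the gallery descriptors of a desired impostor identity, the k closest to the query and uses their mean as the feature-space target of the attack; the names and the choice of the mean are assumptions.

```python
import numpy as np

def knn_guided_target(query_feat, gallery_feats, gallery_ids, impostor_id, k=5):
    """Illustrative k-NN guidance: return a feature-space target built from
    the k gallery descriptors of `impostor_id` closest to the query.
    The attack itself would then push the query's deep feature toward it."""
    candidates = gallery_feats[gallery_ids == impostor_id]
    dists = np.linalg.norm(candidates - query_feat, axis=1)
    nearest = candidates[np.argsort(dists)[:k]]
    return nearest.mean(axis=0)
```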


2020 Report Unknown
WeAreClouds@Lucca - D1.1 Analisi del territorio
Massoli F. V., Carboni A., Moroni D., Falchi F.
Deliverable D1.1 of the WeAreClouds@Lucca project. Analysis of the potential uses of the technologies developed, and under development, at CNR in relation to the cameras deployed in the territory of the Municipality of Lucca.
Source: ISTI Project report, WeAreClouds@Lucca, D1.1, 2020

See at: CNR ExploRA


2020 Report Open Access
WeAreClouds@Lucca - D1.2 Stato dell'arte scientifico
Massoli F. V., Carboni A., Moroni D., Falchi F.
Deliverable D1.2 of the WeAreClouds@Lucca project: scientific state of the art.
Source: ISTI Project report, WeAreClouds@Lucca, D1.2, 2020

See at: ISTI Repository Open Access | CNR ExploRA


2021 Conference article Open Access
A multi-resolution training for expression recognition in the wild
Massoli F. V., Cafarelli D., Amato G., Falchi F.
Facial expressions play a fundamental role in human communication, and their study, which represents a multidisciplinary subject, embraces a great variety of research fields, e.g., from psychology to computer science, among others. Concerning Deep Learning, the recognition of facial expressions is a task named Facial Expression Recognition (FER). With such an objective, the goal of a learning model is to classify human emotions starting from a facial image of a given subject. Typically, face images are acquired by cameras that have, by nature, different characteristics, such as the output resolution. Moreover, other circumstances might involve cameras placed far from the observed scene, thus obtaining faces with very low resolutions. Therefore, since the FER task might involve analyzing face images that can be acquired with heterogeneous sources, it is plausible to expect that resolution plays a vital role. In such a context, we propose a multi-resolution training approach to solve the FER task. We ground our intuition on the observation that, often, face images are acquired at different resolutions. Thus, directly considering such a property while training a model can help achieve higher performance on recognizing facial expressions. To this aim, we use a ResNet-like architecture, equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset. Since a test set is not available, we conduct tests and model selection by employing the validation set only, on which we achieve more than 90% accuracy on classifying the seven expressions that the dataset comprises.
Source: SEBD 2021 - Italian Symposium on Advanced Database Systems, pp. 427–433, Pizzo Calabro, 5-9/9/2021
Project(s): AI4EU via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2022 Journal article Open Access
A leap among quantum computing and quantum neural networks: a survey
Massoli F. V., Vadicamo L., Amato G., Falchi F.
In recent years, Quantum Computing witnessed massive improvements in terms of available resources and algorithm development. The ability to harness quantum phenomena to solve computational problems is a long-standing dream that has drawn the scientific community's interest since the late 80s. In such a context, we propose our contribution. First, we introduce basic concepts related to quantum computations, and then we explain the core functionalities of technologies that implement the Gate Model and Adiabatic Quantum Computing paradigms. Finally, we gather, compare and analyze the current state of the art concerning Quantum Perceptrons and Quantum Neural Networks implementations.
Source: ACM computing surveys (2022). doi:10.1145/3529756
DOI: 10.1145/3529756
DOI: 10.48550/arxiv.2107.03313
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | ISTI Repository Open Access | ACM Computing Surveys Restricted | doi.org Restricted | CNR ExploRA


2021 Journal article Open Access
MOCCA: multilayer one-class classification for anomaly detection
Massoli F. V., Falchi F., Kantarci A., Akti S., Ekenel H. K., Amato G.
Anomalies are ubiquitous in all scientific fields and can express an unexpected event due to incomplete knowledge about the data distribution or an unknown process that suddenly comes into play and distorts the observations. Usually, due to such events' rarity, to train deep learning (DL) models on the anomaly detection (AD) task, scientists only rely on "normal" data, i.e., non-anomalous samples, thus letting the neural network infer the distribution underlying the input data. In such a context, we propose a novel framework, named multilayer one-class classification (MOCCA), to train and test DL models on the AD task. Specifically, we applied our approach to autoencoders. A key novelty in our work stems from the explicit optimization of the intermediate representations for the task at hand. Indeed, differently from commonly used approaches that consider a neural network as a single computational block, i.e., using the output of the last layer only, MOCCA explicitly leverages the multilayer structure of deep architectures. Each layer's feature space is optimized for AD during training, while in the test phase, the deep representations extracted from the trained layers are combined to detect anomalies. With MOCCA, we split the training process into two steps. First, the autoencoder is trained on the reconstruction task only. Then, we only retain the encoder, tasked with minimizing the L2 distance between the output representation and a reference point, the anomaly-free training data centroid, at each considered layer. Subsequently, we combine the deep features extracted at the various trained layers of the encoder model to detect anomalies at inference time. To assess the performance of the models trained with MOCCA, we conduct extensive experiments on publicly available datasets, namely CIFAR10, MVTec AD, and ShanghaiTech. We show that our proposed method reaches comparable or superior performance to state-of-the-art approaches available in the literature. Finally, we provide a model analysis to give insights regarding the benefits of our training procedure. An illustrative sketch of the per-layer anomaly score appears after this entry.
Source: IEEE Transactions on Neural Networks and Learning Systems 33 (2021): 2313–2323. doi:10.1109/TNNLS.2021.3130074
DOI: 10.1109/tnnls.2021.3130074
DOI: 10.48550/arxiv.2012.12111
Project(s): AI4EU via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | IEEE Transactions on Neural Networks and Learning Systems Open Access | ISTI Repository Open Access | IEEE Transactions on Neural Networks and Learning Systems Restricted | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
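The abstract above specifies the quantity minimized during training: the L2 distance, at each selected encoder layer, between the representation and the centroid of the anomaly-free training data. As a rough, hedged illustration of how such per-layer distances could be combined into a test-time anomaly score (the plain sum is an assumption, not necessarily the paper's combination rule), consider the following sketch.

```python
import torch

def layerwise_anomaly_score(layer_feats, layer_centroids):
    """Illustrative MOCCA-style score: sum, over the selected encoder layers,
    of the squared L2 distance between each layer's (flattened) representation
    and the anomaly-free training-data centroid for that layer."""
    score = torch.zeros(())
    for feat, centroid in zip(layer_feats, layer_centroids):
        score = score + torch.sum((feat.flatten() - centroid.flatten()) ** 2)
    return score
```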


2020 Conference article Open Access
Multi-Resolution Face Recognition with Drones
Amato G., Falchi F., Gennaro C., Massoli F. V., Vairo C.
Smart cameras have recently seen a large diffusion and represent a low-cost solution for improving public security in many scenarios. Moreover, they are light enough to be lifted by a drone. Face recognition enabled by drones equipped with smart cameras has already been reported in the literature. However, the use of drones generally imposes tighter constraints than other facial recognition scenarios. First, weather conditions, such as the presence of wind, pose a severe limit on image stability. Moreover, the distance at which drones fly is typically much higher than that of fixed ground cameras, which inevitably translates into a degraded resolution of the face images. Furthermore, the drones' operational altitudes usually require the use of optical zoom, thus amplifying the harmful effects of their movements. For all these reasons, in drone scenarios, image degradation strongly affects the behavior of face detection and recognition systems. In this work, we studied the performance of deep neural networks for face re-identification specifically designed for low-quality images and applied them to a drone scenario using a publicly available dataset known as DroneSURF.
Source: 3rd International Conference on Sensors, Signal and Image Processing, pp. 13–18, Prague, Czech Republic (virtual), 23-25/10/2020
DOI: 10.1145/3441233.3441237


See at: ISTI Repository Open Access | dl.acm.org Restricted | CNR ExploRA


2019 Conference article Open Access
Improving Multi-scale Face Recognition Using VGGFace2
Massoli F. V., Amato G., Falchi F., Gennaro C., Vairo C.
Convolutional neural networks have reached extremely high performances on the Face Recognition task. These models are commonly trained by using high-resolution images and, for this reason, their discrimination ability is usually degraded when they are tested against low-resolution images. Thus, Low-Resolution Face Recognition remains an open challenge for deep learning models. Such a scenario is of particular interest for surveillance systems, in which it usually happens that a low-resolution probe has to be matched with higher-resolution galleries. This task can be especially hard to accomplish since the probe can have resolutions as low as 8, 16 and 24 pixels per side, while the typical input of state-of-the-art neural networks is 224 pixels per side. In this paper, we describe the training campaign we used to fine-tune a ResNet-50 architecture, with Squeeze-and-Excitation blocks, on the tasks of very low and mixed resolution face recognition. For the training process we used the VGGFace2 dataset, and we then tested the performance of the final model on the IJB-B dataset; in particular, we tested the neural network on the 1:1 verification task. In our experiments we considered two different scenarios: (1) probe and gallery with the same resolution; (2) probe and gallery with mixed resolutions. Experimental results show that with our approach it is possible to improve upon state-of-the-art models' performance on the low and mixed resolution face recognition tasks with a negligible loss at very high resolutions.
Source: BioFor Workshop on Recent Advances in Digital Security: Biometrics and Forensics, pp. 21–29, Trento, Berlino, 8/9/2019
DOI: 10.1007/978-3-030-30754-7_3


See at: ISTI Repository Open Access | doi.org Restricted | link.springer.com Restricted | CNR ExploRA


2019 Conference article Open Access
Face Verification and Recognition for Digital Forensics and Information Security
Amato G., Falchi F., Gennaro C., Massoli F. V., Passalis N., Tefas A., Trivilini A., Vairo C.
In this paper, we present an extensive evaluation of face recognition and verification approaches performed by the European COST Action MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTI-FORESEE). The aim of the study is to evaluate various face recognition and verification methods, ranging from methods based on facial landmarks to state-of-the-art off-the-shelf pre-trained Convolutional Neural Networks (CNN), as well as CNN models directly trained for the task at hand. To fulfill this objective, we carefully designed and implemented a realistic data acquisition process, corresponding to a typical face verification setup, and collected a challenging dataset to evaluate the real-world performance of the aforementioned methods. Apart from verifying the effectiveness of deep learning approaches in a specific scenario, several important limitations are identified and discussed throughout the paper, providing valuable insight for future research directions in the field.
Source: 7th International Symposium on Digital Forensics and Security (ISDFS 2019), Barcelos, Portugal, 10-12/6/2019
DOI: 10.1109/isdfs.2019.8757511


See at: ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA


2019 Conference article Open Access
CNN-based system for low resolution face recognition
Massoli F. V., Amato G., Falchi F., Gennaro C., Vairo C.
Since the publication of AlexNet in 2012, Deep Convolutional Neural Network models have become the most promising and powerful technique for image representation. Specifically, the ability of their inner layers to extract high-level abstractions of the input images, called deep feature vectors, has been employed. Such vectors live in a high-dimensional space in which an inner product, and thus a metric, is defined. The latter allows similarity measurements to be carried out among them. This property is particularly useful for accomplishing tasks such as Face Recognition. Indeed, to identify a person it is possible to compare deep features, used as face descriptors, from different identities by means of their similarities. Surveillance systems, among others, utilize this technique. To be precise, deep features extracted from probe images are matched against a database of descriptors from known identities. A critical point is that the database typically contains features extracted from high-resolution images, while the probes, taken by surveillance cameras, can be at a very low resolution. Therefore, it is mandatory to have a neural network that is able to extract deep features that are robust with respect to resolution variations. In this paper we discuss a CNN-based pipeline that we built for the task of Face Recognition among images with different resolutions. The entire system relies on the ability of a CNN to extract deep features that can be used to perform a similarity search in order to fulfill the face recognition task. An illustrative sketch of the similarity-search step appears after this entry.
Source: 27th Italian Symposium on Advanced Database Systems, Castiglione della Pescaia (Grosseto), Italy, June 16th to 19th, 2019

See at: ISTI Repository Open Access | CNR ExploRA
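The pipeline described above matches probe descriptors against a gallery of known identities via similarity search. As a minimal, hedged sketch of that final step only (cosine similarity over L2-normalized descriptors is an assumption about the metric; the feature extractor itself is out of scope):

```python
import numpy as np

def identify(probe_feat, gallery_feats, gallery_ids):
    """Illustrative identification step: compare an L2-normalized probe
    descriptor against a gallery of descriptors from known identities and
    return the best-matching identity with its cosine similarity."""
    probe = probe_feat / np.linalg.norm(probe_feat)
    gallery = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = gallery @ probe
    best = int(np.argmax(sims))
    return gallery_ids[best], float(sims[best])
```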


2020 Report Unknown
WeAreClouds@Lucca - D1.3 Definizione dei requisiti
Carboni A., Massoli F. V., Moroni D., Leone G. R., Falchi F.
Deliverable D1.3 of the WeAreClouds@Lucca project: definition of requirements.
Source: ISTI Project report, WeAreClouds@Lucca, D1.3, 2020

See at: CNR ExploRA


2019 Conference article Open Access
Intelligenza Artificiale e Analisi Visuale per la Cyber Security
Vairo C., Amato G., Ciampi L., Falchi F., Gennaro C., Massoli F. V.
In recent years, Cyber Security has taken on an ever broader connotation, going beyond the notion of mere computer-system security and also including surveillance and security in a broad sense, exploiting the latest technologies such as artificial intelligence. This contribution presents the main research activities and some of the technologies used and developed by the AIMIR research group of ISTI-CNR, and provides an overview of the research projects, both past and currently active, in which these artificial intelligence technologies are used to develop applications and services for Cyber Security.
Source: Ital-IA, Roma, 18-19/3/2019

See at: ISTI Repository Open Access | www.ital-ia.it Open Access | CNR ExploRA


2022 Conference article Open Access
AIMH Lab for Cybersecurity
Vairo C., Coccomini D. A., Falchi F., Gennaro C., Massoli F. V., Messina N., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Cybersecurity. We discuss our active research fields, their applications and challenges. We focus on face recognition and on the detection of adversarial examples and deep fakes. We also present our activities on the detection of persuasion techniques combining image and text analysis.
Source: Ital-IA 2022 - Workshop su AI per Cybersecurity, 10/02/2022

See at: ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR ExploRA


2022 Conference article Open Access
AIMH Lab for the Industry
Carrara F., Ciampi L., Di Benedetto M., Falchi F., Gennaro C., Massoli F. V., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Industry. The massive digitalization affecting all the stages of product design, production, and control calls for data-driven algorithms helping in the coordination of humans, machines, and digital resources in Industry 4.0. In this context, we developed AI-based Computer Vision technologies of general interest in the emergent digital paradigm of the fourth industrial revolution, focusing on anomaly detection and object counting for computer-assisted testing and quality control. Moreover, in the automotive sector, we explore the use of virtual worlds to develop AI systems in otherwise practically unfeasible scenarios, showing an application for accident avoidance in self-driving car AI agents.
Source: Ital-IA 2022 - Workshop su AI per l'Industria, Online conference, 10/02/2022

See at: ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR ExploRA


2022 Conference article Open Access
AIMH Lab: Smart Cameras for Public Administration
Ciampi L., Cafarelli D., Carrara F., Di Benedetto M., Falchi F., Gennaro C., Massoli F. V., Messina N., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Public Administration. In particular, we present some AI-based public services for citizens that help achieve common goals beneficial to society, putting humans at the center. Through the automatic analysis of images gathered from city cameras, we provide AI applications ranging from smart parking and smart mobility to human activity monitoring.
Source: Ital-IA 2022 - Workshop su AI per la Pubblica Amministrazione, Online conference, 10/02/2022

See at: ISTI Repository Open Access | www.ital-ia2022.it Open Access | CNR ExploRA