64 result(s)
2021 Journal article Open Access OPEN

The VISIONE video search system: exploiting off-the-shelf text search engines for large-scale video retrieval
Amato G., Bolettieri P., Carrara F., Debole F., Falchi F., Gennaro C., Vadicamo L., Vairo C.
This paper describes in detail VISIONE, a video search system that allows users to search for videos using textual keywords, the occurrence of objects and their spatial relationships, the occurrence of colors and their spatial relationships, and image similarity. These modalities can be combined to express complex queries and meet users' needs. The peculiarity of our approach is that we encode all information extracted from the keyframes, such as visual deep features, tags, color and object locations, using a convenient textual encoding that is indexed in a single text retrieval engine. This offers great flexibility when results corresponding to different parts of the query (visual, text and locations) need to be merged. In addition, we report an extensive analysis of the retrieval performance of the system, using the query logs generated during the Video Browser Showdown (VBS) 2019 competition. This allowed us to fine-tune the system by choosing the optimal parameters and strategies among those we tested. Source: Journal of Imaging 7 (2021). doi:10.3390/jimaging7050076
DOI: 10.3390/jimaging7050076

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.mdpi.com Open Access
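
The VISIONE abstract above hinges on encoding deep features, tags, and object/color locations as surrogate text so that a single off-the-shelf text search engine can index everything. A minimal sketch of one common surrogate-text scheme (scalar quantization of a feature vector into repeated synthetic terms, so that term frequency approximates component magnitude) is given below; the function name and the quantization factor are illustrative assumptions, not the encoding actually used in the paper.

# Illustrative sketch: turn a deep feature vector into a "surrogate text"
# document whose term frequencies approximate the vector components, so a
# standard full-text engine (e.g. Lucene/Elasticsearch) can index and rank it.
# The scheme and the constants below are assumptions for illustration only.
import numpy as np

def feature_to_surrogate_text(features: np.ndarray, quantization: int = 30) -> str:
    """Map each non-negative component f_i to round(f_i * quantization)
    repetitions of the synthetic term 'f<i>'."""
    terms = []
    for i, value in enumerate(np.maximum(features, 0.0)):  # keep non-negative part
        repetitions = int(round(float(value) * quantization))
        terms.extend([f"f{i}"] * repetitions)
    return " ".join(terms)

# Example: a (normally 1000+ dimensional) feature reduced to 5 dims for brevity.
vector = np.array([0.12, 0.0, 0.33, 0.05, 0.41])
print(feature_to_surrogate_text(vector))  # e.g. "f0 f0 f0 f0 f2 f2 ... f4 ..."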


2020 Conference article Open Access OPEN

Edge-Based Video Surveillance with Embedded Devices
Kavalionak H., Gennaro C., Amato G., Vairo C., Perciante C., Meghini C., Falchi F., Rabitti F.
Video surveillance systems have become indispensable tools for the security and organization of public and private areas. In this work, we propose a novel distributed protocol for an edge-based face recognition system that takes advantage of the computational capabilities of the surveillance devices (i.e., cameras) to perform person recognition. The cameras fall back to a centralized server if their hardware capabilities are not enough to perform the recognition. We evaluate the proposed algorithm via extensive experiments on a freely available dataset. As a prototype of surveillance embedded devices, we have considered a Raspberry Pi with the camera module. Using simulations, we show that our algorithm can reduce the server load by up to 50% with no negative impact on the quality of the surveillance service. Source: 28th Symposium on Advanced Database Systems (SEBD), pp. 278–285, Villasimius, Sardinia, Italy, 21-24/06/2020

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA Open Access
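
The fallback behaviour described in the abstract above (cameras recognise faces locally and defer to a central server only when their hardware cannot keep up) can be summarised with a small decision routine. The sketch below is a hypothetical illustration of that idea, not the protocol from the paper; the load measure, the thresholds, and all names are assumptions.

# Hypothetical sketch of an edge/server fallback decision: the camera runs
# recognition locally while its estimated load stays within a budget,
# otherwise it offloads the frame to the central server.
from dataclasses import dataclass

@dataclass
class EdgeCamera:
    cpu_budget: float = 0.8          # assumed max fraction of CPU the node may use
    gallery_size_limit: int = 500    # assumed max identities the device can match locally

    def can_recognize_locally(self, current_cpu_load: float, gallery_size: int) -> bool:
        return current_cpu_load < self.cpu_budget and gallery_size <= self.gallery_size_limit

    def handle_frame(self, frame, current_cpu_load: float, gallery_size: int) -> str:
        if self.can_recognize_locally(current_cpu_load, gallery_size):
            return "recognized on device"      # placeholder for the local matcher
        return "offloaded to central server"   # placeholder for the fallback path

camera = EdgeCamera()
print(camera.handle_frame(frame=None, current_cpu_load=0.95, gallery_size=200))
# -> "offloaded to central server"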


2020 Conference article Open Access OPEN

Multi-Resolution Face Recognition with Drones
Amato G., Falchi F., Gennaro C., Massoli F. V., Vairo C.
Smart cameras have recently seen wide diffusion and represent a low-cost solution for improving public security in many scenarios. Moreover, they are light enough to be lifted by a drone. Face recognition enabled by drones equipped with smart cameras has already been reported in the literature. However, the use of a drone generally imposes tighter constraints than other facial recognition scenarios. First, weather conditions, such as the presence of wind, pose a severe limit on image stability. Moreover, the distance at which drones fly is typically much greater than that of fixed ground cameras, which inevitably translates into a degraded resolution of the face images. Furthermore, the drones' operational altitudes usually require the use of optical zoom, thus amplifying the harmful effects of their movements. For all these reasons, in drone scenarios, image degradation strongly affects the behavior of face detection and recognition systems. In this work, we studied the performance of deep neural networks for face re-identification specifically designed for low-quality images and applied them to a drone scenario using a publicly available dataset known as DroneSURF. Source: 3rd International Conference on Sensors, Signal and Image Processing, pp. 13–18, Prague, Czech Republic (Virtual), 23-25/10/2020
DOI: 10.1145/3441233.3441237

See at: ISTI Repository Open Access | dl.acm.org Restricted | CNR ExploRA Restricted


2020 Journal article Open Access OPEN

5G-Enabled Security Scenarios for Unmanned Aircraft: Experimentation in Urban Environment
Ferro E., Gennaro C., Nordio A., Paonessa F., Vairo C., Virone G., Argentieri A., Berton A., Bragagnini A.
The telecommunication industry has seen rapid growth in the last few decades. This trend has been fostered by the diffusion of wireless communication technologies. In the city of Matera, Italy (European Capital of Culture 2019), two applications of 5G for public security have been tested by using an aerial drone: the recognition of objects and people in a crowded city and the detection of radio-frequency jammers. This article describes the experiments and the results obtained. Source: Drones 4 (2020). doi:10.3390/drones4020022
DOI: 10.3390/drones4020022

See at: Drones Open Access | ISTI Repository Open Access | CNR ExploRA Open Access | DOAJ-Articles Open Access


2020 Report Open Access OPEN

5G: Scenari di monitoraggio attraverso droni. TEST OPERATIVI e DEMO MATERA, ex-ospedale di San Rocco
Ferro E., Gennaro C., Vairo C., Berton A., Virone G., Paonessa F., Argentieri A.
This document describes in detail the operational tests carried out in Matera with the drone and the demo held on 27 June 2019 in front of representatives of the MISE. The site used for both the operational tests and the demo was the former San Rocco hospital. In particular, the scenarios covered by the demo were: Scenario 8.3.6 - Public security through the use of drones; Scenario 8.3.7 - Detection of radio-frequency jammers by drone. The operational tests were carried out on 5 and 6 June 2019; in particular: 5 June, test of the entire 5G communication system; 6 June, flight tests with payload. The demo took place on Thursday, 27 June 2019.

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2020 Report Open Access OPEN

5G: Scenari di monitoraggio attraverso droni - D3 - L'uso dei droni nell'agricoltura di precisione a Matera
Ferro E., Gennaro C., Vairo C., Berton A., Argentieri A.
This document describes the operational tests carried out in 2020 in Matera with the drone for Scenario 8.11.2: Precision Agriculture with Autonomous Vehicles. The operational tests were carried out in compliance with the restrictions due to the Covid-19 pandemic. The area involved (Figure 1) was a clover field made available by Masseria del Parco (La Martella, Matera, Basilicata), where the Università della Basilicata had carried out a variable-rate fertilization (from 0.35 to 50 kg per hectare) in February 2020.

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2020 Report Open Access OPEN

AIMH research activities 2020
Aloia N., Amato G., Bartalesi V., Benedetti F., Bolettieri P., Carrara F., Casarosa V., Ciampi L., Concordia C., Corbara S., Esuli A., Falchi F., Gennaro C., Lagani G., Massoli F. V., Meghini C., Messina N., Metilli D., Molinari A., Moreo A., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Thanos C., Trupiano L., Vadicamo L., Vairo C.
Annual Report of the Artificial Intelligence for Media and Humanities laboratory (AIMH) research activities in 2020.

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2019 Journal article Open Access OPEN

Distributed video surveillance using smart cameras
Kavalionak H., Gennaro C., Amato G., Vairo C., Perciante C., Meghini C., Falchi F.
Video surveillance systems have become an indispensable tool for the security and organization of public and private areas. Most of the current commercial video surveillance systems rely on a classical client/server architecture to perform face and object recognition. In order to support the more complex and advanced video surveillance systems proposed in recent years, companies are required to invest resources to maintain the servers dedicated to the recognition tasks. In this work, we propose a novel distributed protocol for a face recognition system that exploits the computational capabilities of the surveillance devices (i.e., cameras) to perform person recognition. The cameras fall back to a centralized server if their hardware capabilities are not enough to perform the recognition. In order to evaluate the proposed algorithm, we simulate and test the 1NN and weighted kNN classification algorithms via extensive experiments on a freely available dataset. As a prototype of surveillance devices, we have considered Raspberry Pi devices. By means of simulations, we show that our algorithm is able to reduce the server load by up to 50% with no negative impact on the quality of the surveillance service. Source: Journal of Grid Computing 17 (2019): 59–77. doi:10.1007/s10723-018-9467-x
DOI: 10.1007/s10723-018-9467-x

See at: ISTI Repository Open Access | Journal of Grid Computing Restricted | link.springer.com Restricted | CNR ExploRA Restricted
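
The evaluation mentioned in the abstract above compares 1NN and distance-weighted kNN classification of face descriptors. As a reminder of what weighted kNN does in this setting, here is a small self-contained sketch; the toy descriptors, the inverse-distance weighting, and the value of k are illustrative assumptions rather than the paper's exact configuration.

# Minimal distance-weighted kNN over face descriptors: each of the k nearest
# gallery descriptors votes for its identity with weight 1/(distance + eps).
# Illustrative only; the paper's exact weighting and k may differ.
import numpy as np

def weighted_knn(probe, gallery, labels, k=3, eps=1e-8):
    distances = np.linalg.norm(gallery - probe, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = {}
    for idx in nearest:
        votes[labels[idx]] = votes.get(labels[idx], 0.0) + 1.0 / (distances[idx] + eps)
    return max(votes, key=votes.get)

gallery = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 0.9]])
labels = ["alice", "alice", "bob", "bob"]
print(weighted_knn(np.array([0.05, 0.0]), gallery, labels))  # -> "alice"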


2019 Conference article Open Access OPEN

An Image Retrieval System for Video
Bolettieri P., Carrara F., Debole F., Falchi F., Gennaro C., Vadicamo L., Vairo C.
Content-Based Image Indexing and Retrieval (CBIR) has been an active research area since the 1970s. Nowadays, the rapid increase of video data has paved the way for advances, across many different communities, in Content-Based Video Indexing and Retrieval (CBVIR). However, greater attention needs to be devoted to the development of effective tools for video search and browsing. In this paper, we present VISIONE, a system for large-scale video retrieval. The system integrates several content-based analysis and retrieval modules, including keyword search, spatial object-based search, and visual similarity search. In the tests carried out by users who needed to find as many correct examples as possible, the similarity search proved to be the most promising option. Our implementation is based on state-of-the-art deep learning approaches for content analysis and leverages highly efficient indexing techniques to ensure scalability. Specifically, we encode all the visual and textual descriptors extracted from the videos into (surrogate) textual representations that are then efficiently indexed and searched, using similarity functions, with an off-the-shelf text search engine. Source: International Conference on Similarity Search and Applications (SISAP), pp. 332–339, Newark, NJ, USA, 2-4/10/2019
DOI: 10.1007/978-3-030-32047-8_29

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | link.springer.com Restricted | CNR ExploRA Restricted | rd.springer.com Restricted


2019 Software Unknown

VISIONE Content-Based Video Retrieval System, VBS 2019
Amato G., Bolettieri P., Carrara F., Debole F., Falchi F., Gennaro C., Vadicamo L., Vairo C.
VISIONE is a content-based video retrieval system that participated in VBS for the very first time in 2019. It is mainly based on state-of-the-art deep learning approaches for visual content analysis and exploits highly efficient indexing techniques to ensure scalability. The system supports query by scene tag, query by object location, query by color sketch, and visual similarity search.

See at: bilioso.isti.cnr.it | CNR ExploRA


2019 Report Open Access OPEN

SmartPark@Lucca - D5. Integrazione e sperimentazione sul campo
Amato G., Bolettieri P., Carrara F., Ciampi L., Gennaro C., Leone G. R., Moroni D., Pieri G., Vairo C.
This deliverable describes the activities carried out within WP3, in particular those related to Task 3.1 - Integration and Task 3.2 - Field testing. Source: Project report, SmartPark@Lucca, Deliverable D5, pp. 1–24, 2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2019 Conference article Open Access OPEN

VISIONE at VBS2019
Amato G., Bolettieri P., Carrara F., Debole F., Falchi F., Gennaro C., Vadicamo L., Vairo C.
This paper presents VISIONE, a tool for large-scale video search. The tool can be used for both known-item and ad-hoc video search tasks since it integrates several content-based analysis and retrieval modules, including a keyword search, a spatial object-based search, and a visual similarity search. Our implementation is based on state-of-the-art deep learning approaches for the content analysis and leverages highly efficient indexing techniques to ensure scalability. Specifically, we encode all the visual and textual descriptors extracted from the videos into (surrogate) textual representations that are then efficiently indexed and searched using an off-the-shelf text search engine. Source: MMM 2019 - 25th International Conference on Multimedia Modeling, pp. 591–596, Thessaloniki, Greece, 08-11/01/2019
DOI: 10.1007/978-3-030-05716-9_51

See at: ISTI Repository Open Access | link.springer.com Restricted | CNR ExploRA Restricted


2019 Conference article Open Access OPEN

Intelligenza Artificiale e Analisi Visuale per la Cyber Security
Vairo C., Amato G., Ciampi L., Falchi F., Gennaro C., Massoli F. V.
In recent years, cyber security has taken on an increasingly broad meaning, going beyond the notion of mere computer-system security and also encompassing surveillance and security at large, exploiting the latest technologies such as artificial intelligence. This contribution presents the main research activities and some of the technologies used and developed by the AIMIR research group of ISTI-CNR, and provides an overview of the research projects, both past and currently active, in which these artificial intelligence technologies are used to develop applications and services for cyber security. Source: Ital-IA, Roma, 18-19/3/2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.ital-ia.it Open Access


2019 Conference article Open Access OPEN

Improving Multi-scale Face Recognition Using VGGFace2
Massoli F. V., Amato G., Falchi F., Gennaro C., Vairo C.
Convolutional neural networks have reached extremely high performance on the face recognition task. These models are commonly trained using high-resolution images and, for this reason, their discrimination ability is usually degraded when they are tested against low-resolution images. Thus, low-resolution face recognition remains an open challenge for deep learning models. Such a scenario is of particular interest for surveillance systems, in which it usually happens that a low-resolution probe has to be matched against higher-resolution galleries. This task can be especially hard to accomplish since the probe can have resolutions as low as 8, 16 and 24 pixels per side, while the typical input of state-of-the-art neural networks is 224 pixels. In this paper, we describe the training campaign we used to fine-tune a ResNet-50 architecture, with Squeeze-and-Excitation blocks, on the tasks of very-low- and mixed-resolution face recognition. For the training process we used the VGGFace2 dataset, and we then tested the performance of the final model on the IJB-B dataset; in particular, we tested the neural network on the 1:1 verification task. In our experiments we considered two different scenarios: (1) probe and gallery with the same resolution; (2) probe and gallery with mixed resolutions. Experimental results show that with our approach it is possible to improve upon state-of-the-art models' performance on the low- and mixed-resolution face recognition tasks, with a negligible loss at very high resolutions. Source: BioFor Workshop on Recent Advances in Digital Security: Biometrics and Forensics, pp. 21–29, Trento, Berlino, 8/9/2019
DOI: 10.1007/978-3-030-30754-7_3

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | link.springer.com Restricted | CNR ExploRA Restricted | rd.springer.com Restricted
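
The training campaign described above fine-tunes a high-resolution face model so that it stays discriminative on probes as small as 8, 16 or 24 pixels per side. A common way to expose a network to such inputs is to randomly downsample each training face and resize it back to the network's input size; the sketch below illustrates that augmentation idea with assumed parameter values, and is not the authors' exact training recipe.

# Illustrative low-resolution augmentation: shrink a face crop to a random
# small side (e.g. 8-24 px) and resize it back to the 224-px network input,
# so the model learns resolution-robust descriptors. Parameters are assumed.
import random
from PIL import Image

def random_low_res(face: Image.Image, min_side=8, max_side=24, input_side=224) -> Image.Image:
    side = random.randint(min_side, max_side)
    small = face.resize((side, side), Image.BILINEAR)               # simulate a low-res probe
    return small.resize((input_side, input_side), Image.BILINEAR)   # back to the network input size

face = Image.new("RGB", (224, 224))  # stand-in for a real face crop
augmented = random_low_res(face)
print(augmented.size)                # (224, 224), but with only 8-24 px of detail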


2019 Conference article Open Access OPEN

Face Verification and Recognition for Digital Forensics and Information Security
Amato G., Falchi F., Gennaro C., Massoli F. V., Passalis N., Tefas A., Trivilini A., Vairo C.
In this paper, we present an extensive evaluation of face recognition and verification approaches performed by the European COST Action MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTI-FORESEE). The aim of the study is to evaluate various face recognition and verification methods, ranging from methods based on facial landmarks to state-of-the-art off-the-shelf pre-trained Convolutional Neural Networks (CNN), as well as CNN models directly trained for the task at hand. To fulfill this objective, we carefully designed and implemented a realistic data acquisition process, corresponding to a typical face verification setup, and collected a challenging dataset to evaluate the real-world performance of the aforementioned methods. Apart from verifying the effectiveness of deep learning approaches in a specific scenario, several important limitations are identified and discussed throughout the paper, providing valuable insight for future research directions in the field. Source: 7th International Symposium on Digital Forensics and Security (ISDFS 2019), Barcelos, Portugal, 10-12/6/2019
DOI: 10.1109/isdfs.2019.8757511

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | xplorestaging.ieee.org Restricted


2019 Conference article Open Access OPEN

CNN-based system for low resolution face recognition
Massoli F. V., Amato G., Falchi F., Gennaro C., Vairo C.
Since the publication of AlexNet in 2012, deep convolutional neural network models have become the most promising and powerful technique for image representation. Specifically, the ability of their inner layers to extract high-level abstractions of the input images, called deep feature vectors, has been exploited. Such vectors live in a high-dimensional space in which an inner product, and thus a metric, is defined, which allows similarity measurements to be carried out among them. This property is particularly useful for accomplishing tasks such as face recognition. Indeed, in order to identify a person it is possible to compare deep features, used as face descriptors, from different identities by means of their similarities. Surveillance systems, among others, utilize this technique: deep features extracted from probe images are matched against a database of descriptors from known identities. A critical point is that the database typically contains features extracted from high-resolution images, while the probes, taken by surveillance cameras, can be at a very low resolution. Therefore, it is mandatory to have a neural network which is able to extract deep features that are robust with respect to resolution variations. In this paper we discuss a CNN-based pipeline that we built for the task of face recognition among images with different resolutions. The entire system relies on the ability of a CNN to extract deep features that can be used to perform a similarity search in order to fulfill the face recognition task. Source: 27th Italian Symposium on Advanced Database Systems, Castiglione della Pescaia (Grosseto), Italy, June 16th to 19th, 2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access
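
The pipeline summarised above identifies a person by comparing the probe's deep feature vector against a database of descriptors from known identities. The sketch below shows the core 1:N similarity-search step with cosine similarity; the normalisation, gallery layout, and names are illustrative assumptions, not the paper's implementation.

# Illustrative 1:N identification step: L2-normalise descriptors and return the
# gallery identity whose descriptor has the highest cosine similarity with the probe.
import numpy as np

def identify(probe: np.ndarray, gallery: np.ndarray, identities: list[str]) -> tuple[str, float]:
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    similarities = gallery @ probe  # cosine similarity for unit-norm vectors
    best = int(np.argmax(similarities))
    return identities[best], float(similarities[best])

gallery = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(identify(np.array([0.9, 0.1, 0.0]), gallery, ["alice", "bob"]))  # -> ('alice', ...)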


2019 Conference article Open Access OPEN

Parking Lot Monitoring with Smart Cameras
Amato G., Bolettieri P., Carrara F., Ciampi L., Gennaro C., Leone G. R., Moroni D., Pieri G., Vairo C.
In this article, we present a scenario for monitoring the occupancy of parking spaces in the historical city of Lucca (Italy), based on the use of smart cameras and modern artificial intelligence technologies. The system is designed to use different smart-camera prototypes: where a connection to the power grid is available, we propose a powerful embedded hardware solution that exploits a Deep Neural Network; otherwise, a fully autonomous energy-harvesting node based on a low-energy custom board employing lightweight image analysis algorithms is considered. Source: 5th Italian Conference on ICT for Smart Cities And Communities, pp. 1–3, Pisa, Italy, 18-20 September, 2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2019 Report Open Access OPEN

AIMIR 2019 Research Activities
Amato G., Bolettieri P., Carrara F., Ciampi L., Di Benedetto M., Debole F., Falchi F., Gennaro C., Lagani G., Massoli F. V., Messina N., Rabitti F., Savino P., Vadicamo L., Vairo C.
The Artificial Intelligence for Multimedia Information Retrieval (AIMIR) research group is part of the NeMIS laboratory of the Information Science and Technologies Institute "A. Faedo" (ISTI) of the Italian National Research Council (CNR). The AIMIR group has long experience in topics related to artificial intelligence, multimedia information retrieval, computer vision, and large-scale similarity search. We aim at investigating the use of artificial intelligence and deep learning for multimedia information retrieval, addressing both effectiveness and efficiency: multimedia information retrieval techniques should be able to provide users with pertinent results, fast, on huge amounts of multimedia data. Application areas of our research results range from cultural heritage to smart tourism, from security to smart cities, and from mobile visual search to augmented reality. This report summarizes the 2019 activities of the research group. Source: AIMIR Annual Report, 2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2018 Journal article Open Access OPEN

Towards multimodal surveillance for smart building security
Amato G., Barsocchi P., Falchi F., Ferro E., Gennaro C., Leone G. R., Moroni D., Salvetti O., Vairo C.
The main goal of a surveillance system is to collect information in a sensing environment and notify unexpected behavior. The information provided by a single sensor and surveillance technology may not be sufficient to understand the whole context of the monitored environment. On the other hand, by combining information coming from different sources, the overall performance of a surveillance system can be improved. In this paper, we present the Smart Building Suite, in which independent and different technologies are developed in order to realize a multimodal surveillance system. Source: Proceedings (MDPI) 2 (2018). doi:10.3390/proceedings2020095
DOI: 10.3390/proceedings2020095
DOI: 10.5281/zenodo.1159162
DOI: 10.5281/zenodo.1159161

See at: Proceedings Open Access | ISTI Repository Open Access | CNR ExploRA Open Access | ZENODO Open Access


2018 Conference article Open Access OPEN

A comparison of face verification with facial landmarks and deep features
Amato G., Falchi F., Gennaro C., Vairo C.
Face verification is a key task in many application fields, such as security and surveillance. Several approaches and methodologies are currently used to try to determine whether two faces belong to the same person. Among these, facial landmarks are very important in forensics, since the distance between some characteristic points of a face can be used as an objective measure in court during trials. However, the accuracy of the approaches based on facial landmarks in verifying whether a face belongs to a given person is often not satisfactory. Recently, deep learning approaches have been proposed to address the face verification problem, with very good results. In this paper, we compare the accuracy of facial landmarks and deep learning approaches in performing the face verification task. Our experiments, conducted on a real-case scenario, show that the deep learning approach greatly outperforms the facial landmark approach in accuracy. Source: MMEDIA 2018 - Tenth International Conference on Advances in Multimedia, pp. 1–6, Athens, Greece, 22-26 April 2018

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.thinkmind.org Open Access
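
The comparison above boils down to two ways of deciding whether two faces belong to the same person: a distance between corresponding facial landmarks versus a similarity between deep descriptors, each checked against a threshold. The sketch below illustrates both decision rules; the thresholds, input formats, and function names are assumptions for illustration, not values or code from the paper.

# Illustrative face-verification decision rules: landmark-based (mean distance
# between corresponding landmark points) vs deep-feature-based (cosine similarity).
# Thresholds below are arbitrary placeholders, not values from the paper.
import numpy as np

def verify_by_landmarks(lm_a: np.ndarray, lm_b: np.ndarray, threshold: float = 0.1) -> bool:
    """lm_a, lm_b: (num_landmarks, 2) arrays of normalised landmark coordinates."""
    return float(np.mean(np.linalg.norm(lm_a - lm_b, axis=1))) < threshold

def verify_by_deep_features(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float = 0.6) -> bool:
    cosine = float(feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))
    return cosine > threshold

lm = np.random.rand(68, 2)
print(verify_by_landmarks(lm, lm + 0.01))                    # small displacement -> same person
print(verify_by_deep_features(np.ones(128), np.ones(128)))   # identical descriptors -> True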