2022 Conference article Open Access

AIMH Lab for Healthcare and Wellbeing
Di Benedetto M., Carrara F., Ciampi L., Falchi F., Gennaro C., Amato G.
In this work we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Healthcare and Wellbeing. By exploiting the advances of recent machine learning methods and the compute power of desktop and mobile platforms, we show how artificial intelligence tools can be used to improve healthcare systems in various stages of disease treatment. In particular, we show how deep neural networks can assist doctors from diagnosis (e.g., cell counting, pupil and brain analysis) to communication with patients through Augmented Reality.
Source: Ital-IA 2022 - Workshop AI per la Medicina e la Salute, Online conference, 10/02/2022

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.ital-ia2022.it Open Access


2022 Conference article Open Access

AIMH Lab for the Industry
Carrara F., Ciampi L., Di Benedetto M., Falchi F., Gennaro C., Massoli F. V., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Industry. The massive digitalization affecting all the stages of product design, production, and control calls for data-driven algorithms helping in the coordination of humans, machines, and digital resources in Industry 4.0. In this context, we developed AI-based Computer-Vision technologies of general interest in the emergent digital paradigm of the fourth industrial revolution, focusing on anomaly detection and object counting for computer-assisted testing and quality control. Moreover, in the automotive sector, we explore the use of virtual worlds to develop AI systems in otherwise practically unfeasible scenarios, showing an application for accident avoidance in self-driving car AI agents.
Source: Ital-IA 2022 - Workshop su AI per l'Industria, Online conference, 10/02/2022

See at: CNR ExploRA Open Access | www.ital-ia2022.it Open Access


2022 Conference article Open Access

AIMH Lab: Smart Cameras for Public Administration
Ciampi L., Cafarelli D., Carrara F., Di Benedetto M., Falchi F., Gennaro C., Massoli F. V., Messina N., Amato G.
In this short paper, we report the activities of the Artificial Intelligence for Media and Humanities (AIMH) laboratory of the ISTI-CNR related to Public Administration. In particular, we present some AI-based public services for citizens that help achieve common goals beneficial to society, putting humans at the center. Through the automatic analysis of images gathered from city cameras, we provide AI applications ranging from smart parking and smart mobility to human activity monitoring.
Source: Ital-IA 2022 - Workshop su AI per la Pubblica Amministrazione, Online conference, 10/02/2022

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.ital-ia2022.it Open Access


2022 Journal article Open Access

An embedded toolset for human activity monitoring in critical environments
Di Benedetto M., Carrara F., Ciampi L., Falchi F., Gennaro C., Amato G.
In many working and recreational activities, there are scenarios where both individual and collective safety have to be constantly checked and properly signaled, as occurs in dangerous workplaces or during pandemic events like the recent COVID-19 disease. From wearing personal protective equipment to filling physical spaces with an adequate number of people, it is clear that an automatic solution would help to check compliance with the established rules. Based on off-the-shelf, compact, and low-cost hardware, we present a deployed, real use-case embedded system capable of perceiving people's behavior and aggregations and supervising the application of a set of rules relying on a configurable plug-in framework. Working in indoor and outdoor environments, we show that our implementation of counting people aggregations, measuring their reciprocal physical distances, and checking the proper usage of protective equipment is an effective yet open framework for monitoring human activities in critical conditions. (A minimal sketch of the distance-checking step follows this record.)
Source: Expert systems with applications 199 (2022). doi:10.1016/j.eswa.2022.117125
DOI: 10.1016/j.eswa.2022.117125
Project(s): AI4EU via OpenAIRE, AI4Media via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Restricted
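The distance-supervision rule mentioned in the abstract above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes that a person detector and a camera-to-ground calibration are already available, and the function names, threshold, and data layout are hypothetical.

```python
# Illustrative sketch only (not the paper's code): flag pairs of people standing
# closer than an allowed distance, given detections already projected onto the
# ground plane. The 1.0 m threshold and all names are hypothetical.
import numpy as np

def pairwise_violations(ground_positions_m, min_distance_m=1.0):
    """ground_positions_m: (N, 2) positions of detected people on the ground
    plane, in metres. Returns the index pairs that violate the distance rule."""
    pts = np.asarray(ground_positions_m, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]        # (N, N, 2) displacement vectors
    dist = np.linalg.norm(diff, axis=-1)            # (N, N) pairwise distances
    i, j = np.triu_indices(len(pts), k=1)           # consider each pair once
    mask = dist[i, j] < min_distance_m
    return list(zip(i[mask].tolist(), j[mask].tolist()))

if __name__ == "__main__":
    people = [(0.0, 0.0), (0.6, 0.2), (5.0, 4.0)]   # toy ground-plane coordinates
    print(pairwise_violations(people))              # -> [(0, 1)]
```

In the deployed system described above, such a rule would correspond to one of the configurable plug-ins, alongside aggregation counting and protective-equipment checks.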


2021 Report Open Access

AIMH research activities 2021
Aloia N., Amato G., Bartalesi V., Benedetti F., Bolettieri P., Cafarelli D., Carrara F., Casarosa V., Coccomini D., Ciampi L., Concordia C., Corbara S., Di Benedetto M., Esuli A., Falchi F., Gennaro C., Lagani G., Massoli F. V., Meghini C., Messina N., Metilli D., Molinari A., Moreo A., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Sperduti G., Thanos C., Trupiano L., Vadicamo L., Vairo C.
The Artificial Intelligence for Media and Humanities laboratory (AIMH) has the mission to investigate and advance the state of the art in the Artificial Intelligence field, specifically addressing applications to digital media and digital humanities, and also taking into account issues related to scalability. This report summarizes the 2021 activities of the research group.
Source: ISTI Annual Report, ISTI-2021-AR/003, pp. 1–34, 2021
DOI: 10.32079/isti-ar-2021/003

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2020 Journal article Open Access

Learning accurate personal protective equipment detection from virtual worlds
Di Benedetto M., Carrara F., Meloni E., Amato G., Falchi F., Gennaro C.
Deep learning has achieved impressive results in many machine learning tasks such as image recognition and computer vision. Its applicability to supervised problems is however constrained by the availability of high-quality training data consisting of large numbers of human-annotated examples (e.g., millions). To overcome this problem, the AI community has recently been exploiting artificially generated images or video sequences produced by realistic photo-rendering engines such as those used in entertainment applications. In this way, large sets of training images can be easily created to train deep learning algorithms. In this paper, we generated photo-realistic synthetic image sets to train deep learning models to recognize the correct use of personal safety equipment (e.g., worker safety helmets, high-visibility vests, ear protection devices) during at-risk work activities. Then, we performed domain adaptation to real-world images using a very small set of real images. We demonstrated that training with the generated synthetic set, combined with the domain adaptation phase, is an effective solution for applications where no training set is available. (A minimal fine-tuning sketch of this train-on-synthetic-then-adapt pattern follows this record.)
Source: Multimedia tools and applications (2020). doi:10.1007/s11042-020-09597-9
DOI: 10.1007/s11042-020-09597-9
Project(s): AI4EU via OpenAIRE

See at: ISTI Repository Open Access | Multimedia Tools and Applications Restricted | CNR ExploRA Restricted
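The train-on-synthetic-then-adapt strategy described in the abstract above can be sketched as two training phases. This is only a generic illustration under assumed choices (ResNet-18 classifier, SGD, folder-based datasets); the paper's actual architecture, rendering pipeline, and adaptation procedure are not reproduced here, and all paths and hyperparameters are hypothetical.

```python
# Generic sketch (not the paper's code) of "pretrain on synthetic images, then
# fine-tune on a very small set of real images". Paths, model, and hyperparameters
# are hypothetical placeholders.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
synthetic = datasets.ImageFolder("data/synthetic_ppe", transform=tfm)    # rendered images
real_small = datasets.ImageFolder("data/real_ppe_small", transform=tfm)  # few real photos

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(synthetic.classes))
criterion = nn.CrossEntropyLoss()

def train(dataset, epochs, lr):
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            opt.step()

train(synthetic, epochs=10, lr=1e-2)   # phase 1: learn from photo-realistic synthetic data
train(real_small, epochs=3, lr=1e-3)   # phase 2: domain adaptation on the small real set
```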


2020 Report Open Access

AIMH research activities 2020
Aloia N., Amato G., Bartalesi V., Benedetti F., Bolettieri P., Carrara F., Casarosa V., Ciampi L., Concordia C., Corbara S., Esuli A., Falchi F., Gennaro C., Lagani G., Massoli F. V., Meghini C., Messina N., Metilli D., Molinari A., Moreo A., Nardi A., Pedrotti A., Pratelli N., Rabitti F., Savino P., Sebastiani F., Thanos C., Trupiano L., Vadicamo L., Vairo C.
Annual Report of the Artificial Intelligence for Media and Humanities laboratory (AIMH) research activities in 2020.
Source: ISTI Annual Report, ISTI-2020-AR/001, 2020
DOI: 10.32079/isti-ar-2020/001

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2019 Conference article Open Access

Intelligenza Artificiale per Ricerca in Big Multimedia Data
Carrara F., Amato G., Debole F., Di Benedetto M., Falchi F., Gennaro C., Messina N.
The widespread production of images and digital media has made large-scale automatic analysis and indexing methods necessary for their use. The AIMIR group of ISTI-CNR has specialized in this field for years and has adopted Deep Learning techniques based on artificial neural networks for many aspects of this discipline, such as the analysis, annotation, and automatic description of visual content and its large-scale retrieval.
Source: Ital-IA, Roma, 18/3/2019, 19/3/2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.ital-ia.it Open Access


2019 Conference article Open Access

Learning Safety Equipment Detection using Virtual Worlds
Di Benedetto M., Meloni E., Amato G., Falchi F., Gennaro C.
Nowadays, the possibilities offered by state-of-the-art deep neural networks allow the creation of systems capable of recognizing and indexing visual content with very high accuracy. The performance of these systems relies on the availability of high-quality training sets containing a large number of examples (e.g., millions), in addition to the machine learning tools themselves. For several applications, very good training sets can be obtained, for example, by crawling (noisily) annotated images from the internet, or by analyzing user interaction (e.g., on social networks). However, there are several applications for which high-quality training sets are not easy to obtain or create. Consider, as an example, a security scenario where one wants to automatically detect rarely occurring threatening events. In this respect, researchers have recently investigated the possibility of using a visual virtual environment, capable of artificially generating controllable and photo-realistic content, to create training sets for applications with few available training images. We explored this idea to generate synthetic photo-realistic training sets to train classifiers to recognize the proper use of individual safety equipment (e.g., worker protection helmets, high-visibility vests, ear protection devices) during risky human activities. Then, we performed domain adaptation to real images by using a very small image data set of real-world photographs. We show that training with the generated synthetic training set and using the domain adaptation step is an effective solution to address applications for which no training sets exist.
Source: 2019 International Conference on Content-Based Multimedia Indexing (CBMI), Dublin, Ireland, 4/9/2019, 6/9/2019
DOI: 10.1109/cbmi.2019.8877466
Project(s): AI4EU via OpenAIRE

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | xplorestaging.ieee.org Restricted


2019 Report Open Access

AIMIR 2019 Research Activities
Amato G., Bolettieri P., Carrara F., Ciampi L., Di Benedetto M., Debole F., Falchi F., Gennaro C., Lagani G., Massoli F. V., Messina N., Rabitti F., Savino P., Vadicamo L., Vairo C.
The Artificial Intelligence for Multimedia Information Retrieval (AIMIR) research group is part of the NeMIS laboratory of the Information Science and Technologies Institute "A. Faedo" (ISTI) of the Italian National Research Council (CNR). The AIMIR group has long experience in topics related to Artificial Intelligence, Multimedia Information Retrieval, Computer Vision, and large-scale similarity search. We aim at investigating the use of Artificial Intelligence and Deep Learning for Multimedia Information Retrieval, addressing both effectiveness and efficiency. Multimedia information retrieval techniques should be able to provide users with pertinent results, fast, on huge amounts of multimedia data. Application areas of our research results range from cultural heritage to smart tourism, from security to smart cities, from mobile visual search to augmented reality. This report summarizes the 2019 activities of the research group.
Source: AIMIR Annual Report, 2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2018 Conference article Open Access

UAVs and UAV swarms for civilian applications: communications and image processing in the SCIADRO project
Bacco M., Chessa S., Di Benedetto M., Fabbri D., Girolami M., Gotta A., Moroni D., Pascali M. A., Pellegrini V.
The use of Unmanned Aerial Vehicles (UAVs), or drones, is increasingly common in both research and industrial fields. Nowadays, the use of single UAVs is quite well established and several products are already available to consumers, while UAV swarms are still the subject of research and development. This position paper describes the objectives of a research project, namely SCIADRO, which deals with innovative applications and network architectures based on the use of UAVs and UAV swarms in several civilian fields.
Source: WiSATS 2017 - International Conference on Wireless and Satellite Systems, pp. 115–124, Oxford, UK, 14-15 September 2017
DOI: 10.1007/978-3-319-76571-6_12

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | link.springer.com Restricted | CNR ExploRA Restricted | rd.springer.com Restricted


2014 Contribution to book Restricted

Web and mobile visualization for cultural heritage
Di Benedetto M., Ponchio F., Malomo L., Callieri M., Dellepiane M., Cignoni P., Scopigno R.
Thanks to the impressive research results produced in the last decade, digital technologies are now mature for producing high-quality digital replicas of Cultural Heritage (CH) artifacts. At the same time, CH practitioners and scholars also have access to a number of technologies that allow those models to be distributed and presented to everybody and everywhere by means of a number of communication platforms. The goal of this chapter is to present some recent technologies for supporting the visualization of complex models, focusing on the requirements of interactive manipulation and visualization of 3D models on the web and on mobile platforms. The chapter presents some recent experiences where high-quality 3D models have been used in CH research, restoration, and conservation. Some open issues in this domain are also presented and discussed.
Source: 3D Research Challenges in Cultural Heritage: A Roadmap in Digital Heritage Preservation, edited by Marinos Ioannides, Ewald Quak, pp. 18–35. Berlin: Springer, 2014
DOI: 10.1007/978-3-662-44630-0_2
Project(s): ARIADNE via OpenAIRE

See at: academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | doi.org Restricted | link.springer.com Restricted | CNR ExploRA Restricted | rd.springer.com Restricted | www.scopus.com Restricted


2014 Journal article Restricted

eXploreMaps: efficient construction and ubiquitous exploration of panoramic view graphics of complex 3D environments
Di Benedetto M., Ganovelli F., Balsa Rodriguez M., Jaspe Villanueva A., Gobbetti E., Scopigno R.
We introduce a novel, efficient technique for automatically transforming a generic renderable 3D scene into a simple graph representation named ExploreMaps, where nodes are nicely placed points of view, called probes, and arcs are smooth paths between neighboring probes. Each probe is associated with a panoramic image enriched with preferred viewing orientations, and each path with a panoramic video. Our GPU-accelerated unattended construction pipeline distributes probes so as to guarantee coverage of the scene while accounting for perceptual criteria, before finding smooth, good-looking paths between neighboring probes. Images and videos are precomputed at construction time with off-line photorealistic rendering engines, providing a convincing 3D visualization beyond the limits of current real-time graphics techniques. At run-time, the graph is exploited both for creating automatic scene indexes and movie previews of complex scenes, and for supporting interactive exploration through a low-DOF assisted navigation interface and the visual indexing of the scene provided by the selected viewpoints. Due to negligible CPU overhead and very limited use of GPU functionality, real-time performance is achieved on emerging web-based environments based on WebGL, even on low-powered mobile devices. (A small data-structure sketch of this probe graph follows this record.)
Source: Computer graphics forum (Online) 33 (2014): 459–468. doi:10.1111/cgf.12334
DOI: 10.1111/cgf.12334

See at: Computer Graphics Forum Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA Restricted
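The graph representation described in the abstract above (probes carrying panoramas and preferred orientations, arcs carrying panoramic transition videos) can be sketched as a small data structure. All class and field names here are illustrative assumptions, not the paper's actual data layout or construction pipeline.

```python
# Illustrative sketch of an ExploreMaps-style graph: probes as nodes, paths as arcs.
# Names and fields are hypothetical; only the structure described in the abstract is shown.
from dataclasses import dataclass, field

@dataclass
class Probe:
    position: tuple[float, float, float]       # viewpoint placed by the construction pipeline
    panorama: str                               # precomputed panoramic image file
    preferred_orientations: list[float] = field(default_factory=list)  # yaw angles (radians)

@dataclass
class Path:
    start: int                                  # indices of the two endpoint probes
    end: int
    panoramic_video: str                        # precomputed transition video

@dataclass
class ExploreMap:
    probes: list[Probe] = field(default_factory=list)
    paths: list[Path] = field(default_factory=list)

    def neighbours(self, probe_index: int) -> list[int]:
        """Probes reachable from a given probe, as used by an assisted navigation UI."""
        out = []
        for p in self.paths:
            if p.start == probe_index:
                out.append(p.end)
            elif p.end == probe_index:
                out.append(p.start)
        return out
```

At run time, a viewer would move the user only along existing arcs, which is what enables the low-DOF assisted navigation mentioned in the abstract.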


2014 Conference article Open Access

An advanced solution for publishing 3D content on the Web
Potenziani M., Corsini M., Callieri M., Di Benedetto M., Ponchio F., Dellepiane M., Scopigno R.
The deployment of 3D content on the Web has gained momentum in many contexts. The Cultural Heritage field is a good example, since 3D data need to be published on the Web for many different purposes (presenting museum collections, displaying virtual reconstructions of ancient sites, showing connections between artefacts of interest, etc.); moreover, 3D data are often paired with other multimedia information and, finally, solutions should be cheap and high-fidelity. The current approach for developing 3D graphics applications on the web is to employ WebGL, a JavaScript graphics API that is embedded in modern web browsers. WebGL was derived from the standard OpenGL graphics library and it has recently been adopted by all the major web browsers. Thanks to this, people who access a web site based on WebGL do not need to install any additional plugins or other applications to use the interactive 3D content of the web page. The main problem related to the use of WebGL is the substantial technical skill the developers should master (such as web design, Javascript programming, and knowledge of Computer Graphics) in order to implement an application. We present 3DHOP (3D Heritage Online Presenter), an advanced technological solution that allows people without high/mid-level programming skills to publish 3D content on the Web in several forms. This technology consists of a set of components and templates for online visualization that are easily modifiable with minimal effort to obtain the desired final effect. This technology has been developed in the context of a few EU projects and it has already been used in Virtual Museums projects. 3DHOP is now reaching maturity and is becoming a solid tool open to the Cultural Heritage community, reducing the learning curve and speeding up the implementation of virtual museums and of on-line interactive presentations. From a technical point of view, 3DHOP is based on SpiderGL, a support library built on top of WebGL and developed by CNR-ISTI. A brief overview of this library is also given in the presentation.
Source: MWF 2014 - Museums and the Web Florence 2014, Florence, Italy, 18-21 February 2014
Project(s): V-MUST.NET via OpenAIRE

See at: CNR ExploRA Open Access


2014 Book Unknown

Introduction to Computer Graphics: a practical learning approach
Ganovelli F., Corsini M., Pattanaik S., Di Benedetto M.
This book guides students in developing their own interactive graphics application. The authors show step by step how to implement computer graphics concepts and theory using the EnvyMyCar (NVMC) framework as a consistent example throughout the text. They use the WebGL graphics API to develop NVMC, a simple, interactive car racing game. Each chapter focuses on a particular computer graphics aspect, such as 3D modeling and lighting. The authors help students understand how to handle 3D geometric transformations, texturing, complex lighting effects, and more. This practical approach leads students to draw the elements and effects needed to ultimately create a visually pleasing car racing game.
Source: London: CRC Press - Taylor & Francis Group, 2014

See at: CNR ExploRA | www.crcpress.com


2012 Contribution to book Restricted

Features And Design Choices in SpiderGL.
Di Benedetto M., Ganovelli F., Banterle F.
Technologies related to Computer Graphics (CG) are constantly growing. This is mostly due to the widespread availability of 3D acceleration hardware with an unprecedented ratio of performance to cost. In the past, access to such accelerators was confined to workstations; nowadays, even hand-held devices such as smartphones are equipped with powerful graphics hardware. On a parallel time line, with the introduction of OpenGL, CG software moved from proprietary solutions to royalty-free specifications. In addition, widespread access to broadband Internet connections led to a tremendous increase in content availability, as well as a great enrichment of web technologies, such as HTML5. In this mature scenario, the WebGL specification was introduced to allow CG and Web programmers to leverage the power of GPUs directly within web pages. WebGL is a powerful technology based on the OpenGL ES 2.0 specification, and it thus adheres to the philosophy of a barebones low-level API. As happens in similar contexts, a series of higher-level libraries have been developed to ease usage and implement more complex constructs. SpiderGL [Di Benedetto et al. 10] is a JavaScript CG library that uses WebGL for real-time rendering. The library exposes a series of utilities, data structures, and algorithms to serve typical graphics tasks. When developing SpiderGL, we wanted to create a library able to simplify the most common usage patterns of WebGL, and that could guarantee a seamless integration into complex software packages. Its role of middleware imposed on us a need to enforce consistency whenever users wanted to access the underlying WebGL layer, and to provide a solid foundation for the development of higher-level components.
Source: OpenGL Insights, edited by Patrick Cozzi and Christophe Riccio, pp. 583–604. Boca Raton: CRC Press, 2012
Project(s): V-CITY via OpenAIRE

See at: openglinsights.com Restricted | CNR ExploRA Restricted


2012 Journal article Restricted

A parallel architecture for interactively rendering scattering and refraction effects
Bernabei D., Hakke-Patil A., Banterle F., Di Benedetto M., Ganovelli F., Pattanaik S., Scopigno R.
A new method for interactive rendering of complex lighting effects combines two algorithms. The first performs accurate ray tracing in heterogeneous refractive media to compute high-frequency phenomena. The second applies lattice-Boltzmann lighting to account for low-frequency multiple-scattering effects. The two algorithms execute in parallel on modern graphics hardware. This article includes a video animation of the authors' real-time algorithm rendering a variety of scenes.
Source: IEEE computer graphics and applications (Online) 32 (2012): 34–43. doi:10.1109/MCG.2011.106
DOI: 10.1109/mcg.2011.106
Project(s): 3D-COFORM via OpenAIRE

See at: IEEE Computer Graphics and Applications Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted


2012 Journal article Open Access

From the digitization of cultural artifacts to the web publishing of digital 3D collections: an automatic pipeline for knowledge sharing
Larue F., Di Benedetto M., Dellepiane M., Scopigno R.
In this paper, we introduce a novel approach intended to simplify the production of multimedia content from real objects for the purpose of knowledge sharing, which is particularly appropriate to the cultural heritage field. It consists of a pipeline that covers all steps from the digitization of the objects up to the Web publishing of the resulting digital copies. During a first stage, the digitization is performed by a high-speed 3D scanner that recovers the object's geometry. A second stage then extracts from the recovered data a color texture as well as a texture of details, in order to enrich the acquired geometry in a more realistic way. Finally, a third stage converts these data so that they are compatible with the recent WebGL paradigm, thus providing 3D multimedia content directly exploitable by end-users by means of standard Internet browsers. The pipeline design is centered on automation and speed, so that it can be used by non-expert users to produce multimedia content from potentially large object collections, as may be the case in cultural heritage. The choice of a high-speed scanner is particularly adapted to such a design, since this kind of device has the advantage of being fast and intuitive. The processing stages that follow the digitization are both completely automatic and "seamless", in the sense that it is not incumbent upon the user to perform tasks manually, nor to use external software that generally requires additional operations to solve compatibility issues.
Source: Journal of Multimedia 7 (2012): 133–144. doi:10.4304/jmm.7.2.132-144
DOI: 10.4304/jmm.7.2.132-144
Project(s): 3D-COFORM via OpenAIRE

See at: Journal of Multimedia Open Access | Journal of Multimedia Restricted | Hyper Article en Ligne Restricted | ojs.academypublisher.com Restricted | CNR ExploRA Restricted


2011 Conference article Restricted

Reconstructing and Exploring Massive Detailed Cityscapes
Gobbetti E., Marton F., Di Benedetto M., Ganovelli F., Buehler M., Schubiger S., Specht M., Engels C., Van Gool L.
We present a state-of-the-art system for obtaining and exploring large-scale three-dimensional models of urban landscapes. A multimodal approach to reconstruction fuses cadastral information, laser range data, and oblique imagery into building models, which are then refined by applying procedural rules for replacing textures with 3D elements, such as windows and doors, therefore enhancing the model quality and adding semantics to the model. For city-scale exploration, these detailed models are uploaded to a web-based service, which automatically constructs an approximate scalable multiresolution representation. This representation can be interactively transmitted and visualized over the net to clients ranging from graphics PCs to web-enabled portable devices. The approach's characteristics and performance are illustrated using real-world city-scale data.
Source: The 12th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, VAST 2011, pp. 1–8, Prato, Italy, 18-21 October 2011
DOI: 10.2312/vast/vast11/001-008
Project(s): V-CITY via OpenAIRE

See at: CNR ExploRA Restricted


2011 Contribution to conference Restricted

A low-cost time-critical obstacle avoidance system for the visually impaired
Bernabei D., Ganovelli F., Di Benedetto M., Dellepiane M., Scopigno R.
We present a low-cost system for unassisted mobility of blind people, built with off-the-shelf technology. Our system takes as input the depth maps produced by the Kinect device, coupled with the data from its accelerometer, to provide a registered point-based 3D representation of the scene in front of the user. We developed a time-critical framework to analyze the scene, classify the ground and still or moving obstacles, and provide the user with constant and reliable feedback. (A simplified sketch of the ground/obstacle separation step follows this record.)
Source: International Conference on Indoor Positioning and Indoor Navigation, IPIN 2011, Guimarães, Portugal, 21-23 September 2011

See at: CNR ExploRA Restricted
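The scene-analysis step described above (separating the ground from obstacles using a registered depth map and the accelerometer's gravity direction) can be sketched as follows. This is a strong simplification under assumed camera intrinsics and a known camera height; the time-critical framework, moving-obstacle classification, and user feedback of the actual system are not shown, and all parameter values are hypothetical.

```python
# Simplified sketch (not the paper's code): back-project a depth map to 3D points,
# use the gravity direction from the accelerometer to drop ground points, and report
# the distance to the closest remaining obstacle. Intrinsics and thresholds are hypothetical.
import numpy as np

def nearest_obstacle(depth_m, gravity, fx=525.0, fy=525.0, cx=319.5, cy=239.5,
                     ground_tolerance_m=0.15, camera_height_m=1.2):
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0                                            # keep pixels with a depth reading
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)   # (N, 3) camera-space points

    g = np.asarray(gravity, dtype=float)
    g /= np.linalg.norm(g)                                   # unit "down" direction
    height = camera_height_m - pts @ g                       # approximate height above the floor
    obstacles = pts[height > ground_tolerance_m]             # drop points lying on the ground
    if len(obstacles) == 0:
        return None
    return float(np.linalg.norm(obstacles, axis=1).min())    # metres to the closest obstacle
```

A real implementation would estimate the floor plane robustly (for example, by fitting it to the lowest points) rather than relying on a fixed camera height.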