31 result(s)
2021 Journal article Open Access OPEN

Augmented virtuality using touch-sensitive 3D-printed objects
Palma G., Perry S., Cignoni P.
Virtual reality (VR) technologies have become more and more affordable and popular in the last five years thanks to hardware and software advancements. A critical issue for these technologies is finding paradigms that allow user interactions in ways that are as similar as possible to the real world, bringing physicality into the experience. Current literature has shown, with different experiments, that the mapping of real objects in virtual reality alongside haptic feedback significantly increases the realism of the experience and user engagement, leading to augmented virtuality. In this paper, we present a system to improve engagement in a VR experience using inexpensive, physical, and sensorized copies of real artefacts made with cheap 3D fabrication technologies. Based on a combination of hardware and software components, the proposed system gives the user the possibility to interact with the physical replica in the virtual environment and to see the appearance of the original cultural heritage artefact. In this way, we overcome one of the main limitations of mainstream 3D fabrication technologies: the lack of faithful appearance reproduction. Using a consumer device for real-time hand tracking and a custom electronic controller for capacitive touch sensing, the system permits the creation of augmented experiences where the user can change, with their hands, the virtual appearance of the real replica object using a set of personalization actions selectable from a physical 3D-printed palette.
Source: Remote sensing (Basel) 13 (2021). doi:10.3390/rs13112186
DOI: 10.3390/rs13112186
Project(s): EMOTIVE via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Open Access | www.mdpi.com Open Access
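The record above describes a hardware/software loop in which a capacitive touch controller turns contact with the 3D-printed palette into appearance edits on the virtual replica. A minimal, purely illustrative sketch of that dispatch logic (the pad numbering, action names and the `ReplicaAppearance` class are invented here, not taken from the paper):

```python
# Hypothetical mapping from capacitive palette pads to the
# "personalization actions" applied to the virtual replica.
PALETTE_ACTIONS = {
    0: ("paint", "ochre"),
    1: ("paint", "lapis"),
    2: ("gild", "gold-leaf"),
    3: ("reset", None),
}

class ReplicaAppearance:
    """Appearance state of the virtual replica, edited by touch events."""

    def __init__(self):
        self.layers = []  # appearance edits applied so far

    def on_touch(self, pad_id):
        """Dispatch a touch event reported by the capacitive controller."""
        action, arg = PALETTE_ACTIONS.get(pad_id, (None, None))
        if action in ("paint", "gild"):
            self.layers.append((action, arg))
        elif action == "reset":
            self.layers.clear()
        return list(self.layers)
```

In a real system the controller events would arrive over a serial or USB link and the layer stack would drive the shader that re-textures the replica.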


2020 Contribution to book Restricted

Il rilievo 3D per la caratterizzazione morfologica dell'opera di Raffaello
Pingi P., Siotto E., Palma G.
Among the non-invasive diagnostic analyses carried out in support of the restoration of Raphael's panel depicting Pope Leo X de' Medici between the cardinals Giulio de' Medici and Luigi de' Rossi, a three-dimensional (3D) survey of the entire work was performed. Besides being used to take measurements of the surface shape and serving as a valuable support for the knowledge and study of the work, the 3D survey is also an effective means of monitoring its state of conservation over time. In this case, the 3D acquisition was aimed at evaluating the deformation of the wooden support and at studying the deterioration of the painted surface. For this reason the entire work (front, back and edges) was acquired with an average sampling pitch of 0.3 mm. Some areas were also acquired at a resolution of 0.16 mm in order to develop an automatic method capable of highlighting the micro-cracks of the paint layer.
Source: Raffaello e il ritorno del Papa Medici: restauri e scoperte sul Ritratto di Leone X con i due cardinali, edited by Marco Ciatti, Eike D. Schmidt, pp. 145–149. Firenze: Edifir - Edizioni Firenze s.r.l., 2020

See at: CNR ExploRA Restricted


2020 Contribution to book Closed Access

Una Loggia digitale al tempo del COVID-19
Palma G., Siotto E.
After illustrating the conception and evolution of the project "A digital Loggia for Raphael and collaborators in Villa Farnesina, Rome", carried out on behalf of the Accademia Nazionale dei Lincei and CNR-ISTI, the chapter describes the design and development phases of the interactive digital system, how the data were acquired and how the model was created, and finally offers a guide to the use of the interactive system, organized on two levels of detail (http://vcg.isti.cnr.it/farnesina/loggia/).
Source: Raffaello in Villa Farnesina: Galatea e Psiche, edited by A. Sgamellotti, V. Lapenta, C. Anselmi, C. Seccaroni, pp. 89–96. Roma: Bardi Editore, 2020

See at: CNR ExploRA Restricted | vcg.isti.cnr.it Restricted


2020 Contribution to book Closed Access

A digital Loggia at the time of COVID-19
Palma G., Siotto E.
The chapter begins by illustrating the conception and evolution of the project "A digital loggia for Raphael and collaborators in Villa Farnesina, Rome", carried out on behalf of the Accademia Nazionale dei Lincei and ISTI-CNR. It then describes the design and development phases of the interactive digital system, how the data were acquired and how the model was created, and finally offers a guide to the use of the interactive system, organized on two levels of detail (http://vcg.isti.cnr.it/farnesina/loggia/).
Source: Raphael in Villa Farnesina: Galatea and Psyche, edited by A. Sgamellotti, V. Lapenta, C. Anselmi, C. Seccaroni, pp. 91–98. Roma: Bardi Editore, 2020

See at: CNR ExploRA Restricted | vcg.isti.cnr.it Restricted


2019 Conference article Restricted

Analisi dei frammenti di Sectilia vitrei dalla Villa romana di Aiano-Torraccia di Chiusi (si) e studio della tecnica d'esecuzione
Cavalieri M., Landi S., Manna D., Giamello M., Fornacelli C., Bracci S., Palma G., Siotto E., Scopigno R.
The considerable number of sectilia fragments from the late Roman Villa of Aiano (4th-5th century AD) provides important insights for the study of the diffusion of opus sectile during the Late Roman period. The extent of the corpus of glass slabs, in particular, immediately suggests interesting perspectives on both archaeological and technological issues. Thanks to cooperation between archaeologists, conservators, IT specialists and scientists, an in-depth study of the repertory is in progress to provide important information about the technologies and the raw materials used to produce a number of selected samples. High-resolution images have been obtained via Reflectance Transformation Imaging (RTI) to better understand all the different phases characterizing the manufacture of the more complex slabs. Due to their flexibility and low analytical costs, portable and non-invasive analytical techniques provided a fast and quite accurate definition of the chemical and mineralogical properties of each sample and a first classification of a large number of slabs into compositional clusters. Portable X-Ray Fluorescence (p-XRF) and Fiber Optics Reflectance Spectroscopy (FORS) allowed a first definition of the chemical variability within the repertory and provided indications about both manufacturing and coloring techniques.
Source: Atti del XXIV Colloquio dell'associazione italiana per lo Studio e la Conservazione del Mosaico, pp. 605–617, Este, Padova, Italy, 14-17 March 2018

See at: CNR ExploRA Restricted | www.aiscom.it Restricted


2019 Journal article Open Access OPEN

Deformation analysis of Leonardo da Vinci's "Adorazione dei Magi" through temporal unrelated 3D digitization
Palma G., Pingi P., Siotto E., Bellucci R., Guidi G., Scopigno R.
3D scanning is an effective technology for assessing, at different levels, the state of conservation/deformation of a panel painting, from the micro-geometry of the craquelure to the macro-geometry of the support used. Unfortunately, the current solutions used to analyze multiple 3D scans acquired over time are based on very controlled acquisition procedures, such as the use of target reference points that are stationary over time and fixed to the artwork, or on complex hardware setups that keep the acquisition device fixed to the artwork. These procedures are challenging when a long monitoring period is involved or during restoration, when the painting may be moved several times. This paper presents a new and robust approach to observe and quantify the panel deformations of artworks by comparing 3D models acquired with different scanning devices at different times. The procedure is based on a non-rigid registration algorithm that deforms one 3D model over the other in a controlled way, extracting the real deformation field. We apply the method to the 3D scanning data of the unfinished panel painting "Adorazione dei Magi" by Leonardo da Vinci. The data were acquired in 2002 and 2015. First, we analyze the two 3D models with the classical distance from the ideal flat plane of the painting. Then we study the type of deformation of each plank of the support by fitting a quadric surface. Finally, we compare the models before and after the deformation computed by the non-rigid registration algorithm. This last comparison enables the panel deformation to be separated from the structural changes (e.g. the structural restorations on the back and the missing pieces) of the artwork in a more robust way.
Source: Journal of cultural heritage 38 (2019): 174–185. doi:10.1016/j.culher.2018.11.001
DOI: 10.1016/j.culher.2018.11.001

See at: ISTI Repository Open Access | Journal of Cultural Heritage Restricted | CNR ExploRA Restricted | www.sciencedirect.com Restricted
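The quadric-fitting step mentioned in the abstract above (characterizing each plank's deformation by fitting a quadric surface) can be sketched as an ordinary least-squares fit of a height-field quadric z = ax² + bxy + cy² + dx + ey + f; this is a generic formulation for illustration, not the paper's exact implementation:

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
    to an (N,3) array of surface samples; returns the 6 coefficients."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Synthetic "plank": a cylindrical bow along x (pure curvature, no twist).
xs, ys = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
zs = 0.05 * xs**2  # known deformation
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
a, b, c, d, e, f = fit_quadric(pts)
```

For a real plank, the signs and magnitudes of the second-order coefficients (a, b, c) distinguish bowing, twisting and saddle-like deformation.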


2019 Report Open Access OPEN

Augmented reality experience with physical artefacts
Palma G., Cignoni P.
This technical report presents a system to improve the engagement of the user in a virtual reality experience using inexpensive physical copies of real artefacts, made with cheap 3D fabrication technologies. Based on a combination of hardware and software components, the proposed system gives the user the possibility to interact with the physical replica in the virtual environment and to see the appearance of the original artefact. In this way, we overcome the current limitation of cheap 3D fabrication technologies: the lack of faithful appearance reproduction. Moreover, using a consumer device for real-time hand tracking and a custom electronic controller for capacitive touch sensing, the system permits the creation of virtual experiences where the user can change, with their hands, the virtual appearance of the object using a set of personalization actions selectable from a physical 3D-printed palette.
Source: ISTI Technical reports, 2019
Project(s): EMOTIVE via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2019 Journal article Open Access OPEN

High dynamic range point clouds for real-time relighting
Sabbadin M., Palma G., Banterle F., Boubekeur T., Cignoni P.
Acquired 3D point clouds enable quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings on the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance.
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.
Source: Computer graphics forum (Online) 38 (2019): 513–525. doi:10.1111/cgf.13857
DOI: 10.1111/cgf.13857
Project(s): EMOTIVE via OpenAIRE

See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | diglib.eg.org Restricted | CNR ExploRA Restricted
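The relighting stage described above gathers diffuse indirect illumination from the HDR colored point cloud. A brute-force, CPU-side sketch of such a gather follows; the paper's GPU PBGI hierarchy and G-buffer mipmapping operator are replaced here by a naive O(N) loop over point samples treated as small oriented emitters, and all names are illustrative:

```python
import numpy as np

def diffuse_gather(p, n, samples, radiance, areas, normals):
    """Irradiance at point p with normal n, gathered from an HDR colored
    point cloud: each sample is a small Lambertian emitter with position
    samples[i], RGB radiance[i], area areas[i] and normal normals[i].
    Brute-force form of what a PBGI hierarchy would approximate."""
    d = samples - p                                   # (N,3) to emitters
    dist2 = np.maximum((d * d).sum(1), 1e-8)          # squared distances
    w = d / np.sqrt(dist2)[:, None]                   # unit directions
    cos_r = np.clip(w @ n, 0.0, None)                 # receiver cosine
    cos_e = np.clip(-(w * normals).sum(1), 0.0, None) # emitter cosine
    form = cos_r * cos_e * areas / (np.pi * dist2)    # form-factor term
    return (form[:, None] * radiance).sum(0)          # (3,) RGB irradiance
```

Visibility is ignored in this reduced form; the unified point-cloud representation in the paper is precisely what lets radiance and visibility be handled together.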


2018 Journal article Open Access OPEN

Enhanced visualization of detected 3D geometric differences
Palma G., Sabbadin M., Corsini M., Cignoni P.
The wide availability of 3D acquisition devices makes their use for shape monitoring viable. Current techniques for the analysis of time-varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we want to show the original appearance of the 3D model at the same time. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences that have been detected as significant. Additionally, the same technique is able to visually hide the other negligible, yet visible, variations. The main idea is to use two distinct screen-space, time-based interpolation functions: one for the significant 3D differences and one for the small variations to hide. We have validated the proposed approach in a user study on different classes of datasets, proving the objective and subjective effectiveness of the method.
Source: Computer graphics forum (Online) 35 (2018): 159–171. doi:10.1111/cgf.13239
DOI: 10.1111/cgf.13239
Project(s): HARVEST4D via OpenAIRE

See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA Restricted
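The "two distinct screen-space, time-based interpolation functions" of the abstract can be illustrated with a toy per-pixel blend. The exact curves below (a steep sigmoid for significant changes versus a linear fade for negligible ones) are assumptions for illustration, not the ones used in the paper:

```python
import numpy as np

def blend_weight(t, significant, k_fast=12.0):
    """Blend weight at normalized time t in [0,1]. Significant-change
    pixels snap quickly between the two models (steep sigmoid centered
    at t=0.5), enhancing the change; negligible-variation pixels
    cross-fade linearly, keeping small differences visually hidden."""
    if significant:
        return 1.0 / (1.0 + np.exp(-k_fast * (t - 0.5)))
    return t

def composite(pixel_a, pixel_b, t, significant):
    """Per-pixel composite of the two renderings at time t."""
    w = blend_weight(t, significant)
    return (1 - w) * np.asarray(pixel_a) + w * np.asarray(pixel_b)
```

Applying the two profiles per pixel, according to the change/no-change classification, is what lets the transition emphasize real differences while damping sampling noise.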


2018 Journal article Open Access OPEN

Scalable non-rigid registration for multi-view stereo data
Palma G., Boubekeur T., Ganovelli F., Cignoni P.
We propose a new non-rigid registration method for large 3D meshes from Multi-View Stereo (MVS) reconstruction, characterized by low-frequency shape deformations induced by several factors, such as low sensor quality and irregular sampling coverage of the object. Given a reference model to which we want to align a new 3D mesh, our method starts by decomposing the mesh into patches using Lloyd clustering and then runs an ICP local registration for each patch. Then, we improve the alignment using a few geometric constraints and, finally, we build a global deformation function that blends the estimated per-patch transformations. This function is structured on top of a deformation graph derived from the dual graph of the clustering. Our algorithm is iterated until convergence, progressively increasing the number of patches in the clustering to capture smaller deformations. The method comes with a scalable multicore implementation that enables, for the first time, the alignment of meshes made of tens of millions of triangles in a few minutes. We report extensive experiments of our algorithm on several dense Multi-View Stereo models, using a 3D scan or another MVS reconstruction as reference. Beyond MVS data, we also applied our algorithm to different scenarios exhibiting more complex and larger deformations, such as a 3D motion capture dataset or 3D scans of dynamic objects. The good alignment results obtained for both datasets highlight the efficiency and the flexibility of our approach.
Source: ISPRS journal of photogrammetry and remote sensing 142 (2018): 328–341. doi:10.1016/j.isprsjprs.2018.06.012
DOI: 10.1016/j.isprsjprs.2018.06.012

See at: ISTI Repository Open Access | ISPRS Journal of Photogrammetry and Remote Sensing Restricted | CNR ExploRA Restricted
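The final step of the pipeline above blends per-patch transformations into one smooth deformation function. A much-reduced sketch of that blending, with translation-only transforms and Gaussian distance weights instead of the paper's full rigid transforms over a deformation graph (node positions, weights and `sigma` are illustrative):

```python
import numpy as np

def blend_deformation(p, node_centers, node_translations, sigma=0.5):
    """Displace point p by a Gaussian-weighted blend of per-patch
    translations, as estimated by a per-patch ICP step. Weights fall off
    with distance to each patch center, so nearby patches dominate."""
    d2 = ((node_centers - p) ** 2).sum(1)     # squared distances to nodes
    w = np.exp(-d2 / (2 * sigma * sigma))     # Gaussian weights
    w /= w.sum()                              # normalize to a partition
    return p + (w[:, None] * node_translations).sum(0)
```

Because the weights are smooth in p, neighboring patches with slightly different estimates blend without seams, which is the purpose of the deformation-graph construction in the paper.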


2018 Conference article Open Access OPEN

Soft transparency for point cloud rendering
Seemann P., Palma G., Dellepiane M., Cignoni P., Goesele M.
We propose a novel rendering framework for visualizing point data with complex structures and/or varying data quality. The point cloud can be characterized by setting a per-point scalar field associated with the aspect that differentiates the parts of the dataset (e.g. the uncertainty given by local normal variation). Our rendering method uses the scalar field to render points as solid splats or semi-transparent spheres with non-uniform density to produce the final image. To that end, we derive a base model for integrating density in (intersecting) spheres for both the uniform and non-uniform setting and introduce a simple and fast approximation which yields interactive rendering speeds for millions of points. Because our method only relies on the basic OpenGL rasterization pipeline, rendering properties can be adjusted in real time by the user. The method has been tested on several datasets with different characteristics, and user studies show that a clearer understanding of the scene is possible in comparison with point splatting techniques and basic transparency rendering.
Source: Eurographics Symposium on Rendering - Experimental Ideas & Implementations, pp. 95–106, Karlsruhe, Germany, 1-4 July 2018
DOI: 10.2312/sre.20181176

See at: diglib.eg.org Open Access | ISTI Repository Open Access | CNR ExploRA Open Access
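The uniform-density base model mentioned in the abstract has a simple closed form: the opacity a ray accumulates through a constant-density sphere depends only on the chord length it cuts, via Beer-Lambert attenuation. A sketch of that base case (the paper also derives a non-uniform variant and a fast approximation, which this does not cover):

```python
import numpy as np

def sphere_chord(o, d, c, r):
    """Length of the chord a ray (origin o, unit direction d) cuts
    through a sphere (center c, radius r); 0 if the ray misses."""
    oc = o - c
    b = oc @ d
    disc = b * b - (oc @ oc - r * r)    # quadratic discriminant
    return 2.0 * np.sqrt(disc) if disc > 0 else 0.0

def splat_alpha(o, d, c, r, sigma):
    """Opacity of a semi-transparent sphere splat with uniform density
    sigma: Beer-Lambert attenuation over the chord length."""
    return 1.0 - np.exp(-sigma * sphere_chord(o, d, c, r))
```

Rays grazing the sphere get a short chord and hence low opacity, which is what produces the soft, fuzzy silhouette compared with hard splats.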


2018 Report Open Access OPEN

High dynamic range expansion of point clouds for real-time relighting
Sabbadin M., Palma G., Banterle F., Boubekeur T., Cignoni P.
Acquired 3D point clouds enable quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the genuine light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First of all, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may only cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings on the original cloud. At this stage, we propagate the expansion to the regions that are not covered by the renderings or with low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance.
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to the perfect ground truth. We also report experiments on real captured data, covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.
Source: ISTI Technical reports, 2018

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2018 Conference article Open Access OPEN

The EMOTIVE Project - Emotive virtual cultural experiences through personalized storytelling
Katifori A., Roussou M., Perry S., Cignoni P., Malomo L., Palma G., Dretakis G., Vizcay S.
This work presents an overview of the EU-funded project EMOTIVE (Emotive virtual cultural experiences through personalized storytelling). EMOTIVE works from the premise that cultural sites are, in fact, highly emotional places, seedbeds not just of knowledge, but of emotional resonance and human connection. From 2016 to 2019, the EMOTIVE consortium will research, design, develop and evaluate methods and tools that can support the cultural and creative industries in creating narratives and experiences which draw on the power of 'emotive storytelling', both on site and virtually. This work focuses on the project objectives and results so far and presents the challenges identified.
Source: CI 2018 - Workshop on Cultural Informatics, co-located with the International Conference on Digital Heritage 2018 (EuroMed 2018), pp. 11–20, Nicosia, Cyprus, November 3, 2018
Project(s): EMOTIVE via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2017 Contribution to book Restricted

Realizzazione del sistema interattivo 'Loggia digitale'
Siotto E., Palma G., Scopigno R.
The VC Lab has developed, in collaboration with the Accademia Nazionale dei Lincei, the interactive digital system of the Loggia of Cupid and Psyche for the exhibition 'The Loggia of Cupid and Psyche - Raffaello and Giovanni da Udine - Colours of Prosperity: Fruits from the Old and New World' (Villa Farnesina, Rome, April 20 - July 20, 2017). The system gives access to the 'digital Loggia' and allows the visitor to navigate freely through the high-resolution panoramic image of the painted ceiling, to admire it from a closer point of view and to consult the results of the historical, botanical and scientific analyses performed on the selected species. The system is available online and on an interactive kiosk in the Farnesina building.
Source: La Loggia di Amore e Psiche - Raffaello e Giovanni da Udine - I colori della prosperità: Frutti dal Vecchio e Nuovo Mondo, pp. 74–77, 2017

See at: CNR ExploRA Restricted | vcg.isti.cnr.it Restricted


2017 Contribution to book Restricted

Development of the interactive system 'digital Loggia'
Siotto E., Palma G., Scopigno R.
The VC Lab has developed, in collaboration with the Accademia Nazionale dei Lincei, the interactive digital system of the Loggia of Cupid and Psyche for the exhibition 'The Loggia of Cupid and Psyche - Raffaello and Giovanni da Udine - Colours of Prosperity: Fruits from the Old and New World' (Villa Farnesina, Rome, April 20 - July 20, 2017). The system gives access to the 'digital Loggia' and allows the visitor to navigate freely through the high-resolution panoramic image of the painted ceiling, to admire it from a closer point of view and to consult the results of the historical, botanical and scientific analyses performed on the selected species. The system is available online and on an interactive kiosk in the Farnesina building.
Source: The Loggia of Cupid and Psyche - Raffaello and Giovanni da Udine - Colours of prosperity: Fruits from the Old and New World, pp. 74–77, 2017

See at: CNR ExploRA Restricted | vcg.isti.cnr.it Restricted


2017 Contribution to book Restricted

Documentazione e analisi delle deformazioni del supporto ligneo e della superficie pittorica mediante rilievo 3D
Pingi P., Siotto E., Palma G., Scopigno R.
Contrary to what one might think, a painting on canvas or on panel is not an object with a perfectly planar surface, but is characterized by a complex three-dimensionality. The colour that the artist lays on the support has its own material body, a thickness that, although millimetric or sub-millimetric, can be measured with three-dimensional (3D) measurement tools and applications. At the same time, the wooden support may show deformations linked to historical and conservation vicissitudes, which can be easily detected and documented. In the analysis phase of a work undergoing a major restoration, as happened for Leonardo da Vinci's Adorazione dei Magi, an accurate 3D documentation of the painted surface is therefore closely tied to that of its wooden support. A scrupulous 3D geometric acquisition of the planking and of its connecting elements (butterflies and dowels) and supports (crossbeams) can thus provide useful information not only for a better knowledge of the making of the work and of its state of conservation, but also for monitoring it over time or during restoration. Moreover, an appropriate use of modern 3D computer graphics technologies is not only a valuable diagnostic aid for the knowledge of the work, but also a means to collect scientific and educational information (for example historical-artistic and technical data, and results of chemical-physical analyses) and make it easily accessible online to professionals and to a wider public, thanks to purpose-built multimedia systems.
In the case of Leonardo's unfinished masterpiece, a complete high-resolution 3D acquisition was performed with the aim of highlighting and measuring, during the pictorial restoration, a map of the deviations from planarity caused by the curvature and deformation of the wooden boards, making it possible to document the spatial deformation undergone by the painting and to monitor its state of conservation.
Source: Il restauro dell'Adorazione dei Magi di Leonardo - La riscoperta di un capolavoro, edited by Marco Ciatti, Cecilia Frosinini, pp. 281–286. Firenze: Edifir - Edizioni Firenze s.r.l., 2017

See at: CNR ExploRA Restricted


2016 Report Restricted

Temporal appearance change detection using multi-view image acquisition
Palma G., Banterle F., Cignoni P.
Appearance change detection is a very important task for applications that monitor the degradation process of a surface. This is especially true in Cultural Heritage (CH), where the main goal is to control the preservation condition of an artifact. We propose an automatic solution, based on the estimation of an explicit parametric reflectance model, that can help the user detect the regions affected by appearance changes. The idea is to acquire multi-view photo datasets at different times and to compute the 3D model and the Surface Light Field (SLF) of the object for each acquisition. Then, we compare the SLFs over time using a weighting scheme that takes into account small lighting variations and small misalignments. The obtained results give several cues on the changed areas. In addition, we believe that these can be used as a good starting point for further investigations.
Source: ISTI Technical reports, 2016
Project(s): HARVEST4D via OpenAIRE

See at: CNR ExploRA Restricted


2016 Journal article Open Access OPEN

Detection of geometric temporal changes in point clouds
Palma G., Cignoni P., Boubekeur T., Scopigno R.
Detecting geometric changes between two 3D captures of the same location performed at different moments is a critical operation for all systems requiring a precise segmentation between change and no-change regions. Such application scenarios include 3D surface reconstruction, environment monitoring, natural events management and forensic science. Unfortunately, typical 3D scanning setups cannot provide any one-to-one mapping between measured samples in static regions: in particular, both extrinsic and intrinsic sensor parameters may vary over time while sensor noise and outliers additionally corrupt the data. In this paper, we adopt a multi-scale approach to robustly tackle these issues. Starting from two point clouds, we first remove outliers using a probabilistic operator. Then, we detect the actual change using the implicit surface defined by the point clouds under a Growing Least Square reconstruction that, compared to the classical proximity measure, offers a more robust change/no-change characterization near the temporal intersection of the scans and in the areas exhibiting different sampling density and direction. The resulting classification is enhanced with a spatial reasoning step to solve critical geometric configurations that are common in man-made environments. We validate our approach on a synthetic test case and on a collection of real data sets acquired using commodity hardware. Finally, we show how 3D reconstruction benefits from the resulting precise change/no-change segmentation.
Source: Computer graphics forum (Print) 35 (2016): 33–45. doi:10.1111/cgf.12730
DOI: 10.1111/cgf.12730
Project(s): HARVEST4D via OpenAIRE

See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | Hyper Article en Ligne Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA Restricted
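As a point of reference for the abstract above, the "classical proximity measure" that the paper improves on can be sketched in a few lines: a point of the second scan is flagged as changed when its nearest neighbour in the first scan is farther than a threshold. This brute-force O(N·M) baseline is for illustration only and has none of the paper's robustness to sampling density, noise or outliers:

```python
import numpy as np

def classify_change(cloud_t0, cloud_t1, tau):
    """Baseline change classifier between two point clouds (M,3) and
    (N,3): for each point of cloud_t1, compute the distance to its
    nearest neighbour in cloud_t0 and flag it as 'change' when that
    distance exceeds tau. Returns an (N,) boolean mask."""
    d2 = ((cloud_t1[:, None, :] - cloud_t0[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)) > tau
```

The implicit-surface distance used in the paper replaces this raw point-to-point proximity, which is exactly where the baseline fails near scan boundaries and in regions of uneven sampling.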


2016 Conference article Restricted

Multi-view ambient occlusion for enhancing visualization of raw scanning data
Sabbadin M., Palma G., Cignoni P., Scopigno R.
The correct understanding of the 3D shape is a crucial aspect for improving the 3D scanning process, especially in order to perform high-quality and as-complete-as-possible 3D acquisitions in the field. The paper proposes a new technique to enhance the visualization of raw scanning data, based on the definition in device space of a Multi-View Ambient Occlusion (MVAO). The approach improves the comprehension of the 3D shape of the input geometry and, requiring almost no preprocessing, can be directly applied to raw captured point clouds. The algorithm has been tested on different datasets: high-resolution Time-of-Flight scans and streams of low-quality range maps from a depth camera. The results enhance the perception of details in the 3D geometry, using the multi-view information to make the ambient occlusion estimation more robust.
Source: Eurographics Workshop on Graphics and Cultural Heritage, pp. 23–32, Genova, Italy, 5-7 October 2016
DOI: 10.2312/gch.20161379
Project(s): HARVEST4D via OpenAIRE

See at: diglib.eg.org Restricted | CNR ExploRA Restricted
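The core intuition of the record above, combining screen-space occlusion estimates from several views to stabilize the ambient occlusion of each point, can be illustrated with a simple visibility-weighted average. The weighting and formula below are assumptions for illustration, not the paper's device-space MVAO definition:

```python
import numpy as np

def multi_view_ao(per_view_occlusion, per_view_visibility):
    """Combine per-view occlusion estimates into one value per point.
    per_view_occlusion: (V, N) occlusion estimated in each of V views;
    per_view_visibility: (V, N) 0/1 mask of where each point was visible.
    Returns an (N,) visibility-weighted average; points seen in more
    views get an estimate averaged over more independent samples."""
    occ = np.asarray(per_view_occlusion, float)
    vis = np.asarray(per_view_visibility, float)
    w = vis.sum(0)                                   # views seeing each point
    return np.where(w > 0, (occ * vis).sum(0) / np.maximum(w, 1), 0.0)
```

Averaging over views is what suppresses the single-view artifacts (screen borders, occluded regions) that make per-frame screen-space AO unstable on raw scans.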


2015 Conference article Restricted

Digital Study and Web-based Documentation of the Colour and Gilding on Ancient Marble Artworks
Siotto E., Palma G., Potenziani M., Scopigno R.
Greek and Roman marble artworks have been deeply studied from a typological and stylistic point of view, while there is still limited knowledge of the pigments, dyes, binders and technical expedients used by Roman artists. Within a renewed scientific interest in ancient polychromy (colour and gilding), a digital, methodological and multidisciplinary approach can provide valuable information to better investigate and understand this fundamental aspect and to get a complete sense of Greek and Roman marble artworks. Following this research direction, the paper proposes a systematic methodological process defined to detect, document and visualize the preserved (and in some cases digitally reconstructed) original colour and gilding on Roman marble sarcophagi (II-IV century AD). The process defines a working pipeline that, starting from the selection of the artefact to study, proposes a set of investigation steps to improve our knowledge of its original painting. These steps include direct virtual inspection, archaeological and historical research, on-site scientific investigation by multispectral imaging, spectroscopic and elemental analysis (possibly supported by micro-invasive techniques performed in the laboratory), and the accurate acquisition of the polychrome surface through colour-calibrated 2D images. All the data produced are integrated with a high-resolution 3D model to support enhanced analysis and comparison and to create a digital 3D polychrome reconstruction by virtual painting. Finally, all these data are also made accessible on the web by using a cutting-edge platform for visual media publication and interactive 3D visualization. This systematic and multidisciplinary process was tested on the so-called 'Annona sarcophagus' (Museo Nazionale Romano - Palazzo Massimo, inv. no. 40799).
Source: Digital Heritage International Congress, pp. 239–246, Granada, 28/09/2015-02/10/2015
DOI: 10.1109/digitalheritage.2015.7413877
Project(s): ARIADNE via OpenAIRE

See at: academic.microsoft.com Restricted | diglib.eg.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | vcg.isti.cnr.it Restricted | xplorestaging.ieee.org Restricted