36 result(s)
2008 Software Unknown
RTIViewer - a tool for remote browsing of images created with reflectance transformation techniques.
Cignoni P., Corsini M., Palma G., Scopigno R.
The RTI Viewer allows you to load and examine images created with reflectance transformation techniques. The tool supports the following formats, collectively called RTI files: Polynomial Texture Maps (PTM files), Hemispherical Harmonics Maps (HSH files), and Universal Reflectance Transformation Imaging (URTI files). The viewer can display both single-view and multi-view images; a multi-view RTI is a collection of single-view images together with optical flow data that generates intermediate views.

See at: CNR ExploRA


2010 Journal article Restricted
Dynamic shading enhancement for reflectance transformation imaging
Palma G., Corsini M., Cignoni P., Scopigno R., Mudge M.
We propose a set of dynamic shading enhancement techniques for improving the perception of details, features, and overall shape characteristics from images created with Reflectance Transformation Imaging (RTI) techniques. Selection of these perceptual enhancement filters can significantly improve the user's ability to interactively inspect the content of 2D RTI media by zooming, panning, and changing the illumination direction. In particular, we present two groups of strategies for RTI image enhancement based on two main ideas: exploiting the unsharp masking methodology in the RTI-specific context; and locally optimizing the incident light direction for improved RTI image sharpness and illumination of surface features. The Results section presents a number of datasets and compares them with existing techniques.
Source: ACM journal on computing and cultural heritage (Print) 3 (2010): 1–20. doi:10.1145/1841317.1841321
DOI: 10.1145/1841317.1841321


See at: Journal on Computing and Cultural Heritage Restricted | CNR ExploRA
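The two ingredients named in the abstract above can be sketched in a few lines: evaluating the standard biquadratic PTM model for a chosen light direction, then boosting the high-frequency residual of the resulting luminance. This is a minimal illustration of the unsharp-masking idea, not the paper's actual filters; the function names are hypothetical.

```python
import numpy as np

def evaluate_ptm(coeffs, lu, lv):
    """Evaluate a Polynomial Texture Map pixel for light direction (lu, lv).

    coeffs: array of shape (..., 6) holding the six PTM coefficients
    a0..a5 of the standard biquadratic model:
    L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    """
    a = np.asarray(coeffs, dtype=float)
    return (a[..., 0] * lu * lu + a[..., 1] * lv * lv +
            a[..., 2] * lu * lv + a[..., 3] * lu +
            a[..., 4] * lv + a[..., 5])

def unsharp_enhance(luminance, strength=2.0):
    """Unsharp masking: add back the high-frequency residual
    (image minus a cheap 3x3 box blur) scaled by `strength`."""
    img = np.asarray(luminance, dtype=float)
    pad = np.pad(img, 1, mode='edge')
    # 3x3 box blur as a simple low-pass filter
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + strength * (img - blur)
```

Relighting the PTM while re-running the enhancement for each new light direction is what makes the inspection dynamic.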


2018 Conference article Open Access OPEN
The EMOTIVE Project - Emotive virtual cultural experiences through personalized storytelling
Katifori A., Roussou M., Perry S., Cignoni P., Malomo L., Palma G., Drettakis G., Vizcay S.
This work presents an overview of the EU-funded project EMOTIVE (Emotive virtual cultural experiences through personalized storytelling). EMOTIVE works from the premise that cultural sites are, in fact, highly emotional places, seedbeds not just of knowledge, but of emotional resonance and human connection. From 2016-2019, the EMOTIVE consortium will research, design, develop and evaluate methods and tools that can support the cultural and creative industries in creating narratives and experiences which draw on the power of 'emotive storytelling', both on site and virtually. This work focuses on the project objectives and results so far and presents identified challenges.
Source: CI 2018 - Workshop on Cultural Informatics, co-located with the International Conference on Digital Heritage 2018 (EuroMed 2018), pp. 11–20, Nicosia, Cyprus, November 3, 2018
Project(s): EMOTIVE via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2019 Report Open Access OPEN
Augmented reality experience with physical artefacts
Palma G., Cignoni P.
This technical report presents a system to improve the engagement of the user in a virtual reality experience using inexpensive physical copies of real artefacts, made with cheap 3D fabrication technologies. Based on a combination of hardware and software components, the proposed system gives the user the possibility to interact with the physical replica in the virtual environment and to see the appearance of the original artefact. In this way, we overcome a current limitation of cheap 3D fabrication technologies: the lack of faithful appearance reproduction. Moreover, using a consumer device for real-time hand tracking and a custom electronic controller for capacitive touch sensing, the system permits the creation of virtual experiences where the user can change the virtual appearance of the object with their hand, using a set of personalization actions selectable from a physical 3D-printed palette.
Source: ISTI Technical reports, 2019
Project(s): EMOTIVE via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA


2020 Contribution to book Closed Access
Una Loggia digitale al tempo del COVID-19
Palma G., Siotto E.
After illustrating the conception and evolution of the project "Una loggia digitale per Raffaello e collaboratori in Villa Farnesina, Roma" on behalf of the Accademia Nazionale dei Lincei and CNR-ISTI, the chapter describes the design and development phases of the interactive digital system, how the data were acquired and how the model was created, and, finally, offers a guide to the use of the interactive system, organized on two levels of detail (http://vcg.isti.cnr.it/farnesina/loggia/).
Source: Raffaello in Villa Farnesina: Galatea e Psiche, edited by A. Sgamellotti, V. Lapenta, C. Anselmi, C. Seccaroni, pp. 89–96. Roma: Bardi Editore, 2020

See at: vcg.isti.cnr.it Restricted | CNR ExploRA


2020 Contribution to book Closed Access
A digital Loggia at the time of COVID-19
Palma G., Siotto E.
The chapter starts by illustrating the conception and evolution of the project "A digital loggia for Raphael and collaborators in Villa Farnesina, Rome" on behalf of the Accademia Nazionale dei Lincei and ISTI-CNR. It then describes the design and development phases of the interactive digital system, how the data were acquired and how the model was created, and, finally, offers a guide to the use of the interactive system, organized on two levels of detail (http://vcg.isti.cnr.it/farnesina/loggia/).
Source: Raphael in Villa Farnesina: Galatea and Psyche, edited by A. Sgamellotti, V. Lapenta, C. Anselmi, C. Seccaroni, pp. 91–98. Roma: Bardi Editore, 2020

See at: vcg.isti.cnr.it Restricted | CNR ExploRA


2017 Contribution to book Restricted
Realizzazione del sistema interattivo 'Loggia digitale'
Siotto E., Palma G., Scopigno R.
The VC Lab has developed, in collaboration with the Accademia Nazionale dei Lincei, the Interactive Digital System of the Loggia of Cupid and Psyche within the exhibition 'The Loggia of Cupid and Psyche - Raffaello and Giovanni da Udine - Colours of Prosperity: Fruits from the Old and New World' (Villa Farnesina, Rome, April 20 - July 20, 2017). The system allows access to the 'digital Loggia' and permits the visitor to navigate freely through the high-resolution panoramic image of the painted ceiling, to admire it from a closer point of view and to consult the results of historical, botanical and scientific analyses performed on the selected species. The system is available online and as an interactive kiosk in the Farnesina building.
Source: La Loggia di Amore e Psiche - Raffaello e Giovanni da Udine - I colori della prosperità: Frutti dal Vecchio e Nuovo Mondo, pp. 74–77, 2017

See at: vcg.isti.cnr.it Restricted | CNR ExploRA


2017 Contribution to book Restricted
Development of the interactive system 'digital Loggia'
Siotto E., Palma G., Scopigno R.
The VC Lab has developed, in collaboration with the Accademia Nazionale dei Lincei, the Interactive Digital System of the Loggia of Cupid and Psyche within the exhibition 'The Loggia of Cupid and Psyche - Raffaello and Giovanni da Udine - Colours of Prosperity: Fruits from the Old and New World' (Villa Farnesina, Rome, April 20 - July 20, 2017). The system allows access to the 'digital Loggia' and permits the visitor to navigate freely through the high-resolution panoramic image of the painted ceiling, to admire it from a closer point of view and to consult the results of historical, botanical and scientific analyses performed on the selected species. The system is available online and as an interactive kiosk in the Farnesina building.
Source: The Loggia of Cupid and Psyche - Raffaello and Giovanni da Udine - Colours of prosperity: Fruits from the Old and New World, pp. 74–77, 2017

See at: vcg.isti.cnr.it Restricted | CNR ExploRA


2010 Conference article Unknown
Geometry-aware video registration
Palma G., Callieri M., Dellepiane M., Corsini M., Scopigno R.
We present a new method for the accurate registration of video sequences of a real object over its dense triangular mesh. The goal is to obtain an accurate video-to-geometry registration that allows bidirectional data transfer between the 3D model and the video using the perspective projection defined by the camera model. Our solution uses two different approaches: feature-based registration by KLT video tracking, and statistics-based registration by maximizing the Mutual Information (MI) between the gradient of the frame and the gradient of a rendering of the 3D model with some illumination-related properties, such as surface normals and ambient occlusion. While the first approach allows a fast registration of short sequences with simple camera movements, the MI is used to correct the drift that the KLT tracker produces over long sequences, due to the incremental tracking and the camera motion. We demonstrate, using synthetic sequences, that the alignment error obtained with our method is smaller than the one introduced by KLT, and we show the results of some interesting and challenging real sequences of objects of different sizes, acquired under different conditions.
Source: 15th International Workshop on Vision, Modeling and Visualization, pp. 1–8, Siegen, November 2010

See at: CNR ExploRA
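The mutual-information term that drives the statistics-based registration above can be illustrated with a joint-histogram estimate. This is a minimal sketch on raw grayscale intensities; the paper maximizes MI between image gradients and renderings of geometric properties, which this toy function does not attempt.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=16):
    """Mutual information between two equally sized grayscale images,
    estimated from their joint intensity histogram:
    MI = sum p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    a = np.asarray(img_a).ravel()
    b = np.asarray(img_b).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration optimizer would perturb the camera parameters, re-render the model, and keep the pose that maximizes this score against the current frame.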


2012 Journal article Open Access OPEN
A statistical method for SVBRDF approximation from video sequences in general lighting conditions
Palma G., Callieri M., Dellepiane M., Scopigno R.
We present a statistical method for the estimation of the Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) of an object with complex geometry, starting from video sequences acquired under fixed but general lighting conditions. The aim of this work is to define a method that simplifies the acquisition phase of the object's surface appearance and allows the reconstruction of an approximated SVBRDF. The final output is suitable to be used with a 3D model of the object to obtain accurate and photo-realistic renderings. The method is composed of three steps: the approximation of the environment map of the acquisition scene, using the object itself as a probe; the estimation of the diffuse color of the object; and the estimation of the specular components of the main materials of the object, using a Phong model. All the steps are based on statistical analysis of the color samples projected by the video sequences onto the surface of the object. Although the method presents some limitations, the trade-off between ease of acquisition and the obtained results makes it useful for practical applications.
Source: Computer graphics forum (Online) 31 (2012): 1491–1500. doi:10.1111/j.1467-8659.2012.03145.x
DOI: 10.1111/j.1467-8659.2012.03145.x
Project(s): 3D-COFORM via OpenAIRE


See at: Computer Graphics Forum Open Access | Computer Graphics Forum Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA
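The Phong model named in the abstract is a standard specular term; a minimal evaluation sketch follows, with illustrative parameter names (the paper fits `ks` and `shininess` statistically, which is not shown here).

```python
import numpy as np

def phong_specular(light_dir, view_dir, normal, ks, shininess):
    """Specular term of the Phong reflection model: ks * max(R.V, 0)^n,
    where R is the mirror reflection of the light direction about the normal."""
    l = np.asarray(light_dir, float); l = l / np.linalg.norm(l)
    v = np.asarray(view_dir, float);  v = v / np.linalg.norm(v)
    n = np.asarray(normal, float);    n = n / np.linalg.norm(n)
    r = 2.0 * np.dot(n, l) * n - l    # mirror reflection of L about N
    return ks * max(np.dot(r, v), 0.0) ** shininess
```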


2012 Conference article Restricted
Insourcing, outsourcing and crowdsourcing 3D collection formation: perspectives for cultural heritage sites
Kaminski J., Echavarria K. R., Arnold D., Palma G., Scopigno R., Proesmans M., Stevenson J.
This paper presents three different propositions for cultural heritage organisations on how to digitise objects in 3D. It is based on the practical evaluation of three different deployment experiments that use different methods and business models for mass 3D acquisition. These models are: developing the skills of in-house staff within an organisation, the use of external professionals, and using crowdsourcing as a mechanism for developing the 3D collection. Furthermore, the paper provides an analysis of these models, lessons learned and practical recommendations for cultural heritage organisations. The analysis includes considerations of issues such as strategy, size of the organisation, skills, equipment, object accessibility and complexity, as well as the cost, time and quality of the 3D technology. The paper concludes that most organisations are able to develop 3D collections, but variations in the results will reflect the strategic emphasis they place on innovative 3D technologies.
Source: The 13th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage, pp. 81–88, Brighton, 19-21 November 2012
DOI: 10.2312/vast/vast12/081-088
Project(s): 3D-COFORM via OpenAIRE


See at: diglib.eg.org Restricted | CNR ExploRA


2013 Conference article Unknown
Surface light field from video acquired in uncontrolled settings
Palma G., Desogus N., Cignoni P., Scopigno R.
This paper presents an algorithm for the estimation of the Surface Light Field using video sequences acquired by moving the camera around the object. Unlike other state-of-the-art methods, it does not require a uniform sampling density of the view directions, but is able to build an approximation of the Surface Light Field starting from a biased video acquisition: dense along the camera path and completely missing in the other directions. The main idea is to separate the estimation of two components: the diffuse color, computed using statistical operations that allow the estimation of a rough approximation of the direction of the main light sources in the acquisition environment; and the residual Surface Light Field effects, modeled as a linear combination of spherical functions. Qualitative and numerical evaluations show that the final rendering results have a high fidelity and similarity with the input video frames, without ringing and banding effects.
Source: Digital Heritage 2013, pp. 31–38, Marseille, France, 28 October - 1 November 2013
Project(s): HARVEST4D via OpenAIRE

See at: CNR ExploRA
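The second component described above, view-dependent residuals expressed as a linear combination of spherical functions, can be sketched with a low-order real spherical-harmonics fit. This is an illustrative least-squares stand-in under that assumption, not the paper's estimator:

```python
import numpy as np

def sh_basis(dirs):
    """First two real spherical-harmonics bands (4 basis functions)
    evaluated at unit view directions of shape (n, 3)."""
    d = np.asarray(dirs, float)
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    c0 = 0.28209479177   # 1 / (2*sqrt(pi))
    c1 = 0.48860251190   # sqrt(3) / (2*sqrt(pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def fit_residual(dirs, residuals):
    """Least-squares fit of per-point view-dependent residuals
    (observed color minus diffuse color) as a linear combination of
    spherical basis functions, one coefficient column per channel."""
    basis = sh_basis(dirs)                           # (n, 4)
    coeffs, *_ = np.linalg.lstsq(basis, residuals, rcond=None)
    return coeffs                                    # (4, channels)
```

Rendering then evaluates `diffuse + sh_basis(view_dir) @ coeffs` per surface point, which stays well-behaved even where the view sampling is sparse.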


2014 Report Unknown
Automatic detection of geometric changes in time varying point clouds
Palma G., Cignoni P., Tamy B., Scopigno R.
The detection of geometric changes in 4D data is an important task for all applications that need to segment the input geometry into static and dynamic areas, for example cleaning the input clouds of objects that move or disappear between time steps, or analysing and studying the dynamic part to model the type of change. In this paper we present a novel algorithm to solve this problem that takes as input two point clouds of the same environment acquired at different moments. The core of the method is the computation of the differences between the point clouds using a multi-scale comparison of the implicit surfaces defined with the Growing Least Squares framework. The obtained results are then further processed to make the segmentation more robust in some critical geometric configurations that are very common in man-made environments. The final segmentation shows an accurate detection of the real changes in the scene.
Source: ISTI Technical reports, 2014
Project(s): HARVEST4D via OpenAIRE

See at: CNR ExploRA
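The static/dynamic segmentation task described above can be caricatured with a brute-force multi-scale threshold test. This is a crude stand-in for the Growing Least Squares implicit-surface comparison, with hypothetical names, default scales, and a plain nearest-point distance in place of the implicit-surface distance:

```python
import numpy as np

def detect_changes(points_a, points_b, scales=(0.1, 0.2, 0.4), k=1.0):
    """Label a point of cloud A as changed when its distance to the
    nearest point of cloud B exceeds k*scale at every scale, so that
    only differences persisting across scales count as real changes."""
    a = np.asarray(points_a, float)[:, None, :]          # (nA, 1, 3)
    b = np.asarray(points_b, float)[None, :, :]          # (1, nB, 3)
    nearest = np.sqrt(((a - b) ** 2).sum(-1)).min(axis=1)  # (nA,)
    thresholds = k * np.asarray(scales)
    return np.all(nearest[:, None] > thresholds[None, :], axis=1)
```

The requirement that the distance exceed the threshold at all scales is what filters out sampling noise while keeping genuinely moved or removed geometry.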


2016 Report Unknown
Temporal appearance change detection using multi-view image acquisition
Palma G., Banterle F., Cignoni P.
Appearance change detection is a very important task for applications monitoring the degradation process of a surface. This is especially true in Cultural Heritage (CH), where the main goal is to control the preservation condition of an artifact. We propose an automatic solution, based on the estimation of an explicit parametric reflectance model, that can help the user detect the regions that are affected by appearance changes. The idea is to acquire multi-view photo datasets at different times and to compute the 3D model and the Surface Light Field (SLF) of the object for each acquisition. Then, we compare the SLFs over time using a weighting scheme that takes into account small lighting variations and small misalignments. The obtained results give several cues on the changed areas. In addition, we believe that they can be used as a good starting point for further investigations.
Source: ISTI Technical reports, 2016
Project(s): HARVEST4D via OpenAIRE

See at: CNR ExploRA


2016 Conference article Restricted
Multi-view ambient occlusion for enhancing visualization of raw scanning data
Sabbadin M., Palma G., Cignoni P., Scopigno R.
A correct understanding of the 3D shape is a crucial aspect in improving the 3D scanning process, especially in order to perform high-quality and as complete as possible 3D acquisitions in the field. The paper proposes a new technique to enhance the visualization of raw scanning data, based on the definition in device space of a Multi-View Ambient Occlusion (MVAO). The approach improves the comprehension of the 3D shape of the input geometry and, requiring almost no preprocessing, can be directly applied to raw captured point clouds. The algorithm has been tested on different datasets: high-resolution Time-of-Flight scans and streams of low-quality range maps from a depth camera. The results enhance the perception of details in the 3D geometry, using the multi-view information to make the ambient occlusion estimation more robust.
Source: Eurographics Workshop on Graphics and Cultural Heritage, pp. 23–32, Genova, Italy, 5-7 October 2016
DOI: 10.2312/gch.20161379
Project(s): HARVEST4D via OpenAIRE


See at: diglib.eg.org Restricted | CNR ExploRA


2018 Journal article Open Access OPEN
Enhanced visualization of detected 3D geometric differences
Palma G., Sabbadin M., Corsini M., Cignoni P.
The wide availability of 3D acquisition devices makes their use for shape monitoring viable. Current techniques for the analysis of time-varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we want to show the original appearance of the 3D model at the same time. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences that have been detected as significant. Additionally, the same technique is able to visually hide the other negligible, yet visible, variations. The main idea is to use two distinct screen-space, time-based interpolation functions for the significant 3D differences and for the small variations to hide. We have validated the proposed approach in a user study on different classes of datasets, proving the objective and subjective effectiveness of the method.
Source: Computer graphics forum (Online) 35 (2018): 159–171. doi:10.1111/cgf.13239
DOI: 10.1111/cgf.13239
Project(s): HARVEST4D via OpenAIRE


See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA
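The main idea above, two distinct time-based interpolation functions so that significant differences pop while negligible variations fade unnoticed, can be sketched as a pair of blend-weight curves. The curve shapes and the 0.45/0.1 window are illustrative choices, not the paper's calibrated functions:

```python
def blend_weights(t, significant):
    """Cross-fade weight between the two models at animation phase
    t in [0, 1]. Significant differences get a sharp, step-like
    transition centred at t = 0.5 so the change pops visually;
    negligible variations get a slow linear fade that hides them."""
    if significant:
        # smoothstep compressed into a narrow window around t = 0.5
        s = min(max((t - 0.45) / 0.1, 0.0), 1.0)
        return s * s * (3 - 2 * s)
    return t  # gradual change is hard to notice perceptually
```

Per pixel, the renderer would pick the curve according to whether the detected difference at that screen location was classified as significant.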


2018 Conference article Open Access OPEN
Soft transparency for point cloud rendering
Seemann P., Palma G., Dellepiane M., Cignoni P., Goesele M.
We propose a novel rendering framework for visualizing point data with complex structures and/or varying data quality. The point cloud can be characterized by setting a per-point scalar field associated with the aspect that differentiates the parts of the dataset (i.e. uncertainty given by local normal variation). Our rendering method uses the scalar field to render points as solid splats or semi-transparent spheres with non-uniform density to produce the final image. To that end, we derive a base model for integrating density in (intersecting) spheres for both the uniform and non-uniform setting and introduce a simple and fast approximation which yields interactive rendering speeds for millions of points. Because our method only relies on the basic OpenGL rasterization pipeline, rendering properties can be adjusted in real time by the user. The method has been tested on several datasets with different characteristics, and user studies show that a clearer understanding of the scene is possible in comparison with point splatting techniques and basic transparency rendering.
Source: Eurographics Symposium on Rendering - Experimental Ideas & Implementations, pp. 95–106, Karlsruhe, Germany, 1-4 July 2018
DOI: 10.2312/sre.20181176


See at: diglib.eg.org Open Access | ISTI Repository Open Access | CNR ExploRA
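For the uniform-density case, integrating density along a view ray through a sphere reduces to Beer-Lambert attenuation over the ray/sphere chord length. A minimal sketch of that base case (assumes the ray origin lies outside the sphere; the paper's non-uniform and intersecting-sphere cases are not shown):

```python
import math

def sphere_transmittance(ray_origin, ray_dir, center, radius, sigma):
    """Fraction of light transmitted along a ray through a sphere of
    uniform density sigma: T = exp(-sigma * chord_length), with the
    chord length from the standard ray/sphere intersection."""
    ox = [o - c for o, c in zip(ray_origin, center)]
    norm = math.sqrt(sum(d * d for d in ray_dir))
    d = [x / norm for x in ray_dir]          # normalize direction
    b = sum(o * di for o, di in zip(ox, d))  # half-coefficient of t
    c = sum(o * o for o in ox) - radius * radius
    disc = b * b - c
    if disc <= 0.0:                          # ray misses the sphere
        return 1.0
    chord = 2.0 * math.sqrt(disc)            # exit t minus entry t
    return math.exp(-sigma * chord)
```

Rays grazing the silhouette see a short chord and stay nearly transparent, which is what produces the soft, volumetric look of the spheres.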


2020 Contribution to book Unknown
Il rilievo 3D per la caratterizzazione morfologica dell'opera di Raffaello
Pingi P., Siotto E., Palma G.
Among the non-invasive diagnostic analyses carried out in support of the restoration of Raphael's panel depicting Pope Leo X de' Medici between the cardinals Giulio de' Medici and Luigi de' Rossi, a three-dimensional (3D) survey of the entire work was performed. Besides being used to take measurements of the surface shape and serving as a valuable support for the knowledge and study of the work, the 3D survey is also an effective means of monitoring its state of conservation over time. In this case, the 3D acquisition was aimed at evaluating the deformation of the wooden support and at studying the deterioration of the painted surface. For this reason the entire work (front, back and edges) was acquired with an average sampling step of 0.3 mm. Some areas were also acquired at a resolution of 0.16 mm in order to develop an automatic method capable of highlighting the micro-fractures of the paint layer.
Source: Raffaello e il ritorno del Papa Medici: restauri e scoperte sul Ritratto di Leone X con i due cardinali, edited by Marco Ciatti, Eike D. Schmidt, pp. 145–149. Firenze: Edifir - Edizioni Firenze s.r.l., 2020

See at: CNR ExploRA


2022 Contribution to book Restricted
Temporal deformation analysis of 3D models as diagnostic tool for panel paintings
Palma G., Pingi P., Siotto E.
3D scanning is a well-known technology in the cultural heritage field for the study and monitoring of artworks. For a panel painting, this technology facilitates the acquisition and documentation of its 3D shape at multiple scales, from the micro-geometry of craquelure to the macro-geometry of the support. All these geometric components may change over time due to the deformations induced by the conservation environment parameters. A usual method for estimating the deformation of the panel is the comparison of 3D models acquired at different times. For this purpose, the chapter presents a new approach to automatically estimate the amount of deformation between two 3D models of the same object. The proposed method is based on a non-rigid registration algorithm that deforms one 3D model onto the other, making it possible to separate the real panel deformation from the structural changes of the artwork. It uses only the acquired geometric data of independent 3D acquisitions that were uncontrolled and unsupervised over time.
Source: Handbook of Cultural Heritage Analysis, edited by D'Amico S., Venuti V., pp. 1915–1931. Basel: Springer Nature Switzerland, 2022
DOI: 10.1007/978-3-030-60016-7_67


See at: doi.org Restricted | link.springer.com Restricted | CNR ExploRA


2023 Conference article Open Access OPEN
Social and hUman ceNtered XR
Vairo C., Callieri M., Carrara F., Cignoni P., Di Benedetto M., Gennaro C., Giorgi D., Palma G., Vadicamo L., Amato G.
The Social and hUman ceNtered XR (SUN) project is focused on developing eXtended Reality (XR) solutions that integrate the physical and virtual world in a way that is convincing from a human and social perspective. In this paper, we outline the limitations that the SUN project aims to overcome, including the lack of scalable and cost-effective solutions for developing XR applications, limited solutions for mixing the virtual and physical environment, and barriers related to resource limitations of end-user devices. We also propose solutions to these limitations, including using artificial intelligence, computer vision, and sensor analysis to incrementally learn the visual and physical properties of real objects and generate convincing digital twins in the virtual environment. Additionally, the SUN project aims to provide wearable sensors and haptic interfaces to enhance natural interaction with the virtual environment and advanced solutions for user interaction. Finally, we describe three real-life scenarios in which we aim to demonstrate the proposed solutions.
Source: Ital-IA 2023 - Workshop su AI per l'industria, Pisa, Italy, 29-31/05/2023

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA