40 result(s)
2024 Journal article Open Access OPEN
Capacitive touch sensing on general 3D surfaces
Palma G., Pourjafarian N., Steimle J., Cignoni P.
Mutual-capacitive sensing is the most common technology for detecting multi-touch, especially on flat and simple-curvature surfaces. Extending it to more complex shapes is still challenging, as a uniform distribution of sensing electrodes is required for consistent touch sensitivity across the surface. To overcome this problem, we propose a method to adapt the sensor layout of common capacitive multi-touch sensors to more complex 3D surfaces, ensuring high-resolution, robust multi-touch detection. The method automatically computes a grid of transmitter and receiver electrodes with as regular a distribution as possible over a general 3D shape. It starts by computing a proxy geometry via quad meshing, whose dual-edge graph is used to place the electrodes. It then arranges electrodes on the surface to minimize the number of touch controllers required for capacitive sensing and the number of input/output pins needed to connect the electrodes to the controllers. We reach these objectives using a new simplification and clustering algorithm for a regular quad-patch layout. The reduced patch layout is used to optimize the routing of all the structures (surface grooves and internal pipes) needed to host the electrodes on the surface and inside the object's volume, considering the geometric constraints of the 3D shape. Finally, we print the 3D object prototype ready to be equipped with the electrodes. We analyze the performance of the proposed quad-layout simplification and clustering algorithm using different quad meshings and characterize the signal quality and accuracy of the capacitive touch sensor for different non-planar geometries. The tested prototypes show precise and robust multi-touch detection with a good Signal-to-Noise Ratio and a spatial accuracy of about 1 mm.
Source: ACM TRANSACTIONS ON GRAPHICS, vol. 43 (issue 4)
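The abstract characterizes sensor quality via Signal-to-Noise Ratio. As a hedged illustration of how baseline-subtracted mutual-capacitance readings on a TX/RX grid can be thresholded by SNR (the function name, grid size, noise model, and threshold below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def detect_touches(raw, baseline, noise_std, snr_min=5.0):
    """Flag grid cells whose baseline-subtracted signal drop exceeds snr_min * noise."""
    delta = baseline - raw               # a finger lowers mutual capacitance
    snr = delta / noise_std
    return np.argwhere(snr > snr_min)    # (row, col) = (TX line, RX line)

rng = np.random.default_rng(1)
baseline = np.full((8, 8), 100.0)                 # calibrated no-touch readings
raw = baseline + rng.normal(0.0, 1.0, (8, 8))     # sensor noise, std = 1
raw[3, 5] -= 20.0                                 # simulated touch at TX 3, RX 5
touches = detect_touches(raw, baseline, noise_std=1.0)
print(touches)                                    # [[3 5]]
```

With a 5-sigma threshold, spurious detections from Gaussian noise are vanishingly rare, while a clear touch signal stands well above it.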
DOI: 10.1145/3658185


See at: IRIS Cnr Open Access | ACM Transactions on Graphics Restricted | CNR IRIS Restricted


2024 Patent Restricted
Procedimento per fornire un rilevamento del tocco ad alta risoluzione su oggetti 3D
Palma G., Cignoni P., Pourjafarian N., Steimle J.
A computer-implemented method is disclosed for designing a touch sensing arrangement for a 3D object (B). The method includes (i) performing a quad meshing of the surface of the body, the quad mesh including a plurality of quadrangular areas (Q) having side dimensions matching the mutual spacing between pairs of adjacent transmit electrode lines (T) and between pairs of adjacent receive electrode lines (R) of the touch sensing arrangement, (ii) computing a quad patch layout comprising a plurality of quad patches (P), each including a plurality of adjacent quadrangular areas (Q) of the quad mesh, selectively grouping the quad patches (P) into a plurality of clusters (C) of adjacent patches (P), and packing the clusters (C) into a plurality of cluster sets, each cluster set being associated with a respective touch controller (TC), wherein the transmit electrode lines (T) and the receive electrode lines (R) of the touch sensing arrangement are designed according to a dual edge graph of the quad mesh, and interconnecting conducting paths are established between the transmit and receive electrode lines (T, R) of each cluster set and the respective touch controller (TC) through the internal volume of the object (B).

See at: CNR IRIS Restricted


2024 Journal article Open Access OPEN
Creating high-quality 3D assets for realistic XR solutions
Callieri M., Giorgi D., Maggiordomo A., Palma G.
Successful Extended Reality (XR) applications require 3D content that provides rich sensory feedback to users. In the European project SUN, the Visual Computing Lab at CNR-ISTI is investigating novel techniques for 3D asset creation for XR solutions, with 3D objects featuring both accurate appearance and estimated mechanical properties. Our research leverages Artificial Intelligence (AI), computer graphics, and modern sensing and computational fabrication techniques. The application fields include XR-mediated training environments for industries, remote social interaction for psychosocial rehabilitation, and personalised physical rehabilitation.
Source: ERCIM NEWS, vol. 137, pp. 19-20
Project(s): SUN via OpenAIRE

See at: ercim-news.ercim.eu Open Access | CNR IRIS Open Access | CNR IRIS Restricted


2024 Journal article Open Access OPEN
Touch-sensing 3D replica for augmented virtuality
Palma G., Cignoni P.
We introduce a system designed to enhance engagement in VR experiences using sensorised replicas of real objects created through 3D printing. This system lets users interact with physical replicas within the virtual environment while visualising the original object's appearance. Additionally, it facilitates the creation of augmented experiences that manipulate the virtual appearance of the physical replica through personalisation actions, such as painting over the object's surface or attaching additional virtual objects, taking advantage of its tactile feedback.
Source: ERCIM NEWS, vol. 137, pp. 13-14
Project(s): EMOTIVE via OpenAIRE

See at: ercim-news.ercim.eu Open Access | CNR IRIS Open Access | CNR IRIS Restricted


2023 Conference article Open Access OPEN
Social and hUman ceNtered XR
Vairo C, Callieri M, Carrara F, Cignoni P, Di Benedetto M, Gennaro C, Giorgi D, Palma G, Vadicamo L, Amato G
The Social and hUman ceNtered XR (SUN) project is focused on developing eXtended Reality (XR) solutions that integrate the physical and virtual world in a way that is convincing from a human and social perspective. In this paper, we outline the limitations that the SUN project aims to overcome, including the lack of scalable and cost-effective solutions for developing XR applications, limited solutions for mixing the virtual and physical environment, and barriers related to resource limitations of end-user devices. We also propose solutions to these limitations, including using artificial intelligence, computer vision, and sensor analysis to incrementally learn the visual and physical properties of real objects and generate convincing digital twins in the virtual environment. Additionally, the SUN project aims to provide wearable sensors and haptic interfaces to enhance natural interaction with the virtual environment and advanced solutions for user interaction. Finally, we describe three real-life scenarios in which we aim to demonstrate the proposed solutions.
Source: CEUR WORKSHOP PROCEEDINGS. Pisa, Italy, 29-31/05/2023

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2022 Contribution to book Restricted
Temporal deformation analysis of 3D models as diagnostic tool for panel paintings
Palma G, Pingi P, Siotto E
3D scanning is a well-known technology in the cultural heritage field for the study and monitoring of artworks. For a panel painting, this technology facilitates the acquisition and documentation of its 3D shape at multiple scales, from the micro-geometry of the craquelure to the macro-geometry of the support. All these geometric components may change over time due to deformations induced by the parameters of the conservation environment. A usual method for estimating the deformation of the panel is the comparison of 3D models acquired at different times. For this purpose, the chapter presents a new approach to automatically estimate the amount of deformation between two 3D models of the same object. The proposed method is based on a non-rigid registration algorithm that deforms one 3D model onto the other, making it possible to separate the real panel deformation from the structural changes of the artwork. It relies only on the acquired geometric data of independent 3D acquisitions that were uncontrolled and unsupervised over time.
DOI: 10.1007/978-3-030-60016-7_67


See at: doi.org Restricted | CNR IRIS Restricted | link.springer.com Restricted


2021 Journal article Open Access OPEN
Augmented virtuality using touch-sensitive 3D-printed objects
Palma G, Perry S, Cignoni P
Virtual reality (VR) technologies have become more affordable and popular in the last five years thanks to hardware and software advancements. A critical issue for these technologies is finding paradigms that allow user interactions in ways that are as similar as possible to the real world, bringing physicality into the experience. Current literature has shown, with different experiments, that the mapping of real objects into virtual reality alongside haptic feedback significantly increases the realism of the experience and user engagement, leading to augmented virtuality. In this paper, we present a system to improve engagement in a VR experience using inexpensive, physical, and sensorised copies of real artefacts made with cheap 3D fabrication technologies. Based on a combination of hardware and software components, the proposed system gives the user the possibility to interact with the physical replica in the virtual environment and to see the appearance of the original cultural heritage artefact. In this way, we overcome one of the main limitations of mainstream 3D fabrication technologies: faithful appearance reproduction. Using a consumer device for real-time hand tracking and a custom electronic controller for capacitive touch sensing, the system permits the creation of augmented experiences where users can change the virtual appearance of the real replica with their hands, using a set of personalization actions selectable from a physical 3D-printed palette.
Source: REMOTE SENSING (BASEL), vol. 13 (issue 11)
DOI: 10.3390/rs13112186
Project(s): EMOTIVE via OpenAIRE


See at: CNR IRIS Open Access | ISTI Repository Open Access | www.mdpi.com Open Access | CNR IRIS Restricted


2021 Contribution to book Restricted
Documentation and analysis of the deformations of the panel and painted surface with 3D scanner
Pingi P, Siotto E, Palma G, Scopigno R
Despite what might be assumed, the surface of a painting on canvas or a wooden panel is not perfectly flat; rather, it is characterized by a complex three-dimensionality. The paint the artist lays down on the support possesses its own body and thickness that, even on a millimeter or sub-millimeter scale, may be detected using instruments and three-dimensional measurement applications. At the same time, a wooden panel may be affected by deformations from historical or restoration changes, which may be readily revealed and documented. When analyzing a work of art subjected to an important restoration treatment, as was the case of the Adoration of the Magi by Leonardo da Vinci, a precise 3D documentation of the painted surface is thus strictly linked to the state of its wooden support. As is well known, a panel painting is a layered structure (usually a protective varnish coating, paint layers, preparatory or ground layers, and the wooden panel), each layer with a different physical and chemical composition. Therefore, an accurate geometric 3D acquisition of the board structure, its connecting components (butterfly joins or dowel inserts), and the auxiliary support system (crossbars) may supply information that not only improves understanding of how the painting was made and of its condition, but also permits it to be monitored over time or during restoration.
Furthermore, the application of modern 3D computer graphics technologies is not only a valid diagnostic aid for acquiring knowledge about the work, but also a way to map information and share it (for example, art-historical information, technical data, and the results of chemical and physical analyses), making it easily accessible online both to experts in the sector and to a wider public, thanks to specifically developed multimedia systems. In the case of the unfinished masterpiece by Leonardo, a complete high-resolution 3D acquisition was performed in order to show and measure, during the conservation treatment, a map of deviations from planarity caused by the curvature and warp of the wooden boards, permitting documentation of the spatial deformation of the painted surface and monitoring of its state of preservation.
Source: PROBLEMI DI CONSERVAZIONE E RESTAURO, pp. 281-286

See at: CNR IRIS Restricted


2020 Contribution to book Restricted
Il rilievo 3D per la caratterizzazione morfologica dell'opera di Raffaello
Pingi P, Siotto E, Palma G
Among the non-invasive diagnostic analyses carried out in support of the restoration of Raphael's panel depicting Pope Leo X de' Medici between cardinals Giulio de' Medici and Luigi de' Rossi, a three-dimensional (3D) survey of the entire work was performed. Besides being used to take measurements of the surface shape and serving as a valuable aid for the knowledge and study of the work, the 3D survey is also an effective means of monitoring its state of conservation over time. In this case, the 3D acquisition was aimed at evaluating the deformation of the wooden support and studying the deterioration of the painted surface. For this reason, the entire work (front, back and edges) was acquired with an average sampling step of 0.3 mm. Some areas were also acquired at a resolution of 0.16 mm in order to develop an automatic method capable of highlighting the micro-fractures of the paint layer.

See at: CNR IRIS Restricted


2020 Contribution to book Restricted
Una Loggia digitale al tempo del COVID-19
Palma G, Siotto E
The chapter begins by illustrating the conception and evolution of the project "A digital loggia for Raphael and collaborators in Villa Farnesina, Rome", carried out on behalf of the Accademia Nazionale dei Lincei and ISTI-CNR. It then describes the design and development phases of the interactive digital system, how the data were acquired and the model created, and finally offers a guide to the use of the interactive system, organized on two levels of detail (http://vcg.isti.cnr.it/farnesina/loggia/).

See at: CNR IRIS Restricted | vcg.isti.cnr.it Restricted


2020 Contribution to book Restricted
A digital Loggia at the time of COVID-19
Palma G, Siotto E
The chapter begins by illustrating the conception and evolution of the project "A digital loggia for Raphael and collaborators in Villa Farnesina, Rome", carried out on behalf of the Accademia Nazionale dei Lincei and ISTI-CNR. It then describes the design and development phases of the interactive digital system, how the data were acquired and the model created, and finally offers a guide to the use of the interactive system, organized on two levels of detail (http://vcg.isti.cnr.it/farnesina/loggia/).

See at: CNR IRIS Restricted | vcg.isti.cnr.it Restricted


2019 Conference article Restricted
Analisi dei frammenti di Sectilia vitrei dalla Villa romana di Aiano-Torraccia di Chiusi (si) e studio della tecnica d'esecuzione
Cavalieri M, Landi S, Manna D, Giamello M, Fornacelli C, Bracci S, Palma G, Siotto E, Scopigno R
The substantial number of sectilia fragments from the late Roman Villa of Aiano (4th-5th century AD) provides important insights into the diffusion of opus sectile during the Late Roman period. The extent of the corpus of glass slabs, in particular, immediately suggests interesting perspectives on both archaeological and technological issues. Thanks to cooperation between archaeologists, conservators, IT specialists and scientists, an in-depth study of the repertory is in progress to provide important information about the technologies and the raw materials used to produce a number of selected samples. High-resolution images have been obtained via Reflectance Transformation Imaging (RTI) to better understand the different phases characterizing the manufacture of the more complex slabs. Thanks to their flexibility and low analytical costs, portable and non-invasive analytical techniques provided a fast and quite accurate definition of the chemical and mineralogical properties of each sample and a first classification of a large number of slabs into compositional clusters. Portable X-Ray Fluorescence (p-XRF) and Fiber Optics Reflectance Spectroscopy (FORS) allowed a first definition of the chemical variability within the repertory and provided indications about both manufacturing and coloring techniques.

See at: CNR IRIS Restricted | www.aiscom.it Restricted


2019 Journal article Open Access OPEN
Deformation analysis of Leonardo da Vinci's "Adorazione dei Magi" through temporal unrelated 3D digitization
Palma G, Pingi P, Siotto E, Bellucci R, Guidi G, Scopigno R
3D scanning is an effective technology for assessing, at different levels, the state of conservation/deformation of a panel painting, from the micro-geometry of the craquelure to the macro-geometry of the support. Unfortunately, the current solutions used to analyze multiple 3D scans acquired over time are based on very controlled acquisition procedures, such as the use of target reference points that are stationary over time and fixed to the artwork, or on complex hardware setups that keep the acquisition device fixed to the artwork. These procedures are challenging when a long monitoring period is involved or during restoration, when the painting may be moved several times. This paper presents a new and robust approach to observe and quantify the panel deformations of artworks by comparing 3D models acquired with different scanning devices at different times. The procedure is based on a non-rigid registration algorithm that deforms one 3D model over the other in a controlled way, extracting the real deformation field. We apply the method to the 3D scanning data of the unfinished panel painting "Adorazione dei Magi" by Leonardo da Vinci, acquired in 2002 and 2015. First, we analyze the two 3D models with the classical distance from the ideal flat plane of the painting. Then we study the type of deformation of each plank of the support by fitting a quadric surface. Finally, we compare the models before and after the deformation computed by the non-rigid registration algorithm. This last comparison enables the panel deformation to be separated from the structural changes of the artwork (e.g. the structural restorations on the back and the missing pieces) in a more robust way.
Source: JOURNAL OF CULTURAL HERITAGE, vol. 38, pp. 174-185
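The per-plank quadric fit mentioned above reduces to a linear least-squares problem. A minimal numpy sketch, assuming a height-field parameterisation z = f(x, y) of each plank (the function name and this particular quadric form are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Synthetic "plank" with a cylindrical bow of curvature 0.01 along x
xs, ys = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
pts = np.column_stack([xs.ravel(), ys.ravel(), 0.01 * xs.ravel() ** 2])
c = fit_quadric(pts)
print(np.round(c, 4))  # a ~ 0.01, all other coefficients ~ 0
```

The signs and magnitudes of the recovered quadratic coefficients then characterize the deformation type (e.g. cylindrical bow vs. saddle).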
DOI: 10.1016/j.culher.2018.11.001


See at: CNR IRIS Open Access | ISTI Repository Open Access | www.sciencedirect.com Open Access | Journal of Cultural Heritage Restricted | CNR IRIS Restricted


2019 Other Open Access OPEN
Augmented reality experience with physical artefacts
Palma G, Cignoni P
This technical report presents a system to improve user engagement in a virtual reality experience using inexpensive physical copies of real artefacts made with cheap 3D fabrication technologies. Based on a combination of hardware and software components, the proposed system gives the user the possibility to interact with the physical replica in the virtual environment and to see the appearance of the original artefact. In this way, we overcome a current limitation of cheap 3D fabrication technologies: faithful appearance reproduction. Moreover, using a consumer device for real-time hand tracking and a custom electronic controller for capacitive touch sensing, the system permits the creation of virtual experiences where users can change the virtual appearance of the object with their hands, using a set of personalization actions selectable from a physical 3D-printed palette.
Project(s): EMOTIVE via OpenAIRE

See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2019 Journal article Open Access OPEN
High dynamic range point clouds for real-time relighting
Sabbadin M, Palma G, Banterle F, Boubekeur T, Cignoni P
Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance.
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step from the perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
Source: COMPUTER GRAPHICS FORUM (ONLINE), vol. 38 (issue 7), pp. 513-525
DOI: 10.1111/cgf.13857
Project(s): EMOTIVE via OpenAIRE


See at: diglib.eg.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | Computer Graphics Forum Restricted | CNR IRIS Restricted


2018 Journal article Open Access OPEN
Enhanced visualization of detected 3D geometric differences
Palma G, Sabbadin M, Corsini M, Cignoni P
The wide availability of 3D acquisition devices makes their use for shape monitoring viable. Current techniques for the analysis of time-varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we also want to show the original appearance of the 3D model. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences that have been detected as significant. Additionally, the same technique is able to visually hide the other negligible, yet visible, variations. The main idea is to use two distinct screen-space, time-based interpolation functions: one for the significant 3D differences and one for the small variations to hide. We have validated the proposed approach in a user study on different classes of datasets, proving the objective and subjective effectiveness of the method.
Source: COMPUTER GRAPHICS FORUM (ONLINE), vol. 35 (issue 1), pp. 159-171
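The two time-based interpolation functions could, for instance, pair an abrupt switch (which makes significant differences pop) with a slow smoothstep crossfade (which masks negligible ones). This is a hedged sketch of the idea, not the paper's actual functions:

```python
def blend_significant(t):
    """Abrupt, step-like switch between the two models: real changes 'pop'."""
    return 0.0 if t < 0.5 else 1.0

def blend_negligible(t):
    """Slow smoothstep crossfade that masks small sampling/noise variations."""
    s = min(max(t, 0.0), 1.0)
    return s * s * (3.0 - 2.0 * s)

def pixel_blend(t, significant):
    """Per-pixel blend factor between model A (0) and model B (1) at time t in [0,1]."""
    return blend_significant(t) if significant else blend_negligible(t)

print(pixel_blend(0.4, True), pixel_blend(0.4, False))  # abrupt 0.0 vs gradual ~0.35
```

At render time, a per-pixel significance mask (from the change-detection step) selects which of the two curves drives the crossfade.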
DOI: 10.1111/cgf.13239
Project(s): HARVEST4D via OpenAIRE


See at: CNR IRIS Open Access | onlinelibrary.wiley.com Open Access | ISTI Repository Open Access | Computer Graphics Forum Restricted | CNR IRIS Restricted


2018 Journal article Open Access OPEN
Scalable non-rigid registration for multi-view stereo data
Palma G, Boubekeur T, Ganovelli F, Cignoni P
We propose a new non-rigid registration method for large 3D meshes from Multi-View Stereo (MVS) reconstruction characterized by low-frequency shape deformations induced by several factors, such as low sensor quality and irregular sampling coverage of the object. Given a reference model to which we want to align a new 3D mesh, our method starts by decomposing the mesh into patches using Lloyd clustering and running an ICP local registration for each patch. Then, we improve the alignment using a few geometric constraints and, finally, we build a global deformation function that blends the estimated per-patch transformations. This function is structured on top of a deformation graph derived from the dual graph of the clustering. Our algorithm is iterated until convergence, progressively increasing the number of patches in the clustering to capture smaller deformations. The method comes with a scalable multicore implementation that enables, for the first time, the alignment of meshes made of tens of millions of triangles in a few minutes. We report extensive experiments with our algorithm on several dense Multi-View Stereo models, using a 3D scan or another MVS reconstruction as reference. Beyond MVS data, we also applied our algorithm to different scenarios exhibiting more complex and larger deformations, such as a 3D motion capture dataset and 3D scans of dynamic objects. The good alignment results obtained for both datasets highlight the efficiency and flexibility of our approach.
Source: ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, vol. 142, pp. 328-341
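Each per-patch ICP step alternates closest-point matching with a best rigid fit; the rigid fit itself is the classical Kabsch/Procrustes solution, sketched below in numpy (the driver code and names are illustrative assumptions; the paper's pipeline adds clustering, geometric constraints, and deformation-graph blending on top):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid motion (R, t) mapping point set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))                     # a "patch" of points
angle = 0.3                                      # ground-truth rotation about z
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])         # rotated + translated copy
R, t = kabsch(P, Q)
print(np.allclose(P @ R.T + t, Q))               # True
```

Blending many such per-patch transformations through a deformation graph then yields the smooth, low-frequency warp the method targets.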
DOI: 10.1016/j.isprsjprs.2018.06.012


See at: CNR IRIS Open Access | ISTI Repository Open Access | www.sciencedirect.com Open Access | ISPRS Journal of Photogrammetry and Remote Sensing Restricted | CNR IRIS Restricted


2018 Conference article Open Access OPEN
Soft transparency for point cloud rendering
Seemann P, Palma G, Dellepiane M, Cignoni P, Goesele M
We propose a novel rendering framework for visualizing point data with complex structures and/or varying data quality. The point cloud can be characterized by setting a per-point scalar field associated with the aspect that differentiates the parts of the dataset (e.g. uncertainty given by local normal variation). Our rendering method uses the scalar field to render points as solid splats or semi-transparent spheres with non-uniform density to produce the final image. To that end, we derive a base model for integrating density in (intersecting) spheres for both the uniform and non-uniform settings, and introduce a simple and fast approximation that yields interactive rendering speeds for millions of points. Because our method relies only on the basic OpenGL rasterization pipeline, rendering properties can be adjusted in real time by the user. The method has been tested on several datasets with different characteristics, and user studies show that a clearer understanding of the scene is possible in comparison with point-splatting techniques and basic transparency rendering.
DOI: 10.2312/sre.20181176
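For a sphere of uniform density, the integral along a view ray reduces to the chord length, and opacity follows from Beer-Lambert attenuation. A minimal sketch of that base case (names and the simple two-sided chord are illustrative assumptions; the paper also handles non-uniform density and intersecting spheres):

```python
import math

def sphere_chord_length(center, radius, origin, direction):
    """Length of the chord a ray (unit `direction`) cuts through a sphere; 0 on a miss."""
    oc = [c - o for c, o in zip(center, origin)]
    tca = sum(a * b for a, b in zip(oc, direction))  # ray parameter of closest approach
    b2 = sum(a * a for a in oc) - tca * tca          # squared ray-to-center distance
    if b2 > radius * radius:
        return 0.0
    return 2.0 * math.sqrt(radius * radius - b2)

def transmittance(sigma, chord):
    """Beer-Lambert: fraction of light surviving a chord of uniform density sigma."""
    return math.exp(-sigma * chord)

L = sphere_chord_length((0, 0, 0), 1.0, (-5, 0, 0), (1, 0, 0))
print(L, transmittance(0.5, L))  # 2.0 (full diameter), e^-1 ~ 0.368
```

A per-point density sigma can then encode the scalar field (e.g. uncertainty), so less reliable points render as softer, more transparent spheres.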


See at: diglib.eg.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2018 Other Open Access OPEN
High dynamic range expansion of point clouds for real-time relighting
Sabbadin M, Palma G, Banterle F, Boubekeur T, Cignoni P
Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the genuine light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First of all, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions that are not covered by the renderings or have a low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance.
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to the perfect ground truth. We also report experiments on real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.

See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2018 Conference article Open Access OPEN
The EMOTIVE Project - Emotive virtual cultural experiences through personalized storytelling
Katifori A, Roussou M, Perry S, Cignoni P, Malomo L, Palma G, Dretakis G, Vizcay S
This work presents an overview of the EU-funded project EMOTIVE (Emotive virtual cultural experiences through personalized storytelling). EMOTIVE works from the premise that cultural sites are, in fact, highly emotional places, seedbeds not just of knowledge, but of emotional resonance and human connection. From 2016 to 2019, the EMOTIVE consortium will research, design, develop and evaluate methods and tools that can support the cultural and creative industries in creating narratives and experiences which draw on the power of 'emotive storytelling', both on site and virtually. This work focuses on the project objectives and results so far and presents identified challenges.
Source: CEUR WORKSHOP PROCEEDINGS, pp. 11-20. Nicosia, Cyprus, November 3, 2018
Project(s): EMOTIVE via OpenAIRE

See at: ceur-ws.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted