101 result(s)
2020 Journal article Restricted

Turning a Smartphone Selfie into a Studio Portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Erra U., Potel M.
We introduce a novel algorithm that turns a flash selfie taken with a smartphone into a studio-like photograph with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in a controlled environment. For each pair, we have one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend lighting artifacts introduced by a close-up camera flash, such as specular highlights, shadows, and skin shine.
Source: IEEE computer graphics and applications 40 (2020): 140–147. doi:10.1109/MCG.2019.2958274
DOI: 10.1109/mcg.2019.2958274

See at: IEEE Computer Graphics and Applications Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted
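The record above describes training a convolutional network on flash/studio photograph pairs. As a rough illustration of that training setup (not the authors' architecture or data), the following PyTorch sketch maps flash images to studio-lit targets with an L1 loss; the layer sizes, image resolution and random stand-in tensors are assumptions.

```python
# Minimal illustrative sketch: tiny encoder-decoder trained on paired
# flash / studio-lit images with an L1 reconstruction loss.
import torch
import torch.nn as nn

class TinyRelightNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyRelightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-in batch: flash photos (input) and studio-lit photos (target).
flash = torch.rand(4, 3, 128, 128)
studio = torch.rand(4, 3, 128, 128)

for step in range(10):
    optimizer.zero_grad()
    pred = model(flash)
    loss = loss_fn(pred, studio)   # L1 distance to the studio-lit target
    loss.backward()
    optimizer.step()
```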


2020 Contribution to conference Open Access OPEN

Automatic 3D Reconstruction of Structured Indoor Environments
Pintore G., Mura C., Ganovelli F., Fuentes-perez L., Pajarola R., Gobbetti E.
Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task which has important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this tutorial, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends.
Source: SIGGRAPH '20 - ACM SIGGRAPH 2020 Courses, Online Conference, August 24-28, 2020
DOI: 10.1145/3388769.3407469
DOI: 10.5167/uzh-190473
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE

See at: CRS4 Open Archive Open Access | www.zora.uzh.ch Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | dl.acm.org Restricted | CNR ExploRA Restricted | vic.crs4.it Restricted | www.crs4.it Restricted


2020 Journal article Restricted

State-of-the-art in Automatic 3D Reconstruction of Structured Indoor Environments
Pintore G., Mura C., Ganovelli F., Fuentes-perez L., Pajarola R., Gobbetti E.
Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task which has important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this survey, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends.
Source: Computer graphics forum (Print) 39 (2020): 667–699. doi:10.1111/cgf.14021
DOI: 10.1111/cgf.14021
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE

See at: Computer Graphics Forum Restricted | CNR ExploRA Restricted


2019 Report Open Access OPEN

SOROS: Sciadro online reconstruction by odometry and stereo-matching
Ganovelli F., Malomo L., Scopigno R.
In this report we show how to interactively create 3D models for scenes seen by a common off-the-shelf smartphone. Our approach combines Visual Odometry with IMU sensors in order to achieve interactive 3D reconstruction of the scene as seen from the camera.
Source: ISTI Technical reports, 2019

See at: ISTI Repository Open Access | CNR ExploRA Open Access
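The SOROS report above combines visual odometry with IMU data. As a hedged illustration of the visual-odometry ingredient only, the sketch below estimates the relative pose between two frames with standard OpenCV calls; the camera matrix and frame file names are placeholders, and the IMU fusion that gives metric scale is omitted.

```python
# Illustrative sketch only: one monocular visual-odometry step
# (feature matching + relative pose), not the report's full pipeline.
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate relative camera rotation/translation between two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is only up to scale; IMU data would fix the metric scale

# Usage with real frames (hypothetical file names and intrinsics):
# K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
# R, t = relative_pose(cv2.imread("f0.png", 0), cv2.imread("f1.png", 0), K)
```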


2019 Journal article Open Access OPEN

DeepFlash: turning a flash selfie into a studio portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Scopigno R., Erra U.
We present a method for turning a flash selfie taken with a smartphone into a photograph as if it was taken in a studio setting with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in an ad-hoc acquisition campaign. Each pair consists of one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend defects introduced by a close-up camera flash, such as specular highlights, shadows, skin shine, and flattened images.
Source: Signal processing. Image communication 77 (2019): 28–39. doi:10.1016/j.image.2019.05.013
DOI: 10.1016/j.image.2019.05.013

See at: arXiv.org e-Print Archive Open Access | Signal Processing Image Communication Open Access | ISTI Repository Open Access | Signal Processing Image Communication Restricted | CNR ExploRA Restricted


2019 Journal article Open Access OPEN

Automatic modeling of cluttered multi-room floor plans from panoramic images
Pintore G., Ganovelli F., Villanueva A. J., Gobbetti E.
We present a novel and light-weight approach to capture and reconstruct structured 3D models of multi-room floor plans. Starting from a small set of registered panoramic images, we automatically generate a 3D layout of the rooms and of all the main objects inside. Such a 3D layout is directly suitable for use in a number of real-world applications, such as guidance, location, routing, or content creation for security and energy management. Our novel pipeline introduces several contributions to indoor reconstruction from purely visual data. In particular, we automatically partition the panoramic images into a connectivity graph, according to the visual layout of the rooms, and exploit this graph to support object recovery and room boundary extraction. Moreover, we introduce a plane-sweeping approach to jointly reason about the content of multiple images and solve the problem of object inference in a top-down 2D domain. Finally, we combine these methods in a fully automated pipeline for creating a structured 3D model of a multi-room floor plan and of the location and extent of clutter objects. These contributions make our pipeline able to handle cluttered scenes with complex geometry that are challenging for existing techniques. The effectiveness and performance of our approach are evaluated on both real-world and synthetic models.
Source: Computer graphics forum (Print) 38 (2019): 347–358. doi:10.1111/cgf.13842
DOI: 10.1111/cgf.13842

See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | CNR ExploRA Restricted
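The paper above partitions panoramic images into a connectivity graph reflecting the room layout. The toy sketch below shows the general idea of such a graph using made-up pairwise overlap scores and an arbitrary threshold; it is not the paper's partitioning criterion.

```python
# Illustrative sketch: organise panoramas into a connectivity graph.
# The overlap matrix and threshold below are invented for illustration.
import networkx as nx
import numpy as np

overlap = np.array([
    [1.0, 0.6, 0.1, 0.0],
    [0.6, 1.0, 0.5, 0.1],
    [0.1, 0.5, 1.0, 0.0],
    [0.0, 0.1, 0.0, 1.0],
])  # hypothetical pairwise visual-overlap scores between 4 panoramas

G = nx.Graph()
G.add_nodes_from(range(len(overlap)))
for i in range(len(overlap)):
    for j in range(i + 1, len(overlap)):
        if overlap[i, j] > 0.4:          # threshold is an assumption
            G.add_edge(i, j, weight=overlap[i, j])

# Groups of panoramas that see the same room / adjoining space
print(list(nx.connected_components(G)))
```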


2018 Journal article Open Access OPEN

Scalable non-rigid registration for multi-view stereo data
Palma G., Boubekeur T., Ganovelli F., Cignoni P.
We propose a new non-rigid registration method for large 3D meshes from Multi-View Stereo (MVS) reconstruction characterized by low-frequency shape deformations induced by several factors, such as low sensor quality and irregular sampling coverage of the object. Starting from a reference model to which we want to align a new 3D mesh, our method first decomposes the mesh into patches using Lloyd clustering and then runs an ICP local registration for each patch. We then improve the alignment using a few geometric constraints and finally build a global deformation function that blends the estimated per-patch transformations. This function is structured on top of a deformation graph derived from the dual graph of the clustering. Our algorithm is iterated until convergence, progressively increasing the number of patches in the clustering to capture smaller deformations. The method comes with a scalable multicore implementation that enables, for the first time, the alignment of meshes made of tens of millions of triangles in a few minutes. We report extensive experiments of our algorithm on several dense Multi-View Stereo models, using a 3D scan or another MVS reconstruction as reference. Beyond MVS data, we also applied our algorithm to scenarios exhibiting more complex and larger deformations, such as a 3D motion-capture dataset and 3D scans of dynamic objects. The good alignment results obtained for both datasets highlight the efficiency and flexibility of our approach.
Source: ISPRS journal of photogrammetry and remote sensing 142 (2018): 328–341. doi:10.1016/j.isprsjprs.2018.06.012
DOI: 10.1016/j.isprsjprs.2018.06.012

See at: ISTI Repository Open Access | ISPRS Journal of Photogrammetry and Remote Sensing Restricted | CNR ExploRA Restricted
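The registration paper above clusters the mesh into patches and aligns each patch rigidly before blending the transformations. The sketch below illustrates only that per-patch building block, using k-means (Lloyd's algorithm) as a stand-in clustering and an SVD-based (Kabsch) rigid fit on synthetic correspondences; the ICP correspondence search, deformation graph and iterative refinement are omitted.

```python
# Minimal sketch: patch-wise rigid fitting against reference correspondences.
import numpy as np
from sklearn.cluster import KMeans

def rigid_fit(src, dst):
    """Best rotation R and translation t with R @ src_i + t ~ dst_i (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

src = np.random.rand(1000, 3)                  # vertices of the mesh to align
dst = src + 0.01 * np.random.randn(1000, 3)    # toy "reference" correspondences

labels = KMeans(n_clusters=8, n_init=10).fit_predict(src)   # stand-in for Lloyd patches
per_patch = {k: rigid_fit(src[labels == k], dst[labels == k]) for k in range(8)}
```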


2018 Conference article Open Access OPEN

Recovering 3D indoor floorplans by exploiting low-cost 360 spherical photography
Pintore G., Ganovelli F., Pintus R., Scopigno R., Gobbetti E.
We present a vision-based approach to automatically recover the 3D existing-conditions information of an indoor structure, starting from a small set of overlapping spherical images. The recovered 3D model includes the as-built 3D room layout with the position of important functional elements located on room boundaries. We first recover the underlying 3D structure as interconnected rooms bounded by walls. This is done by combining geometric reasoning under an Augmented Manhattan World model and Structure-from-Motion. Then, we create, from the original registered spherical images, 2D rectified and metrically scaled images of the room boundaries. Using those undistorted images and the associated 3D data, we automatically detect the 3D position and shape of relevant wall-, floor-, and ceiling-mounted objects, such as electric outlets, light switches, air-vents and light points. As a result, our system is able to quickly and automatically draft an as-built model coupled with its existing conditions using only commodity mobile devices. We demonstrate the effectiveness and performance of our approach on real-world indoor scenes and publicly available datasets.
Source: Pacific Graphics, Hong Kong, 8-11 October 2018

See at: ISTI Repository Open Access | CNR ExploRA Open Access | vcg.isti.cnr.it Open Access


2018 Journal article Open Access OPEN

Recovering 3D existing-conditions of indoor structures from spherical images
Pintore G., Pintus R., Ganovelli F., Scopigno R., Gobbetti E.
We present a vision-based approach to automatically recover the 3D existing-conditions information of an indoor structure, starting from a small set of overlapping spherical images. The recovered 3D model includes the as-built 3D room layout with the position of important functional elements located on room boundaries. We first recover the underlying 3D structure as interconnected rooms bounded by walls. This is done by combining geometric reasoning under an Augmented Manhattan World model and Structure-from-Motion. Then, we create, from the original registered spherical images, 2D rectified and metrically scaled images of the room boundaries. Using those undistorted images and the associated 3D data, we automatically detect the 3D position and shape of relevant wall-, floor-, and ceiling-mounted objects, such as electric outlets, light switches, air-vents and light points. As a result, our system is able to quickly and automatically draft an as-built model coupled with its existing conditions using only commodity mobile devices. We demonstrate the effectiveness and performance of our approach on real-world indoor scenes and publicly available datasets.
Source: Computers & graphics 77 (2018): 16–29. doi:10.1016/j.cag.2018.09.013
DOI: 10.1016/j.cag.2018.09.013

See at: ISTI Repository Open Access | Computers & Graphics Restricted | CNR ExploRA Restricted | www.sciencedirect.com Restricted
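The two records above work on spherical (equirectangular) panoramas. As a small, generic helper rather than the authors' pipeline, the sketch below maps an equirectangular pixel to its 3D viewing direction, the elementary operation behind geometric reasoning on such images; the image size in the example is an assumption.

```python
# Illustrative helper: equirectangular pixel -> unit viewing direction.
import numpy as np

def equirect_ray(u, v, width, height):
    """Unit direction for pixel (u, v) of a width x height equirectangular image."""
    lon = (u / width) * 2.0 * np.pi - np.pi         # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi        # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

print(equirect_ray(2048, 1024, 4096, 2048))  # centre pixel -> roughly +Z
```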


2018 Journal article Open Access OPEN

3D floor plan recovery from overlapping spherical images
Pintore G., Ganovelli F., Pintus R., Scopigno R., Gobbetti E.
We present a novel approach to automatically recover, from a small set of partially overlapping spherical images, an indoor structure representation in terms of a 3D floor plan registered with a set of 3D environment maps. We introduce several improvements over previous approaches based on color/spatial reasoning exploiting Manhattan World priors. In particular, we introduce a new method for geometric context extraction based on a 3D facets representation, which combines color distribution analysis of individual images with sparse multi-view clues. Moreover, we introduce an efficient method to combine the facets from different points of view in a single consistent model, considering the reliability of each facet's contribution. The resulting capture and reconstruction pipeline automatically generates 3D multi-room environments where most other previous approaches fail, for example in the presence of hidden corners and large clutter, even without involving additional dense 3D data or tools. We demonstrate the effectiveness and performance of our approach on different real-world indoor scenes. Our test data will be released to allow for further studies and comparisons.
Source: Computational visual media (Beijing. Print) 4 (2018): 367–383. doi:10.1007/s41095-018-0125-9
DOI: 10.1007/s41095-018-0125-9

See at: Computational Visual Media Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2018 Conference article Open Access OPEN

Reconstructing power lines from images
Ganovelli F., Malomo L., Scopigno R.
We present a method for reconstructing overhead power lines from images. A solution to this problem has a deep impact on the strategies adopted to monitor the many thousands of kilometers of power lines for which, nowadays, the only effective solution requires a high-end laser scanner. The difficulty with image-based algorithms is that images of power-line wires typically do not contain point features to match across different images. We use a Structure from Motion algorithm to retrieve the approximate camera poses and then formulate a minimization problem aimed at refining the camera poses so that the images of the wires project consistently onto a 3D hypothesis.
Source: IVCNZ 2018 - International Conference on Image and Vision Computing New Zealand, Auckland, New Zealand, 19-21 November 2018
DOI: 10.1109/ivcnz.2018.8634765

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | xplorestaging.ieee.org Restricted
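The power-line paper above refines camera poses so that detected wires reproject consistently onto a 3D curve hypothesis. The toy sketch below shows only the kind of non-linear least-squares machinery involved, fitting a catenary to noisy 2D samples; the synthetic data, initial guess and the reduction to a single curve are assumptions, not the paper's formulation.

```python
# Toy non-linear least-squares fit of a catenary y = y0 + a*cosh((x - x0)/a).
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-10, 10, 50)
y = 3.0 * np.cosh((x - 1.0) / 3.0) + 2.0 + 0.05 * np.random.randn(x.size)

def residuals(p):
    a, x0, y0 = p
    return y0 + a * np.cosh((x - x0) / a) - y

fit = least_squares(residuals, x0=[2.0, 0.0, 0.0])
print(fit.x)  # roughly [3.0, 1.0, 2.0] = (a, x0, y0)
```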


2017 Journal article Open Access OPEN

Presentation of 3D scenes through video example
Baldacci A., Ganovelli F., Corsini M., Scopigno R.
Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers and Cultural Heritage professionals; however, it is usually time-consuming and, to obtain high-quality results, requires the support of a film-maker or computer-animation expert. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to the optical flow of the input video. Our intuition is supported by a user study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
Source: IEEE transactions on visualization and computer graphics 23 (2017): 2096–2107. doi:10.1109/TVCG.2016.2608828
DOI: 10.1109/tvcg.2016.2608828

See at: ISTI Repository Open Access | IEEE Transactions on Visualization and Computer Graphics Restricted | CNR ExploRA Restricted
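The paper above builds on the intuition that two videos of a static scene are similar when their optical flows are similar. The sketch below illustrates that intuition with OpenCV's Farnebäck flow and a mean endpoint-error distance on synthetic frames; it is not the paper's video-generation pipeline.

```python
# Illustrative sketch: compare two short clips by the similarity of their flow.
import cv2
import numpy as np

def flow(prev_gray, curr_gray):
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def flow_distance(f1, f2):
    """Mean endpoint error between two dense flow fields of equal size."""
    return float(np.mean(np.linalg.norm(f1 - f2, axis=2)))

a0 = np.random.randint(0, 255, (120, 160), np.uint8)
a1 = np.roll(a0, 2, axis=1)      # clip A: 2-pixel horizontal motion
b1 = np.roll(a0, 2, axis=0)      # clip B: 2-pixel vertical motion

print(flow_distance(flow(a0, a1), flow(a0, a1)))  # identical flows -> 0
print(flow_distance(flow(a0, a1), flow(a0, b1)))  # different motion -> larger
```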


2017 Conference article Restricted

Mobile metric capture and reconstruction in indoor environments
Pintore G., Ganovelli F., Scopigno R., Gobbetti E.
Mobile devices have become progressively more attractive for solving environment-sensing problems. Thanks to their multi-modal acquisition capabilities and their growing processing power, they can perform increasingly sophisticated computer vision and data fusion tasks. In this context, we summarize our recent advances in the acquisition and reconstruction of indoor structures, describing the evolution of the methods from current single-view approaches to novel mobile multi-view methodologies. Starting from an overview of the features and capabilities of current hardware (ranging from commodity smartphones to recent 360-degree cameras), we present in detail specific real-world cases which exploit modern devices to acquire structural, visual and metric information.
Source: SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications, Bangkok, Thailand, 27-30 November 2017
DOI: 10.1145/3132787.3139202

See at: academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | dl.acm.org Restricted | doi.acm.org Restricted | doi.org Restricted | CNR ExploRA Restricted | www.crs4.it Restricted


2016 Journal article Embargo

3D reconstruction for featureless scenes with curvature hints
Baldacci A., Bernabei D., Corsini M., Ganovelli F., Scopigno R.
We present a novel interactive framework for improving 3D reconstruction starting from incomplete or noisy results obtained through image-based reconstruction algorithms. The core idea is to enable the user to provide localized hints on the curvature of the surface, which are turned into constraints during an energy-minimization reconstruction. To make this task simple, we propose two algorithms. The first is a multi-view segmentation algorithm that allows the user to propagate the foreground selection of one or more images both to all the images of the input set and to the 3D points, to accurately select the part of the scene to be reconstructed. The second is a fast GPU-based algorithm for the reconstruction of smooth surfaces from multiple views, which incorporates the hints provided by the user. We show that our framework can turn a poor-quality reconstruction produced with state-of-the-art image-based reconstruction methods into a high-quality one.
Source: The visual computer 32 (2016): 1605–1620. doi:10.1007/s00371-015-1144-5
DOI: 10.1007/s00371-015-1144-5
Project(s): HARVEST4D via OpenAIRE

See at: The Visual Computer Restricted | link.springer.com Restricted | CNR ExploRA Restricted


2016 Conference article Open Access OPEN

Assessing the security of buildings: a virtual studio solution
Ahmad A., Balet O., Boin A., Castet J., Donnelley M., Ganovelli F., Kokkinis G., Pintore G.
This paper presents an innovative IT solution, a virtual studio, enabling security professionals to formulate, test and adjust security measures to enhance the security of critical buildings. The concept is to virtualize the environment, enabling experts to examine, assess and improve a building's security in a cost-effective and risk-free way. Our virtual studio solution makes use of the latest advances in computer graphics to reconstruct accurate blueprints as well as 3D representations of entire buildings in a very short timeframe. In addition, our solution enables the creation and simulation of multiple threat situations, allowing users to assess security procedures and various responses. Furthermore, we present a novel device, tailored to support collaborative security planning needs. Security experts from various disciplines evaluated our virtual studio solution, and their analysis is presented in this paper.
Source: International Conference on Information Systems for Crisis Response and Management, Rio de Janeiro, Brazil, 22-25 May 2016
Project(s): VASCO via OpenAIRE

See at: CNR ExploRA Open Access


2016 Conference article Restricted

Omnidirectional image capture on mobile devices for fast automatic generation of 2.5D indoor maps
Pintore G., Garro V., Ganovelli F., Gobbetti E., Agus M.
We introduce a light-weight automatic method to quickly capture and recover 2.5D multi-room indoor environments scaled to real-world metric dimensions. To minimize the user effort required, we capture and analyze a single omni-directional image per room using widely available mobile devices. Through a simple tracking of the user movements between rooms, we iterate the process to map and reconstruct entire floor plans. In order to infer 3D clues with minimal processing and without relying on the presence of texture or detail, we define a specialized spatial transform based on catadioptric theory to highlight the room's structure in a virtual projection. From this information, we define a parametric model of each room to formalize our problem as a global optimization solved by Levenberg-Marquardt iterations. The effectiveness of the method is demonstrated on several challenging real-world multi-room indoor scenes.
Source: IEEE Winter Conference on Applications of Computer Vision, Lake Placid, NY, USA, 7-10 March 2016
DOI: 10.1109/wacv.2016.7477631
Project(s): VASCO via OpenAIRE

See at: academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | www.crs4.it Restricted | xplorestaging.ieee.org Restricted
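The paper above fits a parametric room model with Levenberg-Marquardt iterations. As a loose analogue (not the authors' model), the sketch below uses SciPy's LM solver to recover the half-extents of an axis-aligned rectangular room from noisy centre-to-wall distances sampled at known angles; the room model and measurements are invented for illustration.

```python
# Toy Levenberg-Marquardt fit of a parametric (rectangular) room model.
import numpy as np
from scipy.optimize import least_squares

angles = np.linspace(0.05, 2 * np.pi, 72, endpoint=False)
true_w, true_d = 2.5, 4.0          # half-width / half-depth of the room

def boundary_dist(w, d, theta):
    """Distance from the room centre to the rectangle boundary at angle theta."""
    return np.minimum(w / np.abs(np.cos(theta)), d / np.abs(np.sin(theta)))

measured = boundary_dist(true_w, true_d, angles) + 0.02 * np.random.randn(angles.size)

def residuals(p):
    w, d = p
    return boundary_dist(w, d, angles) - measured

fit = least_squares(residuals, x0=[1.0, 1.0], method='lm')  # LM iterations
print(fit.x)  # roughly [2.5, 4.0]
```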


2016 Conference article Open Access OPEN

Mobile mapping and visualization of indoor structures to simplify scene understanding and location awareness
Pintore G., Ganovelli F., Gobbetti E., Scopigno R.
We present a technology to capture, reconstruct and explore multi-room indoor structures from panorama images generated with the aid of commodity mobile devices. Our approach is motivated by the need for fast and effective systems to simplify indoor data acquisition, as required in many real-world cases where mapping the structure is more important than capturing 3D details, such as the design of smart houses. We combine and extend state-of-the-art results to obtain indoor models scaled to their real-world metric dimensions, making them available for online exploration. Moreover, since our target is to assist end-users not necessarily skilled in virtual reality and 3D object interaction, we introduce a client-server image-based navigation system, exploiting this simplified indoor structure to support a low-degree-of-freedom user interface. We tested our approach in several indoor environments and carried out a preliminary user study to assess the usability of the system by people without a specific technical background.
Source: European Conference on Computer Vision, pp. 130–145, Amsterdam, The Netherlands, 8-10 October 2016
DOI: 10.1007/978-3-319-48881-3_10
Project(s): VASCO via OpenAIRE

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | link.springer.com Restricted | CNR ExploRA Restricted | rd.springer.com Restricted | www.crs4.it Restricted


2016 Conference article Restricted

Mobile reconstruction and exploration of indoor structures exploiting omnidirectional images
Pintore G., Ganovelli F., Gobbetti E., Scopigno R.
We summarize our recent advances in acquisition, reconstruction and exploration of indoor environments with the aid of mobile devices. Our methods enable casual users to quickly capture and recover multi-room structures coupled with their visual appearance, starting from panorama images generated with the built-in capabilities of modern mobile devices, as well as emerging low-cost 360° cameras. After introducing the reconstruction algorithms at the base of our approach, we show how to build applications able to generate 3D floor plans scaled to their real-world metric dimensions and capable of handling scenes not necessarily limited by Manhattan World assumptions. Then, exploiting the resulting structural and visual model, we propose a client-server interactive exploration system implementing a low-DOF navigation interface, specifically developed for touch interaction on smartphones and tablets.
Source: SIGGRAPH Asia 2016 Mobile Graphics and Interactive Applications, pp. 1–4, Macao, China, 5-8 December 2016
DOI: 10.1145/2999508.2999526
Project(s): VASCO via OpenAIRE

See at: academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | dl.acm.org Restricted | CNR ExploRA Restricted | www.crs4.it Restricted


2016 Conference article Restricted

Fast metric acquisition with mobile devices
Garro V., Pintore G., Ganovelli F., Gobbetti E., Scopigno R.
We present a novel algorithm for fast metric reconstruction on mobile devices using a combination of image and inertial acceleration data. In contrast to previous approaches to this problem, our algorithm does not require a long acquisition time or intensive data processing and can be implemented entirely on common IMU-enabled tablets and smartphones. The method recovers real-world units by comparing the acceleration values from the inertial sensors with the ones inferred from images. In order to cope with IMU signal noise, we propose a novel RANSAC-like strategy which helps to remove outliers. We demonstrate the effectiveness and accuracy of our method through an integrated mobile system returning point clouds in metric scale.
Source: Vision, Modeling and Visualization 2016, Bayreuth, Germany, 10-12 October 2016
DOI: 10.2312/vmv.20161339
Project(s): VASCO via OpenAIRE

See at: diglib.eg.org Restricted | CNR ExploRA Restricted
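The paper above recovers metric scale by comparing IMU accelerations with accelerations inferred from the up-to-scale visual trajectory, filtering outliers with a RANSAC-like strategy. The sketch below reproduces that idea on synthetic data; the noise model, threshold and sampling loop are assumptions, not the authors' implementation.

```python
# Toy sketch: metric scale from IMU vs image-derived accelerations,
# with a RANSAC-like vote to reject outlier samples.
import numpy as np

dt = 0.01
t = np.arange(0, 2, dt)
pos_metric = np.stack([np.sin(t), 0.5 * t**2, np.zeros_like(t)], axis=1)
true_scale = 0.2
pos_visual = pos_metric / true_scale             # up-to-scale SfM trajectory

acc_visual = np.gradient(np.gradient(pos_visual, dt, axis=0), dt, axis=0)
acc_imu = np.gradient(np.gradient(pos_metric, dt, axis=0), dt, axis=0)
acc_imu += 0.05 * np.random.randn(*acc_imu.shape)   # sensor noise
acc_imu[50:60] += 5.0                                # outlier burst

ratios = np.linalg.norm(acc_imu, axis=1) / np.linalg.norm(acc_visual, axis=1)

best_scale, best_support = None, -1
for s in np.random.choice(ratios, 100):      # RANSAC-like: sample, count inliers
    support = np.sum(np.abs(ratios - s) < 0.02)
    if support > best_support:
        best_scale, best_support = s, support
print(best_scale)  # roughly 0.2
```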


2015 Journal article Restricted

Fast and simple automatic alignment of large sets of range maps
Pingi P., Corsini M., Ganovelli F., Scopigno R.
We present a very fast and simple-to-implement algorithm for the automatic registration of a large number of range maps. The proposed algorithm exploits a compact and GPU-friendly descriptor specifically designed for the alignment of this type of data. This pairwise registration algorithm, which also includes a simple mechanism to avoid false positives, is part of a system capable of aligning a sequence of up to hundreds of range maps in a few minutes. In order to reduce the number of pairs to align in the case of unordered range maps, we use a prioritization strategy based on the fast computation of the correlation between range maps through FFT. The proposed system does not need any user input and was tested successfully on a large variety of datasets coming from real acquisition campaigns.
Source: Computers & graphics 47 (2015): 78–88. doi:10.1016/j.cag.2014.12.002
DOI: 10.1016/j.cag.2014.12.002
Project(s): HARVEST4D via OpenAIRE

See at: Computers & Graphics Restricted | CNR ExploRA Restricted
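The paper above prioritizes which range-map pairs to align by quickly computing their correlation through the FFT. The sketch below illustrates such an FFT-based correlation score on synthetic depth images; the normalization and scoring are assumptions, and the paper's descriptor-based alignment is not shown.

```python
# Illustrative FFT-based correlation score between two range maps.
import numpy as np

def fft_correlation_score(a, b):
    """Peak of the circular cross-correlation of two equally sized depth images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return corr.max() / a.size

depth = np.random.rand(128, 128)
shifted = np.roll(depth, (7, -3), axis=(0, 1))      # strongly overlapping pair
unrelated = np.random.rand(128, 128)                # weakly related pair

print(fft_correlation_score(depth, shifted))    # high -> try aligning first
print(fft_correlation_score(depth, unrelated))  # low  -> lower priority
```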