72 result(s)
2020 Article Open Access

A State of the Art Technology in Large Scale Underwater Monitoring
Pavoni G., Corsini M., Cignoni P.
In recent decades, benthic populations have been subjected to recurrent episodes of mass mortality. These events have been blamed in part on declining water quality and elevated water temperatures (see Figure 1) correlated to global climate change. Ecosystems are enhanced by the presence of species with three-dimensional growth. The study of the growth, resilience, and recovery capability of those species provides valuable information on the conservation status of entire habitats. We discuss here a state-of-the-art solution to speed up the monitoring of benthic populations through the automatic or assisted analysis of underwater visual data.
Source: ERCIM news 2020 (2020): 17–18.

See at: ercim-news.ercim.eu Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2020 Article Open Access

On improving the training of models for the semantic segmentation of benthic communities from orthographic imagery
Pavoni G., Corsini M., Callieri M., Fiameni G., Edwards C., Cignoni P.
The semantic segmentation of underwater imagery is an important step in the ecological analysis of coral habitats. To date, scientists produce fine-scale area annotations manually, an exceptionally time-consuming task that could be efficiently automated by modern CNNs. This paper extends our previous work presented at the 3DUW'19 conference, outlining the workflow for the automated annotation of imagery from the first step of dataset preparation to the last step of prediction reassembly. In particular, we propose an ecologically inspired strategy for an efficient dataset partition, an over-sampling methodology targeted at ortho-imagery, and a score fusion strategy. We also investigate the use of different loss functions in the optimization of a Deeplab V3+ model, to mitigate the class-imbalance problem and improve prediction accuracy on coral instance boundaries. The experimental results demonstrate the effectiveness of the ecologically inspired split in improving model performance, and quantify the advantages and limitations of the proposed over-sampling strategy. The extensive comparison of the loss functions gives numerous insights into the segmentation task; the Focal Tversky, typically used in the context of medical imaging (but not in remote sensing), turns out to be the most convenient choice. By improving the accuracy of automated ortho-image processing, the results presented here promise to meet the fundamental challenge of increasing the spatial and temporal scale of coral reef research, allowing researchers greater predictive ability to better manage coral reef resilience in the context of a changing environment.
Source: Remote sensing (Basel) 12 (2020). doi:10.3390/RS12183106
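The Focal Tversky loss mentioned in the abstract can be sketched in a few lines (a minimal per-pixel binary formulation in plain Python; the parameter values below are common illustrative defaults, not the paper's tuned settings):

```python
def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a flattened binary mask.

    pred:   predicted foreground probabilities
    target: binary ground-truth labels
    alpha/beta weight false negatives/positives (addressing class
    imbalance); gamma focuses training on hard examples.
    """
    tp = sum(p * t for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

With alpha > beta, missed foreground pixels (false negatives) are penalized more than spurious ones, which is the usual motivation for this loss on small, under-represented classes.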
DOI: 10.3390/RS12183106

See at: Remote Sensing Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2020 Article Embargo

Foreword to the special section on smart tools and applications for graphics (STAG 2019)
Agus M., Corsini M., Pintus R.
Source: Computers & graphics 91 (2020): A3–A4. doi:10.1016/j.cag.2020.05.027
DOI: 10.1016/j.cag.2020.05.027

See at: Computers & Graphics Restricted | CNR ExploRA Restricted | www.sciencedirect.com Restricted


2020 Conference object Open Access

Another Brick in the Wall: Improving the Assisted Semantic Segmentation of Masonry Walls
Pavoni G., Giuliani F., De Falco A., Corsini M., Ponchio F., Callieri M., Cignoni P.
In Architectural Heritage, the interpretation of masonry is an essential instrument for analyzing the construction phases, assessing structural properties, and monitoring the state of conservation. This work is generally carried out by specialists who, based on visual observation and their knowledge, manually annotate ortho-images of the masonry generated by photogrammetric surveys. This results in vectorial thematic maps segmented according to construction technique (isolating areas of homogeneous materials/structure/texture) or state of conservation, including degradation areas and damaged parts. This time-consuming manual work, often done with tools that have not been designed for this purpose, represents a bottleneck in the documentation and management workflow and is a severely limiting factor in monitoring large-scale monuments (e.g., city walls). This paper explores the potential of AI-based solutions to improve the efficiency of masonry annotation in Architectural Heritage. This experimentation aims at providing interactive tools that support and empower the current workflow, benefiting from specialists' expertise.
Source: 18th Eurographics Workshop on Graphics and Cultural Heritage, pp. 43–51, Online event, 18-19/11/2020
DOI: 10.2312/gch.20201291

See at: DOI Resolver Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2020 Article Restricted

Challenges in the deep learning-based semantic segmentation of benthic communities from Ortho-images
Pavoni G., Corsini M., Pedersen N., Petrovic V., Cignoni P.
Since the early days of low-cost camera development, the collection of visual data has become a common practice in the underwater monitoring field. Nevertheless, video and image sequences are a trustworthy source of knowledge that remains partially untapped. Human-based image analysis is a time-consuming task that creates a bottleneck between data collection and extrapolation. Nowadays, the annotation of biologically meaningful information from imagery can be efficiently automated or accelerated by convolutional neural networks (CNNs). Presenting our case studies, we offer an overview of the potentialities and difficulties of accurate automatic recognition and segmentation of benthic species. This paper focuses on the application of deep learning techniques to multi-view stereo reconstruction by-products (registered images, point clouds, ortho-projections), considering the proliferation of these techniques among the marine science community. Of particular importance is the need to semantically segment imagery in order to generate demographic data vital to understanding and exploring the changes happening within marine communities.
Source: Applied geomatics (Print) (2020). doi:10.1007/s12518-020-00331-6
DOI: 10.1007/s12518-020-00331-6

See at: Applied Geomatics Restricted | CNR ExploRA Restricted


2019 Article Open Access

Semantic segmentation of Benthic communities from ortho-mosaic maps
Pavoni G., Corsini M., Callieri M., Palma M., Scopigno R.
Visual sampling techniques represent a valuable resource for rapid, non-invasive data acquisition for underwater monitoring purposes. Long-term monitoring projects usually require the collection of large quantities of data, and the visual analysis of a human expert operator remains, in this context, a very time-consuming task. It has been estimated that only 1-2% of the acquired images are later analyzed by scientists (Beijbom et al., 2012). Strategies for the automatic recognition of benthic communities are required to effectively exploit all the information contained in visual data. Supervised learning methods, the most promising classification techniques in this field, are commonly affected by two recurring issues: the wide diversity of marine organisms, and the small amount of labeled data. In this work, we discuss the advantages offered by the use of annotated high-resolution ortho-mosaics of the seabed to classify and segment the investigated specimens, and we suggest several strategies to obtain considerable per-pixel classification performance despite the use of a reduced training dataset composed of a single ortho-mosaic. The proposed methodology can be applied to a large number of different species, making the procedure of marine organism identification a highly adaptable task.
Source: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS Annals) 42 (2019): 151–158. doi:10.5194/isprs-archives-XLII-2-W10-151-2019
DOI: 10.5194/isprs-archives-XLII-2-W10-151-2019
Project(s): GreenBubbles via OpenAIRE

See at: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | ISTI Repository Open Access | CNR ExploRA Open Access | www.int-arch-photogramm-remote-sens-spatial-inf-sci.net Open Access


2019 Conference object Restricted

Image sets compression via patch redundancy
Corsini M., Banterle F., Ponchio F., Cignoni P.
In recent years, the development of compression algorithms for image collections (e.g., photo albums) has become very popular due to the enormous diffusion of digital photographs. Typically, current solutions create an image sequence from the images of the photo album to make them suitable for compression with a High Efficiency Video Coding (HEVC) encoder. In this study, we investigated a different approach to compressing a collection of similar images. Our main idea is to exploit inter- and intra-patch redundancy to compress the entire set of images. In practice, our approach is equivalent to compressing the image set with Vector Quantization (VQ) using a global codebook. Our tests show that our clusterization algorithm is effective for a large number of images.
Source: EUVIP 2019 - 8th European Workshop on Visual Information Processing, pp. 10–15, Roma, Italy, 28-31 October 2019
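The core idea — encoding every image of the set against one shared patch codebook, as in Vector Quantization — can be sketched as follows (a toy version with tiny flattened patches and a fixed codebook; the actual system clusters patches drawn from the whole collection):

```python
def squared_dist(a, b):
    # squared Euclidean distance between two flattened patches
    return sum((x - y) ** 2 for x, y in zip(a, b))

def vq_encode(patches, codebook):
    # replace each patch by the index of its nearest codeword
    return [min(range(len(codebook)), key=lambda i: squared_dist(p, codebook[i]))
            for p in patches]

def vq_decode(indices, codebook):
    # reconstruction: look each codeword up again
    return [codebook[i] for i in indices]
```

Because the codebook is global, patches that recur across different images of the collection are stored once and referenced by index, which is where the inter-image redundancy is exploited.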
DOI: 10.1109/EUVIP47703.2019.8946237

See at: Unknown Repository Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted


2019 Article Restricted

RELIGHT: a compact and accurate RTI representation for the web
Ponchio F., Corsini M., Scopigno R.
Relightable images have been widely used as a valuable tool in the study of Cultural Heritage (CH) artifacts, including coins, bas-reliefs, paintings, and epigraphs. Reflectance Transformation Imaging (RTI), a commonly used type of relightable image, consists of a per-pixel function which encodes the reflection behavior, estimated from a set of digital photographs acquired from a fixed view. Web visualisation tools for RTI images currently require transmitting substantial quantities of data in order to achieve high-fidelity renderings. We propose a web-friendly compact representation for RTI images based on a joint interpolation-compression scheme that combines PCA-based data reduction with Gaussian Radial Basis Function (RBF) interpolation, exhibiting superior performance in terms of quality/size ratio. This approach can also be adapted to other data interpolation schemes, and it is not limited to Gaussian RBF. The rendering part is simple to implement and computationally efficient, allowing real-time rendering on low-end devices.
Source: Graphical models (Print) 105 (2019). doi:10.1016/j.gmod.2019.101040
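The Gaussian RBF interpolation at the heart of such a representation can be sketched self-containedly (a toy fit over 2D light directions with a tiny dense solver; sigma and the example data are illustrative, and the paper additionally applies PCA-based reduction to the per-pixel data):

```python
import math

def gauss_kernel(a, b, sigma):
    # isotropic Gaussian kernel between two light directions
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def solve_linear(A, b):
    # tiny dense solver: Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def rbf_fit(light_dirs, values, sigma=0.6):
    # solve K w = values so the interpolant passes through every sample
    K = [[gauss_kernel(a, b, sigma) for b in light_dirs] for a in light_dirs]
    return solve_linear(K, values)

def rbf_relight(light_dirs, weights, query, sigma=0.6):
    # evaluate the interpolant for a new (unseen) light direction
    return sum(w * gauss_kernel(query, d, sigma)
               for w, d in zip(weights, light_dirs))
```

The defining property tested below is exact interpolation: relighting at any sampled light direction reproduces the observed intensity, while queries in between are smoothly blended by the kernel.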
DOI: 10.1016/j.gmod.2019.101040

See at: Graphical Models Restricted | CNR ExploRA Restricted


2019 Conference object Open Access

A complete framework operating spatially-oriented RTI in a 3D/2D cultural heritage documentation and analysis tool
Pamart A., Ponchio F., Abergel V., Alaoui M'darhri A., Corsini M., Dellepiane M., Morlet F., Scopigno R., De Luca L.
Close-Range Photogrammetry (CRP) and Reflectance Transformation Imaging (RTI) are two of the most used image-based techniques for documenting and analyzing Cultural Heritage (CH) objects. Nevertheless, their potential impact in supporting the study and analysis of the conservation status of CH assets is reduced, as they remain mostly applied and analyzed separately, mainly because we lack easy-to-use tools for the spatial registration of multimodal data and for their joint visualisation. The aim of this paper is to describe a complete framework for effective data fusion and to present a user-friendly viewer enabling the joint visual analysis of 2D/3D data and RTI images. This contribution is framed by the ongoing implementation of automatic multimodal registration (3D, 2D RGB and RTI) into a collaborative web platform (AIOLI), enabling the management of hybrid representations through an intuitive visualization framework and also supporting semantic enrichment through spatialized 2D/3D annotations.
Source: 8th International Workshop 3D-ARCH "3D Virtual Reconstruction and Visualization of Complex Architectures", pp. 573–580, Bergamo, Italy, 6-8 February 2019
DOI: 10.5194/isprs-archives-XLII-2-W9-573-2019

See at: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | Mémoires en Sciences de l'Information et de la Communication Open Access | CNR ExploRA Open Access


2019 Conference object Open Access

A Validation Tool For Improving Semantic Segmentation of Complex Natural Structures
Pavoni G., Corsini M., Palma M., Scopigno R.
The automatic recognition of natural structures is a challenging task in the supervised learning field. Complex morphologies are difficult to detect both for the networks, which may suffer from generalization issues, and for human operators, affecting the consistency of training datasets. Manually annotating biological structures is not comparable to the generic task of detecting an object (a car, a cat, or a flower) within an image: biological structures are more similar to textures, and specimen borders exhibit intricate shapes. In this specific context, manual labelling is very sensitive to human error. The interactive validation of the predictions is a valuable resource to improve network performance and address the inaccuracy caused by the lack of annotation consistency of human operators reported in the literature. The proposed tool, inspired by the Yes/No Answer paradigm, integrates the semantic segmentation results coming from a CNN with the previous human labeling, allowing a more accurate annotation of thousands of instances in a short time. At the end of the validation, it is possible to obtain corrected statistics or export the integrated dataset and re-train the network.
Source: Eurographics 2019, pp. 57–60, Genova, 6/5/2019-10/5/2019
DOI: 10.2312/egs.20191014

See at: diglib.eg.org Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2018 Article Open Access

Enhanced visualization of detected 3D geometric differences
Palma G., Sabbadin M., Corsini M., Cignoni P.
The wide availability of 3D acquisition devices makes their use for shape monitoring viable. Current techniques for the analysis of time-varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we want to show, at the same time, the original appearance of the 3D model. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences that have been detected as significant. Additionally, the same technique is able to visually hide the other negligible, yet visible, variations. The main idea is to use two distinct screen-space, time-based interpolation functions: one for the significant 3D differences and one for the small variations to hide. We have validated the proposed approach in a user study on different classes of datasets, proving the objective and subjective effectiveness of the method.
Source: Computer graphics forum (Online) 35 (2018): 159–171. doi:10.1111/cgf.13239
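The idea of two distinct time-based interpolation functions can be illustrated with a small sketch (the exact curves below are an assumption for illustration, not the paper's: the point is only that significant differences transition sharply so they "pop", while negligible variations cross-fade smoothly enough to stay visually masked):

```python
import math

def blend_weights(t, steepness=12.0):
    """Blend weights for the A->B model switch at normalized time t in [0, 1].

    w_sig: near step-like logistic curve for the significant differences.
    w_neg: slow smoothstep cross-fade for the negligible variations.
    (Both curves are illustrative choices, not the paper's exact functions.)
    """
    w_sig = 1.0 / (1.0 + math.exp(-steepness * (t - 0.5)))
    w_neg = t * t * (3.0 - 2.0 * t)  # smoothstep
    return w_sig, w_neg
```

Per pixel, the renderer would blend the two models with w_sig where a significant change was detected and with w_neg elsewhere, so the viewer's attention is drawn only to the real changes.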
DOI: 10.1111/cgf.13239
Project(s): HARVEST4D via OpenAIRE

See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA Restricted


2018 Conference object Restricted

A Compact Representation of Relightable Images for the Web
Ponchio F., Corsini M., Scopigno R.
Relightable images have been demonstrated to be a valuable tool for the study and analysis of coins, bas-reliefs, paintings, and epigraphy in the Cultural Heritage (CH) field. Reflectance Transformation Imaging (RTI) images are the most widespread type of relightable image. An RTI image consists of a per-pixel function which encodes the reflection behavior, estimated from a set of digital photographs acquired from a fixed view. Even if web visualization tools for RTI images are available, high fidelity of the relighted images still requires a large amount of data to be transmitted. To overcome this limit, we propose a web-friendly compact representation for RTI images which allows very high quality of the rendered images with a relatively small amount of data (on the order of 6-9 standard JPEG color images). The proposed approach is based on a joint interpolation-compression scheme that combines PCA-based data reduction with Gaussian Radial Basis Function (RBF) interpolation. We will see that the proposed approach can also be adapted to other data interpolation schemes, and it is not limited to Gaussian RBF. The proposed approach has been compared with several techniques, demonstrating its superior performance in terms of quality/size ratio. Additionally, the rendering part is simple to implement and very efficient in terms of computational cost. This allows real-time rendering also on low-end devices.
Source: Web3D '18 - 23rd International ACM Conference on 3D Web Technology, Poznan, Poland, 20-22 June, 2018
DOI: 10.1145/3208806.3208820
Project(s): PARTHENOS via OpenAIRE

See at: Unknown Repository Restricted | dl.acm.org Restricted | CNR ExploRA Restricted


2017 Article Restricted

Presentation of 3D scenes through video example
Baldacci A., Ganovelli F., Corsini M., Scopigno R.
Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers and Cultural Heritage professionals; however, it is usually time-consuming and, in order to obtain high-quality results, the support of a film maker/computer animation expert is necessary. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to the optical flow of the input video. Our intuition is supported by a user study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
Source: IEEE transactions on visualization and computer graphics 23 (2017): 2096–2107. doi:10.1109/TVCG.2016.2608828
DOI: 10.1109/TVCG.2016.2608828

See at: IEEE Transactions on Visualization and Computer Graphics Restricted | CNR ExploRA Restricted


2016 Conference object Open Access

Practical-HDR: a simple and effective method for merging high dynamic range videos
Akçora D. E., Banterle F., Corsini M., Akyuz A. O., Scopigno R.
We introduce a novel algorithm for obtaining High Dynamic Range (HDR) videos from Standard Dynamic Range (SDR) videos with varying shutter speed or ISO per frame. This capturing technique represents today one of the most popular HDR video acquisition methods, thanks to the availability and the low cost of the equipment required, i.e., an off-the-shelf DSLR camera. However, naïvely merging SDR frames into an HDR video can produce artifacts such as ghosts (when the scene is dynamic) and blurry edges (when the camera moves). In this work, we present a straightforward, easy-to-implement, and fast technique that produces reasonable results in a short time. This is key for having quick previews of the captured videos without waiting for a long processing time, which is extremely important, especially when capturing videos on modern mobile devices such as smartphones and/or tablets.
Source: CVMP 2016 - 13th European Conference on Visual Media Production, London, United Kingdom, 12-13 December 2016
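A minimal version of this kind of exposure merging looks like the following (a hat-weighted average in linear radiance space; the weighting function and the assumption of already-linear, aligned pixel values are illustrative simplifications, not the paper's exact pipeline):

```python
def hat_weight(z):
    # downweight under- and over-exposed pixels (z in [0, 1])
    return 1.0 - abs(2.0 * z - 1.0)

def merge_hdr(frames, shutter_times):
    """Merge aligned SDR frames (lists of linear pixel values in [0, 1])
    captured with different shutter times into one HDR radiance list."""
    merged = []
    for pixel_stack in zip(*frames):
        # each sample estimates radiance as z / t; average with hat weights
        num = sum(hat_weight(z) * (z / t) for z, t in zip(pixel_stack, shutter_times))
        den = sum(hat_weight(z) for z in pixel_stack)
        # fall back to the middle exposure if every sample is unreliable
        mid = len(pixel_stack) // 2
        merged.append(num / den if den > 0 else pixel_stack[mid] / shutter_times[mid])
    return merged
```

Dividing each value by its shutter time brings all exposures into a common radiance scale before averaging, which is what recovers the extended dynamic range.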
DOI: 10.1145/2998559.2998568

See at: OpenMETU Open Access | Unknown Repository Restricted | dl.acm.org Restricted | CNR ExploRA Restricted


2015 Article Restricted

Fast and simple automatic alignment of large sets of range maps
Pingi P., Corsini M., Ganovelli F., Scopigno R.
We present a very fast and simple-to-implement algorithm for the automatic registration of a large number of range maps. The proposed algorithm exploits a compact and GPU-friendly descriptor specifically designed for the alignment of this type of data. This pairwise registration algorithm, which also includes a simple mechanism to avoid false positives, is part of a system capable of aligning a sequence of up to hundreds of range maps in a few minutes. In order to reduce the number of pairs to align in the case of unordered range maps, we use a prioritization strategy based on the fast computation of the correlation between range maps through the FFT. The proposed system does not need any user input and was tested successfully on a large variety of datasets coming from real acquisition campaigns.
Source: Computers & graphics 47 (2015): 78–88. doi:10.1016/j.cag.2014.12.002
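The correlation-based prioritization can be illustrated with a 1D sketch (direct cross-correlation here for clarity; the paper computes it through the FFT for speed, and on 2D range maps):

```python
def best_shift(a, b):
    """Return the integer shift of b that maximizes its correlation with a.

    A high peak suggests the two (height) profiles overlap well, so the
    corresponding pair is a good candidate for pairwise alignment.
    """
    n = len(a)
    best, best_s = -float("inf"), 0
    for s in range(-n + 1, n):
        # correlate a with b shifted by s, over the valid overlap only
        c = sum(a[i] * b[i - s] for i in range(max(0, s), min(n, n + s)))
        if c > best:
            best, best_s = c, s
    return best_s
```

The FFT version computes all these correlation values at once in O(n log n) via the convolution theorem, which is what makes the prioritization cheap enough to run over every candidate pair.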
DOI: 10.1016/j.cag.2014.12.002
Project(s): HARVEST4D via OpenAIRE

See at: Computers & Graphics Restricted | CNR ExploRA Restricted


2015 Article Restricted

3DHOP: 3D heritage online presenter
Potenziani M., Callieri M., Dellepiane M., Corsini M., Ponchio F., Scopigno R.
3D Heritage Online Presenter (3DHOP) is a framework for the creation of advanced web-based visual presentations of high-resolution 3D content. 3DHOP has been designed to cope with the specific needs of the Cultural Heritage (CH) field. By using multiresolution encoding, it is able to efficiently stream high-resolution 3D models (such as the sampled models usually employed in CH applications); it provides a series of ready-to-use templates and examples tailored for the presentation of CH artifacts; and it interconnects the 3D visualization with the rest of the webpage DOM, making it possible to create integrated presentation schemes (3D + multimedia). In its design and development, we paid particular attention to three factors: ease of use, a smooth learning curve, and performance. Thanks to its modular nature and a declarative-like setup, it is easy to learn, configure, and customize at different levels, depending on the programming skills of the user. This allows people with different backgrounds to always obtain the required power and flexibility from the framework. 3DHOP is written in JavaScript and is based on the SpiderGL library, which employs the WebGL subset of HTML5, implementing plugin-free 3D rendering on many web browsers. In this paper we present the capabilities and characteristics of the 3DHOP framework, using different examples based on concrete projects.
Source: Computers & graphics 52 (2015): 129–141. doi:10.1016/j.cag.2015.07.001
DOI: 10.1016/j.cag.2015.07.001
Project(s): ARIADNE via OpenAIRE

See at: Computers & Graphics Restricted | CNR ExploRA Restricted


2015 Other Unknown

3D Heritage online presenter
Callieri M., Potenziani M., Dellepiane M., Corsini M., Ponchio F., Scopigno R.
3D Heritage Online Presenter (3DHOP) is a framework for the creation of advanced web-based visual presentations of high-resolution 3D content. 3DHOP has been designed to cope with the specific needs of the Cultural Heritage (CH) field. By using multiresolution encoding, it is able to efficiently stream high-resolution 3D models (such as the sampled models usually employed in CH applications); it provides a series of ready-to-use templates and examples tailored for the presentation of CH artifacts; and it interconnects the 3D visualization with the rest of the webpage DOM, making it possible to create integrated presentation schemes (3D + multimedia). Thanks to its modular nature and a declarative-like setup, it is easy to learn, configure, and customize at different levels, depending on the programming skills of the user. This allows people with different backgrounds to always obtain the required power and flexibility from the framework. 3DHOP is written in JavaScript and is based on the SpiderGL library, which employs the WebGL subset of HTML5, implementing plugin-free 3D rendering on many web browsers.
Project(s): ARIADNE via OpenAIRE

See at: 3dhop.net | CNR ExploRA


2015 Article Restricted

3D reconstruction for featureless scenes with curvature hints
Baldacci A., Bernabei D., Corsini M., Ganovelli F., Scopigno R.
We present a novel interactive framework for improving 3D reconstructions, starting from incomplete or noisy results obtained through image-based reconstruction algorithms. The core idea is to enable the user to provide localized hints on the curvature of the surface, which are turned into constraints during an energy-minimization reconstruction. To make this task simple, we propose two algorithms. The first is a multi-view segmentation algorithm that allows the user to propagate the foreground selection of one or more images both to all the images of the input set and to the 3D points, to accurately select the part of the scene to be reconstructed. The second is a fast GPU-based algorithm for the reconstruction of smooth surfaces from multiple views, which incorporates the hints provided by the user. We show that our framework can turn a poor-quality reconstruction produced with state-of-the-art image-based reconstruction methods into a high-quality one.
Source: The visual computer 16 (2015). doi:10.1007/s00371-015-1144-5
DOI: 10.1007/s00371-015-1144-5
Project(s): HARVEST4D via OpenAIRE

See at: The Visual Computer Restricted | link.springer.com Restricted | CNR ExploRA Restricted


2015 Article Open Access

3DHOP: a flexible platform for the Web publication and visualization of 3D digitization results [in Italian]
Potenziani M., Callieri M., Dellepiane M., Corsini M., Ponchio F., Scopigno R.
3DHOP (3D Heritage Online Presenter) is an innovative technological solution for the advanced presentation of high-resolution 3D content on the Web. The design of this tool has been focused on the Cultural Heritage (CH) field, even though its versatility makes it a general-purpose instrument. 3DHOP is particularly suitable for the online presentation of CH artifacts due to its main features: the capability to efficiently stream high-resolution 3D models (such as those coming from 3D scanning, which are usually employed in CH); the possibility to build integrated presentation schemes by interconnecting the viewer to the rest of the web page elements; and, finally, the ready-to-use templates and configuration examples focused on CH applications. In its design and development, we paid particular attention to three factors: ease of use, a smooth learning curve, and performance. 3DHOP is written in JavaScript and uses the WebGL subset of HTML5 for efficient rendering. Thanks to its modular nature and a declarative-like setup, it is easy to learn and may be configured and customized at different levels, making it accessible to people without advanced knowledge of Computer Graphics (CG) programming. In this paper we present the capabilities and characteristics of the third release of this tool, using some examples based on real-world projects.
Source: Archeomatica (Roma) 6 (2015): 6–11. doi:10.48258/arc.v6i4.1216
DOI: 10.48258/arc.v6i4.1216

See at: issuu.com Open Access | CNR ExploRA Open Access


2014 Conference object Open Access

Painting with Bob: assisted creativity for novices
Benedetti L., Winnemoeller H., Corsini M., Scopigno R.
Current digital painting tools are primarily targeted at professionals and are often overwhelmingly complex for novices. At the same time, simpler tools may not engage the user creatively, or are limited to plain styles that lack visual sophistication. There are many people who are not art professionals, yet would like to partake in digital creative expression. Challenges and rewards for novices differ greatly from those for professionals. In this paper, we leverage existing work on Creativity and Creativity Support Tools (CST) to formulate design goals specifically for digital art-creation tools for novices. We implemented these goals within a digital painting system called Painting with Bob. We evaluate the efficacy of the design and our prototype with a user study, and we find that users are highly satisfied with the user experience, as well as with the paintings created with our system.
Source: ACM Symposium on User Interface Software and Technology 2014 (UIST 2014), pp. 419–428, Honolulu, USA, 05-08 October 2014
DOI: 10.1145/2642918.2647415

See at: Unknown Repository Open Access | Unknown Repository Restricted | dl.acm.org Restricted | University of Bath's research portal Restricted | CNR ExploRA Restricted