2021 Contribution to book Restricted

Virtual clones for cultural heritage applications
Potenziani M., Banterle F., Callieri M., Dellepiane M., Ponchio F., Scopigno R.
Digital technologies are now mature enough to produce high-quality digital replicas of Cultural Heritage (CH) artifacts. The research results produced in the last decade have shown an impressive evolution and consolidation of the technologies for acquiring high-quality digital 3D models, encompassing both geometry and color (or, better, surface reflectance properties). We present some recent technologies for constructing 3D models enriched by a high-quality encoding of the color attribute. The focus of this paper is to show and discuss practical solutions that can be deployed without requiring the installation of a specific or sophisticated acquisition lab setup. In the second part of the paper, we focus on new solutions for the interactive visualization of complex models, suited to modern communication channels such as the web and mobile platforms. Together with the algorithms and approaches, we also show some practical examples where high-quality 3D models have been used in CH research, restoration and conservation.
Source: From Pen to Pixel - Studies of the Roman Forum and the Digital Future of World Heritage, edited by Fortini Patrizia, Krusche Krupali, pp. 225–233. Roma: L'Erma di Bretschneider, 2021

See at: CNR ExploRA Restricted | www.lerma.it Restricted


2021 Conference article Open Access

Collaborative Visual Environments for Evidence Taking in Digital Justice: a Design Concept
Erra U., Capece N., Lettieri N., Fabiani E., Banterle F., Cignoni P., Dazzi P., Aleotti J., Monica R.
In recent years, Spatial Computing (SC) has emerged as a novel paradigm thanks to advancements in Extended Reality (XR), remote sensing, and artificial intelligence. Computers are now increasingly aware of physical environments (i.e., object shape, size, location and movement) and can use this knowledge to blend technology into reality seamlessly, merge digital and real worlds, and connect users by providing innovative interaction methods. Criminal and civil trials offer an ideal scenario for exploiting Spatial Computing. The taking of evidence, indeed, is a complex activity that not only involves several actors (judges, lawyers, clerks, advisors) but also often requires accurate topographic surveys of places and objects. Moreover, another essential means of proof, the "judicial experiments" - reproductions of real-world events (e.g. a road accident) the judge uses to evaluate if and how a given fact has taken place - could be usefully carried out in virtual environments. In this paper we propose a novel approach to digital justice based on a multi-user, multimodal virtual collaboration platform that enables technology-enhanced acquisition and analysis of trial evidence.
Source: FRAME'21 - 1st Workshop on Flexible Resource and Application Management on the Edge, Sweden, Virtual Event, 25/06/2021
DOI: 10.1145/3452369.3463820
Project(s): ACCORDION via OpenAIRE

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2020 Journal article Restricted

Turning a Smartphone Selfie into a Studio Portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Erra U., Potel M.
We introduce a novel algorithm that turns a flash selfie taken with a smartphone into a studio-like photograph with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in a controlled environment. For each pair, we have one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend lighting artifacts introduced by a close-up camera flash, such as specular highlights, shadows, and skin shine.
Source: IEEE computer graphics and applications 40 (2020): 140–147. doi:10.1109/MCG.2019.2958274
DOI: 10.1109/mcg.2019.2958274

See at: IEEE Computer Graphics and Applications Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted
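
The record above describes a paired image-to-image network trained on flash/studio photograph pairs. As a rough illustration of that training setup, here is a minimal PyTorch sketch; the tiny encoder-decoder and the L1 loss are illustrative assumptions, not the architecture or loss from the paper.

```python
import torch
import torch.nn as nn

# Minimal paired image-to-image setup: the network maps a flash-lit selfie
# to a studio-lit estimate and is supervised by the studio photograph.
class TinyFlashToStudio(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, flash_img):
        return self.net(flash_img)

model = TinyFlashToStudio()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a placeholder flash/studio pair (B, 3, H, W in [0, 1]).
flash = torch.rand(4, 3, 128, 128)   # flash-lit inputs
studio = torch.rand(4, 3, 128, 128)  # studio-lit ground truth
loss = nn.functional.l1_loss(model(flash), studio)
opt.zero_grad()
loss.backward()
opt.step()
```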


2020 Conference article Open Access

NoR-VDPNet: a no-reference high dynamic range quality metric trained on HDR-VDP 2
Banterle F., Artusi A., Moreo A., Carrara F.
HDR-VDP 2 has convincingly been shown to be a reliable metric for image quality assessment, and it is currently playing a remarkable role in the evaluation of complex image processing algorithms. However, HDR-VDP 2 is known to be computationally expensive (both in terms of time and memory) and is constrained by the availability of a ground-truth image (the so-called reference) against which the quality of a processed image is quantified. These aspects impose severe limitations on the applicability of HDR-VDP 2 to real-world scenarios involving large quantities of data or requiring real-time responses. To address these issues, we propose the Deep No-Reference Quality Metric (NoR-VDPNet), a deep-learning approach that learns to predict the global image quality feature (i.e., the mean-opinion-score index Q) that HDR-VDP 2 computes. NoR-VDPNet is no-reference (i.e., it operates without a ground-truth reference) and its computational cost is substantially lower than that of HDR-VDP 2 (by more than an order of magnitude). We demonstrate the performance of NoR-VDPNet in a variety of scenarios, including the optimization of the parameters of a denoiser and JPEG-XT.
Source: IEEE International Conference on Image Processing (ICIP 2020), pp. 126–130, Abu Dhabi, United Arab Emirates, 25/10/2020-28/10/2020
DOI: 10.1109/icip40778.2020.9191202
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE

See at: ISTI Repository Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | xplorestaging.ieee.org Restricted
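
The record above describes distilling HDR-VDP 2 into a no-reference CNN that regresses the quality index Q from a single image. A minimal sketch of that distillation idea follows, assuming PyTorch and a placeholder backbone; the real NoR-VDPNet architecture and training data differ.

```python
import torch
import torch.nn as nn

# No-reference quality regression: train a small CNN to predict the scalar
# Q index that HDR-VDP 2 would compute, so no reference image is needed
# at inference time.
class QRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = QRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Training pairs: distorted images and Q scores precomputed offline with
# the full-reference HDR-VDP 2 (placeholder random values here).
imgs = torch.rand(8, 3, 256, 256)
q_targets = torch.rand(8) * 100.0
loss = nn.functional.mse_loss(model(imgs), q_targets)
opt.zero_grad()
loss.backward()
opt.step()
```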


2019 Journal article Open Access

DeepFlash: turning a flash selfie into a studio portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Scopigno R., Erra U.
We present a method for turning a flash selfie taken with a smartphone into a photograph as if it had been taken in a studio setting with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in an ad-hoc acquisition campaign. Each pair consists of one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend defects introduced by a close-up camera flash, such as specular highlights, shadows, skin shine, and flattened images.
Source: Signal processing. Image communication 77 (2019): 28–39. doi:10.1016/j.image.2019.05.013
DOI: 10.1016/j.image.2019.05.013

See at: arXiv.org e-Print Archive Open Access | Signal Processing Image Communication Open Access | ISTI Repository Open Access | Signal Processing Image Communication Restricted | CNR ExploRA Restricted


2019 Journal article Open Access

High dynamic range point clouds for real-time relighting
Sabbadin M., Palma G., Banterle F., Boubekeur T., Cignoni P.
Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data have been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to the perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
Source: Computer graphics forum (Online) 38 (2019): 513–525. doi:10.1111/cgf.13857
DOI: 10.1111/cgf.13857
Project(s): EMOTIVE via OpenAIRE

See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | diglib.eg.org Restricted | CNR ExploRA Restricted
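
The paper expands an LDR point cloud by expanding renderings, reprojecting them onto the cloud, and solving a Poisson system; the sketch below is only a much cruder stand-in (per-point gamma linearization plus a luminance rescale against an HDR exemplar peak), included to illustrate what "boosting" LDR point colors means. NumPy, with all values placeholders.

```python
import numpy as np

# Crude stand-in for LDR-to-HDR expansion of point-cloud colors: undo the
# display gamma, then rescale luminance to the peak of an HDR exemplar.
# The paper's actual pipeline (expanding renderings, reprojecting onto the
# cloud, Poisson propagation) is far more involved.
def expand_point_colors(ldr_rgb, hdr_exemplar_peak, gamma=2.2):
    """ldr_rgb: (N, 3) colors in [0, 1]; returns (N, 3) linear HDR estimates."""
    linear = np.power(np.clip(ldr_rgb, 0.0, 1.0), gamma)  # linearize sRGB-like colors
    lum = linear @ np.array([0.2126, 0.7152, 0.0722])     # Rec. 709 luminance
    scale = hdr_exemplar_peak / max(lum.max(), 1e-6)      # match exemplar peak
    return linear * scale

colors = np.random.rand(1000, 3)                 # placeholder LDR point colors
hdr_colors = expand_point_colors(colors, hdr_exemplar_peak=800.0)
```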


2019 Conference article Open Access

HMD-TMO: A Tone Mapping Operator for 360 degrees HDR Images Visualization for Head Mounted Displays
Goude I., Cozot R., Banterle F.
We propose a Tone Mapping Operator, denoted HMD-TMO, dedicated to the visualization of 360-degree High Dynamic Range images on Head Mounted Displays. The few existing studies on this topic have shown that existing Tone Mapping Operators for classic 2D images are not adapted to 360-degree High Dynamic Range images. Consequently, several dedicated operators have been proposed. Instead of operating on the entire 360-degree image, they only consider the part of the image currently viewed by the user. Tone mapping a part of the 360-degree image is less challenging, but it does not preserve the global luminance dynamics of the scene. To cope with this problem, we propose a novel tone mapping operator that takes advantage of both a view-dependent tone mapping that enhances contrast, and a Tone Mapping Operator applied to the entire 360-degree image that preserves global coherency. Furthermore, we present a subjective study to model lightness perception in a Head Mounted Display.
Source: Computer Graphics International Conference (CGI 2019), pp. 216–227, Calgary, Canada, 17/06/2019 - 20/06/2019
DOI: 10.1007/978-3-030-22514-8_18

See at: hal.archives-ouvertes.fr Open Access | ISTI Repository Open Access | CNR ExploRA Open Access | vcg.isti.cnr.it Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | hal.archives-ouvertes.fr Restricted | link.springer.com Restricted
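
The operator above combines a global tone map of the whole panorama (for coherency) with a view-dependent one (for contrast). A toy sketch of that combination follows, using a simple Reinhard-style curve and a fixed blend weight; both are assumptions, not the published operator.

```python
import numpy as np

def luminance(rgb):
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def reinhard(lum, log_avg, key=0.18):
    scaled = key * lum / log_avg
    return scaled / (1.0 + scaled)

# Blend a globally coherent tone map (statistics from the full 360 image)
# with a view-dependent one (statistics from the current viewport only).
def hmd_tmo(pano_hdr, viewport_mask, blend=0.5):
    lum = luminance(pano_hdr)
    log_avg_global = np.exp(np.mean(np.log(lum + 1e-6)))
    log_avg_view = np.exp(np.mean(np.log(lum[viewport_mask] + 1e-6)))
    tm_global = reinhard(lum, log_avg_global)  # preserves global coherency
    tm_view = reinhard(lum, log_avg_view)      # adapts to the viewed region
    return blend * tm_global + (1.0 - blend) * tm_view

pano = np.random.rand(512, 1024, 3) * 1000.0   # placeholder HDR panorama
mask = np.zeros((512, 1024), dtype=bool)
mask[128:384, 384:640] = True                  # current head-mounted viewport
tonemapped_lum = hmd_tmo(pano, mask)
```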


2019 Journal article Open Access

Efficient Evaluation of Image Quality via Deep-Learning Approximation of Perceptual Metrics
Artusi A., Banterle F., Moreo A., Carrara F.
Image metrics based on the Human Visual System (HVS) play a remarkable role in the evaluation of complex image processing algorithms. However, mimicking the HVS is known to be complex and computationally expensive (both in terms of time and memory), and its usage is thus limited to a few applications and to small input data. All of this makes such metrics not fully attractive in real-world scenarios. To address these issues, we propose the Deep Image Quality Metric (DIQM), a deep-learning approach to learn the global image quality feature (mean-opinion-score). DIQM can emulate existing visual metrics efficiently, reducing the computational costs by more than an order of magnitude with respect to existing implementations.
Source: IEEE transactions on image processing (Online) 29 (2019): 1843–1855. doi:10.1109/TIP.2019.2944079
DOI: 10.1109/tip.2019.2944079
Project(s): ENCORE via OpenAIRE, RISE via OpenAIRE

See at: ISTI Repository Open Access | ZENODO Open Access | IEEE Transactions on Image Processing Restricted | CNR ExploRA Restricted | vcg.isti.cnr.it Restricted


2019 Journal article Open Access

Developing the ArchAIDE application: A digital workflow for identifying, organising and sharing archaeological pottery using automated image recognition
Anichini F., Banterle F., Buxeda I Garrigós J., Callieri M., Dershowitz N., Diaz D. L., Evans T., Gattiglia G., Gualandi M. L., Hervas M. A., Itkin B., Madrid I Fernandez M., Miguel Gascón E., Remmy M., Richards J., Scopigno R., Vila L., Wolf L., Wright H., Zallocco M.
Every day, archaeologists are working to discover and tell stories using objects from the past, investing considerable time, effort and funding to identify and characterise individual finds. Pottery is of fundamental importance for the comprehension and dating of archaeological contexts, and for understanding the dynamics of production, trade flows, and social interactions. Today, characterisation and classification of ceramics are carried out manually, through the expertise of specialists and the use of analogue catalogues held in archives and libraries. While not seeking to replace the knowledge and expertise of specialists, the ArchAIDE project (archaide.eu) worked to optimise and economise the identification process, developing a new system that streamlines the practice of pottery recognition in archaeology, using the latest automatic image recognition technology. At the same time, ArchAIDE worked to ensure archaeologists remained at the heart of the decision-making process within the identification workflow, and focussed on optimising tasks that were repetitive and time consuming. Specifically, ArchAIDE worked to support the essential classification and interpretation work of archaeologists (during both fieldwork and post-excavation analysis) with an innovative app for tablets and smartphones. This paper summarises the work of this three-year project, funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement N.693548, with a consortium of partners representing both the academic and industry-led ICT domains, and the academic and development-led archaeology domains. The collaborative work of the archaeological and technical partners created a pipeline where potsherds are photographed, their characteristics compared against a trained neural network, and the results returned with suggested matches from a comparative collection with typical pottery types and characteristics. Once the correct type is identified, all relevant information for that type is linked to the new sherd and stored within a database that can be shared online.
Source: Internet archaeology 52 (2019). doi:10.11141/ia.52.7
DOI: 10.11141/ia.52.7
Project(s): ArchAIDE via OpenAIRE

See at: Internet Archaeology Open Access | ISTI Repository Open Access | CNR ExploRA Open Access


2019 Conference article Restricted

Image sets compression via patch redundancy
Corsini M., Banterle F., Ponchio F., Cignoni P.
In recent years, the development of compression algorithms for image collections (e.g., photo albums) has become very popular due to the enormous diffusion of digital photographs. Typically, current solutions create an image sequence from the images of the photo album to make them suitable for compression with a High Efficiency Video Coding (HEVC) encoder. In this study, we investigated a different approach to compressing a collection of similar images. Our main idea is to exploit inter- and intra-patch redundancy to compress the entire set of images. In practice, our approach is equivalent to compressing the image set with Vector Quantization (VQ) using a global codebook. Our tests show that our clustering algorithm is effective for a large number of images.
Source: EUVIP 2019 - 8th European Workshop on Visual Information Processing, pp. 10–15, Roma, Italy, 28-31 October 2019
DOI: 10.1109/euvip47703.2019.8946237

See at: academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | ieeexplore.ieee.org Restricted | iris.unimore.it Restricted | CNR ExploRA Restricted | xplorestaging.ieee.org Restricted
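
The compression idea above amounts to vector quantization of patches with one codebook shared by the whole image set. Below is a small sketch of that idea with k-means as the codebook learner; patch size, codebook size, and grayscale images are arbitrary choices, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def to_patches(img, p=8):
    """Split a grayscale image into non-overlapping p x p patches, flattened."""
    h, w = img.shape[0] // p * p, img.shape[1] // p * p
    img = img[:h, :w]
    return (img.reshape(h // p, p, w // p, p)
               .transpose(0, 2, 1, 3).reshape(-1, p * p))

images = [np.random.rand(64, 64) for _ in range(4)]  # placeholder image set
all_patches = np.vstack([to_patches(im) for im in images])

# One global codebook for the whole collection exploits inter- and
# intra-image patch redundancy.
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(all_patches)

# Each image is then stored as one codebook index per patch.
encoded = [codebook.predict(to_patches(im)) for im in images]
decoded = [codebook.cluster_centers_[idx] for idx in encoded]
```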


2018 Report Open Access

High dynamic range expansion of point clouds for real-time relighting
Sabbadin M., Palma G., Banterle F., Boubekeur T., Cignoni P.
Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data have been extensively studied, little attention has been devoted to using the genuine light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First of all, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may only cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to the perfect ground truth. We also report experiments on real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
Source: ISTI Technical reports, 2018

See at: ISTI Repository Open Access | CNR ExploRA Open Access


2018 Journal article Open Access

Automatic saturation correction for dynamic range management algorithms
Artusi A., Pouli T., Banterle F., Akyuz A. O.
High dynamic range (HDR) images require tone reproduction to match the range of values to the capabilities of a display. For computational reasons and given the absence of fully calibrated imagery, rudimentary color reproduction is often added as a post-processing step rather than integrated into tone reproduction algorithms. In the general case, this currently requires manual parameter tuning, and can be automated only for some global tone reproduction operators by inferring parameters from the tone curve. We present a novel and fully automatic saturation correction technique, suitable for any tone reproduction operator (including inverse tone reproduction), which exhibits fewer distortions in hue and luminance reproduction than the current state-of-the-art. We validated its comparative effectiveness through subjective experiments and objective metrics. Our experiments confirm that saturation correction significantly contributes toward the perceptually plausible color reproduction of tonemapped content and would, therefore, be useful in any color-critical application.
Source: Signal processing. Image communication 63 (2018): 100–112. doi:10.1016/j.image.2018.01.011
DOI: 10.1016/j.image.2018.01.011
Project(s): KIOS CoE via OpenAIRE

See at: Signal Processing Image Communication Open Access | ISTI Repository Open Access | ZENODO Open Access | Signal Processing Image Communication Restricted | CNR ExploRA Restricted | www.sciencedirect.com Restricted
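
For context, a common baseline for color reproduction after tone mapping rescales the chromatic ratios with a saturation exponent s, i.e. C_out = L_out (C_in / L_in)^s. The paper's contribution is making the correction automatic; in the sketch below, s remains a manual parameter.

```python
import numpy as np

LUM_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance

# Baseline saturation-controlled color reproduction: apply a tone curve to
# luminance only, then rebuild RGB as C_out = L_out * (C_in / L_in)^s.
def color_reproduction(rgb_hdr, tonemapped_lum, s=0.6):
    lum = rgb_hdr @ LUM_WEIGHTS
    ratio = rgb_hdr / np.maximum(lum, 1e-6)[..., None]  # chromatic ratios
    return tonemapped_lum[..., None] * np.power(ratio, s)

hdr = np.random.rand(256, 256, 3) * 500.0   # placeholder HDR image
lum = hdr @ LUM_WEIGHTS
ldr_lum = lum / (1.0 + lum)                 # any global tone curve works here
ldr_rgb = color_reproduction(hdr, ldr_lum, s=0.6)
```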


2018 Journal article Open Access

Fine-grained detection of inverse tone mapping in HDR images
Fan W., Valenzise G., Banterle F., Dufaux F.
High dynamic range (HDR) imaging makes it possible to capture the full range of physical luminance of a real-world scene, and is expected to progressively replace traditional low dynamic range (LDR) pictures and videos. Despite the increasing popularity of HDR, very little attention has been devoted to the new forensic problems that are characteristic of this content. In this paper, we address such a problem for the first time, by identifying the source of an HDR picture. Specifically, we consider the two currently most common techniques to generate an HDR image: fusing multiple LDR images with different exposure times, or inverse tone mapping an LDR picture. We show that, in order to apply conventional forensic tools to HDR images, they need to be properly preprocessed, and we propose and evaluate a few simple HDR forensic preprocessing strategies for this purpose. In addition, we propose a new forensic feature based on Fisher scores, calculated under Gaussian mixture models. We show that the proposed feature outperforms the popular SPAM features in classifying the HDR image source on image blocks as small as 3 x 3, which makes our method suitable for detecting composite forgeries that combine HDR patches originating from different acquisition processes.
Source: Signal processing (Print) 152 (2018): 178–188. doi:10.1016/j.sigpro.2018.05.028
DOI: 10.1016/j.sigpro.2018.05.028

See at: Signal Processing Open Access | ISTI Repository Open Access | Signal Processing Restricted | Hyper Article en Ligne Restricted | CNR ExploRA Restricted | www.sciencedirect.com Restricted
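
The forensic feature above is built from Fisher scores under a Gaussian mixture model. A minimal sketch of one such score (the gradient of the log-likelihood with respect to the component means, for diagonal covariances) follows; the raw per-block features are placeholders, not the preprocessing the paper uses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_score_means(gmm, x):
    """Gradient of log p(x) w.r.t. the GMM means for one sample x of shape (D,):
    grad_mu_k = gamma_k(x) * (x - mu_k) / sigma_k^2 (diagonal covariances)."""
    resp = gmm.predict_proba(x[None])[0]          # component responsibilities
    grads = resp[:, None] * (x[None] - gmm.means_) / gmm.covariances_
    return grads.ravel()                          # feature of length K * D

blocks = np.random.rand(500, 9)                   # placeholder 3x3 blocks, flattened
gmm = GaussianMixture(n_components=4, covariance_type='diag',
                      random_state=0).fit(blocks)
features = np.array([fisher_score_means(gmm, b) for b in blocks])
# `features` would then feed a classifier that separates multi-exposure
# fusion from inverse-tone-mapped content.
```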


2017 Contribution to book Restricted

Creating HDR video using retargeting
Banterle F., Unger J.
This chapter presents an overview of two methods for augmenting SDR video sequences with HDR information in order to create HDR videos. The goal of both methods is to fill in saturated regions in the SDR video frames by retargeting non-saturated image data from a sparse set of HDR images.
Source: High Dynamic Range Video: Concepts, Technologies and Applications, edited by Chalmers, A.; Campisi, P.; Shirley, P.; Olaizola, I., pp. 45–59, 2017
DOI: 10.1016/b978-0-12-809477-8.00002-9

See at: academic.microsoft.com Restricted | api.elsevier.com Restricted | CNR ExploRA Restricted | www.sciencedirect.com Restricted


2017 Conference article Open Access

The ArchAIDE project: results and perspectives after the first year
Banterle F., Dellepiane M., Evans T., Gattiglia G., Itkin B., Zallocco M.
The ArchAIDE project is a Horizon 2020 project whose main goal is to digitally support archaeologists' day-to-day work in the field, allowing them to reduce the time and cost of delivering an accurate and quick classification of ancient pottery artifacts. To effectively reach such an ambitious goal, the project has several sub-goals: (semi-)automatic digitalization of archaeological catalogs, a mobile app to be used on site for live classification of sherds with the generation of a complete potsherd identity card (ready for print), and an on-line database with real-time visualization of data. In this paper, we describe the work carried out during the first year of the project. The main focus is on the procedure for digitizing paper catalogs in an automatic way; more precisely, we discuss: archaeologists' methodologies, digitalization of text, vectorization of technical drawings, and shape-based classification of virtual fragments.
Source: 15th Eurographics Workshop on Graphics and Cultural Heritage, pp. 161–164, Graz, Austria, 27-29 September 2017
DOI: 10.2312/gch.20171308
Project(s): ArchAIDE via OpenAIRE

See at: ISTI Repository Open Access | diglib.eg.org Restricted | CNR ExploRA Restricted


2017 Conference article Open Access

From paper to web: automatic generation of a web-accessible 3D repository of pottery types
Dellepiane M., Callieri M., Banterle F., Arenga D., Zallocco M., Scopigno R.
3D web repositories are a hot topic for the research community in general. In the Cultural Heritage (CH) context, 3D repositories pose a difficult challenge due to the complexity and variability of models and to the need for structured and coherent metadata for browsing and searching. This paper presents one of the efforts of the ArchAIDE project: to create a structured and semantically-rich 3D database of pottery types, usable by archaeologists and other communities, for example researchers working on shape-based analysis and automatic classification. The automated workflow described here starts from the pages of a printed catalog, extracts the textual and graphical description of a pottery type, and processes those data to produce structured metadata information and a 3D representation. This information is then ingested into the database, where it becomes accessible to the community through dynamically-created web presentation pages, showing 3D, 2D and metadata information in a common context.
Source: Eurographics Workshop on Graphics and Cultural Heritage, pp. 65–70, Graz, Austria, 27-29 September 2017
DOI: 10.2312/gch.20171293
Project(s): ArchAIDE via OpenAIRE

See at: ISTI Repository Open Access | diglib.eg.org Restricted | CNR ExploRA Restricted


2017 Book Closed Access

Advanced high dynamic range imaging
Banterle F., Artusi A., Debattista K., Chalmers A.
This book explores the methods needed for creating and manipulating HDR content. HDR is a step change from traditional imaging, more closely matching what we see with our eyes. In the years since the first edition of this book appeared, HDR has become much more widespread, moving from a research concept to a standard imaging method. This new edition incorporates the many developments in HDR since the first edition and once again emphasizes practical tips, including the authors' popular HDR Toolbox (available on the authors' website) for MATLAB, giving readers the tools they need to develop and experiment with new techniques for creating compelling HDR content.

See at: CNR ExploRA Restricted | www.taylorfrancis.com Restricted


2017 Conference article Open Access

VaseSketch: Automatic 3D representation of pottery from paper catalog drawings
Banterle F., Dellepiane M., Callieri M., Scopigno R., Itkin B., Wolf L., Dershowitz N.
We describe an automated pipeline for the digitization of catalog drawings of pottery types. This work is aimed at extracting a structured description of the main geometric features and a 3D representation of each class. The pipeline includes methods for understanding a 2D drawing and using it to construct a 3D model of the pottery. These will be used to populate a reference database for the classification of potsherds. Furthermore, we extend the pipeline with methods for breaking the 3D model to obtain synthetic sherds and methods for capturing images of these sherds in a way that matches the imaging methodology of archaeologists. These will serve to build a massive set of synthetic sherd images that will help train and test future automated classification systems.
Source: ICDAR 2017 - 14th IAPR International Conference on Document Analysis and Recognition, pp. 683–690, Kyoto, Japan, 9-15 November 2017
DOI: 10.1109/icdar.2017.117
Project(s): ArchAIDE via OpenAIRE

See at: ISTI Repository Open Access | zenodo.org Open Access | academic.microsoft.com Restricted | dblp.uni-trier.de Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA Restricted | xplorestaging.ieee.org Restricted
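
Since wheel-made pottery is close to a surface of revolution, the drawing-to-3D step above can be pictured as revolving an extracted profile curve around the vertical axis. The following sketch does exactly that on a synthetic profile; it illustrates the geometric idea only, not the project's reconstruction code.

```python
import numpy as np

# Revolve a 2D profile (radius, height samples) around the z-axis to get a
# triangle mesh, as a toy model of drawing-to-3D pottery reconstruction.
def revolve_profile(radii, heights, n_seg=64):
    theta = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    verts = np.array([[r * np.cos(t), r * np.sin(t), h]
                      for r, h in zip(radii, heights) for t in theta])
    faces = []
    for i in range(len(radii) - 1):          # connect consecutive profile rings
        for j in range(n_seg):
            a = i * n_seg + j
            b = i * n_seg + (j + 1) % n_seg
            faces.extend([[a, b, a + n_seg], [b, b + n_seg, a + n_seg]])
    return verts, np.array(faces)

profile_r = np.array([2.0, 3.5, 3.0, 1.5])   # synthetic vase profile (radius)
profile_h = np.array([0.0, 2.0, 4.0, 5.0])   # corresponding heights
vertices, triangles = revolve_profile(profile_r, profile_h)
```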


2016 Report Restricted

Temporal appearance change detection using multi-view image acquisition
Palma G., Banterle F., Cignoni P.
Appearance change detection is a very important task for applications monitoring the degradation process of a surface. This is especially true in Cultural Heritage (CH), where the main goal is to monitor the preservation condition of an artifact. We propose an automatic solution based on the estimation of an explicit parametric reflectance model that can help the user detect the regions affected by appearance changes. The idea is to acquire multi-view photo datasets at different times and to compute the 3D model and the Surface Light Field (SLF) of the object for each acquisition. Then, we compare the SLFs over time using a weighting scheme that takes into account small lighting variations and small misalignments. The obtained results give several cues about the changed areas. In addition, we believe that these can be used as a good starting point for further investigations.
Source: ISTI Technical reports, 2016
Project(s): HARVEST4D via OpenAIRE

See at: CNR ExploRA Restricted


2016 Report Open Access

ProgettISTI 2016
Banterle F., Barsocchi P., Candela L., Carlini E., Carrara F., Cassarà P., Ciancia V., Cintia P., Dellepiane M., Esuli A., Gabrielli L., Germanese D., Girardi M., Girolami M., Kavalionak H., Lonetti F., Lulli A., Moreo Fernandez A., Moroni D., Nardini F. M., Monteiro De Lira V. C., Palumbo F., Pappalardo L., Pascali M. A., Reggianini M., Righi M., Rinzivillo S., Russo D., Siotto E., Villa A.
The ProgettISTI research project grant is an award for members of the Institute of Information Science and Technologies (ISTI) to provide support for innovative, original and multidisciplinary projects of high quality and potential. The choice of theme and the design of the research are entirely up to the applicants, yet (i) the theme must fall under the ISTI research topics, (ii) the proposers of each project must come from different laboratories of the Institute and must contribute different expertise to the project idea, and (iii) project proposals should have a duration of 12 months. This report documents the procedure, the proposals and the results of the 2016 edition of the award. In this edition, ten project proposals were submitted and three of them were awarded.
Source: ISTI Technical reports, 2016

See at: ISTI Repository Open Access | CNR ExploRA Open Access