2024 Software Restricted
TagLab 2024.12.2
Corsini M., Pavoni G., Ponchio F., Muntoni A., Saraceni Q., Cignoni P.
TagLab is AI-powered segmentation software designed to support the analysis of large orthographic images generated through the photogrammetric pipeline.
DOI: 10.5281/zenodo.14258304
See at: CNR IRIS Restricted | taglab.isti.cnr.it Restricted


2024 Journal article Open Access OPEN
Integrating widespread coral reef monitoring tools for managing both area and point annotations
Pavoni G., Pierce J., Edwards C. B., Corsini M., Petrovic V., Cignoni P.
Large-area image acquisition techniques are essential in underwater investigations: high-resolution 3D image-based reconstructions have improved coral reef monitoring by enabling novel seascape ecological analysis. Artificial intelligence (AI) offers methods for significantly accelerating image data interpretation, such as automatically recognizing, enumerating, and measuring organisms. However, the rapid proliferation of these technological achievements has led to a relative lack of standardization of methods. Remarkably, there are notable differences in procedures for generating human and AI annotations, and there is also a scarcity of publicly available datasets and shared machine-learning models. The lack of standard procedures makes it challenging to compare and reproduce scientific findings. One way to overcome this problem is to make the platforms most used by coral reef scientists interoperable, so that analyses can all be exported into a common format. This paper introduces functionality to promote interoperability between three popular open-source software tools dedicated to the digital study of coral reefs: TagLab, CoralNet, and Viscore. As users of each platform may have different analysis pipelines, we discuss several workflows for managing and processing point and area annotations, improving collaboration among these tools. Our work sets the foundation for a more seamless ecosystem that maintains the established investigation procedures of various laboratories but allows for easier result sharing.
Source: INTERNATIONAL ARCHIVES OF THE PHOTOGRAMMETRY, REMOTE SENSING AND SPATIAL INFORMATION SCIENCES, vol. XLVIII-2-2024 (issue 2), pp. 327-333
DOI: 10.5194/isprs-archives-xlviii-2-2024-327-2024
Project(s): ReefSurvAI: Towards a web infrastructure supporting the use of artificial intelligence for coral reef monitoring
See at: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Open Access | IRIS Cnr Open Access | CNR IRIS Restricted | Copernicus Publications Restricted
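As a companion to the interoperability discussion above, here is a minimal Python sketch of the general idea of exporting point annotations into a shared format; the CSV column names (image_name, row, column, label) and the JSON layout are illustrative assumptions, not the actual TagLab/CoralNet/Viscore schemas.

# Hypothetical sketch: normalize tool-specific point annotations (CSV) into a
# shared JSON structure that another tool could import. Column names are
# assumptions for illustration only.
import csv
import json

def csv_points_to_json(csv_path, json_path):
    records = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            records.append({
                "image": row["image_name"],  # assumed column: source image
                "x": float(row["column"]),   # assumed column: pixel x
                "y": float(row["row"]),      # assumed column: pixel y
                "label": row["label"],       # assumed column: class label
            })
    with open(json_path, "w") as f:
        json.dump({"annotations": records}, f, indent=2)

# Example: csv_points_to_json("coralnet_points.csv", "shared_points.json")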


2024 Journal article Open Access OPEN
Evaluating image-based interactive 3D modeling tools
Siddique A., Cignoni P., Corsini M., Banterle F.
Structure from Motion (SfM) is a computer vision technique used to reconstruct three-dimensional (3D) structures from a series of two-dimensional (2D) images or video frames. However, SfM tools struggle with transparent objects, reflective surfaces, and low-resolution frames. In such situations, image-based interactive 3D modeling software packages are employed to model 3D objects and measure dimensions. Our contributions in this work are twofold. First, we have introduced new tools to improve 3D modeling software packages; these tools are aimed at easing the workload for users. Second, we have conducted a comprehensive user study to evaluate the efficacy of popular 3D modeling software packages. The task is to measure certain dimensions for which ground-truth measurements are already known. A relative error is calculated for every measurement. Each software tool is evaluated through survey forms, event logs, and measurement relative errors. The results of this user study clearly show that our approach to 3D modeling using multiple images has a lower relative error and produces higher-quality 3D models than other software packages. In addition, it shows that our new tools reduce the time required to complete a task.
Source: IEEE ACCESS, vol. 12, pp. 104138-104152
DOI: 10.1109/access.2024.3434584
Project(s): "Photogrammetric Method for Determining BWR Internals Dimensions, EVOCATION via OpenAIRE
See at: IEEE Access Open Access | IRIS Cnr Open Access | CNR IRIS Restricted
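For context on the evaluation described above, the per-measurement relative error is the standard |measured - ground truth| / |ground truth|; the tiny sketch below is only a worked illustration of that formula, not code from the study.

# Relative error of a single measurement against its known ground truth.
def relative_error(measured, ground_truth):
    return abs(measured - ground_truth) / abs(ground_truth)

# Example: a length measured as 10.4 cm against a 10.0 cm ground truth
print(relative_error(10.4, 10.0))  # ~0.04, i.e. a 4% relative error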


2023 Journal article Open Access OPEN
MoReLab: a software for user-assisted 3D reconstruction
Siddique A, Banterle F, Corsini M, Cignoni P, Sommerville D, Joffe C
We present MoReLab, a tool for user-assisted 3D reconstruction. This reconstruction requires an understanding of the shapes of the desired objects. Our experiments demonstrate that existing Structure from Motion (SfM) software packages fail to estimate accurate 3D models in low-quality videos due to several issues such as low resolution, featureless surfaces, low lighting, etc. In such scenarios, which are common for industrial utility companies, user assistance becomes necessary to create reliable 3D models. In our system, the user first manually adds features and correspondences on multiple video frames. Then, classic camera calibration and bundle adjustment are applied. At this point, MoReLab provides several primitive shape tools such as rectangles, cylinders, curved cylinders, etc., to model different parts of the scene and export 3D meshes. These shapes are essential for modeling industrial equipment whose videos are typically captured by utility companies with old video cameras (low resolution, compression artifacts, etc.) and in disadvantageous lighting conditions (low lighting, torchlight attached to the video camera, etc.). We evaluate our tool on real industrial case scenarios and compare it against existing approaches. Visual comparisons and quantitative results show that MoReLab achieves superior results compared with other user-interactive 3D modeling tools.
Source: SENSORS (BASEL), vol. 23 (issue 14)
DOI: 10.3390/s23146456
Project(s): EVOCATION via OpenAIRE
See at: Sensors Open Access | CNR IRIS Open Access | ISTI Repository Open Access | www.mdpi.com Open Access | CNR IRIS Restricted
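As background for the user-assisted workflow described above (manual correspondences, then camera calibration and bundle adjustment), the sketch below shows textbook linear (DLT) triangulation of one 3D point from two clicked correspondences; it is a generic illustration, not the MoReLab implementation.

# Triangulate a 3D point from two 2D correspondences and the cameras'
# 3x4 projection matrices via the direct linear transform (DLT).
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # homogeneous solution (smallest singular value)
    return X[:3] / X[3]        # inhomogeneous 3D point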


2023 Journal article Open Access OPEN
Quantifying the loss of coral from a bleaching event using underwater photogrammetry and AI-Assisted Image Segmentation
Kopecky K. L., Pavoni G., Nocerino E., Brooks A. J., Corsini M., Menna F., Gallagher J. P., Capra A., Castagnetti C., Rossi P., Gruen A., Neyer F., Muntoni A., Ponchio F., Cignoni P., Troyer M., Holbrook S. J., Schmitt R. J.
Detecting the impacts of natural and anthropogenic disturbances that cause declines in organisms or changes in community composition has long been a focus of ecology. However, a tradeoff often exists between the spatial extent over which relevant data can be collected, and the resolution of those data. Recent advances in underwater photogrammetry, as well as computer vision and machine learning tools that employ artificial intelligence (AI), offer potential solutions with which to resolve this tradeoff. Here, we coupled a rigorous photogrammetric survey method with novel AI-assisted image segmentation software in order to quantify the impact of a coral bleaching event on a tropical reef, both at an ecologically meaningful spatial scale and with high spatial resolution. In addition to outlining our workflow, we highlight three key results: (1) dramatic changes in the three-dimensional surface areas of live and dead coral, as well as the ratio of live to dead colonies before and after bleaching; (2) a size-dependent pattern of mortality in bleached corals, where the largest corals were disproportionately affected; and (3) a significantly greater decline in the surface area of live coral, as revealed by our approximation of the 3D shape compared to the more standard planar area (2D) approach. The technique of photogrammetry allows us to turn 2D images into approximate 3D models in a flexible and efficient way. Increasing the resolution, accuracy, spatial extent, and efficiency with which we can quantify effects of disturbances will improve our ability to understand the ecological consequences that cascade from small to large scales, as well as allow more informed decisions to be made regarding the mitigation of undesired impacts.
Source: REMOTE SENSING (BASEL), vol. 15 (issue 16)
DOI: 10.3390/rs15164077
See at: CNR IRIS Open Access | ISTI Repository Open Access | www.mdpi.com Open Access | CNR IRIS Restricted
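To illustrate result (3) above, the difference between 3D surface area and planar (2D) area of a reef surface can be computed directly from a triangle mesh; in this hedged sketch the vertex/face arrays are assumed inputs, and the planar term is only a reasonable footprint approximation for surfaces that do not fold over themselves in projection.

# 3D surface area vs. planar (XY-projected) area of a triangle mesh.
import numpy as np

def surface_vs_planar_area(vertices, faces):
    # vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area_3d = 0.5 * np.linalg.norm(cross, axis=1).sum()   # true surface area
    area_2d = 0.5 * np.abs(cross[:, 2]).sum()             # XY footprint area
    return area_3d, area_2d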


2023 Software Open Access OPEN
MeshLab 2023.12
Muntoni A., Callieri M., Corsini M., Cignoni P.
MeshLab is an open source, portable, and extensible system for the processing and editing of unstructured large 3D triangular meshes. It aims to help process the typical not-so-small unstructured models arising in 3D scanning, providing a set of tools for editing, cleaning, healing, inspecting, rendering, and converting these kinds of meshes.
DOI: 10.5281/zenodo.10362278
See at: CNR IRIS Open Access | www.meshlab.net Open Access | CNR IRIS Restricted
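MeshLab's filters can also be scripted from Python through PyMeshLab; the snippet below is a minimal usage sketch, and the decimation filter and parameter names are assumptions that may differ between PyMeshLab releases, so check them against your installed version.

# Minimal PyMeshLab sketch: load a mesh, simplify it, save the result.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("scan.ply")  # any format MeshLab can read
# Filter name as in recent PyMeshLab releases (assumed; older versions differ).
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=50000)
ms.save_current_mesh("scan_simplified.ply")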


2022 Journal article Open Access OPEN
TagLab: AI-assisted annotation for the fast and accurate semantic segmentation of coral reef orthoimages
Pavoni G, Corsini M, Ponchio F, Muntoni A, Edwards C, Pedersen N, Sandin S, Cignoni P
Semantic segmentation is a widespread image analysis task; in some applications, it requires such high accuracy that it still has to be done manually, taking a long time. Deep learning-based approaches can significantly reduce such times, but current automated solutions may produce results below expert standards. We propose TagLab, an interactive tool for the rapid labelling and analysis of orthoimages that speeds up semantic segmentation. TagLab follows a human-centered artificial intelligence approach that, by integrating multiple degrees of automation, empowers human capabilities. We evaluated TagLab's efficiency in annotation time and accuracy through a user study based on a highly challenging task: the semantic segmentation of coral communities in marine ecology. In the assisted labelling of corals, TagLab increased the annotation speed by approximately 90% for nonexpert annotators while preserving the labelling accuracy. Furthermore, human-machine interaction has improved the accuracy of fully automatic predictions by about 7% on average and by 14% when the model generalizes poorly. Building on the experience gained through the user study, TagLab has been improved, and preliminary investigations suggest a further significant reduction in annotation times.
Source: JOURNAL OF FIELD ROBOTICS, vol. 39 (issue 3), pp. 246-262
DOI: 10.1002/rob.22049
See at: CNR IRIS Open Access | onlinelibrary.wiley.com Open Access | CNR IRIS Restricted


2022 Journal article Open Access OPEN
On assisting and automatizing the semantic segmentation of masonry walls
Pavoni G, Giuliani F, De Falco A, Corsini M, Ponchio F, Callieri M, Cignoni P
In Architectural Heritage, the masonry's interpretation is an essential instrument for analysing the construction phases, the assessment of structural properties, and the monitoring of its state of conservation. This work is generally carried out by specialists that, based on visual observation and their knowledge, manually annotate ortho-images of the masonry generated by photogrammetric surveys. This results in vector thematic maps segmented according to their construction technique (isolating areas of homogeneous materials/structure/texture or each individual constituting block of the masonry) or state of conservation, including degradation areas and damaged parts. This time-consuming manual work, often done with tools that have not been designed for this purpose, represents a bottleneck in the documentation and management workflow and is a severely limiting factor in monitoring large-scale monuments (e.g., city walls). This article explores the potential of AI-based solutions to improve the efficiency of masonry annotation in Architectural Heritage. This experimentation aims at providing interactive tools that support and empower the current workflow, benefiting from specialists' expertise.
Source: JOURNAL ON COMPUTING AND CULTURAL HERITAGE, vol. 15 (issue 2)
DOI: 10.1145/3477400
See at: dl.acm.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | Journal on Computing and Cultural Heritage Restricted | CNR IRIS Restricted


2021 Journal article Open Access OPEN
Needs and gaps in optical underwater technologies and methods for the investigation of marine animal forest 3D-structural complexity
Rossi P, Ponti M, Righi S, Castagnetti C, Simonini R, Mancini F, Agrafiotis P, Bassani L, Bruno F, Cerrano C, Cignoni P, Corsini M, Drap P, Dubbini M, Garrabou J, Gori A, Gracias N, Ledoux Jb, Linares C, Mantas Tp, Menna F, Nocerino E, Palma M, Pavoni G, Ridolfi A, Rossi S, Skarlatos D, Treibitz T, Turicchia E, Yuval M, Capra A
Marine animal forests are benthic communities dominated by sessile suspension feeders (such as sponges, corals, and bivalves) able to generate three-dimensional (3D) frameworks with high structural complexity. The biodiversity and functioning of marine animal forests are strictly related to their 3D complexity. The present paper aims at providing new perspectives in underwater optical surveys. Starting from the current gaps in data collection and analysis that critically limit the study and conservation of marine animal forests, we discuss the main technological and methodological needs for the investigation of their 3D structural complexity at different spatial and temporal scales. Despite recent technological advances, several issues in data acquisition and processing still need to be solved in order to properly map the different benthic habitats in which marine animal forests are present, assess their health status, and measure their structural complexity. Proper precision and accuracy should be chosen and assured in relation to the biological and ecological processes investigated. In addition, standardized methods and protocols are strictly necessary to meet the FAIR (findability, accessibility, interoperability, and reusability) data principles for the stewardship of habitat mapping and biodiversity, biomass, and growth data.
Source: FRONTIERS IN MARINE SCIENCE, vol. 8 (issue 591292)
DOI: 10.3389/fmars.2021.591292
See at: Frontiers in Marine Science Open Access | Recolector de Ciencia Abierta, RECOLECTA Open Access | Archivio istituzionale della ricerca - Alma Mater Studiorum Università di Bologna Open Access | CNR IRIS Open Access | Flore (Florence Research Repository) Open Access | Diposit Digital de la Universitat de Barcelona Open Access | Ktisis Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Journal article Open Access OPEN
CHARITY: Cloud for holography and cross reality
Dazzi P, Corsini M
ISTI-CNR is involved in the H2020 CHARITY project (Cloud for HologrAphy and Cross RealITY), which started in January 2021. The project aims to leverage the benefits of intelligent, autonomous orchestration of a heterogeneous set of cloud, edge, and network resources, to create a symbiotic relationship between low and high latency infrastructures that will facilitate the needs of emerging applications.
Source: ERCIM NEWS, vol. 126, pp. 46-47

See at: ercim-news.ercim.eu Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2021 Journal article Open Access OPEN
Multimodal attention networks for low-level vision-and-language navigation
Landi F, Baraldi L, Cornia M, Corsini M, Cucchiara R
Vision-and-Language Navigation (VLN) is a challenging task in which an agent needs to follow a language-specified path to reach a target destination. The goal gets even harder as the actions available to the agent get simpler and move towards low-level, atomic interactions with the environment. This setting takes the name of low-level VLN. In this paper, we strive for the creation of an agent able to tackle three key issues: multi-modality, long-term dependencies, and adaptability towards different locomotive settings. To that end, we devise "Perceive, Transform, and Act" (PTA): a fully-attentive VLN architecture that leaves the recurrent approach behind and is the first Transformer-like architecture incorporating three different modalities -- natural language, images, and low-level actions -- for agent control. In particular, we adopt an early fusion strategy to merge lingual and visual information efficiently in our encoder. We then propose to refine the decoding phase with a late fusion extension between the agent's history of actions and the perceptual modalities. We experimentally validate our model on two datasets: PTA achieves promising results in low-level VLN on R2R and achieves good performance on the recently proposed R4R benchmark.
Source: COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 210
DOI: 10.1016/j.cviu.2021.103255
See at: CNR IRIS Open Access | ISTI Repository Open Access | www.sciencedirect.com Open Access | CNR IRIS Restricted
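The early-fusion strategy mentioned in the abstract can be sketched as projecting language and visual tokens to a common width, concatenating them along the sequence axis, and encoding them jointly with a standard Transformer encoder. The PyTorch sketch below is a hedged illustration of that general pattern, not the actual PTA architecture; all dimensions and layer counts are placeholders.

# Generic early-fusion encoder: concatenate projected language and visual
# tokens and run a joint Transformer encoder over the combined sequence.
import torch
import torch.nn as nn

class EarlyFusionEncoder(nn.Module):
    def __init__(self, lang_dim=300, img_dim=2048, d_model=256, nhead=4, layers=2):
        super().__init__()
        self.lang_proj = nn.Linear(lang_dim, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, lang_tokens, img_tokens):
        fused = torch.cat([self.lang_proj(lang_tokens),
                           self.img_proj(img_tokens)], dim=1)
        return self.encoder(fused)  # jointly contextualized representation

# Example: EarlyFusionEncoder()(torch.rand(2, 12, 300), torch.rand(2, 36, 2048))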


2021 Conference article Open Access OPEN
Evaluating deep learning methods for low resolution point cloud registration in outdoor scenarios
Siddique A, Corsini M, Ganovelli F, Cignoni P
Point cloud registration is a fundamental task in 3D reconstruction and environment perception. We explore the performance of modern Deep Learning-based registration techniques, in particular Deep Global Registration (DGR) and Learning Multi-view Registration (LMVR), on outdoor real-world data consisting of thousands of range maps of a building acquired by a Velodyne LIDAR mounted on a drone. We used these pairwise registration methods in a sequential pipeline to obtain an initial rough registration, whose output can then be globally refined. This simple registration pipeline allows us to assess whether these modern methods are able to deal with such low-quality data. Our experiments demonstrate that, despite some design choices adopted to take into account the peculiarities of the data, more work is required to improve the results of the registration.
DOI: 10.2312/stag.20211489
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE
See at: diglib.eg.org Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted
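The sequential pairwise pipeline described above amounts to composing 4x4 rigid transforms along the scan sequence. In the sketch below, pairwise_register is a placeholder for any pairwise method (DGR, LMVR, ICP, ...), so the code only illustrates the chaining step, not the networks evaluated in the paper.

# Chain pairwise registrations into rough global poses (scan 0 = world frame).
import numpy as np

def chain_registrations(scans, pairwise_register):
    poses = [np.eye(4)]                    # pose of scan 0
    for prev, curr in zip(scans[:-1], scans[1:]):
        T = pairwise_register(curr, prev)  # 4x4 transform mapping curr -> prev
        poses.append(poses[-1] @ T)        # accumulate into the global frame
    return poses                           # one rough global pose per scan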


2021 Conference article Open Access OPEN
A deep learning method for frame selection in videos for structure from motion pipelines
Banterle F, Gong R, Corsini M, Ganovelli F, Van Gool L, Cignoni P
Structure-from-Motion (SfM) using the frames of a video sequence can be a challenging task: there is a lot of redundant information, the computational time increases quadratically with the number of frames, and there may be low-quality images (e.g., blurred frames) that decrease the final quality of the reconstruction. To overcome these issues, we present a novel deep-learning architecture meant to speed up SfM by selecting frames according to a predicted sub-sampling frequency. This architecture is general and can learn/distill the knowledge of any algorithm for selecting frames from a video for generating high-quality reconstructions. One key advantage is that we can run our architecture in real time, saving computation while keeping high-quality results.
Source: PROCEEDINGS - INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, pp. 3667-3671. Anchorage, Alaska, USA, 19-22/09/2021
DOI: 10.1109/icip42928.2021.9506227
Project(s): ENCORE via OpenAIRE
See at: CNR IRIS Open Access | ieeexplore.ieee.org Open Access | ISTI Repository Open Access | doi.org Restricted | CNR IRIS Restricted
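Once a sub-sampling frequency has been predicted, the selection step itself is straightforward; the sketch below shows only that step (keeping one frame every freq frames) and is an illustration, not the proposed network.

# Keep one frame out of every `freq` frames of a video before running SfM.
def select_frames(num_frames, predicted_freq):
    freq = max(1, int(predicted_freq))
    return list(range(0, num_frames, freq))

# Example: select_frames(300, 10) -> [0, 10, 20, ..., 290], 30 of 300 frames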


2021 Conference article Open Access OPEN
Cloud for holography and augmented reality
Makris A, Boudi A, Coppola M, Cordeiro L, Corsini M, Dazzi P, Andilla Fd, Gonzalez Rozas Y, Kamarianakis M, Pateraki M, Pham Tl, Protopsaltis A, Raman A, Romussi A, Rosa L, Spatafora E, Taleb T, Theodoropoulos T, Tserpes K, Zschau E, Herzog U
The paper introduces the CHARITY framework, a novel framework which aspires to leverage the benefits of intelligent, autonomous orchestration of cloud, edge, and network resources across the network continuum, to create a symbiotic relationship between low and high latency infrastructures. These infrastructures will facilitate the needs of emerging applications such as holographic events, virtual reality training, and mixed reality entertainment. The framework relies on different enablers and technologies related to cloud and edge for offering a suitable environment in order to deliver the promise of ubiquitous computing to NextGen application clients. The paper discusses the main pillars that support the CHARITY vision and provides a description of the use cases planned to demonstrate CHARITY's capabilities.
DOI: 10.1109/cloudnet53349.2021.9657125
Project(s): CHARITY via OpenAIRE
See at: CNR IRIS Open Access | ieeexplore.ieee.org Open Access | ZENODO Open Access | doi.org Restricted | CNR IRIS Restricted


2021 Conference article Open Access OPEN
TagLab: A human-centric AI system for interactive semantic segmentation
Pavoni G, Corsini M, Ponchio F, Muntoni A, Cignoni P
Fully automatic semantic segmentation of highly specific semantic classes and complex shapes may not meet the accuracy standards demanded by scientists. In such cases, human-centered AI solutions, able to assist operators while preserving human control over complex tasks, are a good trade-off to speed up image labeling while maintaining high accuracy levels. TagLab is open-source AI-assisted software for annotating large orthoimages that takes advantage of different degrees of automation; it speeds up image annotation from scratch through assisted tools, creates custom fully automatic semantic segmentation models, and, finally, allows quick editing of automatic predictions. Since orthoimage analysis applies to several scientific disciplines, TagLab has been designed with a flexible labeling pipeline. We report our results in two different scenarios: marine ecology and architectural heritage.
DOI: 10.48550/arxiv.2112.12702
See at: arXiv.org e-Print Archive Open Access | CNR IRIS Open Access | ISTI Repository Open Access | doi.org Restricted | CNR IRIS Restricted


2021 Conference article Open Access OPEN
Watch your strokes: improving handwritten text recognition with deformable convolutions
Cojocaru I., Cascianelli S., Baraldi L., Corsini M., Cucchiara R.
Handwritten Text Recognition (HTR) in free-layout pages is a valuable yet challenging task which aims to automatically understand handwritten texts. State-of-the-art approaches in this field usually encode input images with Convolutional Neural Networks, whose kernels are typically defined on a fixed grid and focus on all input pixels independently. However, this is in contrast with the sparse nature of handwritten pages, in which only pixels representing the ink of the writing are useful for the recognition task. Furthermore, the standard convolution operator is not explicitly designed to take into account the great variability in shape, scale, and orientation of handwritten characters. To overcome these limitations, we investigate the use of deformable convolutions for handwriting recognition. This type of convolution deforms the convolution kernel according to the content of the neighborhood, and can therefore be more adaptable to geometric variations and other deformations of the text. Experiments conducted on the IAM and RIMES datasets demonstrate that the use of deformable convolutions is a promising direction for the design of novel architectures for handwritten text recognition.
DOI: 10.1109/icpr48806.2021.9412392
See at: IRIS UNIMORE - Archivio istituzionale della ricerca - Università di Modena e Reggio Emilia Open Access | iris.unimore.it Open Access | doi.org Restricted | CNR IRIS Restricted | ieeexplore.ieee.org Restricted | iris.unimore.it Restricted
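The general deformable-convolution pattern discussed above is available in torchvision: a plain convolution predicts per-location (dx, dy) offsets for every kernel sample, and DeformConv2d uses them to deform its 3x3 sampling grid. The block below is a hedged, generic sketch of that pattern, not the paper's exact HTR architecture.

# Deformable convolution block: offsets are predicted from the input itself.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 2 offsets (dx, dy) per kernel sample: 2 * 3 * 3 = 18 channels
        self.offset_pred = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        offsets = self.offset_pred(x)
        return self.deform_conv(x, offsets)

# Example: DeformBlock(1, 32)(torch.rand(1, 1, 64, 256))  # a text-line image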


2020 Journal article Open Access OPEN
A State of the Art Technology in Large Scale Underwater Monitoring
Pavoni G, Corsini M, Cignoni P
In recent decades, benthic populations have been subjected to recurrent episodes of mass mortality. These events have been blamed in part on declining water quality and elevated water temperatures correlated to global climate change. Ecosystems are enhanced by the presence of species with three-dimensional growth. The study of the growth, resilience, and recovery capability of those species provides valuable information on the conservation status of entire habitats. We discuss here a state-of-the-art solution to speed up the monitoring of benthic populations through the automatic or assisted analysis of underwater visual data.
Source: ERCIM NEWS, vol. 2020 (issue 121), pp. 17-18

See at: ercim-news.ercim.eu Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted


2020 Journal article Open Access OPEN
On improving the training of models for the semantic segmentation of benthic communities from orthographic imagery
Pavoni G, Corsini M, Callieri M, Fiameni G, Edwards C, Cignoni P
The semantic segmentation of underwater imagery is an important step in the ecological analysis of coral habitats. To date, scientists produce fine-scale area annotations manually, an exceptionally time-consuming task that could be efficiently automated by modern CNNs. This paper extends our previous work presented at the 3DUW'19 conference, outlining the workflow for the automated annotation of imagery from the first step of dataset preparation to the last step of prediction reassembly. In particular, we propose an ecologically inspired strategy for an efficient dataset partition, an over-sampling methodology targeted on ortho-imagery, and a score fusion strategy. We also investigate the use of different loss functions in the optimization of a Deeplab V3+ model, to mitigate the class-imbalance problem and improve prediction accuracy on coral instance boundaries. The experimental results demonstrate the effectiveness of the ecologically inspired split in improving model performance, and quantify the advantages and limitations of the proposed over-sampling strategy. The extensive comparison of the loss functions gives numerous insights on the segmentation task; the Focal Tversky loss, typically used in the context of medical imaging (but not in remote sensing), turns out to be the most convenient choice. By improving the accuracy of automated ortho image processing, the results presented here promise to meet the fundamental challenge of increasing the spatial and temporal scale of coral reef research, allowing researchers greater predictive ability to better manage coral reef resilience in the context of a changing environment.
Source: REMOTE SENSING (BASEL), vol. 12 (issue 18)
DOI: 10.3390/rs12183106
See at: Remote Sensing Open Access | CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted
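For reference, the (binary) Focal Tversky loss singled out in the comparison follows the standard formulation TI = TP / (TP + alpha*FN + beta*FP), loss = (1 - TI)^gamma; the PyTorch sketch below uses common default parameters, not the values tuned in the study.

# Binary Focal Tversky loss computed over a whole batch of predictions.
import torch

def focal_tversky_loss(probs, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    # probs and target: tensors of the same shape with values in [0, 1]
    probs, target = probs.reshape(-1), target.reshape(-1)
    tp = (probs * target).sum()
    fn = ((1.0 - probs) * target).sum()
    fp = (probs * (1.0 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma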


2020 Journal article Open Access OPEN
Foreword to the special section on smart tools and applications for graphics (STAG 2019)
Agus M, Corsini M, Pintus R
Source: COMPUTERS & GRAPHICS, vol. 91, pp. A3-A4
DOI: 10.1016/j.cag.2020.05.027
See at: CNR IRIS Open Access | www.sciencedirect.com Open Access | Computers & Graphics Restricted | CNR IRIS Restricted


2020 Conference article Open Access OPEN
Another Brick in the Wall: Improving the Assisted Semantic Segmentation of Masonry Walls
Pavoni G, Giuliani F, De Falco A, Corsini M, Ponchio F, Callieri M, Cignoni P
In Architectural Heritage, the masonry's interpretation is an essential instrument for analyzing the construction phases, the assessment of structural properties, and the monitoring of its state of conservation. This work is generally carried out by specialists that, based on visual observation and their knowledge, manually annotate ortho-images of the masonry generated by photogrammetric surveys. This results in vectorial thematic maps segmented according to their construction technique (isolating areas of homogeneous materials/structure/texture) or state of conservation, including degradation areas and damaged parts. This time-consuming manual work, often done with tools that have not been designed for this purpose, represents a bottleneck in the documentation and management workflow and is a severely limiting factor in monitoring large-scale monuments (e.g., city walls). This paper explores the potential of AI-based solutions to improve the efficiency of masonry annotation in Architectural Heritage. This experimentation aims at providing interactive tools that support and empower the current workflow, benefiting from specialists' expertise.
DOI: 10.2312/gch.20201291
See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted