2026 Conference article Open Access
JoinPap: Learning-based matching for the reconstruction of fragmentary papyri
Carrara Fabio, Corsini Massimiliano, Falchi Fabrizio, Messina Nicola
Reconstructing ancient papyri from fragmented pieces is a demanding task, posing significant challenges for papyrologists due to degraded material, subtle texture cues, and a lack of distinct landmarks. This paper introduces JoinPap, an intelligent interactive system designed to foster human-machine collaboration in this specialized domain. JoinPap leverages a self-supervised convolutional autoencoder, trained with a contrastive learning objective on high-resolution papyri scans, to acquire robust and discriminative texture-aware embeddings. These representations capture the continuity of fiber patterns across fragments, enabling a specialized matching algorithm to propose optimal vertical and horizontal alignments. We elaborate on data preparation, network design, training methodology, and integration of the matcher into a user-centered interface that supports fragment manipulation and annotation. JoinPap effectively supports expert-in-the-loop reconstruction by offering high-quality alignment suggestions grounded in visual texture continuity.
Source: LECTURE NOTES IN COMPUTER SCIENCE, vol. 16170, pp. 296-306. Roma, Italy, 15–19 September 2025
DOI: 10.1007/978-3-032-11381-8_25
Project(s): FAIR - "Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI", JoinPap – Reconstructing Fragmentary Papyri through Human-Machine Interaction
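The abstract describes ranking fragment alignments by the similarity of texture-aware border embeddings. As a minimal illustrative sketch (function and variable names are hypothetical, not the paper's actual matcher), candidate joins could be ranked by cosine similarity between border embeddings:

```python
import numpy as np

def rank_candidate_joins(right_edges: np.ndarray,
                         left_edges: np.ndarray,
                         top_k: int = 3) -> np.ndarray:
    """For each fragment, rank the fragments whose left-border embedding
    best matches its right-border embedding (cosine similarity).

    right_edges: (N, D) embeddings of the right borders of N fragments
    left_edges:  (M, D) embeddings of the left borders of M fragments
    Returns an (N, top_k) array of candidate fragment indices.
    """
    r = right_edges / np.linalg.norm(right_edges, axis=1, keepdims=True)
    l = left_edges / np.linalg.norm(left_edges, axis=1, keepdims=True)
    sim = r @ l.T                       # (N, M) cosine-similarity matrix
    return np.argsort(-sim, axis=1)[:, :top_k]
```

A real system would also score vertical joins and verify fiber continuity across the seam; this sketch only shows the embedding-ranking step.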


2025 Journal article Open Access
Removing dead coral after marine heatwaves can mitigate coral–algae competition and increase viable coral recruitment
Kopecky K. L., Pavoni G., Corsini M., Brooks A. J., Difiore B. P., Menna F., Nocerino E.
Ecological disturbance regimes are shifting and leaving behind novel legacies, like the remnant structures of dead foundation species, which have poorly known impacts on ecosystem resilience. We explored how dead coral skeletons produced by marine heatwaves—material legacies of increasingly common disturbances on coral reefs—influence spatial competition between corals and macroalgae, focusing on whether removing dead branching skeletons stimulates recovery of coral after disturbance. Following a marine heatwave, we removed dead skeletons from reef patches and then used underwater photogrammetry and AI-powered image analysis to quantify trajectories of coral and macroalgae. After four years, removal of dead skeletons resulted in 1.6 times more live coral remaining and reduced development of macroalgae by half, relative to patches where skeletons were left intact. Dead skeletons acted as an alternate substrate type that facilitated macroalgae development, and greater macroalgal abundance caused steeper declines in live coral. Lastly, removal of dead skeletons led to five times greater densities of coral recruits on stable (primary) reef substrate than on comparatively unstable branching coral skeletons. Our findings identify a promising avenue to manage for coral resilience (on reefs where carbonate budgets are not in a deficit) and reveal how material legacies of changing disturbance regimes can alter physical environments to sway the outcomes of spatial competition.
Source: ECOLOGICAL APPLICATIONS, vol. 35 (issue 5)
DOI: 10.1002/eap.70077
Project(s): NSF Moorea Coral Reef Long Term Ecological Research


2025 Other Open Access
ISTI-day 2025 Proceedings
Del Corso G., Pedrotti A., Federico G., Gennaro C., Carrara F., Amato G., Di Benedetto M., Gabrielli E., Belli D., Matrullo Z., Miori V., Tolomei G., Waheed T., Marchetti E., Calabrò A., Rossetti G., Stella M., Cazabet R., Abramski K., Cau E., Citraro S., Failla A., Mesina V., Morini V., Pansanella V., Colantonio S., Germanese D., Pascali M. A., Bianchi L., Messina N., Falchi F., Barsellotti L., Pacini G., Cassese M., Puccetti G., Esuli A., Volpi L., Moreo A., Sebastiani F., Sperduti G., Nguyen D., Broccia G., Ter Beek M. H., Ferrari A., Massink M., Belmonte G., Ciancia V., Papini O., Canapa G., Catricalà B., Manca M., Paternò F., Santoro C., Zedda E., Gallo S., Maenza S., Mattioli A., Simeoli L., Rucci D., Carlini E., Dazzi P., Kavalionak H., Mordacchini M., Rulli C., Muntean Cristina Ioana, Nardini F. M., Perego R., Rocchietti G., Lettich F., Renso C., Pugliese C., Casini G., Haldimann J., Meyer T., Assante M., Candela L., Dell'Amico A., Frosini L., Mangiacrapa F., Oliviero A., Pagano P., Panichi G., Peccerillo B., Procaccini M., Mannocci A., Manghi P., Lonetti F., Kang D., Di Giandomenico F., Jee E., Lazzini G., Conti F., Scopigno R., D'Acunto M., Moroni D., Cafiso M., Paradisi P., Callieri M., Pavoni G., Corsini M., De Falco A., Sala F., Saraceni Q., Gattiglia G.
ISTI-Day is an annual information and networking event organized by the Institute of Information Science and Technologies "A. Faedo" (ISTI) of the Italian National Research Council (CNR). The event features an opening talk by the Director of the DIITET Department (Emilio F. Campana) as well as an overview of the Institute's activities presented by the ISTI Director (Roberto Scopigno). These institutional segments are complemented by dedicated presentations and round tables featuring former staff members, as well as internal and external collaborators. To foster a network of knowledge and collaboration among newcomers, the 2025 edition of ISTI-Day also includes a large poster session that provides a comprehensive overview of current research activities. Each of the 13 laboratories contributes 1–3 posters, highlighting the most innovative work and offering early-career researchers a platform for discussion. These proceedings thus include the posters selected for ISTI-Day 2025, reflecting the diverse and innovative nature of the Institute's research.



2025 Contribution to conference Open Access
Automatic image-based coral polyp analysis through multi-view instance segmentation
Dutta S., Pavoni G., Cattini S., Rovati L., Capra A., Castagnetti C., Corsini M., Ganovelli F., Cignoni P., Rossi P., Cenni E., Simonini R., Grassi F., Cassanelli D.
We present an automated framework for counting and measuring the polyps of Cladocora caespitosa, a Mediterranean reef-building coral. To our knowledge, the most practical method for counting polyps currently involves ecologists' visual inspection of a 3D model. However, measuring polyps from the model can lead to inaccuracies due to distortions in the reconstruction. Our method integrates deep learning-based instance segmentation on 2D images with 3D models for unique polyp identification, ensuring precise biometric extraction. The proposed pipeline automates polyp detection, counting, and measurement while overcoming the limitations of manual in situ methods. Laboratory validation demonstrates its accuracy and efficiency, paving the way for scalable, high-resolution phenotyping and field monitoring of Mediterranean coral populations.
DOI: 10.2312/egp.20251022
Project(s): Enhancing Underwater PHotogrammetRy, fluOreScence imagerY
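The key step the abstract names—turning per-image detections into unique polyp identities—can be sketched under simplifying assumptions. The greedy distance threshold and all names below are illustrative, not the paper's actual algorithm:

```python
import numpy as np

def merge_detections(centroids: np.ndarray, merge_dist: float) -> np.ndarray:
    """Greedily merge 3D centroids of per-image polyp detections.

    Detections closer than merge_dist (in model units) are assumed to be
    the same polyp seen from different viewpoints, so each such cluster
    is counted exactly once.
    """
    kept: list = []
    for c in centroids:
        # Keep a centroid only if it is far from every instance kept so far.
        if all(np.linalg.norm(c - k) > merge_dist for k in kept):
            kept.append(c)
    return np.asarray(kept)
```

In practice one would project 2D instance masks onto the 3D model to obtain these centroids; here only the deduplication step is shown.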


2025 Conference article Open Access
AI-driven specular removal for 3D asset creation
Callieri M., Corsini M., Dutta S., Giorgi D., Sorrenti M.
Specular highlights negatively affect photogrammetric 3D reconstructions. To mitigate this problem, we developed an AI-driven image processing technique able to remove specular highlights. We created a synthetic image dataset that reflects the objects, viewpoints, and specular behaviors found in real-world photogrammetric campaigns, and used it to train a U-Net model that can batch-process input images for photogrammetric reconstruction. The process was tested on both synthetic and real-world photos, demonstrating superior results compared to existing models in the literature.
DOI: 10.1109/dsp65409.2025.11075117
Project(s): SUN via OpenAIRE
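The paper's trained U-Net is not reproduced here. As a hedged stand-in, a classical heuristic flags pixels that are both very bright and desaturated—a common baseline for specular detection (thresholds and names are illustrative):

```python
import numpy as np

def highlight_mask(rgb: np.ndarray, thresh: float = 0.9,
                   sat_max: float = 0.2) -> np.ndarray:
    """Flag likely specular pixels in an (H, W, 3) image with values in
    [0, 1]: very bright (max channel above thresh) and desaturated
    (HSV-style saturation below sat_max). A classical baseline, not the
    learned model described in the paper."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    return (mx > thresh) & (sat < sat_max)
```

Such a mask would then feed an inpainting or diffuse-recovery step; the learned approach replaces both stages end-to-end.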


2024 Software Restricted
TagLab 2024.12.2
Corsini M., Pavoni G., Ponchio F., Muntoni A., Saraceni Q., Cignoni P.
TagLab is an AI-powered segmentation software designed to support the analysis of large orthographic images generated through the photogrammetric pipeline.
DOI: 10.5281/zenodo.14258304


2024 Journal article Open Access
Integrating widespread coral reef monitoring tools for managing both area and point annotations
Pavoni G., Pierce J., Edwards C. B., Corsini M., Petrovic V., Cignoni P.
Large-area image acquisition techniques are essential in underwater investigations: high-resolution 3D image-based reconstructions have improved coral reef monitoring by enabling novel seascape ecological analysis. Artificial intelligence (AI) offers methods for significantly accelerating image data interpretation, such as automatically recognizing, enumerating, and measuring organisms. However, the rapid proliferation of these technological achievements has led to a relative lack of standardization of methods. Remarkably, there are notable differences in procedures for generating human and AI annotations, and there is also a scarcity of publicly available datasets and shared machine-learning models. The lack of standard procedures makes it challenging to compare and reproduce scientific findings. One way to overcome this problem is to make the platforms most used by coral reef scientists interoperable, so that analyses can all be exported into a common format. This paper introduces functionality to promote interoperability between three popular open-source software tools dedicated to the digital study of coral reefs: TagLab, CoralNet, and Viscore. As users of each platform may have different analysis pipelines, we discuss several workflows for managing and processing point and area annotations, improving collaboration among these tools. Our work sets the foundation for a more seamless ecosystem that maintains the established investigation procedures of various laboratories but allows for easier result sharing.
Source: INTERNATIONAL ARCHIVES OF THE PHOTOGRAMMETRY, REMOTE SENSING AND SPATIAL INFORMATION SCIENCES, vol. XLVIII-2-2024 (issue 2), pp. 327-333
DOI: 10.5194/isprs-archives-xlviii-2-2024-327-2024
Project(s): ReefSurvAI: Verso un’infrastruttura web che supporti l’uso dell’intelligenza artificiale per il monitoraggio delle barriere coralline
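One conceivable bridge between the area-based and point-based annotation conventions the paper discusses—purely illustrative, not TagLab's, CoralNet's, or Viscore's actual export code—samples point annotations uniformly inside an area mask:

```python
import numpy as np

def mask_to_points(mask: np.ndarray, n: int, rng=None) -> list:
    """Turn an area annotation (boolean mask) into up to n point
    annotations by uniform sampling inside the region -- one way an
    area-based label could be consumed by a point-based tool."""
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for reproducibility
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=min(n, len(ys)), replace=False)
    return [(int(y), int(x)) for y, x in zip(ys[idx], xs[idx])]
```

The reverse direction (points to areas) requires segmentation around each point and is where the interoperability workflows in the paper become non-trivial.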


2024 Journal article Open Access
Evaluating image-based interactive 3D modeling tools
Siddique A., Cignoni P., Corsini M., Banterle F.
Structure from Motion (SfM) is a computer vision technique used to reconstruct three-dimensional (3D) structures from a series of two-dimensional (2D) images or video frames. However, SfM tools struggle with transparent objects, reflective surfaces, and low-resolution frames. In such situations, image-based interactive 3D modeling software packages are employed to model 3D objects and measure dimensions. Our contributions to this work are twofold. First, we have introduced new tools to improve 3D modeling software packages; such tools are aimed at easing the workload for users. Second, we have conducted a comprehensive user study to evaluate the efficacy of popular 3D modeling software packages. The task is to measure certain dimensions for which ground truth measurements are already known. A relative error is calculated for every measurement. The evaluation of each software tool is done through survey forms, event logs, and measurement relative error. The results of this user study clearly show that our approach to 3D modeling using multiple images has a lower relative error and produces higher-quality 3D models than other software packages. In addition, it shows that our new tools reduce the time required to complete a task.
Source: IEEE ACCESS, vol. 12, pp. 104138-104152
DOI: 10.1109/access.2024.3434584
Project(s): Photogrammetric Method for Determining BWR Internals Dimensions, EVOCATION via OpenAIRE
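The per-measurement metric described in the abstract is simply the relative error against the known ground-truth dimension; a direct restatement:

```python
def relative_error(measured: float, ground_truth: float) -> float:
    """Relative error of a user's measurement against the known dimension,
    e.g. relative_error(10.5, 10.0) == 0.05 (i.e. 5%)."""
    return abs(measured - ground_truth) / abs(ground_truth)
```

Averaging this quantity over all measured dimensions and participants gives the per-tool score compared in the user study.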


2023 Journal article Open Access
MoReLab: a software for user-assisted 3D reconstruction
Siddique A, Banterle F, Corsini M, Cignoni P, Sommerville D, Joffe C
We present MoReLab, a tool for user-assisted 3D reconstruction. This reconstruction requires an understanding of the shapes of the desired objects. Our experiments demonstrate that existing Structure from Motion (SfM) software packages fail to estimate accurate 3D models in low-quality videos due to several issues such as low resolution, featureless surfaces, low lighting, etc. In such scenarios, which are common for industrial utility companies, user assistance becomes necessary to create reliable 3D models. In our system, the user first needs to add features and correspondences manually on multiple video frames. Then, classic camera calibration and bundle adjustment are applied. At this point, MoReLab provides several primitive shape tools such as rectangles, cylinders, curved cylinders, etc., to model different parts of the scene and export 3D meshes. These shapes are essential for modeling industrial equipment whose videos are typically captured by utility companies with old video cameras (low resolution, compression artifacts, etc.) and in disadvantageous lighting conditions (low lighting, torchlight attached to the video camera, etc.). We evaluate our tool on real industrial case scenarios and compare it against existing approaches. Visual comparisons and quantitative results show that MoReLab achieves superior results with regard to other user-interactive 3D modeling tools.
Source: SENSORS (BASEL), vol. 23 (issue 14)
DOI: 10.3390/s23146456
Project(s): EVOCATION via OpenAIRE
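The calibration/bundle-adjustment step mentioned above minimizes the discrepancy between the user's 2D clicks and the projections of the estimated 3D points. A standard formulation of that quantity (names are ours, not MoReLab's API):

```python
import numpy as np

def reprojection_error(P: np.ndarray, X: np.ndarray, x: np.ndarray) -> float:
    """Mean pixel error between the projections of 3D points X (N, 3)
    through a 3x4 camera matrix P and the observed 2D points x (N, 2).
    Bundle adjustment minimizes this over all cameras and points."""
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
    proj = (P @ Xh.T).T                          # (N, 3) projected points
    proj = proj[:, :2] / proj[:, 2:3]            # perspective divide
    return float(np.linalg.norm(proj - x, axis=1).mean())
```

With manually entered correspondences, a low residual indicates the recovered cameras and points are mutually consistent before primitive fitting begins.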


2023 Journal article Open Access
Quantifying the loss of coral from a bleaching event using underwater photogrammetry and AI-Assisted Image Segmentation
Kopecky K. L., Pavoni G., Nocerino E., Brooks A. J., Corsini M., Menna F., Gallagher J. P., Capra A., Castagnetti C., Rossi P., Gruen A., Neyer F., Muntoni A., Ponchio F., Cignoni P., Troyer M., Holbrook S. J., Schmitt R. J.
Detecting the impacts of natural and anthropogenic disturbances that cause declines in organisms or changes in community composition has long been a focus of ecology. However, a tradeoff often exists between the spatial extent over which relevant data can be collected, and the resolution of those data. Recent advances in underwater photogrammetry, as well as computer vision and machine learning tools that employ artificial intelligence (AI), offer potential solutions with which to resolve this tradeoff. Here, we coupled a rigorous photogrammetric survey method with novel AI-assisted image segmentation software in order to quantify the impact of a coral bleaching event on a tropical reef, both at an ecologically meaningful spatial scale and with high spatial resolution. In addition to outlining our workflow, we highlight three key results: (1) dramatic changes in the three-dimensional surface areas of live and dead coral, as well as the ratio of live to dead colonies before and after bleaching; (2) a size-dependent pattern of mortality in bleached corals, where the largest corals were disproportionately affected; and (3) a significantly greater decline in the surface area of live coral, as revealed by our approximation of the 3D shape compared to the more standard planar area (2D) approach. The technique of photogrammetry allows us to turn 2D images into approximate 3D models in a flexible and efficient way. Increasing the resolution, accuracy, spatial extent, and efficiency with which we can quantify effects of disturbances will improve our ability to understand the ecological consequences that cascade from small to large scales, as well as allow more informed decisions to be made regarding the mitigation of undesired impacts.
Source: REMOTE SENSING (BASEL), vol. 15 (issue 16)
DOI: 10.3390/rs15164077


2023 Software Open Access
MeshLab 2023.12
Muntoni A., Callieri M., Corsini M., Cignoni P.
MeshLab is an open source, portable, and extensible system for the processing and editing of unstructured large 3D triangular meshes. It aims to help process the typical not-so-small unstructured models arising in 3D scanning, providing a set of tools for editing, cleaning, healing, inspecting, rendering, and converting these meshes.
DOI: 10.5281/zenodo.10362278


2022 Journal article Open Access
TagLab: AI-assisted annotation for the fast and accurate semantic segmentation of coral reef orthoimages
Pavoni G, Corsini M, Ponchio F, Muntoni A, Edwards C, Pedersen N, Sandin S, Cignoni P
Semantic segmentation is a widespread image analysis task; in some applications, it requires such high accuracy that it still has to be done manually, taking a long time. Deep learning-based approaches can significantly reduce such times, but current automated solutions may produce results below expert standards. We propose TagLab, an interactive tool for the rapid labelling and analysis of orthoimages that speeds up semantic segmentation. TagLab follows a human-centered artificial intelligence approach that, by integrating multiple degrees of automation, empowers human capabilities. We evaluated TagLab's efficiency in annotation time and accuracy through a user study based on a highly challenging task: the semantic segmentation of coral communities in marine ecology. In the assisted labelling of corals, TagLab increased the annotation speed by approximately 90% for nonexpert annotators while preserving the labelling accuracy. Furthermore, human-machine interaction has improved the accuracy of fully automatic predictions by about 7% on average and by 14% when the model generalizes poorly. Considering the experience gained through the user study, TagLab has been improved, and preliminary investigations suggest a further significant reduction in annotation times.
Source: JOURNAL OF FIELD ROBOTICS, vol. 39 (issue 3), pp. 246-262
DOI: 10.1002/rob.22049


2022 Journal article Open Access
On assisting and automatizing the semantic segmentation of masonry walls
Pavoni G, Giuliani F, De Falco A, Corsini M, Ponchio F, Callieri M, Cignoni P
In Architectural Heritage, the interpretation of masonry is an essential instrument for analysing the construction phases, assessing structural properties, and monitoring the state of conservation. This work is generally carried out by specialists who, based on visual observation and their knowledge, manually annotate ortho-images of the masonry generated by photogrammetric surveys. This results in vector thematic maps segmented according to construction technique (isolating areas of homogeneous materials/structure/texture or each individual block constituting the masonry) or state of conservation, including degradation areas and damaged parts. This time-consuming manual work, often done with tools that have not been designed for this purpose, represents a bottleneck in the documentation and management workflow and is a severely limiting factor in monitoring large-scale monuments (e.g., city walls). This article explores the potential of AI-based solutions to improve the efficiency of masonry annotation in Architectural Heritage. This experimentation aims at providing interactive tools that support and empower the current workflow, benefiting from specialists' expertise.
Source: JOURNAL ON COMPUTING AND CULTURAL HERITAGE, vol. 15 (issue 2)
DOI: 10.1145/3477400


2021 Journal article Open Access
Needs and gaps in optical underwater technologies and methods for the investigation of marine animal forest 3D-structural complexity
Rossi P, Ponti M, Righi S, Castagnetti C, Simonini R, Mancini F, Agrafiotis P, Bassani L, Bruno F, Cerrano C, Cignoni P, Corsini M, Drap P, Dubbini M, Garrabou J, Gori A, Gracias N, Ledoux Jb, Linares C, Mantas Tp, Menna F, Nocerino E, Palma M, Pavoni G, Ridolfi A, Rossi S, Skarlatos D, Treibitz T, Turicchia E, Yuval M, Capra A
Marine animal forests are benthic communities dominated by sessile suspension feeders (such as sponges, corals, and bivalves) able to generate three-dimensional (3D) frameworks with high structural complexity. The biodiversity and functioning of marine animal forests are strictly related to their 3D complexity. The present paper aims at providing new perspectives in underwater optical surveys. Starting from the current gaps in data collection and analysis that critically limit the study and conservation of marine animal forests, we discuss the main technological and methodological needs for the investigation of their 3D structural complexity at different spatial and temporal scales. Despite recent technological advances, several issues in data acquisition and processing still need to be solved to properly map the different benthic habitats in which marine animal forests are present, assess their health status, and measure their structural complexity. Proper precision and accuracy should be chosen and assured in relation to the biological and ecological processes investigated. In addition, standardized methods and protocols are strictly necessary to meet the FAIR (findability, accessibility, interoperability, and reusability) data principles for the stewardship of habitat mapping and biodiversity, biomass, and growth data.
Source: FRONTIERS IN MARINE SCIENCE, vol. 8 (issue 591292)
DOI: 10.3389/fmars.2021.591292


2021 Journal article Open Access
CHARITY: Cloud for holography and cross reality
Dazzi P, Corsini M
ISTI-CNR is involved in the H2020 CHARITY project (Cloud for HologrAphy and Cross RealITY), which started in January 2021. The project aims to leverage the benefits of intelligent, autonomous orchestration of a heterogeneous set of cloud, edge, and network resources, to create a symbiotic relationship between low and high latency infrastructures that will facilitate the needs of emerging applications.
Source: ERCIM NEWS, vol. 126, pp. 46-47



2021 Journal article Open Access
Multimodal attention networks for low-level vision-and-language navigation
Landi F, Baraldi L, Cornia M, Corsini M, Cucchiara R
Vision-and-Language Navigation (VLN) is a challenging task in which an agent needs to follow a language-specified path to reach a target destination. The goal gets even harder as the actions available to the agent get simpler and move towards low-level, atomic interactions with the environment. This setting takes the name of low-level VLN. In this paper, we strive for the creation of an agent able to tackle three key issues: multi-modality, long-term dependencies, and adaptability towards different locomotive settings. To that end, we devise "Perceive, Transform, and Act" (PTA): a fully-attentive VLN architecture that leaves the recurrent approach behind and is the first Transformer-like architecture to incorporate three different modalities -- natural language, images, and low-level actions -- for agent control. In particular, we adopt an early fusion strategy to merge lingual and visual information efficiently in our encoder. We then propose to refine the decoding phase with a late fusion extension between the agent's history of actions and the perceptual modalities. We experimentally validate our model on two datasets: PTA achieves promising results in low-level VLN on R2R and good performance on the recently proposed R4R benchmark.
Source: COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 210
DOI: 10.1016/j.cviu.2021.103255
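The early-fusion idea the abstract mentions—concatenating language and visual tokens into one sequence before attention—can be sketched with a single head and no learned projections (a deliberate simplification of PTA, not its implementation):

```python
import numpy as np

def early_fusion_attention(text: np.ndarray, vision: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate language tokens (Lt, D) and visual
    tokens (Lv, D) into one sequence and run scaled dot-product
    self-attention over it, so every token attends across modalities."""
    seq = np.concatenate([text, vision], axis=0)        # (Lt + Lv, D)
    d = seq.shape[1]
    scores = seq @ seq.T / np.sqrt(d)                   # attention logits
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                   # row-wise softmax
    return w @ seq                                      # fused token features
```

A real Transformer block would add learned Q/K/V projections, multiple heads, and residual connections; the point here is only that fusion happens before, not after, attention.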


2021 Conference article Open Access
Evaluating deep learning methods for low resolution point cloud registration in outdoor scenarios
Siddique A, Corsini M, Ganovelli F, Cignoni P
Point cloud registration is a fundamental task in 3D reconstruction and environment perception. We explore the performance of modern Deep Learning-based registration techniques, in particular Deep Global Registration (DGR) and Learning Multi-view Registration (LMVR), on outdoor real-world data consisting of thousands of range maps of a building acquired by a Velodyne LIDAR mounted on a drone. We used these pairwise registration methods in a sequential pipeline to obtain an initial rough registration. The output of this pipeline can then be further refined globally. This simple registration pipeline allows us to assess whether these modern methods can deal with such low-quality data. Our experiments demonstrated that, despite some design choices adopted to take into account the peculiarities of the data, more work is required to improve the registration results.
DOI: 10.2312/stag.20211489
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE
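The sequential pipeline described above chains pairwise registrations into global poses; a sketch of that composition step (the frame convention is an assumption on our part):

```python
import numpy as np

def chain_poses(pairwise: list) -> list:
    """Compose pairwise rigid transforms into global poses.

    Assumes each T_i (4x4) maps coordinates of range map i+1 into those
    of range map i; the returned pose at index i then maps range map i
    into the frame of range map 0 -- the rough sequential registration
    that a global refinement step would subsequently polish.
    """
    poses = [np.eye(4)]
    for T in pairwise:
        poses.append(poses[-1] @ T)
    return poses
```

A weakness of pure chaining, and part of why refinement is needed, is that pairwise errors accumulate along the sequence (drift).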


2021 Conference article Open Access
A deep learning method for frame selection in videos for structure from motion pipelines
Banterle F, Gong R, Corsini M, Ganovelli F, Van Gool L, Cignoni P
Structure-from-Motion (SfM) using the frames of a video sequence can be a challenging task: there is a lot of redundant information, the computational time increases quadratically with the number of frames, and low-quality images (e.g., blurred frames) can decrease the final quality of the reconstruction. To overcome these issues, we present a novel deep-learning architecture meant to speed up SfM by selecting frames using a predicted sub-sampling frequency. This architecture is general and can learn/distill the knowledge of any algorithm for selecting frames from a video for generating high-quality reconstructions. One key advantage is that we can run our architecture in real time, saving computations while keeping high-quality results.
Source: PROCEEDINGS - INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, pp. 3667-3671. Anchorage, Alaska, USA, 19-22/09/2021
DOI: 10.1109/icip42928.2021.9506227
Project(s): ENCORE via OpenAIRE
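The network distills the behaviour of a frame-selection algorithm; one such classical rule (illustrative only, not the paper's teacher algorithm) keeps periodic frames whose Laplacian-variance sharpness exceeds a threshold:

```python
import numpy as np

def select_sharp_frames(frames, step: int = 5, blur_thresh: float = 100.0):
    """Keep every `step`-th frame, skipping blurry ones.

    Sharpness is the variance of a 4-neighbour Laplacian response:
    blurred frames have weak high-frequency content, hence low variance.
    Returns the indices of the selected frames.
    """
    def sharpness(img: np.ndarray) -> float:
        lap = (-4.0 * img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        return float(lap.var())
    return [i for i in range(0, len(frames), step)
            if sharpness(frames[i]) >= blur_thresh]
```

Running such a scorer per frame is exactly the cost the learned predictor amortizes: it outputs a sub-sampling frequency directly instead of evaluating every frame.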


2021 Conference article Open Access
Cloud for holography and augmented reality
Makris A, Boudi A, Coppola M, Cordeiro L, Corsini M, Dazzi P, Andilla Fd, Gonzalez Rozas Y, Kamarianakis M, Pateraki M, Pham Tl, Protopsaltis A, Raman A, Romussi A, Rosa L, Spatafora E, Taleb T, Theodoropoulos T, Tserpes K, Zschau E, Herzog U
The paper introduces CHARITY, a novel framework that aspires to leverage the benefits of intelligent, autonomous orchestration of cloud, edge, and network resources across the network continuum, to create a symbiotic relationship between low- and high-latency infrastructures. These infrastructures will facilitate the needs of emerging applications such as holographic events, virtual reality training, and mixed reality entertainment. The framework relies on different enablers and technologies related to cloud and edge to offer a suitable environment for delivering the promise of ubiquitous computing to NextGen application clients. The paper discusses the main pillars that support the CHARITY vision and describes the use cases planned to demonstrate CHARITY's capabilities.
DOI: 10.1109/cloudnet53349.2021.9657125
Project(s): CHARITY via OpenAIRE


2021 Conference article Open Access
TagLab: A human-centric AI system for interactive semantic segmentation
Pavoni G, Corsini M, Ponchio F, Muntoni A, Cignoni P
Fully automatic semantic segmentation of highly specific semantic classes and complex shapes may not meet the accuracy standards demanded by scientists. In such cases, human-centered AI solutions, able to assist operators while preserving human control over complex tasks, are a good trade-off to speed up image labeling while maintaining high accuracy levels. TagLab is an open-source AI-assisted software for annotating large orthoimages which takes advantage of different degrees of automation; it speeds up image annotation from scratch through assisted tools, creates custom fully automatic semantic segmentation models, and, finally, allows quick edits of automatic predictions. Since orthoimage analysis applies to several scientific disciplines, TagLab has been designed with a flexible labeling pipeline. We report our results in two different scenarios: marine ecology and architectural heritage.
DOI: 10.48550/arxiv.2112.12702