251 result(s)
2004 Conference article Unknown
An extended maximum likelihood approach for the robust blind separation of autocorrelated images from noisy mixtures
Gerace I., Cricco D., Tonazzini A.
In this paper we consider the problem of separating autocorrelated source images from linear mixtures with unknown coefficients, in the presence of even significant noise. Assuming the statistical independence of the sources, we formulate the problem in a Bayesian estimation framework, and describe local correlation within the individual source images through the use of suitable Gibbs priors, accounting also for well-behaved edges in the images. Based on an extension of the Maximum Likelihood approach to ICA, we derive an algorithm for recovering the mixing matrix that makes the estimated sources fit the known properties of the original sources. Preliminary experimental results on synthetic mixtures showed that significant robustness against noise, both stationary and non-stationary, can be achieved even by using generic autocorrelation models.
Source: ICA 2004 - Independent Component Analysis and Blind Signal Separation: Fifth International Conference, pp. 954–961, Granada, Spain, 22-24 September

See at: CNR ExploRA
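As an aside, the separation principle behind the entry above can be illustrated with a toy sketch (not the authors' extended-ML algorithm): whiten two observed mixtures, then search for the residual rotation that maximizes non-Gaussianity. All signals, mixing weights, and thresholds here are illustrative.

```python
import math, random

random.seed(0)
N = 2000
# Two independent, sub-Gaussian (bimodal) sources stand in for the images
s1 = [random.choice([-1.0, 1.0]) + 0.1*random.gauss(0, 1) for _ in range(N)]
s2 = [random.choice([-1.0, 1.0]) + 0.1*random.gauss(0, 1) for _ in range(N)]

# Unknown linear mixing x = A s; only x1, x2 are observed
A = [[0.8, 0.6], [0.3, 0.9]]
x1 = [A[0][0]*p + A[0][1]*q for p, q in zip(s1, s2)]
x2 = [A[1][0]*p + A[1][1]*q for p, q in zip(s1, s2)]

def mean(v):
    return sum(v)/len(v)

def corr(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((p - ma)*(q - mb) for p, q in zip(a, b))
    den = math.sqrt(sum((p - ma)**2 for p in a) * sum((q - mb)**2 for q in b))
    return num/den

# Whitening: rotate to the principal axes of the 2x2 covariance, rescale to unit variance
m1, m2 = mean(x1), mean(x2)
x1 = [t - m1 for t in x1]
x2 = [t - m2 for t in x2]
c11 = sum(t*t for t in x1)/N
c22 = sum(t*t for t in x2)/N
c12 = sum(p*q for p, q in zip(x1, x2))/N
phi = 0.5*math.atan2(2*c12, c11 - c22)
cp, sp = math.cos(phi), math.sin(phi)
u1 = [cp*p + sp*q for p, q in zip(x1, x2)]
u2 = [-sp*p + cp*q for p, q in zip(x1, x2)]
n1 = math.sqrt(sum(t*t for t in u1)/N)
n2 = math.sqrt(sum(t*t for t in u2)/N)
u1 = [t/n1 for t in u1]
u2 = [t/n2 for t in u2]

# ICA step: pick the residual rotation whose projection is most non-Gaussian
def kurt(v):
    return sum(t**4 for t in v)/len(v) - 3.0   # valid: projections have unit variance

best = max((k*math.pi/360.0 for k in range(180)),
           key=lambda th: abs(kurt([math.cos(th)*p + math.sin(th)*q
                                    for p, q in zip(u1, u2)])))
ct, st = math.cos(best), math.sin(best)
y1 = [ct*p + st*q for p, q in zip(u1, u2)]
y2 = [-st*p + ct*q for p, q in zip(u1, u2)]

# Up to permutation and sign, the recovered components match the sources
match1 = max(abs(corr(y1, s1)), abs(corr(y1, s2)))
match2 = max(abs(corr(y2, s1)), abs(corr(y2, s2)))
```

After whitening, the source directions differ from the data axes only by an orthogonal rotation, which is why a one-parameter angle search suffices in the 2x2 case.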


2001 Journal article Unknown
Blur identification analysis in blind image deconvolution using Markov random fields
Tonazzini A.
This paper deals with the blind deconvolution of blurred noisy images and proposes exploiting edge preserving MRF-based regularization to improve the quality of both the image and blur estimates.
Source: Pattern recognition and image analysis 11 (2001): 699–710.

See at: CNR ExploRA


2010 Journal article Closed Access
Color space transformations for analysis and enhancement of ancient degraded manuscripts
Tonazzini A.
In this paper we focus on ancient manuscripts, acquired in the RGB modality, which are degraded by the presence of complex background textures that interfere with the text of interest. Removing these artifacts is not trivial, especially with ancient originals, where they are usually very strong. Rather than applying techniques to just cancel out the interferences, we adopt the point of view of separating, extracting and classifying the various patterns superimposed in the document. We show that representing RGB images in different color spaces can be effective for this goal. In fact, even if the RGB color representation is the most frequently used color space in image processing, it does not maximize the information contents of the image. Thus, in the literature, several color spaces have been developed for analysis tasks, such as object segmentation and edge detection. Some color spaces seem to be particularly suitable to the analysis of degraded documents, allowing for the enhancement of the contents, the improvement of the text readability, the extraction of partially hidden features, and a better performance of thresholding techniques for text binarization. We present and discuss several examples of the successful application of both fixed color spaces and self-adaptive color spaces, based on the decorrelation of the original RGB channels. We also show that even simpler arithmetic operations among the channels can be effective for removing bleed-through, refocusing and improving the contrast of the foreground text, and recovering the original RGB appearance of the enhanced document.
Source: Pattern recognition and image analysis 20 (2010): 404–417. doi:10.1134/S105466181003017X
DOI: 10.1134/s105466181003017x


See at: Pattern Recognition and Image Analysis Restricted | www.springerlink.com Restricted | CNR ExploRA


2004 Conference article Unknown
Joint blind separation and restoration of mixed degraded images for document analysis
Tonazzini A., Gerace I., Cricco F.
We consider the problem of extracting clean images from noisy mixtures of images degraded by blur operators. This special case of source separation arises, for instance, when analyzing document images showing bleed-through or show-through. We propose to jointly perform demixing and deblurring by augmenting blind source separation with a step of image restoration. Within the ICA approach, i.e. assuming the statistical independence of the sources, we adopt a Bayesian formulation where the priors on the ideal images are given in the form of MRFs, and MAP estimation is employed for the joint recovery of both the mixing matrix and the images. We show that taking into account the blur model and a proper image model improves the separation process and makes it more robust against noise. Preliminary results on synthetic examples of documents exhibiting bleed-through are provided, considering edge-preserving priors that are suitable to describe text images.
Source: IEEE International Conference on Image Processing, pp. 311–314, Singapore, October 24-27, 2004

See at: CNR ExploRA


2006 Conference article Unknown
ISYREADET: un sistema integrato per il restauro virtuale
Console E., Burdin V., Legnaioli S., Palleschi V., Tassone R., Tonazzini A.
The Isyreadet project (Integrated System for Recovery and Archiving Degraded Texts), funded by the European Commission under the Fifth Framework Programme for Research, Technological Development and Demonstration (1998-2002), aimed to build an integrated hardware and software system for the virtual restoration and archiving of damaged documents, using innovative methods and tools such as multispectral cameras and image processing algorithms.
Source: IV Congresso Nazionale di Archeometria - Scienza e Beni Culturali, pp. 311, Pisa, 01-03/02/2006

See at: CNR ExploRA


2006 Conference article Unknown
Joint correction of cross-talk and peak spreading in DNA electropherograms
Tonazzini A., Bedini L.
In automated DNA sequencing, the final algorithmic phase, referred to as basecalling, consists of the translation of four time signals in the form of peak sequences (electropherogram) to the corresponding sequence of bases. The most popular basecaller, Phred, detects the peaks based on heuristics, and is very efficient when the peaks are well distinct and quite regular in spread, amplitude and spacing. Unfortunately, in practice the data are subject to several degradations, particularly near the end of the sequence. The most frequent ones are peak superposition, peak merging and signal leakage, resulting in secondary peaks. In these conditions the experiment must be repeated and human intervention is required. Recently, there have been attempts to provide methodological foundations to the problem and use statistical models to solve it. In this paper, we propose exploiting a priori information and Bayesian estimation to remove degradations and recover the signals in an impulsive form which makes the task of basecalling straightforward.
Source: RECOMB 2006. The 10th Annual International Conference on Research in Computational Molecular Biology, Venice, 01-04/04/2006

See at: CNR ExploRA


2006 Conference article Unknown
Statistical analysis of electrophoresis time series for improving basecalling in DNA sequencing
Tonazzini A., Bedini L.
In automated DNA sequencing, the final algorithmic phase, referred to as basecalling, consists of the translation of four time signals in the form of peak sequences (electropherogram) to the corresponding sequence of bases. Commercial basecallers detect the peaks based on heuristics, and are very efficient when the peaks are distinct and regular in spread, amplitude and spacing. Unfortunately, in practice the signals are subject to several degradations, among which peak superposition and peak merging are the most frequent. In these cases the experiment must be repeated and human intervention is required. Recently, there have been attempts to provide methodological foundations to the problem and to use statistical models for solving it. In this paper, we exploit a priori information and Bayesian estimation to remove degradations and recover the signals in an impulsive form which makes basecalling straightforward.
Source: ICDM 2006, Workshop on Mass-Data Analysis of Images and Signals in Medicine, Biotechnology and Chemistry MDA´2006, Lipsia, 13/07/2006

See at: CNR ExploRA


2006 Contribution to conference Open Access OPEN
Virtual restoring by multispectral imaging
Console E., Burdin V., Cazuguel G., Legnaioli S., Palleschi V., Tassone R., Tonazzini A.
Isyreadet (Integrated System for Recovering and Archiving Degraded Texts) is a research project funded by the EU and aimed at realising a system for the virtual restoration of damaged historical documents using innovative methods and tools, such as multispectral cameras and image processing algorithms. During the period 2003-2004 the Consortium, formed by 5 SMEs and 3 RTD Performers, has successfully carried out a series of activities related to the analysis of different kinds of possible damage, the digitisation of documents using a multispectral camera, the selection and application of suitable algorithms for the virtual restoration, and the implementation of a user-friendly graphic interface.
Source: International Conference Museums, libraries and archives online: MICHAEL service and other international initiatives, Roma, 4-5 dicembre 2006

See at: ISTI Repository Open Access | CNR ExploRA


2001 Report Unknown
Degradation identification and model parameter estimation in discontinuity-adaptive visual reconstruction
Tonazzini A., Bedini L.
This paper describes our recent experiences and progress towards an efficient solution of the highly ill-posed and computationally demanding problem of blind and unsupervised visual reconstruction. Our case study is image restoration, i.e. deblurring and denoising. The methodology employed makes reference to edge-preserving regularization. This is formulated both in a fully Bayesian framework, using a MRF image model with explicit, and possibly geometrically constrained, line processes, and in a deterministic framework, where the line process is addressed in an implicit manner, by using a particular MRF model which allows for self-interactions of the line and an adaptive variation of the model parameters. These MRF models have been proven to be efficient in modeling the local regularity properties of most real scenes, as well as the local regularity of object boundaries and intensity discontinuities. In both cases, our approach to this problem attempts to effectively exploit the correlation between intensities and lines, and is based on the assumption that the line process alone, when correctly recovered and located, can retain a good deal of information about both the hyperparameters that best model the whole image and the degradation features. We show that these approaches offer a way to improve both the quality of the reconstructed image, and also the estimates of the degradation and model parameters, and significantly reduce the computational burden of the estimation processes.
Source: ISTI Technical reports, 2001

See at: CNR ExploRA


2004 Report Unknown
Bleed-through removal from degraded documents using a color decorrelation method
Tonazzini A., Salerno E., Mochi M., Bedini L.
A color decorrelation strategy to improve the human or automatic readability of degraded documents is presented. The particular degradation that is considered here is bleed-through, that is, a pattern that interferes with the text to be read due to seeping of ink from the reverse side of the document. A simplified linear model for this degradation is introduced to permit the application of decorrelation techniques to the RGB components of the color data images, and to compare this strategy to the independent component analysis approach. Some examples from an extensive experimentation with real ancient documents are described, and the possibility to further improve the restoration performance by using hyperspectral/multispectral data is envisaged.
Source: ISTI Technical reports, 2004

See at: CNR ExploRA
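A minimal sketch of the decorrelation step described in the entry above, using two synthetic channels as stand-ins for RGB components; the patterns, mixing weights, and sizes are illustrative, not taken from the paper:

```python
import math, random

random.seed(2)
N = 500
# Stand-ins for the foreground text and bleed-through patterns (independent)
fg = [random.choice([0.0, 1.0]) for _ in range(N)]
bt = [random.choice([0.0, 0.5]) for _ in range(N)]
# Two color channels, each mixing both patterns with different weights
r = [0.9*f + 0.3*b for f, b in zip(fg, bt)]
g = [0.4*f + 0.8*b for f, b in zip(fg, bt)]

def centered(v):
    m = sum(v)/len(v)
    return [t - m for t in v]

r, g = centered(r), centered(g)
c11 = sum(t*t for t in r)/N
c22 = sum(t*t for t in g)/N
c12 = sum(p*q for p, q in zip(r, g))/N

# Closed-form principal-axis rotation diagonalizing the 2x2 channel covariance
phi = 0.5*math.atan2(2*c12, c11 - c22)
cp, sp = math.cos(phi), math.sin(phi)
p1 = [cp*a + sp*b for a, b in zip(r, g)]
p2 = [-sp*a + cp*b for a, b in zip(r, g)]

# The decorrelated components have (numerically) zero covariance; in practice
# one of them tends to concentrate the interfering pattern
cov12 = sum(a*b for a, b in zip(p1, p2))/N
```

Decorrelation only removes second-order dependence between channels; as the abstract notes, it is a simpler alternative to full independent component analysis.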


2009 Contribution to journal Restricted
Editorial - Image and video processing for cultural heritage
Charvillat V., Tonazzini A., Van Gool L., Nikolaidis N.
The preservation, archival, and study of cultural heritage is of the utmost importance at local, national, and international levels. Not only global organizations like UNESCO but also museums, libraries, cultural institutions, and private initiatives are working in these directions. During the last three decades, researchers in the field of imaging science have started to contribute a growing set of tools for cultural heritage, thereby providing indispensable support to these efforts.

See at: www.hindawi.com Restricted | CNR ExploRA


2011 Journal article Open Access OPEN
Attaching semantics to document images safeguards our cultural heritage
Console E., Tonazzini A., Bruno F.
Extracting and archiving information from digital images of documents is one of the goals of the project AMMIRA (multispectral acquisition, enhancing, indexing and retrieval of artifacts), led by Tea-Sas, a service firm based in Catanzaro, Italy, with the collaboration of two Italian research teams, the Institute of Information Science and Technologies of CNR in Pisa, and the Department of Mechanical Engineering of the University of Calabria in Cosenza. AMMIRA is supported by European funding, through the Italian regional program for integrated support to enterprises.
Source: ERCIM news 86 (2011): 19–20.

See at: ercim-news.ercim.eu Open Access | CNR ExploRA


2014 Conference article Restricted
Demosaicing of noisy color images through edge-preserving regularization
Gerace I., Martinelli F., Tonazzini A.
We propose edge-preserving regularization for color image demosaicing in the realistic case of noisy data. We enforce both intrachannel local smoothness of the intensity, and interchannel local similarities of the edges. To describe these local correlations while preserving even the finest image details, we exploit suitable functions of the derivatives of first, second and third order. The solution of the demosaicing problem is defined as the minimizer of a non-convex energy function, accounting for all these constraints plus a data fidelity term. Minimization is performed via an iterative deterministic algorithm, applied to a family of approximating functions, each implicitly referring to meaningful discontinuities. Our method is irrespective of the specific color filter array employed. However, to permit quantitative comparisons with other published results, we tested it in the case of the Bayer CFA, and on the Kodak 24-image set.
Source: IWCIM 2014 - International Workshop on Computational Intelligence for Multimedia Understanding, Paris, France, 1-2 November 2014
DOI: 10.1109/iwcim.2014.7008795


See at: doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA


2016 Conference article Open Access OPEN
An inpainting technique based on regularization to remove bleed-through from ancient documents
Gerace I., Palumba C., Tonazzini A.
In the techniques proposed so far to remove bleed-through from digital images of ancient documents, two critical aspects are the identification of the occlusion areas, i.e. those pixels where the bleed-through pattern overlaps with the main foreground text, and the inpainting of the areas to be removed with a pattern that is in continuity with the surrounding background, often inhomogeneous due to paper texture or noise. In this paper we propose a new method for bleed-through removal that aims at solving both the aforementioned issues. The method first exploits information from the accurately registered images of the manuscript recto and verso to locate, in each side, the pixels corresponding to the interfering text, no matter if they are pure bleed-through or occlusion pixels. Then, processing separately the two sides, the identified areas are filled in by interpolating, through a suitable regularization model, the surrounding regions. We show the promising results obtained with this method on manuscripts affected by a very strong bleed-through.
Source: International Workshop on Computational Intelligence for Multimedia Understanding, Reggio Calabria, Italy, 27-28 October 2016
DOI: 10.1109/iwcim.2016.7801177


See at: ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
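The fill-in step of the entry above can be illustrated with the simplest regularization model, harmonic (Laplace) interpolation of the hole from its surround; this is a stand-in for the authors' model, and the patch, hole geometry, and iteration count are illustrative:

```python
# Small grayscale patch: a smooth horizontal ramp stands in for the page background
H, W = 8, 8
img = [[0.2 + 0.6*j/(W - 1) for j in range(W)] for i in range(H)]

# Pixels identified as interfering (bleed-through) text, to be removed and filled in
hole = {(i, j) for i in range(2, 6) for j in range(2, 6)}
for (i, j) in hole:
    img[i][j] = 0.0          # corrupted values

# Harmonic interpolation by Jacobi iteration: each hole pixel converges to the
# average of its 4 neighbours, smoothly extending the surrounding background
for _ in range(500):
    nxt = [row[:] for row in img]
    for (i, j) in hole:
        nxt[i][j] = 0.25*(img[i-1][j] + img[i+1][j] + img[i][j-1] + img[i][j+1])
    img = nxt
```

Because linear functions are harmonic, the hole here converges exactly to the surrounding ramp; on real paper texture, richer regularization models are needed, which is the point of the paper.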


2000 Journal article Unknown
Adaptive smoothing and edge tracking in image deblurring and denoising
Tonazzini A.
Image deblurring and denoising are formulated as the minimization of an energy function in which a line process is implicitly referred through a novel discontinuity-adaptive stabilizer. This stabilizer depends on a parameter, called temperature, which is related to the threshold for the creation of intensity discontinuities (edges). The solution is computed using a GNC-like algorithm that minimizes in sequence the energy function at decreasing values of the temperature. We show that this allows for a coarse-to-fine recovery of edges of decreasing width, while smoothing off the noise. Furthermore, the need for a fine tuning of the regularization and threshold parameters is significantly relaxed. As a further advantage with respect to most edge-preserving stabilizers, the method is also flexible for the introduction of self-interactions between lines, in order to express various constraints on the configurations of the edge field, without any increase in the computational cost.
Source: Pattern recognition and image analysis 10 (2000): 492–499.

See at: CNR ExploRA
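A toy version of the coarse-to-fine idea in the entry above, for a 1-D step signal: gradient descent on a data term plus a discontinuity-adaptive stabilizer psi_T(t) = t^2 / (1 + (t/T)^2), repeated at decreasing temperatures T. The stabilizer, schedule, and parameters are illustrative, not the paper's exact choices:

```python
import random

random.seed(1)
# Noisy 1-D step signal: one intensity discontinuity (edge) plus noise
truth = [0.0]*40 + [1.0]*40
y = [t + random.gauss(0, 0.1) for t in truth]

def psi_grad(t, T):
    # Derivative of the adaptive stabilizer psi_T(t) = t^2 / (1 + (t/T)^2):
    # quadratic for |t| << T, saturating (edge-preserving) for |t| >> T
    return 2.0*t / (1.0 + (t/T)**2)**2

def denoise(y, lam=2.0, temps=(1.0, 0.5, 0.25, 0.1), iters=200, step=0.05):
    x = list(y)
    for T in temps:                      # GNC-like schedule: decreasing temperature
        for _ in range(iters):
            g = [2.0*(xi - yi) for xi, yi in zip(x, y)]   # data-fidelity gradient
            for i in range(len(x) - 1):
                d = psi_grad(x[i+1] - x[i], T)
                g[i]   -= lam*d
                g[i+1] += lam*d
            x = [xi - step*gi for xi, gi in zip(x, g)]
    return x

x = denoise(y)
mse_noisy    = sum((a - b)**2 for a, b in zip(y, truth))
mse_denoised = sum((a - b)**2 for a, b in zip(x, truth))
```

At high temperature the energy is nearly convex and smooths everything; as T decreases, differences larger than the current threshold stop being penalized, so the step edge re-sharpens while flat regions stay smoothed.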


1999 Other Unknown
Blur identification analysis in blind edge-preserving image restoration
Tonazzini A.
This paper proposes exploiting edge-preserving regularization to improve the quality of both the image and the blur estimates in blind restoration. Indeed, edge-preserving regularization allows for a more reliable detection of the intensity discontinuities. Since most of the information needed for the estimation of the blur is located across the discontinuity edges, we infer that a better estimate of the blur parameters can be obtained as well. In a fully Bayesian approach, assuming that the image is modeled through a coupled MRF with an explicit, binary and constrained line process, our method is based on the joint maximization of a distribution of the image field, the data and the blur parameters. This very complex joint maximization can be decomposed into a sequence of MAP and/or ML estimations, to be alternately and iteratively performed, with a significant reduction of complexity and computational load. In a previous paper [55], a similar approach was adopted to simultaneously estimate the image and its MRF model hyperparameters (unsupervised restoration). In that case, the presence of an explicit and binary line field was exploited to decrease the computational cost of the usually very expensive hyperparameter estimation step. Successively, an overall Bayesian estimation procedure was established, where blind restoration is merged with unsupervised restoration for a completely data-driven image recovery [42], and a specialized neural network architecture was devised for its fast and efficient implementation. In the present paper we recall the theoretical assessment of blind, unsupervised image restoration, summarize the main features of our approach, and experimentally analyze several qualitative and quantitative aspects of joint image estimation and blur identification.
In particular, we show how the use of edge-preserving image models can help in obtaining good blur estimates even in the presence of a significant amount of noise, without any need for smoothness assumptions on the blur coefficients, which would polarize the solution towards often unrealistic uniform blurs.

See at: CNR ExploRA


1989 Other Unknown
Una soluzione duale al restauro di segnali con il metodo di massima entropia
Tonazzini A.
Many image processing problems can be formulated in terms of the reconstruction of a continuous function from a finite set of distorted, noise-affected measurements. In this work the problem is addressed with reference to the restoration of one-dimensional signals. Recognizing in the problem a typical example of an ill-posed inverse problem, a regularization technique is proposed, based on the optimization of a cost functional subject to constraints derived from prior knowledge. Under the hypothesis that suitable convexity conditions are satisfied, the original infinite-dimensional constrained optimization problem can be reduced to an equivalent finite-dimensional unconstrained optimization problem, solvable by a conjugate gradient method. This work analyzes the cases in which entropy and cross-entropy are used as cost functionals. The performance obtainable with these functionals is compared with that provided by the more traditional minimum-norm method. The results show that the maximum entropy method allows the recovery of impulsive continuous signals from a very limited set of measurements.

See at: CNR ExploRA


1987 Conference article Unknown
A dual space optimization technique for maximum entropy signal reconstruction and restoration
Leahy R., Tonazzini A., Wang H.
The properties of the maximum entropy method (MEM) as applied in digital signal processing have been the subject of much controversy in the recent literature. In this paper we attempt to clarify the properties of the MEM by considering statistical, Bayesian and model based interpretations. The modeling interpretation is based on a dual space optimization approach to the problem which allows the estimation of the unknown signal as a continuous function from a finite set of data. It is shown that the effective role of the entropy function is to select a model for the unknown signal of dimension equal to the number of data samples. This dual space approach is demonstrated in applications to signal deconvolution and image reconstruction from projections using sparsely sampled, noisy data.
Source: 20th Asilomar Conference on Signals, Systems, and Computers, pp. 452–456, Pacific Grove, California, 11/1986

See at: CNR ExploRA


1986 Other Unknown
Correzione delle più comuni distorsioni geometriche di scene terrestri telerilevate da aereo o da satellite. Progetto pilota di telerilevamento
Casalini P. L., Tonazzini A.
No abstract available

See at: CNR ExploRA


1986 Conference article Unknown
Maximum entropy signal restoration from short data records
Leahy R., Tonazzini A.
There has been much discussion in the literature on the merits of the maximum entropy method arising from its information minimizing and consistency properties. In this paper we describe an application of the technique to the restoration of continuous signals given a set of sparsely sampled, noisy data. We compare the performance of the two common forms of the entropy cost functional, ∫ f ln(f) and ∫ ln(f), with an L_2 minimization. The deconvolution problem is formulated as the estimation of a continuous, unknown positive function given a discrete set of noisy samples, i.e. a discrete-continuous formulation. Optimizing a cost functional on the solution subject to constraints derived from our prior knowledge of the problem allows us to select a unique solution from the generally infinite set of possible solutions, provided certain convexity requirements are fulfilled. Optimization is performed using a conjugate gradient method, with the optimal step lengths found using a Fibonacci search. Results demonstrate that use of the discrete-continuous MEM formulation allows the recovery of continuous signals from very short data records.
Source: Eighth Iasted International Symposium "Measurement, Signal Processing and Control" - MECO '86, pp. 195–199, Taormina, Italy, 3-5/09/1986

See at: CNR ExploRA
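The dual-space formulation recurring in the two entries above can be sketched on a toy problem: the maximum-entropy reconstruction has the exponential form f(x) = exp(sum_k lam_k a_k(x) - 1), so only the K multipliers need to be optimized against the K measurements. Plain gradient descent here stands in for the conjugate gradient method of the paper, and the Gaussian measurement kernels, grid size, and step size are all illustrative:

```python
import math

# Toy dual-space MEM: recover a positive signal on an N-point grid from K
# blurred, sparse measurements d_k = sum_x a_k(x) f(x)
N, K = 50, 5
centers = [5, 15, 25, 35, 45]
a = [[math.exp(-((x - c)**2) / (2*4.0**2)) for x in range(N)] for c in centers]

f_true = [0.0]*N
f_true[10], f_true[35] = 2.0, 1.0          # impulsive ground truth
d = [sum(ak[x]*f_true[x] for x in range(N)) for ak in a]

def f_of(lam):
    # Maximum-entropy primal solution: f(x) = exp(sum_k lam_k a_k(x) - 1) >= 0,
    # so positivity of the reconstruction is automatic
    return [math.exp(sum(lam[k]*a[k][x] for k in range(K)) - 1.0) for x in range(N)]

# Minimize the convex dual over the K multipliers by gradient descent
lam = [0.0]*K
for _ in range(20000):
    f = f_of(lam)
    g = [sum(a[k][x]*f[x] for x in range(N)) - d[k] for k in range(K)]  # moments - data
    lam = [lam[k] - 0.05*g[k] for k in range(K)]

f = f_of(lam)
resid = max(abs(sum(a[k][x]*f[x] for x in range(N)) - d[k]) for k in range(K))
```

The point of the dual approach is visible in the dimensions: the unknown function lives on 50 grid points (conceptually, a continuum), but the optimization runs over only 5 variables, one per data sample.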