253 result(s)
2002 Journal article Unknown
Degradation identification and model parameter estimation in discontinuity-adaptive visual reconstruction
Tonazzini A, Bedini L
This paper describes our recent experiences and progress towards an efficient solution of the highly ill-posed and computationally demanding problem of blind and unsupervised visual reconstruction. Our case study is image restoration, i.e. deblurring and denoising. The methodology employed makes reference to edge-preserving regularization. This is formulated both in a fully Bayesian framework, using an MRF image model with explicit, and possibly geometrically constrained, line processes, and in a deterministic framework, where the line process is addressed in an implicit manner, by using a particular MRF model which allows for self-interactions of the line and an adaptive variation of the model parameters. These MRF models have been proven to be efficient in modeling the local regularity properties of most real scenes, as well as the local regularity of object boundaries and intensity discontinuities. In both cases, our approach to this problem attempts to effectively exploit the correlation between intensities and lines, and is based on the assumption that the line process alone, when correctly recovered and located, can retain a good deal of information about both the hyperparameters that best model the whole image and the degradation features. We show that these approaches offer a way to improve both the quality of the reconstructed image and the estimates of the degradation and model parameters, and to significantly reduce the computational burden of the estimation processes.
Source: ADVANCES IN IMAGING AND ELECTRON PHYSICS, vol. 120, pp. 193-284
DOI: 10.1016/s1076-5670(02)80036-2
See at: biblioproxy.cnr.it Restricted | doi.org Restricted | CNR IRIS Restricted
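The edge-preserving regularization with an explicit line process described in this abstract can be illustrated with a toy 1-D denoising sketch. This is my own construction for illustration, not the paper's algorithm; the weights `lam` and `alpha` and the step size are arbitrary choices. The binary line variable between two pixels switches on exactly where smoothing would cost more than the line penalty, so smoothing is suspended across the detected discontinuity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-constant 1-D "image" with one step edge, plus noise.
truth = np.concatenate([np.zeros(30), np.ones(30)])
data = truth + 0.1 * rng.standard_normal(truth.size)

lam, alpha = 2.0, 0.5             # smoothness weight, line penalty
f = data.copy()                   # intensity estimate
lines = np.zeros(truth.size - 1)  # binary line process between pixels

# Coordinate descent on
#   E(f, l) = ||f - data||^2 + lam * sum_i (1 - l_i) (f_{i+1} - f_i)^2
#             + alpha * sum_i l_i
for _ in range(200):
    d = np.diff(f)
    # Line step: a line is cheaper than smoothing where lam * d^2 > alpha.
    lines = (lam * d**2 > alpha).astype(float)
    # Intensity step: one gradient sweep with smoothing off across lines.
    grad = 2 * (f - data)
    smooth = 2 * lam * (1 - lines) * d
    grad[:-1] -= smooth
    grad[1:] += smooth
    f -= 0.05 * grad
```

With these settings the line process switches on at the step edge (between pixels 29 and 30) and the two plateaus are smoothed toward 0 and 1, while the edge itself is left untouched.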


2003 Journal article Restricted
Monte Carlo Markov chain techniques for unsupervised MRF-based image denoising
Tonazzini A, Bedini L
This paper deals with discontinuity-adaptive smoothing for recovering degraded images, when Markov random field models with explicit lines are used, but no a priori information about the free parameters of the related Gibbs distributions is available. The adopted approach is based on the maximization of the posterior distribution with respect to the line field and the Gibbs parameters, while the intensity field is assumed to be clamped to the maximizer of the posterior itself, conditioned on the lines and the parameters. This enables the application of a mixed-annealing algorithm for the maximum a posteriori (MAP) estimation of the image field, and of Markov chain Monte Carlo techniques, over binary variables only, for the simultaneous maximum likelihood estimation of the parameters. A practical procedure is then derived which is nearly as fast as a MAP image reconstruction by mixed-annealing with known Gibbs parameters. We derive the method for the general case of a linear degradation process plus superposition of additive noise, and experimentally validate it for the sub-case of image denoising.
Source: PATTERN RECOGNITION LETTERS, vol. 24, pp. 55-64
DOI: 10.1016/s0167-8655(02)00188-5
See at: Pattern Recognition Letters Restricted | CNR IRIS Restricted | www.sciencedirect.com Restricted


2004 Conference article Restricted
An extended maximum likelihood approach for the robust blind separation of autocorrelated images from noisy mixtures
Gerace I, Cricco D, Tonazzini A
In this paper we consider the problem of separating autocorrelated source images from linear mixtures with unknown coefficients, in the presence of even significant noise. Assuming the statistical independence of the sources, we formulate the problem in a Bayesian estimation framework, and describe local correlation within the individual source images through the use of suitable Gibbs priors, accounting also for well-behaved edges in the images. Based on an extension of the Maximum Likelihood approach to ICA, we derive an algorithm for recovering the mixing matrix that makes the estimated sources fit the known properties of the original sources. Preliminary experimental results on synthetic mixtures showed that significant robustness against noise, both stationary and non-stationary, can be achieved even by using generic autocorrelation models.

See at: CNR IRIS Restricted
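The mixing model underlying this record is x = A s with A unknown; the paper's contribution is the MRF-regularized, noise-robust estimation of A. As a baseline sketch of plain maximum-likelihood-style ICA separation (a generic FastICA stand-in I wrote for illustration, not the authors' extended algorithm; sources, mixing matrix and sample count are all invented), one can whiten the mixtures and run a fixed-point update:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent non-Gaussian sources (flattened "images") and a mixture.
s = rng.uniform(-1, 1, size=(2, 5000))
A = np.array([[0.8, 0.4], [0.3, 0.9]])  # unknown mixing matrix
x = A @ s

# Whitening: rotate/scale the mixtures to unit covariance.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

# Symmetric FastICA fixed point with the tanh nonlinearity:
#   W <- E[g(Wz) z^T] - diag(E[g'(Wz)]) W, then re-orthonormalize.
W = rng.standard_normal((2, 2))
for _ in range(100):
    g = np.tanh(W @ z)
    W = g @ z.T / z.shape[1] - np.diag((1 - g**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt  # symmetric decorrelation: (W W^T)^{-1/2} W

s_hat = W @ z  # recovered sources, up to permutation and sign
```

On clean mixtures each recovered row correlates strongly with one true source; the point of the paper's Gibbs-prior extension is precisely to keep this working when significant noise breaks the plain ICA assumptions.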


2001 Journal article Restricted
Fast fully data-driven image restoration by means of edge-preserving regularization
Bedini L, Tonazzini A
The fully data-driven deconvolution of noisy images is a highly ill-posed problem, where the image, the blur and the noise parameters have to be simultaneously estimated from the data alone. Our approach is to exploit the information related to the image intensity edges both to improve the solution and to significantly reduce the computational costs.
Source: REAL-TIME IMAGING, vol. 7, pp. 3-19

See at: CNR IRIS Restricted


2001 Journal article Restricted
Preconditioned edge-preserving image deblurring and denoising
Bedini L, Del Corso Gm, Tonazzini A
Preconditioned conjugate gradient (PCG) algorithms have been successfully used to significantly reduce the number of iterations in Tikhonov regularization techniques for image restoration. Nevertheless, in many cases Tikhonov regularization is inadequate, in that it produces images that are oversmoothed across intensity edges. Edge-preserving regularization can overcome this inconvenience but has a higher complexity, in that it involves non-convex optimization. In this paper, we show how the use of preconditioners can improve the computational performance of edge-preserving image restoration as well. In particular, we adopt an image model which explicitly accounts for a constrained binary line process, and a mixed-annealing algorithm that alternates steps of stochastic updating of the lines with steps of preconditioned conjugate gradient-based estimation of the intensity. The presence of the line process requires a specific preconditioning strategy to manage the particular structure of the matrix of the equivalent least squares problem. Experimental results are provided to show the satisfactory performance of the method, both with respect to the quality of the restored images and the computational saving.
Source: PATTERN RECOGNITION LETTERS, vol. 22 (issue 10), pp. 1083-1101

See at: CNR IRIS Restricted


2007 Contribution to book Restricted
Statistical analysis of electrophoresis time series for improving basecalling in DNA sequencing
Tonazzini A, Bedini L
In automated DNA sequencing, the final algorithmic phase, referred to as basecalling, consists of the translation of four time signals in the form of peak sequences (electropherogram) to the corresponding sequence of bases. Commercial basecallers detect the peaks based on heuristics, and are very efficient when the peaks are distinct and regular in spread, amplitude and spacing. Unfortunately, in practice the signals are subject to several degradations, among which peak superposition and peak merging are the most frequent. In these cases the experiment must be repeated and human intervention is required. Recently, there have been attempts to provide methodological foundations to the problem and to use statistical models for solving it. In this paper, we exploit a priori information and Bayesian estimation to remove degradations and recover the signals in an impulsive form which makes basecalling straightforward.
DOI: 10.1007/978-3-540-76300-0
See at: doi.org Restricted | CNR IRIS Restricted


2010 Journal article Restricted
Multichannel blind separation and deconvolution of images for document analysis
Tonazzini A, Gerace I, Martinelli F
In this paper we apply Bayesian blind source separation (BSS) from noisy convolutive mixtures to jointly separate and restore source images degraded through unknown blur operators and then linearly mixed. We found that this problem arises in several image processing applications, among which there are some interesting instances of degraded document analysis. In particular, the convolutive mixture model is proposed for describing multiple views of documents affected by the overlapping of two or more text patterns. We consider two different models: the interchannel model, where the data represent multispectral views of a single-sided document, and the intrachannel model, where the data are given by two sets of multispectral views of the recto and verso side of a document page. In both cases, the aim of the analysis is to recover clean maps of the main foreground text, but also to enhance and extract other document features, such as faint or masked patterns. We adopt Bayesian estimation for all the unknowns, and describe the typical local correlation within the individual source images through the use of suitable Gibbs priors, accounting also for well-behaved edges in the images. This a priori information is particularly suitable for the kind of objects depicted in the images treated, i.e. homogeneous texts in homogeneous background, and, as such, is capable of stabilizing the ill-posed inverse problem considered. The method is validated through numerical and real experiments that are representative of various real scenarios.
Source: IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 19 (issue 4), pp. 912-925
DOI: 10.1109/tip.2009.2038814
See at: IEEE Transactions on Image Processing Restricted | CNR IRIS Restricted


2004 Journal article Restricted
Analysis and recognition of highly degraded printed characters
Tonazzini A, Vezzosi S, Bedini L
This paper proposes an integrated system for the processing and analysis of highly degraded printed documents for the purpose of recognizing text characters. As a case study, ancient printed texts are considered. The system comprises various blocks operating sequentially. Starting with a single page of the document, the background noise is reduced by wavelet-based decomposition and filtering, the text lines are detected, extracted, and segmented by a simple and fast adaptive thresholding into blobs corresponding to characters, and the various blobs are analyzed by a feedforward multilayer neural network trained with a back-propagation algorithm. For each character, the probability associated with the recognition is then used as a discriminating parameter that determines the automatic activation of a feedback process, leading the system back to a block for refining segmentation. This block acts only on the small portions of the text where the recognition cannot be relied on and makes use of blind deconvolution and MRF-based segmentation techniques whose high complexity is greatly reduced when applied to a few subimages of small size. The experimental results highlight that the proposed system performs a very precise segmentation of the characters and then a highly effective recognition of even strongly degraded texts.

See at: CNR IRIS Restricted


2001 Journal article Restricted
Blur identification analysis in blind image deconvolution using Markov random fields
Tonazzini A
This paper deals with the blind deconvolution of blurred noisy images and proposes exploiting edge-preserving MRF-based regularization to improve the quality of both the image and blur estimates.
Source: PATTERN RECOGNITION AND IMAGE ANALYSIS, vol. 11 (issue 4), pp. 699-710

See at: CNR IRIS Restricted


2008 Journal article Restricted
Statistical analysis of electrophoresis time series for improving basecalling in DNA sequencing
Tonazzini A, Bedini L
In automated DNA sequencing, the final algorithmic phase, referred to as basecalling, consists of the translation of four time signals in the form of peak sequences (electropherogram) to the corresponding sequence of bases. Commercial basecallers detect the peaks based on heuristics, and are very efficient when the peaks are distinct and regular in spread, amplitude and spacing. Unfortunately, in practice the signals are subject to several degradations, among which peak superposition and peak merging are the most frequent. In these cases the experiment must be repeated and human intervention is required. Recently, there have been attempts to provide methodological foundations to the problem and to use statistical models for solving it. In this paper, we exploit a priori information and Bayesian estimation to remove degradations and recover the signals in an impulsive form which makes basecalling straightforward.
Source: INTERNATIONAL JOURNAL OF SIGNAL AND IMAGING SYSTEMS ENGINEERING, vol. 1 (issue 1), pp. 36-40
DOI: 10.1504/ijsise.2008.017772
See at: International Journal of Signal and Imaging Systems Engineering Restricted | CNR IRIS Restricted | www.inderscience.com Restricted


2010 Journal article Restricted
Color space transformations for analysis and enhancement of ancient degraded manuscripts
Tonazzini A
In this paper we focus on ancient manuscripts, acquired in the RGB modality, which are degraded by the presence of complex background textures that interfere with the text of interest. Removing these artifacts is not trivial, especially with ancient originals, where they are usually very strong. Rather than applying techniques to just cancel out the interferences, we adopt the point of view of separating, extracting and classifying the various patterns superimposed in the document. We show that representing RGB images in different color spaces can be effective for this goal. In fact, even if the RGB color representation is the most frequently used color space in image processing, it does not maximize the information contents of the image. Thus, in the literature, several color spaces have been developed for analysis tasks, such as object segmentation and edge detection. Some color spaces seem to be particularly suitable for the analysis of degraded documents, allowing for the enhancement of the contents, the improvement of the text readability, the extraction of partially hidden features, and a better performance of thresholding techniques for text binarization. We present and discuss several examples of the successful application of both fixed color spaces and self-adaptive color spaces, based on the decorrelation of the original RGB channels. We also show that even simpler arithmetic operations among the channels can be effective for removing bleed-through, refocusing and improving the contrast of the foreground text, and recovering the original RGB appearance of the enhanced document.
Source: PATTERN RECOGNITION AND IMAGE ANALYSIS, vol. 20 (issue 3), pp. 404-417
DOI: 10.1134/s105466181003017x
See at: Pattern Recognition and Image Analysis Restricted | CNR IRIS Restricted | www.springerlink.com Restricted
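Two of the ideas in the abstract above are easy to demonstrate on synthetic data: a simple arithmetic operation between channels can cancel a pattern that darkens all channels equally, and PCA over the channels gives a self-adaptive decorrelated color space. The sketch below is my own toy construction (the mixing weights are invented, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 64, 64
text = (rng.random((h, w)) < 0.1).astype(float)   # foreground strokes
bleed = (rng.random((h, w)) < 0.1).astype(float)  # interfering pattern
# Synthetic RGB manuscript: the text darkens all channels equally, while
# the bleed-through mostly darkens the red channel.
rgb = np.stack([
    0.9 - 0.6 * text - 0.30 * bleed,  # R
    0.8 - 0.6 * text - 0.10 * bleed,  # G
    0.7 - 0.6 * text - 0.05 * bleed,  # B
])

# Channel arithmetic: R - B cancels the text and isolates the bleed.
bleed_map = rgb[0] - rgb[2]

# Self-adaptive color space: decorrelate the channels by PCA.
X = rgb.reshape(3, -1)
X = X - X.mean(axis=1, keepdims=True)
_, E = np.linalg.eigh(np.cov(X))
components = (E.T @ X).reshape(3, h, w)  # ascending variance order
```

In this toy setting the highest-variance component (`components[-1]`) is dominated by the text, and `bleed_map` tracks the bleed pattern alone (up to sign), mirroring the channel-difference trick for bleed-through removal that the abstract mentions.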


2004 Conference article Restricted
Joint blind separation and restoration of mixed degraded images for document analysis
Tonazzini A, Gerace I, Cricco F
We consider the problem of extracting clean images from noisy mixtures of images degraded by blur operators. This special case of source separation arises, for instance, when analyzing document images showing bleed-through or show-through. We propose to jointly perform demixing and deblurring by augmenting blind source separation with a step of image restoration. Within the ICA approach, i.e. assuming the statistical independence of the sources, we adopt a Bayesian formulation where the priors on the ideal images are given in the form of MRFs, and MAP estimation is employed for the joint recovery of both the mixing matrix and the images. We show that taking into account the blur model and a proper image model improves the separation process and makes it more robust against noise. Preliminary results on synthetic examples of documents exhibiting bleed-through are provided, considering edge-preserving priors that are suitable to describe text images.

See at: CNR IRIS Restricted


2005 Conference article Restricted
Bayesian MRF-based blind source separation of convolutive mixtures of images
Tonazzini A, Gerace I
This paper deals with the recovery of clean images from a set of their noisy convolutive mixtures. In practice, this problem can be seen as the one of simultaneously separating and restoring source images that have been first degraded by unknown filters, then summed up and added with noise. We approach this problem in the framework of Blind Source Separation (BSS), where the unknown filters, in our case FIR filters in the form of blur kernels, must be estimated jointly with the sources. Assuming the statistical independence of the source images, we adopt Bayesian estimation for all the unknowns, and exploit information about local correlation within the individual sources through the use of suitable Gibbs priors, accounting also for well-behaved edges in the images. We derive an algorithm for recovering the blur kernels that make the estimated sources fit the known properties of the original sources. The method is validated through numerical experiments in a simplified setting, which is however related to real application scenarios.

See at: CNR IRIS Restricted


2006 Conference article Restricted
ISYREADET: un sistema integrato per il restauro virtuale
Console E., Burdin V., Legnaioli S., Palleschi V., Tassone R., Tonazzini A.
The Isyreadet project (Integrated System for Recovery and Archiving Degraded Texts), funded by the European Commission under the Fifth Framework Programme for Research, Technological Development and Demonstration (1998-2002), set out to build an integrated hardware and software system for the virtual restoration and archiving of damaged documents using innovative methods and tools, such as multispectral cameras and image processing algorithms.

See at: CNR IRIS Restricted


2006 Conference article Restricted
Joint correction of cross-talk and peak spreading in DNA electropherograms
Tonazzini A, Bedini L
In automated DNA sequencing, the final algorithmic phase, referred to as basecalling, consists of the translation of four time signals in the form of peak sequences (electropherogram) to the corresponding sequence of bases. The most popular basecaller, Phred, detects the peaks based on heuristics, and is very efficient when the peaks are well separated and quite regular in spread, amplitude and spacing. Unfortunately, in practice the data are subject to several degradations, particularly near the end of the sequence. The most frequent ones are peak superposition, peak merging and signal leakage, resulting in secondary peaks. In these conditions the experiment must be repeated and human intervention is required. Recently, there have been attempts to provide methodological foundations to the problem and use statistical models to solve it. In this paper, we propose exploiting a priori information and Bayesian estimation to remove degradations and recover the signals in an impulsive form which makes the task of basecalling straightforward.

See at: CNR IRIS Restricted


2006 Conference article Restricted
Statistical analysis of electrophoresis time series for improving basecalling in DNA sequencing
Tonazzini A, Bedini L
In automated DNA sequencing, the final algorithmic phase, referred to as basecalling, consists of the translation of four time signals in the form of peak sequences (electropherogram) to the corresponding sequence of bases. Commercial basecallers detect the peaks based on heuristics, and are very efficient when the peaks are distinct and regular in spread, amplitude and spacing. Unfortunately, in practice the signals are subject to several degradations, among which peak superposition and peak merging are the most frequent. In these cases the experiment must be repeated and human intervention is required. Recently, there have been attempts to provide methodological foundations to the problem and to use statistical models for solving it. In this paper, we exploit a priori information and Bayesian estimation to remove degradations and recover the signals in an impulsive form which makes basecalling straightforward.

See at: CNR IRIS Restricted
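The heuristic peak-based basecalling that these abstracts contrast with the Bayesian approach can be sketched on a clean synthetic electropherogram (entirely my own toy data; real traces show the superposed and merged peaks these papers address):

```python
import numpy as np

t = np.arange(400)

def peak(center, width=6.0):
    # One well-separated Gaussian peak in a trace.
    return np.exp(-0.5 * ((t - center) / width) ** 2)

# Four time signals, one per base, with peaks at known elution times.
true_calls = [("A", 50), ("C", 100), ("G", 150),
              ("T", 200), ("A", 250), ("G", 300)]
traces = {b: np.zeros(t.size) for b in "ACGT"}
for base, center in true_calls:
    traces[base] += peak(center)

# Heuristic basecalling: pick local maxima above a threshold in each
# trace, then merge the calls across traces in elution-time order.
calls = []
for base, y in traces.items():
    for i in range(1, t.size - 1):
        if y[i] > 0.5 and y[i] >= y[i - 1] and y[i] > y[i + 1]:
            calls.append((i, base))
sequence = "".join(base for _, base in sorted(calls))
```

On this ideal input the heuristic reads back ACGTAG; it is exactly the well-separated, regular-peak regime in which the abstracts note that heuristic basecallers are efficient, and it breaks down under the peak superposition and merging that the Bayesian treatment targets.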


2001 Conference article Open Access OPEN
Image segmentation as a preliminary step for character recognition in ancient printed documents
Bedini L, Tonazzini A
After analyzing and processing several ancient printed documents, we argued that the joint restoration and segmentation of images is the first fundamental step for character recognition, which usually relies on isolated characters. This step is particularly critical in the case of ancient printed documents, where several degradation processes may cause the characters to touch and merge one another. In this paper we propose to integrate techniques of image restoration with techniques of image segmentation based on Markov Random Field models. Several results of both simulated and real experiments are shown to validate the method.

See at: CNR IRIS Open Access | CNR IRIS Restricted


2002 Conference article Restricted
An integrated system for the analysis and the recognition of characters in ancient documents
Vezzosi S, Bedini L, Tonazzini A
This paper describes an integrated system for processing and analyzing highly degraded ancient printed documents. For each page, the system reduces noise by wavelet-based filtering, extracts and segments the text lines into characters by a fast adaptive thresholding, and performs OCR by a feed-forward back-propagation multilayer neural network. The recognition probability is used as a discriminating parameter for determining the automatic activation of a feedback process, leading back to a block for refining segmentation. This block acts only on the small portions of the text where the recognition was not reliable, and makes use of blind deconvolution and MRF-based segmentation techniques. The experimental results highlight the good performance of the whole system in the analysis of even strongly degraded texts.

See at: CNR IRIS Restricted
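The "fast adaptive thresholding" step mentioned in both character-recognition abstracts can be approximated by comparing each pixel to its local mean, computed here with an integral image. This is my own minimal version (the window size and offset are arbitrary), shown on a synthetic page with uneven illumination, where a single global threshold would fail:

```python
import numpy as np

rng = np.random.default_rng(3)
h, w, k = 40, 120, 15  # page size and window half-size

# Page with a strong left-to-right illumination gradient...
page = np.repeat(np.linspace(0.5, 1.0, w)[None, :], h, axis=0)
# ...three dark square "characters"...
for x0 in (10, 55, 100):
    page[15:25, x0:x0 + 10] -= 0.4
page += 0.02 * rng.standard_normal(page.shape)  # ...and a little noise.

# Local mean over a (2k+1) x (2k+1) window via a summed-area table.
pad = np.pad(page, k, mode="edge")
ii = np.cumsum(np.cumsum(pad, axis=0), axis=1)
ii = np.pad(ii, ((1, 0), (1, 0)))
n = (2 * k + 1) ** 2
local_mean = (ii[2*k+1:, 2*k+1:] - ii[:-2*k-1, 2*k+1:]
              - ii[2*k+1:, :-2*k-1] + ii[:-2*k-1, :-2*k-1]) / n

# A pixel is "ink" if it is clearly darker than its neighborhood.
binary = page < local_mean - 0.1
```

Because the threshold tracks the local background, the rightmost character, which is brighter in absolute terms than the leftmost background, is still segmented correctly; the resulting blobs would then feed the neural-network recognizer described in the abstracts.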


2007 Conference article Restricted
ISYREADET: un sistema integrato per il restauro virtuale di documenti antichi
Console E, Valerie B, Cazuguel G, Legnaioli S, Palleschi V, Tassone R, Tonazzini A
Isyreadet (Integrated System for Recovering and Archiving Degraded Texts) is a research project funded by the European Commission whose aim was to realize an integrated hardware and software system for the virtual restoration of damaged historical documents using innovative methods and tools, such as multispectral cameras and image processing algorithms. During the two-year life of the project (2003-2004), the consortium, formed by five SMEs (T.E.A. s.a.s., Catanzaro; Art Conservation, Vlaardingen; Atelier Quillet, La Rochelle; Art Innovation, Hengelo; Transmedia Technology, Swansea) and three RTD performers (CNR - Istituto per i Processi Chimico-Fisici, Pisa; CNR - Istituto di Scienza e Tecnologie dell'Informazione, Pisa; ENST - École Nationale Supérieure des Télécommunications, Brest), successfully carried out a series of activities. These activities covered the analysis and classification of the different kinds of possible damage, the digitization of the test documents using a multispectral camera, the selection and application of suitable image enhancement algorithms, and the implementation of a user-friendly graphical interface. The outcomes obtained by applying the algorithms for the virtual restoration of the documents are shown above.

See at: CNR IRIS Restricted


2006 Contribution to conference Open Access OPEN
Virtual restoring by multispectral imaging
Console E, Burdin V, Cazuguel G, Legnaioli S, Palleschi V, Tassone R, Tonazzini A
Isyreadet (Integrated System for Recovering and Archiving Degraded Texts) is a research project funded by the EU and aimed at realising a system for the virtual restoration of damaged historical documents using innovative methods and tools, such as multispectral cameras and image processing algorithms. During the period 2003-2004 the Consortium, formed by 5 SMEs and 3 RTD Performers, successfully carried out a series of activities related to the analysis of different kinds of possible damage, the digitisation of documents using a multispectral camera, the selection and application of suitable algorithms for the virtual restoration, and the implementation of a user-friendly graphic interface.

See at: CNR IRIS Open Access | ISTI Repository Open Access | CNR IRIS Restricted