Nonlinear model identification and seethrough cancellation from recto-verso data Salerno Emanuele, Martinelli Francesca, Tonazzini Anna The problem of see-through cancellation in digital images of double-sided documents is addressed. Previous approaches to solving this problem from recto-verso pairs of grayscale data images show a number of drawbacks, ranging from errors due to an inadequate data model to excessive computational complexity. While satisfying the need to assume a nonlinear convolutional mixture model and to estimate its parameters along with the recto and verso patterns, we propose a simple and fast strategy to estimate the transparency of the paper and the see-through convolutional kernel, thus enabling an efficient correction of this distortion. Compared to other separation strategies, our choice is slightly more cumbersome, since average background values must be estimated and a pure show-through area must be isolated manually by the operator. Although the procedure is not fully automatic, it outperforms other restoration strategies, especially those based on linear instantaneous models.
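As a rough illustration of the kind of nonlinear convolutional mixing referred to in this abstract, the sketch below assumes a simple multiplicative form in which the mirrored verso is blurred by a Gaussian kernel (a stand-in for the see-through point-spread function) and attenuated by a transparency exponent q; the kernel width sigma, the exponent q and the function names are placeholders, and the exact parametrization and estimation procedure used in the paper may differ.

import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_seethrough(recto, verso, sigma=1.5, q=0.6):
    """Generate an observed recto affected by see-through, under a simple
    multiplicative nonlinear model: the verso is mirrored, blurred, and
    raised to a transparency exponent q before attenuating the recto.
    recto, verso: ideal reflectance maps in [0, 1] (1 = clean background)."""
    verso_flipped = np.fliplr(verso)              # verso as seen through the sheet
    blurred = gaussian_filter(verso_flipped, sigma)
    return recto * blurred ** q                   # nonlinear multiplicative mixing

def cancel_seethrough(observed_recto, observed_verso, sigma=1.5, q=0.6):
    """Hypothetical inverse step: divide out the estimated interference.
    In practice sigma and q would be the parameters estimated from the data."""
    interference = gaussian_filter(np.fliplr(observed_verso), sigma) ** q
    return np.clip(observed_recto / np.maximum(interference, 1e-3), 0.0, 1.0)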
See-through correction in recto-verso documents via a regularized nonlinear model Gerace Ivan, Martinelli Francesca, Tonazzini Anna In this paper, we approach the removal of back-to-front interferences from scans of double-sided documents as a blind source separation problem. We consider the front and back ideal images as two individual patterns, overlapped in the observed recto and verso scans through a nonlinear convolutional mixing model. We adopt a regularization approach to estimate both the ideal images and the model parameters, by minimizing a suitable energy function of all the unknowns. The regularity of the solution images is described by typical local autocorrelation constraints, accounting also for well-behaved edges. This a priori information is particularly suitable for the kind of objects depicted in the images treated, i.e., homogeneous text on a homogeneous background, and, as such, is capable of stabilizing the ill-posed inverse problem considered. We show that the results obtained by this approach are much better than those obtained through data decorrelation or independent component analysis. As compared to approaches based on segmentation/classification, which often aim at cleaning a foreground text by removing all the textured background, one of the advantages of our method is that cleaning does not alter genuine features of the document, such as color or other structures it may contain. This is particularly interesting when the document has a historical importance, since its readability can be improved while maintaining the original appearance. Source: ISTI Technical reports, 2011
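The sketch below gives the flavor of the energy-minimization approach described in this abstract; it is a deliberately simplified stand-in in which the mixing term is linear and instantaneous and the regularizer is a plain edge-preserving smoothness penalty, whereas the paper's functional is nonlinear, convolutional, and built on richer local autocorrelation constraints. The interference strength a, the weight lam, and all names are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def energy(flat_sources, observed_r, observed_v, lam=0.1):
    """Illustrative energy: data fidelity of a simplified mixing model plus an
    edge-preserving smoothness penalty on each of the two ideal images."""
    n = observed_r.size
    s_r = flat_sources[:n].reshape(observed_r.shape)
    s_v = flat_sources[n:].reshape(observed_v.shape)
    a = 0.3  # placeholder interference strength (a model parameter in general)
    fidelity = np.sum((observed_r - (s_r + a * s_v)) ** 2) \
             + np.sum((observed_v - (s_v + a * s_r)) ** 2)
    def smooth(s):  # robust penalty on local differences (edge-preserving)
        dx, dy = np.diff(s, axis=0), np.diff(s, axis=1)
        return np.sum(np.sqrt(dx ** 2 + 1e-6)) + np.sum(np.sqrt(dy ** 2 + 1e-6))
    return fidelity + lam * (smooth(s_r) + smooth(s_v))

# usage sketch: res = minimize(energy, x0, args=(obs_r, obs_v), method="L-BFGS-B")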
A deterministic algorithm for optical flow estimation Gerace I., Martinelli F., Pucci P. In this paper we propose a new deterministic algorithm for determining optical flow through regularization techniques, in which the solution of the problem is defined as the minimum of an appropriate energy function. We also assume that the displacements are piecewise continuous and that the discontinuities are variables to be estimated. More precisely, we introduce a hierarchical three-step optimization strategy to minimize the constructed energy function, which is not convex. In the first step we find a suitable initial guess of the displacement field by a gradient-based GNC algorithm. In the second step we define the local energy of a displacement field as the energy function obtained by fixing all the field with the exception of a row or of a column. Then, through an application of the shortest-path technique, we iteratively minimize each local energy function restricted to a row or to a column until we arrive at a fixed point. In the last step we use a GNC algorithm again to recover sub-pixel accuracy. The experimental results confirm the effectiveness of this technique. Source: Communications in Applied and Industrial Mathematics 1 (2011): 249–268. doi:10.1685/2010CAIM584
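A generic form of the kind of non-convex, discontinuity-aware energy described above, written with the line process folded into a truncated-quadratic penalty, is the following; this is a textbook-style sketch, not necessarily the exact functional of the paper, and the symbols $\lambda$, $\alpha$ and $g$ are illustrative:

\[
E(\mathbf{u}) \;=\; \sum_{\mathbf{x}} \bigl( I_2(\mathbf{x} + \mathbf{u}(\mathbf{x})) - I_1(\mathbf{x}) \bigr)^2
\;+\; \lambda \sum_{\langle \mathbf{x},\mathbf{y} \rangle} g\bigl( \lVert \mathbf{u}(\mathbf{x}) - \mathbf{u}(\mathbf{y}) \rVert \bigr),
\qquad g(t) = \min(t^2, \alpha),
\]

where $\mathbf{u}$ is the displacement field, the second sum runs over neighboring pixel pairs, and a GNC scheme replaces $g$ with a one-parameter family of approximations, starting from a convex one and tracking the minimizer as the family is deformed back to $g$.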
Simple automatic procedures to enhance low-quality ancient color manuscripts Martinelli F., Tonazzini A. Frequent degradations in ancient manuscripts are see-through interference and poor contrast between the written text and the background, which compromise or hinder their legibility. Normally, these manuscripts appear colored due to ageing factors such as yellowing of the paper or diffusion and oxidation of the chemical components of the ink. Very often, these degradations produce monochrome manuscripts where a single color is predominant. Scholars and archivists are nowadays interested in digital image processing techniques to improve their readability, but might desire to preserve their general appearance as well, since this is a mark of their history. In this paper we propose very simple and fast procedures to remove interferences and enhance the contrast in monochrome manuscripts while preserving their color. These procedures are based on the enhancement of the luminance component Y alone of the YCbCr representation of the original RGB manuscript image, and also make use of the CMYK representation. Our method compares favorably with tools of commercial software packages for image manipulation, which, furthermore, often require expert user intervention. Source: IASTED International Conference Signal and Image Processing and Applications, SIPA 2011, pp. 151–158, Crete, Greece, 22 - 24 June 2011 DOI: 10.2316/p.2011.738-041
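A minimal sketch of luminance-only enhancement of the kind outlined above: the image is converted to a luma/chroma space, only the luminance channel is contrast-stretched, and the chrominance (and hence the manuscript's color cast) is left untouched. The percentile thresholds and function name are assumptions, OpenCV's YCrCb ordering is used in place of YCbCr, and the paper's actual procedure (including its use of the CMYK representation) is more specific.

import cv2
import numpy as np

def enhance_luminance(rgb, low_pct=2, high_pct=98):
    """Stretch only the luminance channel of an RGB manuscript image
    between two percentiles, preserving the original chrominance."""
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb).astype(np.float32)
    y = ycrcb[:, :, 0]
    lo, hi = np.percentile(y, (low_pct, high_pct))
    ycrcb[:, :, 0] = np.clip((y - lo) * 255.0 / max(hi - lo, 1e-3), 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2RGB)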
AMMIRA: an easy and effective system to manage digital images of artworks Salerno, E., Tonazzini, A., Savino, P., Martinelli, F., Debole, Bruno, Bianco, G., Console AMMIRA is a hardware-software system to manage digital images of cultural heritage objects. Currently under development, its final version will integrate the functionalities of three subsystems: a computer-controlled scientific camera for multispectral image capture, an easy-to-use package for image manipulation and annotation, and a metadata editor that enables nonspecialist users to include semantics into the processed data, so that they can be stored and searched by content in large databases. Such a tool is very important for extracting and managing all the information related to valuable objects that must be studied, maintained, and also accessed by many types of users. The data acquisition is based on a DTA Chroma refrigerated camera, equipped with its standard control software, a custom-made motorized autofocus system, and a real-time viewfinder for rapid framing and focusing in all the channels available. A filter wheel allows three visible and two infrared channels to be captured. The illumination is provided by white-light lamps for RGB and IR reflection images, and Wood lamps for ultraviolet fluorescence images. Moreover, a structured-light projector can be used to reconstruct the 3D shape of the object under acquisition. Having 3D information available can be useful for many purposes. This is obvious for 3D objects, but is also true for paintings or documents, whose virtual restoration can include a flattening of the surface to reduce the effect of material deformation. The multichannel image data available help both virtual restoration and feature extraction. Most methods to perform these tasks need the channel maps to be precisely coregistered. Our raw data, however, do not meet this requirement. Indeed, each channel image is acquired separately, and the different filters used can produce displacements and differences in focusing from channel to channel. Additional distortions can arise from accidental causes during the capture procedure. This is why the first image manipulation module is devoted to coregistering multiple images, corrected, where needed, for 2D or 3D geometric distortions; a minimal sketch of such a step is given after this abstract. The other image manipulation modules are used for virtual restoration or extraction of spectrally distinguishable features from the object's appearance. In particular, one group of algorithms is based on linear image models and is able both to reduce degradations in documents (such as stains, blurred areas, etc.) and to extract even barely visible features, such as erased text, watermarks, and stamps, which can then be classified and annotated. Other algorithms are based on nonlinear models and are able to remove the characteristic back-to-front interference that often affects document images. A specially designed metadata editor allows the user to record the procedures applied to any piece of data and its relationships to other stored material, including all the administrative and descriptive information needed. The metadata files produced enable content-based searches in large databases. The effectiveness of all the procedures implemented has been evaluated quantitatively on simulated data, and tested successfully on real images of heavily degraded documents.
The three AMMIRA subsystems are now operational, and a first software release is being tested by the project partners and by selected user institutions. Source: 5th International Congress on "Science and Technology for the Safeguard of Cultural Heritage in the Mediterranean Basin", Cultural Heritage Istanbul 2011, pp. 223, Istanbul, 22-25 November 2011
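The sketch referenced in the abstract above illustrates the coregistration step in its simplest, translation-only form, using phase correlation against a reference channel; AMMIRA's registration module also handles more general 2D/3D geometric distortions, and all names and parameters below are assumptions for illustration only.

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def coregister_channels(channels, reference_index=0):
    """Rigidly align separately acquired channel images to a reference channel
    by estimating a sub-pixel translation with phase correlation."""
    ref = channels[reference_index]
    aligned = []
    for ch in channels:
        offset, _, _ = phase_cross_correlation(ref, ch, upsample_factor=10)
        aligned.append(nd_shift(ch, offset))   # apply the estimated sub-pixel shift
    return np.stack(aligned)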