2018
Journal article
Open Access
Solvability and uniqueness criteria for generalized Sylvester-type equations
De Teran F, Iannazzo B, Poloni F, Robol L
We provide necessary and sufficient conditions for the generalized ⋆-Sylvester matrix equation, AXB + CX^⋆D = E, to have exactly one solution for any right-hand side E. These conditions are given for arbitrary coefficient matrices A, B, C, D (either square or rectangular) and generalize existing results for the same equation with square coefficients. We also review the known results regarding the existence and uniqueness of solution for generalized Sylvester and ⋆-Sylvester equations.
Source: LINEAR ALGEBRA AND ITS APPLICATIONS, vol. 542, pp. 501-521
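As a concrete illustration of the equation studied here, the following NumPy sketch solves AXB + CX^⋆D = E for the transpose case ⋆ = T by brute-force Kronecker vectorization (dense, O(n^6), so only for tiny problems). All function and variable names are ours, not the paper's; the paper's contribution is the solvability theory, not this naive solver.

```python
import numpy as np

def solve_star_sylvester(A, B, C, D, E):
    """Solve A X B + C X^T D = E by Kronecker vectorization.
    Uses vec(AXB) = (B^T kron A) vec(X) and vec(X^T) = K vec(X),
    where K is the commutation matrix (column-major vec)."""
    n = A.shape[0]
    K = np.zeros((n * n, n * n))
    for k in range(n * n):
        # vec(X^T)[k] = X[k // n, k % n] = vec(X)[(k // n) + (k % n) * n]
        K[k, (k // n) + (k % n) * n] = 1.0
    M = np.kron(B.T, A) + np.kron(D.T, C) @ K
    x = np.linalg.solve(M, E.flatten(order='F'))
    return x.reshape((n, n), order='F')

rng = np.random.default_rng(0)
n = 4
A, B, C, D, E = (rng.standard_normal((n, n)) for _ in range(5))
X = solve_star_sylvester(A, B, C, D, E)
residual = np.linalg.norm(A @ X @ B + C @ X.T @ D - E)
```

For random coefficients the unique-solvability conditions of the paper hold generically, so the linear system above is nonsingular and the residual is at roundoff level.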
See at:
CNR IRIS
| ISTI Repository
| www.sciencedirect.com
2018
Journal article
Open Access
On quadratic matrix equations with infinite size coefficients encountered in QBD stochastic processes
Bini Da, Massei S, Meini B, Robol L
Matrix equations of the kind $A_1 X^2 + A_0 X + A_{-1} = X$, where both the matrix coefficients and the unknown are semi-infinite matrices belonging to a Banach algebra, are considered. These equations, where the coefficients are quasi-Toeplitz matrices, are encountered in certain quasi-birth-death processes, such as the tandem Jackson queue, or in any other process that can be modeled as a reflecting random walk in the quarter plane. We provide a numerical framework for approximating the minimal nonnegative solution of these equations that relies on semi-infinite quasi-Toeplitz matrix arithmetic. In particular, we show that the algorithm of cyclic reduction can be effectively applied and can approximate the infinite-dimensional solutions with quadratic convergence at a cost that is comparable to that of the finite case. This way, we may compute a finite approximation of the sought solution and of the invariant probability measure of the associated quasi-birth-death process, within a given accuracy. Numerical experiments, performed on a collection of benchmarks, confirm the theoretical analysis.
Source: NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS (ONLINE), vol. 25 (issue 6)
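For a finite, nonnegative instance of the equation $A_1 X^2 + A_0 X + A_{-1} = X$, the minimal nonnegative solution can be approached by the natural fixed-point iteration started from zero. This is a much simpler (linearly convergent) scheme than the cyclic reduction studied in the paper; the example below is only a sketch of the equation itself, with coefficients and names chosen by us.

```python
import numpy as np

def minimal_solution(A1, A0, Am1, tol=1e-12, maxit=10000):
    """Natural iteration X <- A1 X^2 + A0 X + A_{-1}, started from X = 0.
    For nonnegative, substochastic coefficients the iterates increase
    monotonically to the minimal nonnegative solution."""
    X = np.zeros_like(A0)
    for _ in range(maxit):
        Xnew = A1 @ X @ X + A0 @ X + Am1
        if np.linalg.norm(Xnew - X, 1) < tol:
            return Xnew
        X = Xnew
    return X

n = 3
P = np.full((n, n), 1.0 / n)             # a stochastic matrix
A1, A0, Am1 = 0.2 * P, 0.3 * P, 0.4 * P  # row sums 0.9 < 1: substochastic
X = minimal_solution(A1, A0, Am1)
residual = np.linalg.norm(A1 @ X @ X + A0 @ X + Am1 - X)
```

Cyclic reduction, as used in the paper, reaches the same solution with quadratic convergence, and the quasi-Toeplitz arithmetic extends it to the semi-infinite case.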
See at:
CNR IRIS
| onlinelibrary.wiley.com
| ISTI Repository
2017
Contribution to book
Restricted
Solving large scale quasiseparable Lyapunov equations
Massei S, Palitta D, Robol L
We consider the problem of efficiently solving Lyapunov and Sylvester equations of medium and large scale, in the case where all the coefficients are quasiseparable, i.e., they have off-diagonal blocks of low rank. This comprises the case with banded coefficients and right-hand side, recently studied in [6, 9]. We show that, under suitable assumptions, this structure is guaranteed to be numerically present in the solution, and we provide explicit estimates of the numerical rank of the off-diagonal blocks. Moreover, we describe an efficient method for approximating the solution, which relies on the technology of hierarchical matrices. A theoretical characterization of the quasiseparable structure in the solution is presented, and numerical experiments confirm the applicability and efficiency of our approach. We provide a MATLAB toolbox that allows easy replication of the experiments and a ready-to-use interface for our solver.
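The structure claim can be checked numerically: solve a Lyapunov equation with banded (hence quasiseparable) data and measure the numerical rank of an off-diagonal block of the solution. This SciPy sketch is our own illustration, not the paper's hierarchical-matrix method; the 1e-8 truncation threshold is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 100
# Banded (tridiagonal) negative-definite coefficient: minus the 1D Laplacian.
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
Q = np.eye(n)                        # banded right-hand side
X = solve_continuous_lyapunov(A, Q)  # solves A X + X A^T = Q

# Numerical rank of an off-diagonal block of the (dense) solution.
s = np.linalg.svd(X[: n // 2, n // 2 :], compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))
```

The off-diagonal rank stays far below the block size n/2 = 50, which is what makes hierarchical-matrix storage of X pay off.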
See at:
cmmse.usal.es
| CNR IRIS
2019
Journal article
Open Access
Quasi-Toeplitz matrix arithmetic: a MATLAB toolbox
Bini Da, Massei S, Robol L
A quasi-Toeplitz (QT) matrix is a semi-infinite matrix of the kind $A=T(a)+E$, where $T(a)=(a_{j-i})_{i,j\in\mathbb Z^+}$, $E=(e_{i,j})_{i,j\in\mathbb Z^+}$ is compact, and the norms $\lVert a\rVert_{\mathcal W} = \sum_{i\in\mathbb Z}|a_i|$ and $\lVert E \rVert_2$ are finite. These properties allow one to approximate any QT-matrix, within any given precision, by means of a finite number of parameters. QT-matrices, equipped with the norm $\lVert A \rVert_{\mathcal{QT}}=\alpha\lVert a\rVert_{\mathcal{W}} + \lVert E \rVert_2$, for $\alpha = (1+\sqrt 5)/2$, form a Banach algebra with the standard arithmetic operations. We provide an algorithmic description of these operations on the finite parametrization of QT-matrices, and we develop a MATLAB toolbox implementing them in a transparent way. The toolbox is then extended to perform arithmetic operations on matrices of finite size that have a Toeplitz plus low-rank structure. This enables the development of algorithms for Toeplitz and quasi-Toeplitz matrices whose cost does not necessarily increase with the dimension of the problem. Some examples of applications to computing matrix functions and to solving matrix equations are presented, and confirm the effectiveness of the approach.
Source: NUMERICAL ALGORITHMS, vol. 81 (issue 2), pp. 741-769
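The QT-norm above is computable directly from the finite parametrization (symbol coefficients plus a finite section of the compact correction). A small Python sketch, with our own naming, evaluates $\alpha\lVert a\rVert_{\mathcal W} + \lVert E\rVert_2$:

```python
import numpy as np

ALPHA = (1 + np.sqrt(5)) / 2  # the golden ratio, as in the QT-norm definition

def qt_norm(symbol_coeffs, E):
    """||A||_QT = alpha * ||a||_W + ||E||_2 for A = T(a) + E, computed from
    the symbol coefficients {offset: value} and a finite section of E."""
    wiener_norm = sum(abs(c) for c in symbol_coeffs.values())
    return ALPHA * wiener_norm + np.linalg.norm(E, 2)

# Example: symbol a(z) = z^{-1} - 2 + z (the 1D Laplacian symbol)
# plus a rank-one correction in the leading block.
a = {-1: 1.0, 0: -2.0, 1: 1.0}
E = np.zeros((5, 5))
E[0, 0] = 0.5
val = qt_norm(a, E)  # here ||a||_W = 4 and ||E||_2 = 0.5
```

The submultiplicativity that makes this a Banach-algebra norm is exactly what the value of $\alpha$ guarantees.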
See at:
CNR IRIS
| link.springer.com
2019
Contribution to book
Open Access
Factoring block Fiedler Companion Matrices
Del Corso G M, Poloni F, Robol L, Vandebril R
When Fiedler published his "A note on Companion matrices" in 2003 in Linear Algebra and its Applications, he could not have foreseen the significance of this elegant factorization of a companion matrix into essentially two-by-two Gaussian transformations, which we will name \emph{(scalar) elementary Fiedler factors}. Since then, researchers extended these results and studied the various resulting linearizations, the stability of Fiedler companion matrices, factorizations of block companion matrices, Fiedler pencils, and even looked at extensions to non-monomial bases.
In this chapter, we introduce a new way to factor block Fiedler companion matrices into the product of scalar elementary Fiedler factors. We use this theory to prove that, e.g., a block (Fiedler) companion matrix can always be written as the product of several scalar (Fiedler) companion matrices. We demonstrate that this factorization in terms of elementary Fiedler factors can be used to construct new linearizations. Some linearizations have notable properties, such as low band-width, or allow for factoring the coefficient matrices into unitary-plus-low-rank matrices. Moreover, we will provide bounds on the low-rank parts of the resulting unitary-plus-low-rank decomposition.
To present these results in an easy-to-understand manner we rely on the flow-graph representation for Fiedler matrices recently proposed by Del Corso and Poloni in Linear Algebra and its Applications, 2017.
Source: SPRINGER INDAM SERIES, pp. 129-155
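In the scalar case the factorization referred to above can be verified in a few lines: the first companion matrix of a monic polynomial equals the ordered product of identity matrices carrying 2x2 Fiedler blocks, closed by a diagonal factor holding the constant coefficient. This sketch (our own indexing convention, one of several orderings in the literature) checks it numerically:

```python
import numpy as np

def fiedler_factors(coeffs):
    """Elementary Fiedler factors for the monic polynomial
    x^n + coeffs[0] x^{n-1} + ... + coeffs[n-1].
    Their product, in this order, is the first companion matrix."""
    n = len(coeffs)
    factors = []
    for i in range(n - 1):
        M = np.eye(n)
        M[i:i + 2, i:i + 2] = [[-coeffs[i], 1.0], [1.0, 0.0]]
        factors.append(M)
    D = np.eye(n)
    D[-1, -1] = -coeffs[-1]  # diagonal factor carrying the constant term
    factors.append(D)
    return factors

# p(x) = x^3 + 2x^2 + 3x + 4
coeffs = [2.0, 3.0, 4.0]
C = np.linalg.multi_dot(fiedler_factors(coeffs))

companion = np.zeros((3, 3))          # first companion form, for comparison
companion[0, :] = [-2.0, -3.0, -4.0]
companion[1, 0] = companion[2, 1] = 1.0
```

Reordering the same factors yields the other Fiedler companion forms; the block version studied in the chapter replaces the scalar coefficients with matrix coefficients.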
See at:
CNR IRIS
| link.springer.com
| ISTI Repository
2018
Book
Restricted
Core-chasing algorithms for the eigenvalue problem
Aurentz Jl, Mach T, Robol L, Vandebril R, Watkins Ds
This monograph is about a class of methods for solving matrix eigenvalue problems. Of course the methods are also useful for computing related objects such as eigenvectors and invariant subspaces. We will introduce new algorithms along the way, but we are also advocating a new way of viewing and implementing existing algorithms, notably Francis's implicitly-shifted QR algorithm [36].
Our first message is that if we want to compute the eigenvalues of a matrix A, it is often advantageous to store A in QR-decomposed form. That is, we write A = QR, where Q is unitary and R is upper triangular, and we store Q and R instead of A. This may appear to be an inefficient approach but, as we shall see, it often is not. Most matrices that arise in applications have some special structures, and these often imply special structures for the factors Q and R. For example, if A is upper Hessenberg, then Q is also upper Hessenberg, and it follows from this that Q can be stored very compactly. As another example, suppose A is unitary. Then Q = A, and R is the identity matrix, so we don't have to store R at all.
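The compact storage of a Hessenberg Q alluded to here comes from the fact that the unitary factor of a Hessenberg matrix is a product of n-1 Givens rotations (core transformations), so Q needs only O(n) parameters. A Python sketch, with names of our own choosing, shows the idea:

```python
import numpy as np

def hessenberg_qr_cores(A):
    """QR of an upper Hessenberg matrix via n-1 Givens rotations.
    Q is returned implicitly as a list of (c, s) pairs: O(n) storage."""
    R = A.copy().astype(float)
    n = R.shape[0]
    cores = []
    for i in range(n - 1):
        a, b = R[i, i], R[i + 1, i]
        r = np.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
        G = np.array([[c, s], [-s, c]])
        R[i:i + 2, i:] = G @ R[i:i + 2, i:]  # annihilate the subdiagonal entry
        cores.append((c, s))
    return cores, R

def accumulate_q(cores, n):
    """Form Q = G_1^T G_2^T ... G_{n-1}^T explicitly (only for checking)."""
    Q = np.eye(n)
    for i, (c, s) in enumerate(cores):
        Q[:, i:i + 2] = Q[:, i:i + 2] @ np.array([[c, -s], [s, c]])
    return Q

rng = np.random.default_rng(1)
A = np.triu(rng.standard_normal((6, 6)), k=-1)  # upper Hessenberg
cores, R = hessenberg_qr_cores(A)
Q = accumulate_q(cores, 6)
```

Note that the accumulated Q is itself upper Hessenberg, as the book points out; the core-chasing algorithms never form it explicitly.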
Every matrix can be transformed to upper Hessenberg form by a unitary similarity transformation. We will study this and related transformations in detail in Chapter 6, but for the early chapters of the book we will simply take the transformation for granted; we will assume that A is already in Hessenberg form.
Thus we consider an arbitrary upper Hessenberg matrix in QR-decomposed form and show how to compute its eigenvalues. Our method proceeds by a sequence of similarity transformations that drive the matrix toward upper triangular form.
Once the matrix is triangular, the eigenvalues can be read from the main diagonal. In fact our method is just a new implementation of Francis's implicitly-shifted QR algorithm. The storage space requirement is O(n^2) because we must store the upper-triangular matrix R, and the flop count is O(n^3).
Once we know how to handle general matrices, we consider how the procedure can be simplified in special cases. The easiest is the unitary case, where R = I. This results in an immediate reduction in the storage requirement to O(n) and a corresponding reduction of the computational cost to O(n^2) flops. A similar case is that of a companion matrix, which is both upper Hessenberg and unitary-plus-rank-one. This results in an R that is unitary-plus-rank-one. Once we have figured out how to store R using only O(n) storage, we again get an algorithm that runs in O(n^2) flops. The unitary case is old [42], but our companion algorithm is new [7].
A structure that arises frequently in eigenvalue problems is symmetry: A = A^T. This very important structure does not translate into any obvious structure for the factors Q and R, so it does not fit into our framework in an obvious way. In Section 4.4 we show that the symmetric problem can be solved using our methodology: we turn it into a unitary problem by a Cayley transform [10]. We did not seriously expect this approach would be faster than all of the many other existing methods for the symmetric eigenvalue problem [73, § 7.2], but we were pleased to find that it is not conspicuously slow. Our solution to the symmetric problem serves as a stepping stone to the solution of the symmetric-plus-rank-one problem, which includes comrade and colleague matrices [14] as important special cases. If a polynomial p is presented as a linear combination of Chebyshev or Legendre polynomials, for example, the coefficients can be placed into a comrade matrix with eigenvalues equal to the zeros of p. A Cayley transform turns the symmetric-plus-rank-one problem into a unitary-plus-rank-one problem, which we can solve by our fast companion solver. Our O(n^2) algorithm gives us a fast way to compute the zeros of polynomials expressed in terms of these classic orthogonal polynomial bases.
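The Cayley transform mentioned here maps a Hermitian matrix to a unitary one: U = (A - iI)(A + iI)^{-1} is unitary whenever A = A^H, with eigenvalues mapped by λ → (λ - i)/(λ + i). A minimal NumPy check (dense inversion, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
S = rng.standard_normal((n, n))
A = (S + S.T) / 2                      # real symmetric (Hermitian) matrix

# Cayley transform: unitary whenever A is Hermitian.
I = np.eye(n)
U = (A - 1j * I) @ np.linalg.inv(A + 1j * I)
unitarity_err = np.linalg.norm(U.conj().T @ U - I)
```

Since the map also sends Hermitian-plus-rank-one to unitary-plus-rank-one (up to a low-rank correction), the fast companion machinery applies after the transform, which is the route the book takes.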
We also study the generalized eigenvalue problem, for which the Moler-Stewart QZ algorithm is the appropriate variant of Francis's algorithm. We show how to apply this to a companion pencil using the same approach as for the companion matrix, but utilizing two unitary-plus-rank-one upper triangular matrices instead of one [4]. We then extend this methodology to matrix polynomial eigenvalue problems. A block companion pencil is formed and then factored into a large number of unitary-plus-rank-one factors. The resulting algorithm is advantageous when the degree of the matrix polynomial is large [6]. The final chapter discusses the reduction to Hessenberg form and generalizations of Hessenberg form. We introduce generalizations of Francis's algorithm that can be applied to these generalized Hessenberg forms [64]. This monograph is a summary and report on a research project that the five of us (in various combinations) have been working on for several years now.
Included within these covers is material from a large number of recent sources, including [4-10, 12, 54, 62-64, 74]. We have also included some material that has not yet been submitted for publication. This book exists because the senior author decided that it would be worthwhile to present a compact and unified treatment of our findings in a single volume. The actual writing was done by the senior author, who wanted to ensure uniformity of style and viewpoint (and, one could also say, prejudices). But the senior author is only the scribe, who (of course) benefitted from substantial feedback from the other authors. Moreover, the book would never have come into existence without the work of a team over the course of years. We are excited about the outcome of our research, and we are pleased to share it with you. The project is not finished by any means, so a second edition of this book might appear at some time in the future.
See at:
bookstore.siam.org
| CNR IRIS
2018
Journal article
Open Access
Fast and backward stable computation of roots of polynomials. Part II: backward error analysis; companion matrix and companion pencil
Aurentz Jl, Mach T, Robol L, Vandebril R, Watkins Ds
This work is a continuation of work by J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 942-973. In that paper we introduced a companion QR algorithm that finds the roots of a polynomial by computing the eigenvalues of the companion matrix in O(n^2) time using O(n) memory. We proved that the method is backward stable. Here we introduce, as an alternative, a companion QZ algorithm that solves a generalized eigenvalue problem for a companion pencil. More importantly, we provide an improved backward error analysis that takes advantage of the special structure of the problem. The improvement is also due, in part, to an improvement in the accuracy (in both theory and practice) of the turnover operation, which is the key component of our algorithms. We prove that for the companion QR algorithm, the backward error on the polynomial coefficients varies linearly with the norm of the polynomial's vector of coefficients. Thus, the companion QR algorithm has a smaller backward error than the unstructured QR algorithm (used by MATLAB's roots command, for example), for which the backward error on the polynomial coefficients grows quadratically with the norm of the coefficient vector. The companion QZ algorithm has the same favorable backward error as companion QR, provided that the polynomial coefficients are properly scaled.
Source: SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS, vol. 39 (issue 3), pp. 1245-1269
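The structural fact the fast algorithms exploit is that a companion matrix is a unitary matrix (the cyclic shift) plus a rank-one correction. The unstructured sketch below, with our own helper names, builds a companion matrix, exhibits the rank-one correction, and checks that its eigenvalues agree with numpy.roots; the paper's companion QR/QZ solvers get the same roots in O(n^2) by chasing cores through this structure.

```python
import numpy as np

def companion(p):
    """Companion matrix (last-column form) of x^n + p[0] x^{n-1} + ... + p[-1]."""
    n = len(p)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(p)[::-1]
    return C

p = [2.0, -3.0, 1.0, 5.0]          # x^4 + 2x^3 - 3x^2 + x + 5
C = companion(p)

# Cyclic shift: unitary, and C differs from it only in the last column.
Z = np.zeros((4, 4))
Z[1:, :-1] = np.eye(3)
Z[0, -1] = 1.0
rank_correction = np.linalg.matrix_rank(C - Z)

roots_eig = np.sort_complex(np.linalg.eigvals(C))
roots_np = np.sort_complex(np.roots([1.0] + p))
```

Storing the cyclic-shift part as cores and the correction as a rank-one pair is what brings the memory down to O(n).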
See at:
epubs.siam.org
| CNR IRIS
| ISTI Repository
2018
Journal article
Open Access
Solving rank-structured Sylvester and Lyapunov equations
Massei S, Palitta D, Robol L
We consider the problem of efficiently solving Sylvester and Lyapunov equations of medium and large scale, in the case of rank-structured data, i.e., when the coefficient matrices and the right-hand side have low-rank off-diagonal blocks. This comprises problems with banded data, recently studied in [A. Haber and M. Verhaegen, Automatica J. IFAC, 73 (2016), pp. 256-268; D. Palitta and V. Simoncini, Numerical Methods for Large-Scale Lyapunov Equations with Symmetric Banded Data, preprint, arXiv:1711.04187, 2017], which often arise in the discretization of elliptic PDEs. We show that, under suitable assumptions, the quasiseparable structure is guaranteed to be numerically present in the solution, and explicit novel estimates of the numerical rank of the off-diagonal blocks are provided. Efficient solution schemes that rely on the technology of hierarchical matrices are described, and several numerical experiments confirm the applicability and efficiency of the approaches. We develop a MATLAB toolbox that allows easy replication of the experiments and a ready-to-use interface for the solvers. The performances of the different approaches are compared, and we show that the new methods described are efficient on several classes of relevant problems.
Source: SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS, vol. 39 (issue 4), pp. 1564-1590
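A dense Sylvester analogue of the structure claim, as a quick experiment (our own setup, not the paper's hierarchical solver): banded coefficients and right-hand side yield a solution whose off-diagonal blocks are numerically low rank.

```python
import numpy as np
from scipy.linalg import solve_sylvester

n = 120
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian, banded
A, B = T, T + 0.5 * np.eye(n)   # positive definite, so spec(A) and -spec(B) are disjoint
Q = np.eye(n)                   # banded right-hand side
X = solve_sylvester(A, B, Q)    # solves A X + X B = Q

# Numerical rank of an off-diagonal block of the solution.
s = np.linalg.svd(X[: n // 2, n // 2 :], compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))
```

The rank estimates of the paper bound exactly this quantity, uniformly in n up to logarithmic factors.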
See at:
epubs.siam.org
| CNR IRIS
| ISTI Repository
2019
Journal article
Open Access
Low-rank updates and a divide-and-conquer method for linear matrix equations
Kressner D, Massei S, Robol L
Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as hierarchically off-diagonal low-rank (HODLR), hierarchically semiseparable (HSS), and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
Source: SIAM JOURNAL ON SCIENTIFIC COMPUTING (PRINT), vol. 41 (issue 2)
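The key observation behind the updating algorithm can be seen with dense tools: if X0 solves A X + X B = C and A is changed by a rank-one term, the correction to X solves a Sylvester equation whose right-hand side also has rank one, which is what makes tensorized-Krylov updating cheap. A dense sketch (all names ours; the paper solves the correction equation iteratively rather than densely):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
n = 50
A = np.diag(np.linspace(1.0, 2.0, n))
B = np.diag(np.linspace(1.0, 2.0, n))
C = rng.standard_normal((n, n))
X0 = solve_sylvester(A, B, C)            # A X0 + X0 B = C

# Small rank-one change of the coefficient: A -> A + u v^T.
u = rng.standard_normal((n, 1)); u /= 2 * np.linalg.norm(u)
v = rng.standard_normal((n, 1)); v /= np.linalg.norm(v)
Anew = A + u @ v.T

# Correction dX satisfies Anew dX + dX B = -u (v^T X0):
# a Sylvester equation with a rank-one right-hand side.
dX = solve_sylvester(Anew, B, -u @ (v.T @ X0))
X1 = X0 + dX
residual = np.linalg.norm(Anew @ X1 + X1 @ B - C)
```

Because the right-hand side of the correction equation is low rank, dX itself is numerically low rank, and Krylov methods capture it in few iterations.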
See at:
ISTI Repository
| epubs.siam.org
| CNR IRIS
2019
Journal article
Metadata Only Access
Fast solvers for two-dimensional fractional diffusion equations using rank structured matrices
Massei S, Mazza M, Robol L
We consider the discretization of time-space diffusion equations with fractional derivatives in space and either one-dimensional (1D) or 2D spatial domains. The use of an implicit Euler scheme in time and finite differences or finite elements in space leads to a sequence of dense large-scale linear systems describing the behavior of the solution over a time interval. We prove that the coefficient matrices arising in the 1D context are rank structured and can be efficiently represented using hierarchical formats ($\mathcal{H}$-matrices, HODLR). Quantitative estimates for the rank of the off-diagonal blocks of these matrices are presented. We analyze the use of HODLR arithmetic for solving the 1D case and we compare this strategy with existing methods that exploit the Toeplitz-like structure to precondition the GMRES iteration. The numerical tests demonstrate the convenience of the HODLR format when at least a reasonably low number of time steps is needed. Finally, we explain how these properties can be leveraged to design fast solvers for problems with 2D spatial domains that can be reformulated as matrix equations. The experiments show that the approach based on the use of rank-structured arithmetic is particularly effective and outperforms current state of the art techniques.
Source: SIAM JOURNAL ON SCIENTIFIC COMPUTING (PRINT), vol. 41 (issue 4), pp. A2627-A2656
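The 1D coefficient matrices in question are Toeplitz matrices built from Grünwald-Letnikov weights, and their off-diagonal blocks are what the paper proves to be numerically low rank. A small sketch (our own construction of a left-sided fractional-derivative stiffness matrix; the truncation threshold is an arbitrary choice):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_k = (-1)^k binom(alpha, k),
    via the recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

alpha, n = 1.5, 200
g = gl_weights(alpha, n + 1)
# Toeplitz matrix of the left-sided fractional derivative:
# T[i, j] = g[i - j + 1], zero above the first superdiagonal.
T = np.zeros((n, n))
for k in range(-1, n):
    T += g[k + 1] * np.eye(n, k=-k)

# Numerical rank of the lower-left off-diagonal block.
s = np.linalg.svd(T[n // 2 :, : n // 2], compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))
```

The polynomial decay of the weights ($g_k \sim k^{-\alpha-1}$) is what drives the fast singular-value decay of these blocks, and hence the HODLR representability exploited in the paper.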
See at:
epubs.siam.org
| CNR IRIS
2019
Journal article
Open Access
When is a matrix unitary or Hermitian plus low rank?
Del Corso Gm, Poloni F, Robol L, Vandebril R
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low-rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation to bring a matrix to unitary form. A similar condition holds for Hermitian matrices; the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank k matrix to a given matrix A, in Frobenius and spectral norm, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low-rank perturbation. Numerical tests prove that this straightforward algorithm is effective.
Source: NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, vol. 26 (issue 6)
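The singular-value condition is easy to observe numerically: by Weyl's inequalities, a unitary matrix plus a rank-k perturbation has at most k singular values above 1 and at most k below 1, so at most 2k deviate from 1. A quick NumPy check (our own setup):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 12, 2
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a unitary (orthogonal) matrix
B = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # rank-k perturbation
A = U + B

s = np.linalg.svd(A, compute_uv=False)
deviating = int(np.sum(np.abs(s - 1.0) > 1e-10))   # at most 2k of these
```

Counting these deviating singular values is thus a practical detector of hidden unitary-plus-low-rank structure, which is the starting point of the paper's characterization.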
See at:
CNR IRIS
| onlinelibrary.wiley.com
| ISTI Repository
2020
Journal article
Open Access
hm-toolbox: MATLAB software for HODLR and HSS matrices
Massei S, Robol L, Kressner D
Matrices with hierarchical low-rank structure, including HODLR and HSS matrices, constitute a versatile tool to develop fast algorithms for addressing large-scale problems. While existing software packages for such matrices often focus on linear systems, their scope of applications is in fact much wider and includes, for example, matrix functions and eigenvalue problems. In this work, we present a new MATLAB toolbox called hm-toolbox, which encompasses this versatility with a broad set of tools for HODLR and HSS matrices, unmatched by existing software. While mostly based on algorithms that can be found in the literature, our toolbox also contains a few new algorithms as well as novel auxiliary functions. Being entirely based on MATLAB, our implementation does not strive for optimal performance. Nevertheless, it maintains the favorable complexity of hierarchical low-rank matrices and offers, at the same time, a convenient way of prototyping and experimenting with algorithms. A number of applications illustrate the use of the hm-toolbox.
Source: SIAM JOURNAL ON SCIENTIFIC COMPUTING (PRINT), vol. 42 (issue 2), pp. C43-C68
See at:
epubs.siam.org
| CNR IRIS
| ISTI Repository