40 result(s)
2018 Journal article Open Access
Efficient Ehrlich-Aberth iteration for finding intersections of interpolating polynomials and rational functions
Robol L., Vandebril R.
We analyze the problem of carrying out an efficient iteration to approximate the eigenvalues of some rank structured pencils obtained as linearization of sums of polynomials and rational functions expressed in (possibly different) interpolation bases. The class of linearizations that we consider has been introduced by Robol, Vandebril and Van Dooren in [17]. We show that a traditional QZ iteration on the pencil is both asymptotically slow (since it is a cubic algorithm in the size of the matrices) and sometimes not accurate (since in some cases the deflation of artificially introduced infinite eigenvalues is numerically difficult). To solve these issues we propose to use a specifically designed Ehrlich-Aberth iteration that can approximate the eigenvalues in O(kn²) flops, where k is the average number of iterations per eigenvalue, and n the degree of the linearized polynomial. We suggest possible strategies for the choice of the initial starting points that make k asymptotically smaller than O(n), thus making this method less expensive than the QZ iteration. Moreover, we show in the numerical experiments that this approach does not suffer from numerical issues, and accurate results are obtained.
Source: Linear algebra and its applications 542 (2018): 282–309. doi:10.1016/j.laa.2017.05.010
DOI: 10.1016/j.laa.2017.05.010


See at: Linear Algebra and its Applications Open Access | ISTI Repository Open Access | Lirias Open Access | Linear Algebra and its Applications Restricted | CNR ExploRA
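For readers unfamiliar with the method named in the record above: the Ehrlich-Aberth update is z_j ← z_j − N_j / (1 − N_j · Σ_{i≠j} 1/(z_j − z_i)), where N_j = p(z_j)/p'(z_j) is the Newton correction. The sketch below is a minimal, unstructured Python version of that scalar iteration, not the paper's O(kn²) pencil-based variant; the circle of starting points and the stopping tolerance are illustrative choices.

```python
import numpy as np

def ehrlich_aberth(coeffs, tol=1e-14, maxit=100):
    """Approximate all roots of a polynomial given by its coefficients
    (highest degree first, as in numpy.polyval) with the Ehrlich-Aberth
    iteration.  Scalar, unstructured variant for illustration only."""
    n = len(coeffs) - 1
    dcoeffs = np.polyder(coeffs)
    # starting points on a circle (a common, simple choice)
    z = 1.5 * np.exp(2j * np.pi * np.arange(n) / n)
    for _ in range(maxit):
        newton = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
        corr = np.empty_like(z)
        for j in range(n):
            s = np.sum(1.0 / (z[j] - np.delete(z, j)))
            corr[j] = newton[j] / (1.0 - newton[j] * s)
        z -= corr
        if np.max(np.abs(corr)) < tol * np.max(np.abs(z)):
            break
    return z

# roots of p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
print(np.sort(ehrlich_aberth([1.0, -6.0, 11.0, -6.0]).real))
```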


2017 Journal article Open Access
Fast Hessenberg reduction of some rank structured matrices
Gemignani L., Robol L.
We develop two fast algorithms for Hessenberg reduction of a structured matrix $A = D + UV^H$, where $D$ is a real or unitary $n \times n$ diagonal matrix and $U, V \in \mathbb{C}^{n \times k}$. The proposed algorithm for the real case exploits a two-stage approach by first reducing the matrix to a generalized Hessenberg form and then completing the reduction by annihilation of the unwanted subdiagonals. It is shown that the novel method requires O(n^2 k) arithmetic operations and is significantly faster than other reduction algorithms for rank structured matrices. The method is then extended to the unitary plus low rank case by using a block analogue of the CMV form of unitary matrices. It is shown that a block Lanczos-type procedure for the block tridiagonalization of Re(D) induces a structured reduction on A in a block staircase CMV-type shape. Then, we present a numerically stable method for performing this reduction using unitary transformations and show how to generalize the subdiagonal elimination to this shape, while still being able to provide a condensed representation for the reduced matrix. In this way the complexity still remains linear in k and, moreover, the resulting algorithm can be adapted to deal efficiently with block companion matrices.
Source: SIAM journal on matrix analysis and applications (Print) 38 (2017): 574–598. doi:10.1137/16M1107851
DOI: 10.1137/16m1107851
DOI: 10.48550/arxiv.1612.04196


See at: arXiv.org e-Print Archive Open Access | SIAM Journal on Matrix Analysis and Applications Open Access | ISTI Repository Open Access | SIAM Journal on Matrix Analysis and Applications Restricted | doi.org Restricted | epubs.siam.org Restricted | CNR ExploRA
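As a point of reference for the record above, the snippet below builds a matrix of the form D + UV^H and reduces it with the dense O(n^3) Hessenberg routine from SciPy, i.e. the structure-oblivious baseline that the paper's O(n^2 k) algorithms improve upon; the sizes and random data are arbitrary.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
n, k = 200, 3
D = np.diag(rng.standard_normal(n))          # real diagonal part
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
A = D + U @ V.conj().T                       # diagonal plus rank k

# Dense baseline: O(n^3) reduction, ignoring the structure entirely.
H, Q = hessenberg(A, calc_q=True)
print(np.allclose(Q @ H @ Q.conj().T, A))    # similarity is preserved
print(np.allclose(np.tril(H, -2), 0.0))      # H is upper Hessenberg
```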


2018 Journal article Open Access
Solvability and uniqueness criteria for generalized Sylvester-type equations
De Teran F., Iannazzo B., Poloni F., Robol L.
We provide necessary and sufficient conditions for the generalized ⋆-Sylvester matrix equation, $AXB + CX^{\star}D = E$, to have exactly one solution for any right-hand side E. These conditions are given for arbitrary coefficient matrices A, B, C, D (either square or rectangular) and generalize existing results for the same equation with square coefficients. We also review the known results regarding the existence and uniqueness of solution for generalized Sylvester and ⋆-Sylvester equations.
Source: Linear algebra and its applications 542 (2018): 501–521. doi:10.1016/j.laa.2017.07.010
DOI: 10.1016/j.laa.2017.07.010
DOI: 10.48550/arxiv.1608.01183


See at: arXiv.org e-Print Archive Open Access | Linear Algebra and its Applications Open Access | ISTI Repository Open Access | Linear Algebra and its Applications Restricted | doi.org Restricted | www.sciencedirect.com Restricted | CNR ExploRA
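To make the equation in the record above concrete for the transpose case (⋆ = T) and small square coefficients, one can vectorize it: vec(AXB) = (Bᵀ ⊗ A) vec(X) and vec(CXᵀD) = (Dᵀ ⊗ C) K vec(X), with K the commutation matrix. The dense O(n^6) solve below only illustrates the equation itself, not the paper's solvability criteria nor a practical algorithm; sizes and data are made up.

```python
import numpy as np

def commutation_matrix(n):
    """K such that K @ vec(X) = vec(X.T), with column-stacking vec."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j + i * n, i + j * n] = 1.0
    return K

rng = np.random.default_rng(0)
n = 6
A, B, C, D, E = (rng.standard_normal((n, n)) for _ in range(5))

# Vectorized form of A X B + C X^T D = E.
M = np.kron(B.T, A) + np.kron(D.T, C) @ commutation_matrix(n)
x = np.linalg.solve(M, E.flatten(order="F"))
X = x.reshape((n, n), order="F")

print(np.allclose(A @ X @ B + C @ X.T @ D, E))   # True for generic data
```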


2018 Journal article Open Access
On quadratic matrix equations with infinite size coefficients encountered in QBD stochastic processes
Bini D. A., Massei S., Meini B., Robol L.
Matrix equations of the kind $A_1 X^2 + A_0 X + A_{-1} = X$, where both the matrix coefficients and the unknown are semi-infinite matrices belonging to a Banach algebra, are considered. These equations, where coefficients are quasi-Toeplitz matrices, are encountered in certain quasi-birth-death processes such as the tandem Jackson queue, or in any other process that can be modeled as a reflecting random walk in the quarter plane. We provide a numerical framework for approximating the minimal nonnegative solution of these equations that relies on semi-infinite quasi-Toeplitz matrix arithmetic. In particular, we show that the algorithm of cyclic reduction can be effectively applied and can approximate the infinite-dimensional solutions with quadratic convergence at a cost that is comparable to that of the finite case. This way, we may compute a finite approximation of the sought solution and of the invariant probability measure of the associated quasi-birth-death process, within a given accuracy. Numerical experiments, performed on a collection of benchmarks, confirm the theoretical analysis.
Source: Numerical linear algebra with applications (Online) 25 (2018). doi:10.1002/nla.2128
DOI: 10.1002/nla.2128


See at: Numerical Linear Algebra with Applications Open Access | ISTI Repository Open Access | Numerical Linear Algebra with Applications Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA
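On finite-size coefficients, the simplest way to approximate the minimal nonnegative solution of the equation in the record above is the classical functional iteration G ← A_{-1} + A_0 G + A_1 G², started from G = 0; it converges monotonically but only linearly, which is exactly what the cyclic reduction used in the paper (and its quasi-Toeplitz extension) improves on. The random stochastic blocks below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
# Random nonnegative blocks scaled so A_{-1} + A_0 + A_1 is row-stochastic,
# as for the transition blocks of a level-independent QBD process.
Am1, A0, A1 = (rng.random((m, m)) for _ in range(3))
S = (Am1 + A0 + A1).sum(axis=1, keepdims=True)
Am1, A0, A1 = Am1 / S, A0 / S, A1 / S

# Functional iteration for the minimal nonnegative solution of
#   A_1 G^2 + A_0 G + A_{-1} = G, starting from G = 0.
G = np.zeros((m, m))
for _ in range(5000):
    G_new = Am1 + A0 @ G + A1 @ G @ G
    if np.max(np.abs(G_new - G)) < 1e-13:
        G = G_new
        break
    G = G_new

print(np.max(np.abs(A1 @ G @ G + A0 @ G + Am1 - G)))   # small residual
```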


2018 Journal article Open Access
Corrigendum to "Solvability and uniqueness criteria for generalized Sylvester-type equations"
De Teran F., Iannazzo B., Poloni F., Robol L.
We provide an amended version of Corollaries 7 and 9 in [De Teran, Iannazzo, Poloni, Robol, "Solvability and uniqueness criteria for generalized Sylvester-type equations"]. These results characterize the unique solvability of the matrix equation $AXB + CX^{\star}D = E$ (where the coefficients need not be square) in terms of an equivalent condition on the spectrum of certain matrix pencils of the same size as one of its coefficients. © 2017 Elsevier Inc. All rights reserved.
Source: Linear algebra and its applications 542 (2018): 522–526. doi:10.1016/j.laa.2017.10.018
DOI: 10.1016/j.laa.2017.10.018


See at: Linear Algebra and its Applications Open Access | ISTI Repository Open Access | www.sciencedirect.com Restricted | CNR ExploRA


2019 Journal article Open Access
Fast and backward stable computation of the eigenvalues and eigenvectors of matrix polynomials
Aurentz J., Mach T., Robol L., Vandebril R., Watkins D. S.
In the last decade matrix polynomials have been investigated with the primary focus on adequate linearizations and good scaling techniques for computing their eigenvalues and eigenvectors. In this article we propose a new method for computing a factored Schur form of the associated companion pencil. The algorithm has a quadratic cost in the degree of the polynomial and a cubic one in the size of the coefficient matrices. Also the eigenvectors can be computed at the same cost. The algorithm is a variant of Francis's implicitly shifted QR algorithm applied on the companion pencil. A preprocessing unitary equivalence is executed on the matrix polynomial to simultaneously bring the leading matrix coefficient and the constant matrix term to triangular form before forming the companion pencil. The resulting structure allows us to stably factor each matrix of the pencil as a product of k matrices of unitary-plus-rank-one form, admitting cheap and numerically reliable storage. The problem is then solved as a product core chasing eigenvalue problem. A backward error analysis is included, implying normwise backward stability after a proper scaling. Computing the eigenvectors via reordering the Schur form is discussed as well. Numerical experiments illustrate stability and efficiency of the proposed methods.
Source: Mathematics of computation (Online) 88 (2019): 313–347. doi:10.1090/mcom/3338
DOI: 10.1090/mcom/3338
DOI: 10.48550/arxiv.1611.10142
Project(s): FUNCOMP via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | Mathematics of Computation Open Access | Lirias Open Access | ISTI Repository Open Access | www.ams.org Open Access | doi.org Restricted | CNR ExploRA
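The record above stores the companion pencil in factored unitary-plus-rank-one form; the structure-oblivious starting point it improves on is simply "polynomial roots = eigenvalues of the companion matrix", as in the small sketch below (O(n³) flops and O(n²) memory for a degree-n scalar polynomial; the example polynomial is arbitrary).

```python
import numpy as np

def companion(p):
    """Frobenius companion matrix of the monic polynomial
    p(x) = x^n + p[0] x^(n-1) + ... + p[n-1]."""
    n = len(p)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # subdiagonal of ones
    C[:, -1] = -np.asarray(p)[::-1]     # last column carries the coefficients
    return C

# roots of x^3 - 6x^2 + 11x - 6 as eigenvalues of its companion matrix
coeffs = [-6.0, 11.0, -6.0]
print(np.sort(np.linalg.eigvals(companion(coeffs)).real))   # ~ [1, 2, 3]
```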


2017 Contribution to book Restricted
Solving large scale quasiseparable Lyapunov equations
Massei S., Palitta D., Robol L.
We consider the problem of efficiently solving Lyapunov and Sylvester equations of medium and large scale, in the case where all the coefficients are quasiseparable, i.e., they have off-diagonal blocks of low rank. This comprises the case with banded coefficients and right-hand side, recently studied in [6, 9]. We show that, under suitable assumptions, this structure is guaranteed to be numerically present in the solution, and we provide explicit estimates of the numerical rank of the off-diagonal blocks. Moreover, we describe an efficient method for approximating the solution, which relies on the technology of hierarchical matrices. A theoretical characterization of the quasiseparable structure in the solution is presented, and numerical experiments confirm the applicability and efficiency of our approach. We provide a MATLAB toolbox that allows easy replication of the experiments and a ready-to-use interface for our solver.
Source: , pp. 1445–1448, 2017

See at: cmmse.usal.es Restricted | CNR ExploRA
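The structural claim in the record above is easy to observe on a toy problem: solve a Lyapunov equation with banded, negative-definite data using a dense solver and inspect the singular values of an off-diagonal block of the solution. The tridiagonal A, the identity right-hand side, the size and the truncation threshold below are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 400
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # banded, negative definite
C = np.eye(n)                                             # banded right-hand side

X = solve_continuous_lyapunov(A, -C)     # dense solve of A X + X A^T = -C

# Numerical rank of an off-diagonal block of the solution: it stays small
# even though the block has n/2 rows and columns.
s = np.linalg.svd(X[: n // 2, n // 2 :], compute_uv=False)
print(np.sum(s > 1e-8 * s[0]))
```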


2019 Journal article Open Access
Quasi-Toeplitz matrix arithmetic: a MATLAB toolbox
Bini D. A., Massei S., Robol L.
A quasi-Toeplitz (QT) matrix is a semi-infinite matrix of the kind $A=T(a)+E$ where $T(a)=(a_{j-i})_{i,j\in\mathbb Z^+}$, $E=(e_{i,j})_{i,j\in\mathbb Z^+}$ is compact and the norms $\lVert a\rVert_{\mathcal W} = \sum_{i\in\mathbb Z}|a_i|$ and $\lVert E \rVert_2$ are finite. These properties allow one to approximate any QT-matrix, within any given precision, by means of a finite number of parameters. QT-matrices, equipped with the norm $\lVert A \rVert_{\mathcal{QT}}=\alpha\lVert a\rVert_{\mathcal{W}} + \lVert E \rVert_2$, for $\alpha = (1+\sqrt 5)/2$, are a Banach algebra with the standard arithmetic operations. We provide an algorithmic description of these operations on the finite parametrization of QT-matrices, and we develop a MATLAB toolbox implementing them in a transparent way. The toolbox is then extended to perform arithmetic operations on matrices of finite size that have a Toeplitz plus low-rank structure. This enables the development of algorithms for Toeplitz and quasi-Toeplitz matrices whose cost does not necessarily increase with the dimension of the problem. Some examples of applications to computing matrix functions and to solving matrix equations are presented, and confirm the effectiveness of the approach.
Source: Numerical algorithms 81 (2019): 741–769. doi:10.1007/s11075-018-0571-6
DOI: 10.1007/s11075-018-0571-6
DOI: 10.48550/arxiv.1801.08158


See at: arXiv.org e-Print Archive Open Access | Numerical Algorithms Open Access | Numerical Algorithms Restricted | doi.org Restricted | link.springer.com Restricted | CNR ExploRA
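The toolbox in the record above is MATLAB software, so rather than guessing its interface, the Python fragment below only illustrates the finite parametrization recalled in the abstract: a QT-matrix is stored through finitely many coefficients of its symbol a and a small dense correction E, its norm is α‖a‖_W + ‖E‖₂, and any leading finite section can be expanded on demand. The symbol coefficients and the correction are made up.

```python
import numpy as np

# Finite parametrization of a QT-matrix A = T(a) + E.
neg = np.array([0.5])               # a_{-1}
pos = np.array([2.0, -0.3, 0.1])    # a_0, a_1, a_2
E = np.array([[0.2, 0.0],
              [0.0, -0.1]])         # compact correction in the top-left corner

alpha = (1 + np.sqrt(5)) / 2
qt_norm = alpha * (np.sum(np.abs(neg)) + np.sum(np.abs(pos))) + np.linalg.norm(E, 2)
print(qt_norm)                      # alpha * ||a||_W + ||E||_2

def finite_section(n):
    """Leading n x n section of A, e.g. for comparison with truncations."""
    T = sum(c * np.eye(n, k=k) for k, c in enumerate(pos))
    T += sum(c * np.eye(n, k=-k) for k, c in enumerate(neg, start=1))
    T[: E.shape[0], : E.shape[1]] += E
    return T

print(finite_section(6))
```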


2019 Contribution to book Open Access
Factoring block Fiedler Companion Matrices
Del Corso G. M., Poloni F., Robol L., Vandebril R.
When Fiedler published his "A note on Companion matrices" in 2003 in Linear Algebra and its Applications, he could not have foreseen the significance of this elegant factorization of a companion matrix into essentially two-by-two Gaussian transformations, which we will name (scalar) elementary Fiedler factors. Since then, researchers extended these results and studied the various resulting linearizations, the stability of Fiedler companion matrices, factorizations of block companion matrices, Fiedler pencils, and even looked at extensions to non-monomial bases. In this chapter, we introduce a new way to factor block Fiedler companion matrices into the product of scalar elementary Fiedler factors. We use this theory to prove that, e.g., a block (Fiedler) companion matrix can always be written as the product of several scalar (Fiedler) companion matrices. We demonstrate that this factorization in terms of elementary Fiedler factors can be used to construct new linearizations. Some linearizations have notable properties, such as low bandwidth, or allow for factoring the coefficient matrices into unitary-plus-low-rank matrices. Moreover, we will provide bounds on the low-rank parts of the resulting unitary-plus-low-rank decomposition. To present these results in an easy-to-understand manner we rely on the flow-graph representation for Fiedler matrices recently proposed by Del Corso and Poloni in Linear Algebra and its Applications, 2017.
Source: edited by Bini, Dario; Di Benedetto, Fabio; Tyrtyshnikov, Eugene; Van Barel, Marc, pp. 129–155. Berlin: Springer International Publishing AG, 2019
DOI: 10.1007/978-3-030-04088-8_7


See at: ISTI Repository Open Access | doi.org Restricted | link-springer-com-443.webvpn.fjmu.edu.cn Restricted | CNR ExploRA


2018 Book Closed Access
Core-chasing algorithms for the eigenvalue problem
Aurentz J. L., Mach T., Robol L., Vandebril R., Watkins D. S.
This monograph is about a class of methods for solving matrix eigenvalue problems. Of course the methods are also useful for computing related objects such as eigenvectors and invariant subspaces. We will introduce new algorithms along the way, but we are also advocating a new way of viewing and implementing existing algorithms, notably Francis's implicitly-shifted QR algorithm [36]. Our first message is that if we want to compute the eigenvalues of a matrix A, it is often advantageous to store A in QR-decomposed form. That is, we write A = QR, where Q is unitary and R is upper triangular, and we store Q and R instead of A. This may appear to be an inefficient approach but, as we shall see, it often is not. Most matrices that arise in applications have some special structures, and these often imply special structures for the factors Q and R. For example, if A is upper Hessenberg, then Q is also upper Hessenberg, and it follows from this that Q can be stored very compactly. As another example, suppose A is unitary. Then Q = A, and R is the identity matrix, so we don't have to store R at all. Every matrix can be transformed to upper Hessenberg form by a unitary similarity transformation. We will study this and related transformations in detail in Chapter 6, but for the early chapters of the book we will simply take the transformation for granted; we will assume that A is already in Hessenberg form. Thus we consider an arbitrary upper Hessenberg matrix in QR-decomposed form and show how to compute its eigenvalues. Our method proceeds by a sequence of similarity transformations that drive the matrix toward upper triangular form. Once the matrix is triangular, the eigenvalues can be read from the main diagonal. In fact our method is just a new implementation of Francis's implicitly-shifted QR algorithm. The storage space requirement is O(n^2) because we must store the upper-triangular matrix R, and the flop count is O(n^3). Once we know how to handle general matrices, we consider how the procedure can be simplified in special cases. The easiest is the unitary case, where R = I. This results in an immediate reduction in the storage requirement to O(n) and a corresponding reduction of the computational cost to O(n^2) flops. A similar case is that of a companion matrix, which is both upper Hessenberg and unitary-plus-rank-one. This results in an R that is unitary-plus-rank-one. Once we have figured out how to store R using only O(n) storage, we again get an algorithm that runs in O(n^2) flops. The unitary case is old [42], but our companion algorithm is new [7]. A structure that arises frequently in eigenvalue problems is symmetry: A = A^T. This very important structure does not translate into any obvious structure for the factors Q and R, so it does not fit into our framework in an obvious way. In Section 4.4 we show that the symmetric problem can be solved using our methodology: we turn it into a unitary problem by a Cayley transform [10]. We did not seriously expect this approach would be faster than all of the many other existing methods for the symmetric eigenvalue problem [73, § 7.2], but we were pleased to find that it is not conspicuously slow. Our solution to the symmetric problem serves as a stepping stone to the solution of the symmetric-plus-rank-one problem, which includes comrade and colleague matrices [14] as important special cases.
If a polynomial p is presented as a linear combination of Chebyshev or Legendre polynomials, for example, the coefficients can be placed into a comrade matrix with eigenvalues equal to the zeros of p. A Cayley transform turns the symmetric-plus-rank-one problem into a unitary-plus-rank-one problem, which we can solve by our fast companion solver. Our O(n^2) algorithm gives us a fast way to compute the zeros of polynomials expressed in terms of these classic orthogonal polynomial bases. We also study the generalized eigenvalue problem, for which the Moler-Stewart QZ algorithm is the appropriate variant of Francis's algorithm. We show how to apply this to a companion pencil using the same approach as for the companion matrix, but utilizing two unitary-plus-rank-one upper triangular matrices instead of one [4]. We then extend this methodology to matrix polynomial eigenvalue problems. A block companion pencil is formed and then factored into a large number of unitary-plus-rank-one factors. The resulting algorithm is advantageous when the degree of the matrix polynomial is large [6]. The final chapter discusses the reduction to Hessenberg form and generalizations of Hessenberg form. We introduce generalizations of Francis's algorithm that can be applied to these generalized Hessenberg forms [64]. This monograph is a summary and report on a research project that the five of us (in various combinations) have been working on for several years now. Included within these covers is material from a large number of recent sources, including [4-10, 12, 54, 62-64, 74]. We have also included some material that has not yet been submitted for publication. This book exists because the senior author decided that it would be worthwhile to present a compact and unified treatment of our findings in a single volume. The actual writing was done by the senior author, who wanted to ensure uniformity of style and viewpoint (and, one could also say, prejudices). But the senior author is only the scribe, who (of course) benefitted from substantial feedback from the other authors. Moreover, the book would never have come into existence without the work of a team over the course of years. We are excited about the outcome of our research, and we are pleased to share it with you. The project is not finished by any means, so a second edition of this book might appear at some time in the future.
Source: Philadelphia: SIAM Publications, 2018

See at: bookstore.siam.org Restricted | CNR ExploRA
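The book's opening idea (keep A in QR-decomposed form, with Q stored as a product of core transformations) is easy to demonstrate: for an upper Hessenberg matrix, Q is a product of n−1 Givens rotations acting on adjacent rows, so it needs only O(n) parameters. The sketch below is a plain Python illustration of that representation, not code taken from the book.

```python
import numpy as np

def hessenberg_qr_cores(A):
    """Factor an upper Hessenberg A as Q R, storing Q as n-1 'core
    transformations' (Givens rotations acting on adjacent rows)."""
    R = A.astype(float)
    cores = []
    n = R.shape[0]
    for i in range(n - 1):
        a, b = R[i, i], R[i + 1, i]
        r = np.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
        G = np.array([[c, s], [-s, c]])
        R[i:i + 2, i:] = G @ R[i:i + 2, i:]   # zero the subdiagonal entry
        cores.append((c, s))                  # O(1) storage per rotation
    return cores, R

rng = np.random.default_rng(0)
n = 6
A = np.triu(rng.standard_normal((n, n)), -1)  # random upper Hessenberg
cores, R = hessenberg_qr_cores(A)

# Rebuild Q from its cores and check A = Q R with R upper triangular.
Q = np.eye(n)
for i, (c, s) in enumerate(cores):
    G = np.eye(n)
    G[i:i + 2, i:i + 2] = [[c, s], [-s, c]]
    Q = Q @ G.T
print(np.allclose(Q @ R, A), np.allclose(np.triu(R), R))
```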


2018 Journal article Open Access
Fast and backward stable computation of roots of polynomials. Part II: backward error analysis; companion matrix and companion pencil
Aurentz J. L., Mach T., Robol L., Vandebril R., Watkins D. S.
This work is a continuation of work by J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins, J. Matrix Anal. Appl., 36 (2015), pp. 942–973. In that paper we introduced a companion QR algorithm that finds the roots of a polynomial by computing the eigenvalues of the companion matrix in O(n^2) time using O(n) memory. We proved that the method is backward stable. Here we introduce, as an alternative, a companion QZ algorithm that solves a generalized eigenvalue problem for a companion pencil. More importantly, we provide an improved backward error analysis that takes advantage of the special structure of the problem. The improvement is also due, in part, to an improvement in the accuracy (in both theory and practice) of the turnover operation, which is the key component of our algorithms. We prove that for the companion QR algorithm, the backward error on the polynomial coefficients varies linearly with the norm of the polynomial's vector of coefficients. Thus, the companion QR algorithm has a smaller backward error than the unstructured QR algorithm (used by MATLAB's roots command, for example), for which the backward error on the polynomial coefficients grows quadratically with the norm of the coefficient vector. The companion QZ algorithm has the same favorable backward error as companion QR, provided that the polynomial coefficients are properly scaled.
Source: SIAM journal on matrix analysis and applications (Print) 39 (2018): 1245–1269. doi:10.1137/17M1152802
DOI: 10.1137/17m1152802
DOI: 10.48550/arxiv.1611.02435


See at: arXiv.org e-Print Archive Open Access | SIAM Journal on Matrix Analysis and Applications Open Access | ISTI Repository Open Access | SIAM Journal on Matrix Analysis and Applications Restricted | doi.org Restricted | epubs.siam.org Restricted | CNR ExploRA


2018 Journal article Open Access
Solving rank-structured Sylvester and Lyapunov equations
Massei S., Palitta D., Robol L.
We consider the problem of efficiently solving Sylvester and Lyapunov equations of medium and large scale, in the case of rank-structured data, i.e., when the coefficient matrices and the right-hand side have low-rank off-diagonal blocks. This comprises problems with banded data, recently studied in [A. Haber and M. Verhaegen, Automatica J. IFAC, 73 (2016), pp. 256-268; D. Palitta and V. Simoncini, Numerical Methods for Large-Scale Lyapunov Equations with Symmetric Banded Data, preprint, arXiv:1711.04187, 2017], which often arise in the discretization of elliptic PDEs. We show that, under suitable assumptions, the quasiseparable structure is guaranteed to be numerically present in the solution, and explicit novel estimates of the numerical rank of the off-diagonal blocks are provided. Efficient solution schemes that rely on the technology of hierarchical matrices are described, and several numerical experiments confirm the applicability and efficiency of the approaches. We develop a MATLAB toolbox that allows easy replication of the experiments and a ready-to-use interface for the solvers. The performances of the different approaches are compared, and we show that the new methods described are efficient on several classes of relevant problems.
Source: SIAM journal on matrix analysis and applications (Print) 39 (2018): 1564–1590. doi:10.1137/17M1157155
DOI: 10.1137/17m1157155


See at: SIAM Journal on Matrix Analysis and Applications Open Access | epubs.siam.org Open Access | Archivio istituzionale della ricerca - Alma Mater Studiorum Università di Bologna Open Access | ISTI Repository Open Access | SIAM Journal on Matrix Analysis and Applications Restricted | Infoscience - EPFL scientific publications Restricted | CNR ExploRA


2019 Journal article Open Access
Low-rank updates and a divide-and-conquer method for linear matrix equations
Kressner D., Massei S., Robol L.
Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as hierarchically off-diagonal low-rank, hierarchically semiseparable, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
Source: SIAM journal on scientific computing (Print) 41 (2019). doi:10.1137/17M1161038
DOI: 10.1137/17m1161038
DOI: 10.48550/arxiv.1712.04349


See at: arXiv.org e-Print Archive Open Access | SIAM Journal on Scientific Computing Open Access | ISTI Repository Open Access | SIAM Journal on Scientific Computing Restricted | doi.org Restricted | epubs.siam.org Restricted | Infoscience - EPFL scientific publications Restricted | CNR ExploRA
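The premise of the record above can be checked directly with a dense solver: after a rank-one change of A, the correction X₁ − X₀ of the Sylvester solution is numerically low-rank, because it solves a Sylvester equation whose right-hand side has rank one. Sizes, spectra, the scaling of the update and the rank threshold below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 300
# Triangular A with spectrum in [1, 2]; diagonal B with spectrum in [5, 6],
# so A X + X B = C is uniquely solvable and well separated.
A = np.diag(np.linspace(1.0, 2.0, n)) + 0.1 * np.triu(rng.standard_normal((n, n)), 1)
B = np.diag(np.linspace(5.0, 6.0, n))
C = rng.standard_normal((n, n))
X0 = solve_sylvester(A, B, C)

# Small rank-one change of A; the correction dX = X1 - X0 satisfies
# (A + u v^T) dX + dX B = -u (v^T X0), a Sylvester equation with rank-1 RHS.
u = rng.standard_normal((n, 1)) / np.sqrt(n)
v = rng.standard_normal((n, 1)) / np.sqrt(n)
X1 = solve_sylvester(A + u @ v.T, B, C)

s = np.linalg.svd(X1 - X0, compute_uv=False)
print(np.sum(s > 1e-10 * s[0]))     # numerical rank of the correction: far below n
```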


2019 Journal article Open Access
Fast solvers for two-dimensional fractional diffusion equations using rank structured matrices
Massei S., Mazza M., Robol L.
We consider the discretization of time-space diffusion equations with fractional derivatives in space and either one-dimensional (1D) or 2D spatial domains. The use of an implicit Euler scheme in time and finite differences or finite elements in space leads to a sequence of dense large scale linear systems describing the behavior of the solution over a time interval. We prove that the coefficient matrices arising in the 1D context are rank structured and can be efficiently represented using hierarchical formats ($\mathcal{H}$-matrices, HODLR). Quantitative estimates for the rank of the off-diagonal blocks of these matrices are presented. We analyze the use of HODLR arithmetic for solving the 1D case and we compare this strategy with existing methods that exploit the Toeplitz-like structure to precondition the GMRES iteration. The numerical tests demonstrate the convenience of the HODLR format when at least a reasonably low number of time steps is needed. Finally, we explain how these properties can be leveraged to design fast solvers for problems with 2D spatial domains that can be reformulated as matrix equations. The experiments show that the approach based on the use of rank-structured arithmetic is particularly effective and outperforms current state of the art techniques.
Source: SIAM journal on scientific computing (Print) 41 (2019): A2627–A2656. doi:10.1137/18M1180803
DOI: 10.1137/18m1180803
DOI: 10.48550/arxiv.1804.05522


See at: arXiv.org e-Print Archive Open Access | SIAM Journal on Scientific Computing Open Access | SIAM Journal on Scientific Computing Restricted | doi.org Restricted | epubs.siam.org | CNR ExploRA


2019 Journal article Open Access
When is a matrix unitary or Hermitian plus low rank?
Del Corso G. M., Poloni F., Robol L., Vandebril R.
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low-rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation to bring a matrix to unitary form. A similar condition holds for Hermitian matrices; the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank-k matrix to a given matrix A, in Frobenius and spectral norm, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low-rank perturbation. Numerical tests prove that this straightforward algorithm is effective.
Source: Numerical linear algebra with applications 26 (2019). doi:10.1002/nla.2266
DOI: 10.1002/nla.2266
DOI: 10.48550/arxiv.1811.05854


See at: arXiv.org e-Print Archive Open Access | Numerical Linear Algebra with Applications Open Access | Lirias Open Access | ISTI Repository Open Access | Numerical Linear Algebra with Applications Restricted | doi.org Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA
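A quick numerical reading of the singular-value criterion recalled in the abstract above: if A = Q + (rank-k term) with Q unitary, then AᴴA differs from the identity by a matrix of rank at most 2k, so at most 2k singular values of A can deviate from 1. The random test below (sizes, rank and threshold are arbitrary) just counts them; the paper turns such counts into necessary and sufficient conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # unitary (real orthogonal here)
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
A = Q + U @ V.T                                    # unitary plus rank k

s = np.linalg.svd(A, compute_uv=False)
print(np.sum(np.abs(s - 1.0) > 1e-8))              # at most 2k (= 6) of them
```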


2019 Journal article Open Access
Nonsingular systems of generalized Sylvester equations: An algorithmic approach
De Teran F., Iannazzo B., Poloni F., Robol L.
We consider the uniqueness of solution (i.e., nonsingularity) of systems of r generalized Sylvester and ⋆-Sylvester equations with $n \times n$ coefficients. After several reductions, we show that it is sufficient to analyze periodic systems having, at most, one generalized ⋆-Sylvester equation. We provide characterizations for the nonsingularity in terms of spectral properties of either matrix pencils or formal matrix products, both constructed from the coefficients of the system. The proposed approach uses the periodic Schur decomposition and leads to a backward stable O(n^3 r) algorithm for computing the (unique) solution.
Source: Numerical linear algebra with applications 26 (2019). doi:10.1002/nla.2261
DOI: 10.1002/nla.2261
DOI: 10.48550/arxiv.1709.03783


See at: arXiv.org e-Print Archive Open Access | Numerical Linear Algebra with Applications Open Access | Recolector de Ciencia Abierta, RECOLECTA Open Access | ISTI Repository Open Access | Numerical Linear Algebra with Applications Restricted | doi.org Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA


2020 Journal article Open Access
hm-toolbox: MATLAB software for HODLR and HSS matrices
Massei S., Robol L., Kressner D.
Matrices with hierarchical low-rank structure, including HODLR and HSS matrices, constitute a versatile tool to develop fast algorithms for addressing large-scale problems. While existing software packages for such matrices often focus on linear systems, their scope of applications is in fact much wider and includes, for example, matrix functions and eigenvalue problems. In this work, we present a new MATLAB toolbox called hm-toolbox, which encompasses this versatility with a broad set of tools for HODLR and HSS matrices, unmatched by existing software. While mostly based on algorithms that can be found in the literature, our toolbox also contains a few new algorithms as well as novel auxiliary functions. Being entirely based on MATLAB, our implementation does not strive for optimal performance. Nevertheless, it maintains the favorable complexity of hierarchical low-rank matrices and offers, at the same time, a convenient way of prototyping and experimenting with algorithms. A number of applications illustrate the use of the hm-toolbox.
Source: SIAM journal on scientific computing (Print) 42 (2020): C43–C68. doi:10.1137/19M1288048
DOI: 10.1137/19m1288048
DOI: 10.48550/arxiv.1909.07909
Project(s): Fast algorithms from low-rank updates via OpenAIRE


See at: arXiv.org e-Print Archive Open Access | SIAM Journal on Scientific Computing Open Access | ISTI Repository Open Access | SIAM Journal on Scientific Computing Restricted | doi.org Restricted | epubs.siam.org Restricted | Infoscience - EPFL scientific publications Restricted | CNR ExploRA
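The hm-toolbox itself is MATLAB software, so instead of guessing its interface, the sketch below shows in Python the storage idea behind the HODLR format it implements: recursive 2×2 blocking, small dense diagonal blocks, truncated-SVD factors for the off-diagonal blocks. The block size, tolerance and test matrix (the inverse of a tridiagonal matrix, which is rank structured) are illustrative choices.

```python
import numpy as np

def hodlr_compress(A, tol=1e-8, min_block=64):
    """Recursively store A in HODLR form: diagonal blocks are split further,
    off-diagonal blocks are replaced by truncated-SVD low-rank factors."""
    n = A.shape[0]
    if n <= min_block:
        return {"dense": A.copy()}
    m = n // 2

    def low_rank(B):
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))
        return U[:, :r] * s[:r], Vt[:r, :]

    return {"A11": hodlr_compress(A[:m, :m], tol, min_block),
            "A22": hodlr_compress(A[m:, m:], tol, min_block),
            "A12": low_rank(A[:m, m:]),
            "A21": low_rank(A[m:, :m])}

n = 512
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal, SPD
H = hodlr_compress(np.linalg.inv(T))
print(H["A12"][0].shape[1])   # numerical rank of the top-right block (1 here)
```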


2020 Journal article Open Access
Rational Krylov and ADI iteration for infinite size quasi-Toeplitz matrix equations
Robol L.
We consider a class of linear matrix equations involving semi-infinite matrices which have a quasi-Toeplitz structure. These equations arise in different settings, mostly connected with PDEs or the study of Markov chains such as random walks on bidimensional lattices. We present the theory justifying the existence of the solution in an appropriate Banach algebra which is computationally treatable, and we propose several methods for computing it. We show how to adapt the ADI iteration to this particular infinite-dimensional setting, and how to construct rational Krylov methods. Convergence theory is discussed, and numerical experiments validate the proposed approaches.
Source: Linear algebra and its applications 604 (2020): 210–235. doi:10.1016/j.laa.2020.06.013
DOI: 10.1016/j.laa.2020.06.013
DOI: 10.48550/arxiv.1907.02753


See at: arXiv.org e-Print Archive Open Access | Linear Algebra and its Applications Open Access | ISTI Repository Open Access | Linear Algebra and its Applications Restricted | doi.org Restricted | www.sciencedirect.com Restricted | CNR ExploRA


2021 Journal article Open Access
Structured backward errors in linearizations
Noferini V., Robol L., Vandebril R.
A standard approach to calculate the roots of a univariate polynomial is to compute the eigenvalues of an associated confederate matrix instead, such as, for instance, the companion or comrade matrix. The eigenvalues of the confederate matrix can be computed by Francis's QR algorithm. Unfortunately, even though the QR algorithm is provably backward stable, mapping the errors back to the original polynomial coefficients can still lead to huge errors. However, the latter statement assumes the use of a non-structure-exploiting QR algorithm. In [J. L. Aurentz et al., Fast and backward stable computation of roots of polynomials, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 942-973] it was shown that a structure-exploiting QR algorithm for companion matrices leads to a structured backward error in the companion matrix. The proof relied on decomposing the error into two parts: a part related to the recurrence coefficients of the basis (a monomial basis in that case) and a part linked to the coefficients of the original polynomial. In this article we prove that the analysis can be extended to other classes of comrade matrices. We first provide an alternative backward stability proof in the monomial basis using structured QR algorithms; our new point of view shows more explicitly how a structured, decoupled error in the confederate matrix gets mapped to the associated polynomial coefficients. This insight reveals which properties have to be preserved by a structure-exploiting QR algorithm to end up with a backward stable algorithm. We will show that the previously formulated companion analysis fits into this framework, and we analyze in more detail Jacobi polynomials (comrade matrices) and Chebyshev polynomials (colleague matrices).
Source: Electronic transactions on numerical analysis 54 (2021): 420–442. doi:10.1553/ETNA_VOL54S420
DOI: 10.1553/etna_vol54s420
DOI: 10.48550/arxiv.1912.04157


See at: Aaltodoc Publication Archive Open Access | arXiv.org e-Print Archive Open Access | Electronic Transactions on Numerical Analysis Open Access | epub.oeaw.ac.at Open Access | ISTI Repository Open Access | doi.org Restricted | CNR ExploRA


2018 Report Open Access
Analyzing a security and reliability model using Krylov methods and matrix functions
Masetti G., Robol L.
It has been recently shown how the computation of performability measures for Markov models can be recast as the evaluation of a bilinear form induced by appropriate matrix functions. In view of these results, we show how to analyze a security model, inspired by a real-world scenario. The model describes a mobile cyber-physical system of communicating nodes which are subject to security attacks. We take advantage of the properties of matrix functions of block matrices, and provide efficient evaluation methods. Moreover, we show how this new formulation can be used to retrieve interesting theoretical results, which can also be rephrased in probabilistic terms.
Source: ISTI Technical reports, 2018

See at: ISTI Repository Open Access | CNR ExploRA
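The report's starting point, that performability measures reduce to bilinear forms uᵀ f(A) v for a matrix function f, can be shown on a tiny invented example: with a CTMC generator Q, the entry uᵀ exp(tQ) v is a transition probability. For the large block matrices considered in the report one would evaluate this with Krylov methods rather than a dense expm; everything below is illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Toy continuous-time Markov chain generator (rows sum to zero).
Q = np.array([[-0.3,  0.2,  0.1],
              [ 0.5, -0.7,  0.2],
              [ 0.0,  0.4, -0.4]])

t = 2.0
u = np.array([1.0, 0.0, 0.0])      # start in state 0
v = np.array([0.0, 0.0, 1.0])      # probability of being in state 2 at time t

print(u @ expm(Q * t) @ v)         # bilinear form u^T exp(tQ) v, dense here
```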