If the matrix to diagonalize is large and sparse, direct methods become impractical, in part because many of them destroy sparsity, so large, dense matrices can arise during the diagonalization procedure. The idea behind iterative methods is to project the \( n \)-dimensional problem onto smaller spaces, so-called Krylov subspaces. Given a matrix \( \mathbf{A} \) and a vector \( \mathbf{v} \), the Krylov sequence of vectors \( \mathbf{v} \), \( \mathbf{A}\mathbf{v} \), \( \mathbf{A}^2\mathbf{v} \), \( \mathbf{A}^3\mathbf{v},\dots \) spans successively larger Krylov subspaces \( \mathcal{K}_m(\mathbf{A},\mathbf{v}) = \operatorname{span}\{\mathbf{v}, \mathbf{A}\mathbf{v}, \dots, \mathbf{A}^{m-1}\mathbf{v}\} \). Building these subspaces requires only matrix-vector products with \( \mathbf{A} \), which is exactly the operation that sparsity makes cheap.
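As a minimal sketch of how such a projection is built in practice, the following NumPy implementation of the Lanczos iteration (an illustration added here, not part of the original notes; the function name `lanczos` and the test matrix are hypothetical) constructs an orthonormal basis \( \mathbf{Q} \) of \( \mathcal{K}_m(\mathbf{A},\mathbf{v}) \) for a real symmetric \( \mathbf{A} \), together with the small tridiagonal projection \( \mathbf{T} = \mathbf{Q}^T\mathbf{A}\mathbf{Q} \) whose eigenvalues (Ritz values) approximate the extremal eigenvalues of \( \mathbf{A} \):

```python
import numpy as np

def lanczos(A, v, m):
    """Build an orthonormal basis Q of K_m(A, v) = span{v, Av, ..., A^(m-1) v}
    and the tridiagonal projection T = Q^T A Q, assuming A is real symmetric.
    Minimal sketch: no re-orthogonalization, no breakdown (beta_j = 0) handling."""
    n = len(v)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)       # diagonal of T
    beta = np.zeros(m - 1)    # off-diagonal of T
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ Q[:, j]                # one matrix-vector product per step
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]     # orthogonalize against current vector
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]  # ...and the previous one
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

# Usage: Ritz values of the 40-dimensional projection approximate
# the extremal eigenvalues of a 400 x 400 symmetric test matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((400, 400))
A = (M + M.T) / 2
Q, T = lanczos(A, rng.standard_normal(400), m=40)
ritz = np.linalg.eigvalsh(T)
```

In exact arithmetic the three-term recurrence keeps all basis vectors orthogonal; in finite precision a production code adds re-orthogonalization. The classical Krylov-subspace methods are commonly classified by the type of problem and by whether the matrix is Hermitian: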
| Matrix | \( \mathbf{A}\mathbf{x}=\mathbf{b} \) | \( \mathbf{A}\mathbf{x}=\lambda\mathbf{x} \) |
|---|---|---|
| \( \mathbf{A}=\mathbf{A}^* \) | Conjugate gradient | Lanczos |
| \( \mathbf{A}\ne \mathbf{A}^* \) | GMRES, etc. | Arnoldi |
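As a concrete illustration of the four entries (my own sketch, not from the original notes), SciPy's `scipy.sparse.linalg` module exposes `cg`, `eigsh` (Lanczos-based, via ARPACK), `gmres`, and `eigs` (Arnoldi-based, via ARPACK); the sparse test matrices below are arbitrary choices for demonstration:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2_000
b = np.ones(n)

# Hermitian case: a sparse 1-D Laplacian (real symmetric, positive definite).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
x, info = spla.cg(A, b)                 # A = A*,  Ax = b:        conjugate gradient
w, v = spla.eigsh(A, k=4, which="LM")   # A = A*,  Ax = lambda x: Lanczos

# Non-Hermitian case: unequal off-diagonals break the symmetry.
B = sp.diags([-1.0, 2.0, -0.5], [-1, 0, 1], shape=(n, n), format="csr")
y, info = spla.gmres(B, b)              # A != A*, Ax = b:        GMRES
w2, v2 = spla.eigs(B, k=4, which="LM")  # A != A*, Ax = lambda x: Arnoldi
```

All four routines access the matrix only through matrix-vector products, which is why they accept sparse matrices (and, more generally, abstract linear operators) rather than requiring dense storage.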