Sometimes the convergence rate estimated from the ratio of lambda_max to 
lambda_min is rather pessimistic; the distribution of the eigenvalues plays a 
critical role in convergence. The common argument for the effectiveness of 
eigenvector deflation is that "a few extreme eigenvalues (usually the smallest 
in magnitude) hamper the convergence, so deflating them helps." That does not 
sound clear enough to me either. But I did have an experience where, after a 
preconditioner was applied, the spectrum of the matrix was clustered except for 
a few extreme eigenvalues, and deflating them significantly improved the 
convergence of CG. So I think the real magic is not about "50 out of 1 
billion", but rather about how the spectrum of the matrix changes when you 
combine deflation with preconditioning.
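This effect is easy to reproduce in a toy experiment. The sketch below is a minimal, hypothetical illustration (hand-rolled CG in plain numpy, a diagonal test matrix with a made-up spectrum: a cluster in [1, 2] plus five tiny outliers); it deflates the five smallest eigenvectors with the usual projector P = I - A W (W^T A W)^{-1} W^T and compares iteration counts with and without deflation:

```python
import numpy as np

def cg(A, b, tol=1e-8, maxit=5000):
    """Plain conjugate gradient; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x, k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

# Made-up spectrum: n-5 eigenvalues clustered in [1, 2], plus
# 5 tiny outliers that dominate the condition number.
n = 500
rng = np.random.default_rng(0)
eigs = np.concatenate([np.array([1e-4, 2e-4, 3e-4, 4e-4, 5e-4]),
                       rng.uniform(1.0, 2.0, n - 5)])
A = np.diag(eigs)            # diagonal SPD matrix; eigenvectors = unit vectors
b = rng.standard_normal(n)

_, it_plain = cg(A, b)

# Deflate the 5 smallest eigenvectors: W spans them; the projector
# P v = v - A W (W^T A W)^{-1} W^T v removes their contribution.
W = np.eye(n)[:, :5]
AW = A @ W
coarse = np.linalg.inv(W.T @ AW)          # 5x5 coarse operator
def P(v):
    return v - AW @ (coarse @ (W.T @ v))

x_defl, it_defl = cg(A, P(b))             # CG on the deflated residual
x_defl += W @ (coarse @ (W.T @ b))        # add back the coarse-space part

print(it_plain, it_defl)
```

With the outliers projected out, CG effectively sees only the [1, 2] cluster (condition number 2) and converges in a handful of iterations, while plain CG has to resolve the tiny eigenvalues as well.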

Jie




----- Original Message -----
From: "Barry Smith" <[email protected]>
To: "For users of the development version of PETSc" <petsc-dev at mcs.anl.gov>
Sent: Friday, March 1, 2013 9:52:25 PM
Subject: Re: [petsc-dev] Deflated Krylov solvers for PETSc


   This has always been the biggest puzzler for deflation. Say one has a 
1-billion-unknown linear system, a simple elliptic problem, so the eigenvalues 
are distributed between lambda_min and lambda_max with the ratio of lambda_max 
to lambda_min pretty big. Now deflate out 50 eigenvalues; so what? How can 
deflating out 50 eigenvalues, even if they are the most extreme, really affect 
the convergence rate very much? It is 50 out of 1 billion. Seems too magical to 
be believable?

   Barry
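The 50-out-of-1-billion arithmetic can be checked against the classical CG error bound, 2 * ((sqrt(kappa) - 1)/(sqrt(kappa) + 1))^m <= eps, which depends on kappa = lambda_max/lambda_min and not on the number of unknowns: deflating the 50 smallest eigenvalues replaces lambda_min by lambda_51 in the bound. The numbers below are made up purely for illustration:

```python
import math

def cg_iter_bound(lmax, lmin, eps=1e-8):
    """Smallest m with 2*((sqrt(k)-1)/(sqrt(k)+1))**m <= eps,
    where k = lmax/lmin (the classical CG error bound)."""
    kappa = lmax / lmin
    rho = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)
    return math.ceil(math.log(eps / 2) / math.log(rho))

# Hypothetical spectrum: bulk of the eigenvalues in [1e-2, 1],
# with 50 stray eigenvalues reaching down to 1e-8.
print(cg_iter_bound(1.0, 1e-8))   # bound using the full spectrum
print(cg_iter_bound(1.0, 1e-2))   # bound after deflating the 50 outliers
```

If the 50 deflated eigenvalues really are isolated far below the rest of the spectrum, the bound collapses by orders of magnitude regardless of n; if the spectrum fills the interval densely, lambda_51 is barely larger than lambda_min and deflation buys almost nothing, which is consistent with the skepticism above.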
