On Mar 1, 2013, at 9:40 PM, Jie Chen <jiechen at mcs.anl.gov> wrote:
> I think the number of deflation vectors should not be large. So the K_c here
> is a small matrix, and whether Y is dense or sparse does not make a big
> difference. In this regard, deflation is not exactly the same as one V-cycle.
> For multigrid, you coarsen a grid of size 1,000,000 to 500,000. But for
> deflation, you reduce 1,000,000 to 10 or 50. Make sense?
This has always been the biggest puzzler for deflation. Say one has a
1-billion-unknown linear system; a simple elliptic problem, so the eigenvalues
are distributed between lambda_min and lambda_max with the ratio
lambda_max/lambda_min pretty big. Now deflate out 50 eigenvalues; so what? How
can deflating out 50 eigenvalues, even if they are the most extreme, really
affect the convergence rate very much? It is 50 out of 1 billion. Seems too
magical to be believable.
Barry
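One standard way to square this: CG convergence is governed by the effective condition number over the part of the spectrum that is left, so what matters is lambda_{k+1}/lambda_max after deflating the k lowest modes, not k versus n. For a 1D-Laplacian-like spectrum, lambda_j grows like j^2 near the bottom, so deflating the 50 lowest modes cuts the effective condition number by roughly 51^2, independent of problem size. A small sketch of this argument, using a hand-rolled CG on a diagonal model operator (the spectrum, sizes, and exact projection below are all illustrative, not anything from PETSc):

```python
import numpy as np

def cg(matvec, b, tol=1e-8, maxiter=10000):
    """Plain conjugate gradients; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for it in range(1, maxiter + 1):
        Ap = matvec(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            return x, it
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxiter

n, k = 500, 50
j = np.arange(1, n + 1)
lam = 4 * np.sin(j * np.pi / (2 * (n + 1))) ** 2   # 1D Laplacian spectrum
rng = np.random.default_rng(0)
b = rng.standard_normal(n)

# Undeflated: A is diagonal with the full spectrum.
_, it_full = cg(lambda x: lam * x, b)

# Deflated: Y spans the k lowest eigenvectors (here the first k coordinate
# directions), so the projected operator P A simply zeroes those components;
# the right-hand side is projected the same way.
b_defl = b.copy()
b_defl[:k] = 0.0
_, it_defl = cg(lambda x: np.where(j <= k, 0.0, lam * x), b_defl)

print(f"CG iterations: {it_full} undeflated, {it_defl} with {k} vectors deflated")
```

For a genuinely flat spectrum the gain from 50 vectors really would be negligible; the payoff hinges on the extreme eigenvalues being isolated or thinly spread at the bottom, which is exactly the elliptic situation.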
>
> Jie
>
>
>
> ----- Original Message -----
> From: "Jed Brown" <jedbrown at mcs.anl.gov>
> To: "For users of the development version of PETSc" <petsc-dev at mcs.anl.gov>
> Sent: Friday, March 1, 2013 3:07:23 PM
> Subject: Re: [petsc-dev] Deflated Krylov solvers for PETSc
>
>
> I think the most common deflation approach is to use estimates of the most
> global eigenvectors. Those are dense, so we can construct the coarse operator
> as
>
>
> K_c = Y^T * (P^{-1} A) * Y
>
>
> without further ado. If subdomain aggregates are used for the deflation
> vectors Y, then we'd really like to exploit their sparsity. We would seem to
> want to apply the preconditioner to a special sort of sparse matrix.
>
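With dense deflation vectors, constructing K_c really is just m preconditioned matvecs plus a small dense product. A hypothetical numpy sketch (Jacobi as a stand-in for P^{-1}, a random dense Y standing in for eigenvector estimates; none of this is PETSc API):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 5                        # fine-grid size, number of deflation vectors

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # stand-in SPD operator
Pinv = lambda x: x / np.diag(A)      # stand-in preconditioner apply (Jacobi)
Y = rng.standard_normal((n, m))      # dense deflation basis (e.g. global eigenvector estimates)

# K_c = Y^T (P^{-1} A) Y, built with m preconditioned matvecs
PAY = np.column_stack([Pinv(A @ Y[:, i]) for i in range(m)])
K_c = Y.T @ PAY                      # small m x m coarse operator

# The coarse correction then reduces to solves with the tiny dense K_c
coarse_solve = lambda r: np.linalg.solve(K_c, Y.T @ r)
```

With sparse aggregate-based Y the same triple product would want a sparsity-aware path instead of m dense matvecs, which is the point of the last paragraph above.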