It is pasted below.
> On Jun 25, 2023, at 1:40 PM, Edoardo alinovi <[email protected]> wrote:
>
> Hi Barry,
>
> Thanks for pointing me to that discussion! Unfortunately I am getting
> issues with this link:
> http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20190302/b0c1ad29/attachment.mht
> Any chance it is a dead one?
>
> Cheers
Ok, I have implemented the algorithm using PETSc's PCCOMPOSITE and PCGALERKIN
and get iterations identical to those of the code you sent. PCFIELDSPLIT is not
intended for this type of solver composition. Here is the algorithm written in
"two-step" form:
    x_1/2 = P KSPSolve(R A P, using BoomerAMG) R b
    x     = x_1/2 + PCApply(A, using Hypre PILUT preconditioner) (b - A x_1/2)
PCCOMPOSITE with type multiplicative handles the two steps, and PCGALERKIN
handles the P KSPSolve(R A P) R business.
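In outline, the composition is set up as follows (a condensed sketch with error
checking omitted; ksp, R, P, and ComputeSubmatrix are as in the attached
testmain2.c, which has the complete, runnable version):

  PC pc,t1;
  KSPGetPC(ksp,&pc);
  PCSetType(pc,PCCOMPOSITE);
  PCCompositeSetType(pc,PC_COMPOSITE_MULTIPLICATIVE); /* stage 2 acts on the residual of stage 1 */
  PCCompositeAddPC(pc,PCGALERKIN);                    /* stage 1: P KSPSolve(R A P, BoomerAMG) R */
  PCCompositeAddPC(pc,PCHYPRE);                       /* stage 2: Hypre PILUT applied to A       */
  PCCompositeGetPC(pc,0,&t1);
  PCGalerkinSetRestriction(t1,R);                     /* R and P are MATSHELL operators          */
  PCGalerkinSetInterpolation(t1,P);
  PCGalerkinSetComputeSubmatrix(t1,ComputeSubmatrix,NULL); /* builds and updates Ap = R A P      */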
You will need to use the master version of PETSc because I had to add a feature
to PCGALERKIN to allow the solver to be easily used for an A whose values change
between solves.

Here is the output from -ksp_view:
KSP Object: 1 MPI processes
  type: fgmres
    GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: composite
  Composite PC type - MULTIPLICATIVE
  PCs on composite preconditioner follow
  ---------------------------------
  PC Object: (sub_0_) 1 MPI processes
    type: galerkin
    Galerkin PC
    KSP on Galerkin follow
    ---------------------------------
    KSP Object: (sub_0_galerkin_) 1 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (sub_0_galerkin_) 1 MPI processes
      type: hypre
        HYPRE BoomerAMG preconditioning
        HYPRE BoomerAMG: Cycle type V
        HYPRE BoomerAMG: Maximum number of levels 25
        HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
        HYPRE BoomerAMG: Convergence tolerance PER hypre call 0.
        HYPRE BoomerAMG: Threshold for strong coupling 0.25
        HYPRE BoomerAMG: Interpolation truncation factor 0.
        HYPRE BoomerAMG: Interpolation: max elements per row 0
        HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
        HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
        HYPRE BoomerAMG: Maximum row sums 0.9
        HYPRE BoomerAMG: Sweeps down 1
        HYPRE BoomerAMG: Sweeps up 1
        HYPRE BoomerAMG: Sweeps on coarse 1
        HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi
        HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi
        HYPRE BoomerAMG: Relax on coarse Gaussian-elimination
        HYPRE BoomerAMG: Relax weight (all) 1.
        HYPRE BoomerAMG: Outer relax weight (all) 1.
        HYPRE BoomerAMG: Using CF-relaxation
        HYPRE BoomerAMG: Not using more complex smoothers.
        HYPRE BoomerAMG: Measure type local
        HYPRE BoomerAMG: Coarsen type Falgout
        HYPRE BoomerAMG: Interpolation type classical
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=50, cols=50
        total: nonzeros=244, allocated nonzeros=244
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
      type: seqaij
      rows=100, cols=100, bs=2
      total: nonzeros=976, allocated nonzeros=100000
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 50 nodes, limit used is 5
  PC Object: (sub_1_) 1 MPI processes
    type: hypre
      HYPRE Pilut preconditioning
      HYPRE Pilut: maximum number of iterations 1
      HYPRE Pilut: drop tolerance 0.1
      HYPRE Pilut: default factor row size
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
      type: seqaij
      rows=100, cols=100, bs=2
      total: nonzeros=976, allocated nonzeros=100000
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 50 nodes, limit used is 5
Note that one can change the solvers on the two "stages", and all their options
such as tolerances etc., using the options database with the proper prefixes,
which you can find in the output above. For example, to use PETSc's ILU instead
of hypre's, run with -sub_1_pc_type ilu; to use a direct solver instead of
BoomerAMG, run with -sub_0_galerkin_pc_type lu.
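For instance, assuming the code is built as in the comment at the top of the
attachment, a possible run line is (the option values here are only
illustrative; any prefixed option visible in the -ksp_view output above can be
set the same way):

  mpirun -np 2 ./testmain2 -sub_1_pc_type ilu -sub_0_galerkin_ksp_rtol 1e-6 -ksp_monitor_true_residual -ksp_view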
I've attached a new version of testmain2.c that runs your solver and also my
version.
Given the "unique" nature of your R = [I I ... I] and P = [0 I 0 0 ... 0], I am
not sure that it makes sense to include this preconditioner directly in PETSc
as a new PC type; so if you are serious about using it you can take what I am
sending back and modify it for your needs. As a numerical analyst who works on
linear solvers, I am also not convinced that this is likely to be a
particularly good preconditioner.
Let me know if you have any questions,
Barry
[Attachment: testmain2.c]
// make && mpirun -np 2 ./testmain2 -ksp_error_if_not_converged
#include "MCPR.h"
/*
Computes the submatrix associated with the Galerkin subproblem Ap = R A P
*/
PetscErrorCode ComputeSubmatrix(PC pc,Mat A,Mat Ap,Mat *cAp,void *ctx)
{
  PetscErrorCode ierr;
  PetscInt       b,Am,An,start,end,first = 0,offset = 1;
  IS             is,js;
  Mat            Aij;

  PetscFunctionBegin;
  ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr);
  ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&start,&end);CHKERRQ(ierr);
  ierr = ISCreateStride(PetscObjectComm((PetscObject)A),Am/b,start+offset,b,&js);CHKERRQ(ierr);
  if (!Ap) {
    ierr = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+0,b,&is);CHKERRQ(ierr);
    ierr = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Ap);CHKERRQ(ierr);
    ierr = ISDestroy(&is);CHKERRQ(ierr);
    *cAp  = Ap;
    first = 1;
  } else {
    ierr = MatZeroEntries(Ap);CHKERRQ(ierr);
  }
  for (PetscInt k=first; k<b; ++k) {
    ierr = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+k,b,&is);CHKERRQ(ierr);
    ierr = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Aij);CHKERRQ(ierr);
    ierr = MatAXPY(Ap,1.0,Aij,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = MatDestroy(&Aij);CHKERRQ(ierr);
    ierr = ISDestroy(&is);CHKERRQ(ierr);
  }
  ierr = ISDestroy(&js);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
/*
   Apply the restriction operator R for the Galerkin problem
*/
PetscErrorCode ApplyR(Mat A,Vec x,Vec y)
{
  PetscErrorCode ierr;
  PetscInt       b;

  PetscFunctionBegin;
  ierr = VecGetBlockSize(x,&b);CHKERRQ(ierr);
  ierr = VecStrideGather(x,0,y,INSERT_VALUES);CHKERRQ(ierr);
  for (PetscInt k=1; k<b; ++k) {
    ierr = VecStrideGather(x,k,y,ADD_VALUES);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}
/*
   Apply the interpolation operator P for the Galerkin problem
*/
PetscErrorCode ApplyP(Mat A,Vec x,Vec y)
{
  PetscErrorCode ierr;
  PetscInt       offset = 1;

  PetscFunctionBegin;
  ierr = VecStrideScatter(x,offset,y,INSERT_VALUES);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
int main(int argc,char *argv[])
{
  PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);
  int rank,size;
  MPI_Comm_rank(MPI_COMM_WORLD,&rank);  /* get current process id  */
  MPI_Comm_size(MPI_COMM_WORLD,&size);  /* get number of processes */
  MPI_Comm C = PETSC_COMM_WORLD;
  PetscRandom rnd;
  PetscRandomCreate(C,&rnd);
  PetscRandomSetInterval(rnd,0.0,1.0);
  PetscRandomSetFromOptions(rnd);
  int M = 100;
  int N = size*M;
  Mat A    = getSampleMatrix(M);
  Mat T1   = getT1(A,1);
  Mat T2   = getT2(A,0.1);
  Mat MCPR = getMCPR(A,T1,T2);
  Vec u,v,w,z;
  VecCreate(C,&u);
  VecSetBlockSize(u,2);
  VecSetSizes(u,M,N);
  VecSetFromOptions(u);
  VecDuplicate(u,&v);
  VecDuplicate(u,&w);
  VecDuplicate(u,&z);
  VecSetRandom(u,rnd);
  Mat mats[] = {T2,MCPR};
  const char *names[] = {"T2","MCPR"};
  for (int k=1; k<2; ++k) {  /* runs only the MCPR variant; start at k=0 to also test T2 alone */
    KSP solver;
    KSPCreate(C,&solver);
    KSPSetOperators(solver,A,A);
    KSPSetType(solver,KSPFGMRES);
    KSPGMRESSetRestart(solver,N);
    PC pc;
    KSPGetPC(solver,&pc);
    putMatrixInPC(pc,mats[k]);
    KSPSetFromOptions(solver);
    KSPSolve(solver,u,v);
    KSPConvergedReason reason;
    int its;
    KSPGetConvergedReason(solver,&reason);
    KSPGetIterationNumber(solver,&its);
    PetscPrintf(PETSC_COMM_WORLD,"testmain2: %s converged reason %d; iterations %d.\n",names[k],reason,its);
    KSPView(solver,PETSC_VIEWER_STDOUT_WORLD);
    KSPDestroy(&solver);
  }
  /*
     The preconditioner is
         x_1/2 = P KSPSolve(R A P, using BoomerAMG) R b
         x     = x_1/2 + PCApply(A, using Hypre PILUT preconditioner) (b - A x_1/2)
     where the first line is implemented using PCGALERKIN with BoomerAMG on the
     subproblem, so it can be written as
         x_1/2 = PCApply(A, using PCGALERKIN with KSPSolve(R A P, using BoomerAMG) as the inner solver)
         x     = x_1/2 + PCApply(A, using Hypre PILUT preconditioner) (b - A x_1/2)
     which is implemented using the PETSc PCCOMPOSITE preconditioner of type
     multiplicative, so it can be written as
         x = PCApply(A, using PCCOMPOSITE with (PCGALERKIN with KSPSolve(R A P, using BoomerAMG)
             as the inner solver) as the first solver and PCApply(A, using Hypre PILUT
             preconditioner) as the second solver)
  */
  {
    PetscErrorCode ierr;
    KSP ksp;
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetType(ksp,KSPFGMRES);CHKERRQ(ierr);
    ierr = KSPGMRESSetRestart(ksp,100);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    PC pc;
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCCOMPOSITE);CHKERRQ(ierr);
    ierr = PCCompositeSetType(pc,PC_COMPOSITE_MULTIPLICATIVE);CHKERRQ(ierr);

    /* Create the first subproblem solver: Hypre BoomerAMG on Ap */
    PC t1;
    ierr = PCCompositeAddPC(pc,PCGALERKIN);CHKERRQ(ierr);
    ierr = PCCompositeGetPC(pc,0,&t1);CHKERRQ(ierr);
    KSP Ap_ksp;
    ierr = PCGalerkinGetKSP(t1,&Ap_ksp);CHKERRQ(ierr);
    ierr = KSPSetType(Ap_ksp,KSPRICHARDSON);CHKERRQ(ierr);
    PC Ap_pc;
    ierr = KSPGetPC(Ap_ksp,&Ap_pc);CHKERRQ(ierr);
    ierr = PCSetType(Ap_pc,PCHYPRE);CHKERRQ(ierr);
    /* this tells the PC how to compute the reduced matrix */
    ierr = PCGalerkinSetComputeSubmatrix(t1,ComputeSubmatrix,NULL);CHKERRQ(ierr);
    PetscInt b,Am,An,start,end;
    ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr);
    ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A,&start,&end);CHKERRQ(ierr);

    /* create the R operator */
    Mat R;
    ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am/b,An,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&R);CHKERRQ(ierr);
    ierr = MatShellSetOperation(R,MATOP_MULT,(void (*)(void))ApplyR);CHKERRQ(ierr);
    ierr = PCGalerkinSetRestriction(t1,R);CHKERRQ(ierr);

    /* create the P operator */
    Mat P;
    ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am,An/b,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&P);CHKERRQ(ierr);
    ierr = MatShellSetOperation(P,MATOP_MULT,(void (*)(void))ApplyP);CHKERRQ(ierr);
    ierr = PCGalerkinSetInterpolation(t1,P);CHKERRQ(ierr);

    /* Create the second subproblem solver: block ILU via Hypre PILUT */
    PC t2;
    ierr = PCCompositeAddPC(pc,PCHYPRE);CHKERRQ(ierr);
    ierr = PCCompositeGetPC(pc,1,&t2);CHKERRQ(ierr);
    ierr = PCSetType(t2,PCHYPRE);CHKERRQ(ierr);
    ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_maxiter","1");CHKERRQ(ierr);
    char s[100];
    sprintf(s,"%e",.1);
    ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_tol",s);CHKERRQ(ierr);
    ierr = PCHYPRESetType(t2,"pilut");CHKERRQ(ierr);

    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = VecZeroEntries(v);CHKERRQ(ierr);
    ierr = KSPSolve(ksp,u,v);CHKERRQ(ierr);
    ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  }
  PetscFinalize();
  return 0;
}
> On Jan 23, 2017, at 8:25 AM, Sébastien Loisel <[email protected]> wrote:
>
> Hi Barry,
>
> Thanks for your email. Thanks for pointing out my PETSc silliness. I think what happened is I played with matrices as well as with preconditioners so I initially implemented it as MATSHELL and at the end wrapped it in a PCSHELL. :)
>
> On Mon, Jan 23, 2017 at 2:26 AM, Barry Smith <[email protected]> wrote:
> I've started to look at this; it is a little weird using MATSHELL within your PCSHELL, seems unnecessary. Not necessarily wrong, just strange. I'll continue to try to understand it.
>
> Barry
>
>
>
> > On Jan 20, 2017, at 5:48 PM, Sébastien Loisel <[email protected]> wrote:
> >
> > OK I'm attaching the prototype.
> >
> > It won't be 100% plug-and-play for you because in principle T1 and T2 are built on top of "sub-preconditioners" (AMG for T1 and BILU for T2), and judging from the PETSc architecture elsewhere I must assume you're going to want to expose those in a rational way. At present, I've hard-coded some sub-preconditioners, and in particular for T2 I had to resort to PILUT because I didn't have a BILU(0) handy.
> >
> > Also, I broke the PETSc law and my functions don't return integers, so I also assume you're going to want to tweak that... Sorry!
> >
> > On Fri, Jan 20, 2017 at 11:29 PM, Barry Smith <[email protected]> wrote:
> >
> > Sure, email the PCSHELL version; best case scenario I only need to change out the PCSHELL and it takes me 5 minutes :-)
> >
> >
> > > On Jan 20, 2017, at 5:07 PM, Sébastien Loisel <[email protected]> wrote:
> > >
> > > Hi all,
> > >
> > > Thanks for your emails. I'm willing to help in whatever way. We have a "PCSHELL" prototype we can provide on request, although PETSc experts can no doubt do a better job than I did.
> > >
> > > Thanks,
> > >
> > > On Fri, Jan 20, 2017 at 9:50 PM, Robert Annewandter <[email protected]> wrote:
> > > Indeed that would be very helpful! We're very happy to help test things out, provide feedback etc.
> > >
> > > Many thanks!
> > > Robert
> > >
> > >
> > >
> > > On 20/01/17 21:22, Hammond, Glenn E wrote:
> > >> That sounds great. I do know that there is also much interest state-side in CPR preconditioning within PFLOTRAN. I have a Sandia colleague in Carlsbad, NM who has been asking about it. I am sure that Sebastien and/or Robert will help out in any way possible.
> > >>
> > >> Thanks,
> > >>
> > >> Glenn
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Barry Smith [mailto:[email protected]]
> > >>> Sent: Friday, January 20, 2017 12:58 PM
> > >>> To: Hammond, Glenn E <[email protected]>
> > >>> Cc: Sébastien Loisel <[email protected]>; Robert Annewandter <[email protected]>; Jed Brown <[email protected]>; Paolo Orsini <[email protected]>; Matthew Knepley <[email protected]>
> > >>> Subject: Re: [EXTERNAL] CPR preconditioning
> > >>>
> > >>>
> > >>> Glenn,
> > >>>
> > >>> Sorry about the delay in processing this, too much going on ...
> > >>>
> > >>> I think the best thing is for us (the PETSc developers) to implement a CPR preconditioner "directly" as its own PC and then have you guys try it out. I am planning to do this.
> > >>>
> > >>> Barry
> > >>>
> > >>>
> > >>>> On Jan 20, 2017, at 2:50 PM, Hammond, Glenn E <[email protected]> wrote:
> > >>>>
> > >>>> Barry, Jed or Matt,
> > >>>>
> > >>>> Do you have any suggestions for how to address the limitations of PetscFieldSplit() discussed below? Will they need to manipulate the matrices manually?
> > >>>>
> > >>>> Thanks,
> > >>>>
> > >>>> Glenn
> > >>>> From: Sébastien Loisel [mailto:[email protected]]
> > >>>> Sent: Wednesday, January 11, 2017 3:33 AM
> > >>>> To: Barry Smith <[email protected]>; Robert Annewandter <[email protected]>; Hammond, Glenn E <[email protected]>; Jed Brown <[email protected]>; Paolo Orsini <[email protected]>
> > >>>> Subject: [EXTERNAL] CPR preconditioning
> > >>>>
> > >>>> Dear Friends,
> > >>>>
> > >>>> Paolo has asked me to write this email to clarify issues surrounding the CPR preconditioner that is widely used in multiphase flow. I know Barry from a long time ago but we only met once when I was a PhD student so I would be shocked if he remembered me. :)
> > >>>>
> > >>>> I'm a math assistant professor and one of my areas of specialization is linear algebra and preconditioning.
> > >>>>
> > >>>> The main issue that is useful to clarify is the following. There was a proposal to use PETSc's PCFIELDSPLIT in conjunction with PCCOMPOSITE in order to implement CPR preconditioning. Although this is morally the right idea, this seems to be currently impossible because PCFIELDSPLIT lacks the capabilities it would need to implement the T1 preconditioner. This is due to limitations in the API exposed by PCFIELDSPLIT (and no doubt limitations in the underlying implementation).
> > >>>>
> > >>>> In order to be as clear as possible, please allow me to describe unambiguously the first of the two parts of the CPR preconditioner using some MATLAB snippets. Let I denote the N by N identity, and Z the N by N zero matrix. Put WT = [I I I] and C = [Z;Z;I]. The pressure matrix is Ap = WT*A*C, and the T1 preconditioner is C*inv(Ap)*WT, where inv(Ap) is to be implemented with AMG.
> > >>>>
> > >>>> This T1 preconditioner is the one that would have to be implemented by PCFIELDSPLIT. The following limitations in PCFIELDSPLIT prevent one from implementing T1:
> > >>>>
> > >>>> • One must select the "pressure" by specifying an IS object to PCFieldSplitSetIS(). However, since WT = [I I I], the pressure is obtained by summing over the three fields. As far as I can tell, an IS object does not allow one to sum over several entries to obtain the pressure field.
> > >>>> • The pressure matrix is Ap = WT*A*C; note that the matrix WT on the left is different from the matrix C on the right. However, PCFIELDSPLIT has no notion of a "left-IS" vs "right-IS"; morally, only diagonal blocks of A can be used by PCFIELDSPLIT.
> > >>>> • PCFIELDSPLIT offers a range of hard-coded block structures for the final preconditioner, but the choice T1 = C*inv(Ap)*WT is not one of these choices. Indeed, this first-stage CPR preconditioner T1 is *singular*, but there is no obvious way for PCFIELDSPLIT to produce a singular preconditioner.
> > >>>>
> > >>>> Note that the documentation for PCFIELDSPLIT says that "The Constrained Pressure Preconditioner (CPR) does not appear to be currently implementable directly with PCFIELDSPLIT". Unless there are very significant capabilities that are not documented, I don't see how CPR can be implemented with PCFIELDSPLIT.
> > >>>>
> > >>>> Elsewhere, someone proposed putting the two preconditioners T1 and T2 on each side of A, e.g. T1*A*T2. That is a very bad idea because T1 is singular and hence T1*A*T2 is also singular. The correct CPR preconditioner is nonsingular despite the fact that T1 is singular, and MCPR is given by the formula MCPR = T2*(I-A*T1)+T1, where T2 = BILU(0) of A. (There is a proof, due to Felix Kwok, that BILU(0) works even though ILU(0) craps out on vanishing diagonal entries.)
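> > >>>>
> > >>>> Spelled out (this is just the MCPR formula above, expanded; same notation), applying MCPR to a vector b is the two-step procedure
> > >>>>
> > >>>>     x_1/2 = T1*b
> > >>>>     x     = x_1/2 + T2*(b - A*x_1/2)
> > >>>>
> > >>>> so the second stage T2 only has to reduce the residual left over by the first stage.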
> > >>>>
> > >>>> I'm also attaching a sample MATLAB code that runs the CPR preconditioner on some fabricated random matrix A. I emphasize that this is not a realistic matrix, but it still illustrates that the algorithm works, and that MCPR is better than T2 alone. Note that T1 cannot be used alone since it is singular. Further gains are expected when Robert's realistic code with correct physics comes online.
> > >>>>
> > >>>> <image003.jpg>
> > >>>> I hope this clarifies some things.
> > >>>>
> > >>>> Sébastien Loisel
> > >>>> Assistant Professor
> > >>>> Department of Mathematics, Heriot-Watt University
> > >>>> Riccarton, EH14 4AS, United Kingdom
> > >>>> web: http://www.ma.hw.ac.uk/~loisel/
> > >>>> email: S.Loisel at hw.ac.uk
> > >>>> phone: +44 131 451 3234
> > >>>> fax: +44 131 451 3249
> > >
> >
> > <petsc.zip>