/* Program usage: mpiexec ex1 [-help] [all PETSc options] */
static char help[] = "Basic vector routines.\n\n";

#include <petscksp.h>   /* also pulls in petscvec.h and petscmat.h */

int main(int argc,char **argv)
{
  PetscInt    N = 16;
  PetscMPIInt MyRank;
  Mat         A;
  PC          Pc;
  KSP         ksp;
  Vec         b,x;

  PetscInitialize(&argc,&argv,NULL,help);
  MPI_Comm_rank(PETSC_COMM_WORLD,&MyRank);
  /* ... create and fill A, b, x here, then solve with ksp/Pc ... */
  PetscFinalize();
  return 0;
}
On Fri, Aug 16, 2013 at 6:13 AM, 丁老师 ztdepya...@163.com wrote:

It is common for iterative solvers to converge differently with different preconditioners (PCs). You are using block Jacobi-ILU, which is a different preconditioner for different numbers of processes.

Matt
Thank you very much! Could you please suggest a robust preconditioner that is independent of the number of processors?

On 2013-08-16 19:54:01, Matthew Knepley knep...@gmail.com wrote:
On Fri, Aug 16, 2013 at 7:32 AM, 丁老师 ztdepya...@163.com wrote:

Thank you very much! Could you please suggest a robust preconditioner that is independent of the number of processors?

For generic problems, they do not exist. I suggest looking in the literature for your specific problem, and
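For experimenting along these lines, PETSc lets you swap preconditioners at runtime without recompiling. A sketch of invocations (assuming the executable is `./ex1` as in the usage comment above; `gamg` availability depends on the PETSc build, and none of these is a guaranteed fix):

```shell
# Default parallel PC: block Jacobi with ILU(0) on each process's block
# (this is the setup whose behavior varies with -n).
mpiexec -n 4 ./ex1 -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu -ksp_monitor

# Point Jacobi: usually weak, but its action does not depend on the
# process count, so iteration counts are reproducible across -n.
mpiexec -n 4 ./ex1 -pc_type jacobi -ksp_monitor

# Algebraic multigrid: often more uniform across process counts for
# elliptic-type problems, though still problem-dependent.
mpiexec -n 4 ./ex1 -pc_type gamg -ksp_monitor
```

`-ksp_monitor` prints the residual at each iteration, which makes it easy to compare convergence across preconditioners and process counts.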