Re: [petsc-users] Error and success codes

2014-08-27 Thread Cedric Doucet
Hello, thank you very much for your answer!
> No, use 0. It's less typing and that is something that will definitely never change.
I understand, but it could be a good thing to give it the same value as EXIT_SUCCESS (from cstdlib), because this value is not always zero. However, that's OK for …

Re: [petsc-users] Error and success codes

2014-08-27 Thread Jed Brown
Cedric Doucet <cedric.dou...@inria.fr> writes:
>> No, use 0. It's less typing and that is something that will definitely never change.
> I understand but it could be a good thing to give it the same value as EXIT_SUCCESS (from cstdlib) because this value is not always zero.
C11 §7.22.4.4: "If the value of status is zero or EXIT_SUCCESS, an implementation-defined form of the status successful termination is returned." …
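The standard's guarantee is easy to see in a minimal C program (an illustration, not code from the thread): returning 0 and returning EXIT_SUCCESS both report successful termination to the host environment, even on an implementation where EXIT_SUCCESS is not defined as 0.

  #include <stdlib.h>

  int main(void)
  {
      /* C11 7.22.4.4: a status of zero or of EXIT_SUCCESS both cause an
         implementation-defined form of "successful termination" to be
         reported to the host environment, so the two returns are
         interchangeable as far as the standard is concerned. */
      return 0;               /* the recommended spelling: less typing */
      /* return EXIT_SUCCESS;    equivalent per the standard */
  }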

Re: [petsc-users] behavior of running same codes on different clusters many times

2014-08-27 Thread Matthew Knepley
On Wed, Aug 27, 2014 at 3:54 PM, Xiangdong <epsco...@gmail.com> wrote:
> Hello everyone, when I run the same PETSc code on different clusters many times, one cluster always produces the same results, while the other one varies in the number of iterations needed for SNES and KSP convergence. If …

Re: [petsc-users] behavior of running same codes on different clusters many times

2014-08-27 Thread Barry Smith
Different compilers can also produce different results because they order the operations differently. Also note that with nonlinear iterative methods such as Newton's method, and linear iterative methods such as Krylov methods, once the residual norms start to get small they may look very different …
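A small C illustration of the reordering point (not from the thread): floating-point addition is not associative, so the same reduction grouped differently by a compiler, an optimization level, or an MPI process layout can yield different results.

  #include <stdio.h>

  int main(void)
  {
      /* The two sums below differ only in grouping, yet print different
         values in single precision: 1 is below the rounding granularity
         of 1e8, so it vanishes in one grouping and survives in the other. */
      float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
      printf("(a + b) + c = %g\n", (a + b) + c);  /* prints 1 */
      printf("a + (b + c) = %g\n", a + (b + c));  /* prints 0 */
      return 0;
  }

This is why SNES/KSP iteration counts can differ across clusters even with identical source code.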

[petsc-users] PETSc errors from KSPSolve() with MUMPS

2014-08-27 Thread Evan Um
Dear PETSc users, I am trying to solve a large problem (about 9,000,000 unknowns) with a large number of processes (about 400 processes and 1 TB of memory). I guess that this is a reasonably large resource for solving this problem, because I was able to solve the same problem using serial MUMPS with 500 GB. Of …
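For context, a minimal sketch of driving MUMPS through KSP in the PETSc of that era (circa 3.5); this is not Evan's actual code, the function name is invented, and the assembled Mat A and Vecs b, x are assumed to come from elsewhere. In later PETSc releases PCFactorSetMatSolverPackage was renamed PCFactorSetMatSolverType.

  #include <petscksp.h>

  /* Solve A x = b with a full LU factorization delegated to MUMPS. */
  PetscErrorCode SolveWithMUMPS(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);   /* direct solve, no Krylov iterations */
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
    ierr = PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);       /* allow -mat_mumps_* runtime options */
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }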

Re: [petsc-users] PETSc errors from KSPSolve() with MUMPS

2014-08-27 Thread Barry Smith
> MPI_ABORT was invoked on rank 11 in communicator MPI_COMM_WORLD
Please send ALL the output. In particular, since rank 11 seems to have choked, we need to see all the messages from [11] to see what it thinks has gone wrong. Barry
On Aug 27, 2014, at 4:27 PM, Evan Um <eva...@gmail.com> …

Re: [petsc-users] PETSc errors from KSPSolve() with MUMPS

2014-08-27 Thread Barry Smith
Ok.
> [11]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end
This message usually happens because either 1) the process ran out of memory, or 2) the process took more time than the batch system allowed. My guess is 1. I don’t know …
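If cause 1 (memory) is right, one knob worth knowing about is MUMPS's ICNTL(14), the percentage increase applied to the analysis phase's working-space estimate, which PETSc exposes as a runtime option. The sketch below is illustrative, not advice from the thread; the value 50 is an assumption, and it uses the two-argument PetscOptionsSetValue of 2014-era PETSc (later versions take a PetscOptions object as a first argument).

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    /* Raise MUMPS's working-space margin from its default of 20% to 50%
       before any solver objects read the options database.
       Command-line equivalent: -mat_mumps_icntl_14 50 */
    ierr = PetscOptionsSetValue("-mat_mumps_icntl_14","50");CHKERRQ(ierr);
    /* ... create and run the solver here ... */
    ierr = PetscFinalize();
    return 0;
  }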