PETSc error code 1 does not seem to indicate an expected problem; see http://www.mcs.anl.gov/petsc/petsc-dev/include/petscerror.h.html. It seems to be an error not handled by PETSc.
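In case it is useful: if you have petsc4py installed, it raises PETSc.Error with the numeric code attached, which is a quick way to see which entry in petscerror.h a given failure corresponds to. A minimal sketch that deliberately triggers an error (the out-of-range read here is just an illustration, not related to your failure):

  from petsc4py import PETSc

  v = PETSc.Vec().createSeq(3)   # small sequential vector
  try:
      v.getValue(10)             # index out of range, so PETSc objects
  except PETSc.Error as e:
      # e.ierr is the numeric code listed in petscerror.h
      print("PETSc error code:", e.ierr)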
You could provide us with your code, or try investigating the problem with a debugger:

  $ mpirun -n 3 xterm -e gdb -ex 'set breakpoint pending on' -ex 'break PetscError' -ex 'break dolfin::dolfin_error' -ex r -args python your_script.py
  ...
  Break point hit...
  (gdb) bt

and post the backtrace here.

Jan

On Mon, 12 Oct 2015 15:16:48 +0300 Giorgos Grekas <[email protected]> wrote:
> Hello,
> I am using NCG from the TAO solver, and I wanted to test my code's
> validity on a PC with 4 processors before running it on a cluster.
> When I run my code with 2 processes (mpirun -np 2) everything seems
> to work fine, but with 3 or more processes I get the following error:
>
> *** Error: Unable to successfully call PETSc function 'VecAssemblyBegin'.
> *** Reason: PETSc error code is: 1.
> *** Where: This error was encountered inside
> /home/ggrekas/.hashdist/tmp/dolfin-wphma2jn5fuw/dolfin/la/PETScVector.cpp.
> *** Process: 3
> ***
> *** DOLFIN version: 1.7.0dev
> *** Git changeset: 3fbd47ec249a3e4bd9d055f8a01b28287c5bcf6a
> ***
> -------------------------------------------------------------------------
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = EXIT CODE: 134
> = CLEANING UP REMAINING PROCESSES
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
> YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6)
> This typically refers to a problem with your application.
> Please see the FAQ page for debugging suggestions
>
> So, is this an issue that I should report to the TAO team?
>
> Thank you in advance.
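For what it is worth, a common cause of VecAssemblyBegin failures that appear only at certain process counts is that the assembly calls are not made collectively: VecAssemblyBegin/VecAssemblyEnd must be reached by every rank in the communicator, and setting off-process values requires a matching assembly (in DOLFIN, vector.apply("insert")) on all ranks. Without seeing your code this is only a guess. A minimal petsc4py sketch of the collective pattern (petsc4py assumed installed; run it with mpirun -np 3):

  from petsc4py import PETSc

  comm = PETSc.COMM_WORLD
  v = PETSc.Vec().create(comm)
  v.setSizes(12)                 # global size; PETSc decides the local split
  v.setFromOptions()

  # Set only the locally owned entries...
  istart, iend = v.getOwnershipRange()
  for i in range(istart, iend):
      v.setValue(i, float(comm.getRank()))

  # ...and assemble collectively: every rank must reach these calls,
  # so never hide them inside an `if rank == 0:` branch.
  v.assemblyBegin()
  v.assemblyEnd()

If every rank reaches the assembly calls and you still get the error, the backtrace from the gdb session above should show where inside PETScVector.cpp things go wrong.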
