Hi Jed,
The full error message is attached below. The SNES nonlinear solve terminated
with the message:
"Nonlinear solve did not converge due to DIVERGED_FUNCTION_DOMAIN
iterations 2"
After that, I executed the following code, in which close() is called.
However, execution never reached Point 1 (the message was never printed).
if (SNES_converged_reason < 0) {
    if (processor_id() == 0) { cout << "Point 1" << endl; }
    fflush(NULL);
    MPI_Barrier(PETSC_COMM_WORLD);
    system.solution->close();
}
The full error message:
Nonlinear solve did not converge due to DIVERGED_FUNCTION_DOMAIN
iterations 2
[13]PETSC ERROR: VecAssemblyBegin_MPI() line 1012 in
/data1/trayanova/petsc/3.4.3/src/vec/vec/impls/mpi/pdvec.c
[13]PETSC ERROR: VecAssemblyBegin() line 220 in
/data1/trayanova/petsc/3.4.3/src/vec/vec/interface/vector.c
[13]PETSC ERROR: close() line 953 in
"unknowndirectory/"/data1/trayanova/libmesh/0.9.2.2/install/include/libmesh/petsc_vector.h
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 13 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[0]PETSC ERROR: VecAssemblyBegin_MPI() line 1012 in
/data1/trayanova/petsc/3.4.3/src/vec/vec/impls/mpi/pdvec.c
[0]PETSC ERROR: VecAssemblyBegin() line 220 in
/data1/trayanova/petsc/3.4.3/src/vec/vec/interface/vector.c
[0]PETSC ERROR: close() line 953 in
"unknowndirectory/"/data1/trayanova/libmesh/0.9.2.2/install/include/libmesh/petsc_vector.h
--------------------------------------------------------------------------
mpirun has exited due to process rank 13 with PID 24354 on
node ln236 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[2]PETSC ERROR:
------------------------------------------------------------------------
[2]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the
batch system) has told this process to end
[2]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[2]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to
find memory corruption errors
[2]PETSC ERROR: configure using --with-debugging=yes, recompile, link,
and run
[1]PETSC ERROR:
------------------------------------------------------------------------
[1]PETSC ERROR: [2]PETSC ERROR: to get more information on the crash.
[2]PETSC ERROR: --------------------- Error Message
------------------------------------
[2]PETSC ERROR: [6]PETSC ERROR: [ln280:08397] 1 more process has sent
help message help-mpi-api.txt / mpi-abort
[ln280:08397] Set MCA parameter "orte_base_help_aggregate" to 0 to see
all help / error messages
Thanks,
Dafang
On 12/29/2014 11:23 PM, Jed Brown wrote:
> Dafang Wang <[email protected]> writes:
>
>> Hi Derek,
>>
>> I tried SNESSetFunctionDomainError() but I could not use it correctly
>> within libmesh. The function takes an SNES object as input. I
>> acquire the SNES object from libmesh using the following code:
>>
>> PetscNonlinearSolver<Real> *tp =
>> dynamic_cast<PetscNonlinearSolver<Real>*>(system.nonlinear_solver.get());
>> SNESSetFunctionDomainError(tp->snes() );
>>
>> However, I got the following error during execution:
> This must not be the entire error message. Always send the entire
> message.
>
>> [13]PETSC ERROR: VecAssemblyBegin_MPI() line 1012 in
>> /data1/trayanova/petsc/3.4.3/src/vec/vec/impls/mpi/pdvec.c
>> [13]PETSC ERROR: VecAssemblyBegin() line 220 in
>> /data1/trayanova/petsc/3.4.3/src/vec/vec/interface/vector.c
>> [13]PETSC ERROR: close() line 953 in
>> "unknowndirectory/"/data1/trayanova/libmesh/0.9.2.2/install/include/libmesh/petsc_vector.h
> Who is calling close? I suspect this happened later.
--
Dafang Wang, Ph.D
Postdoctoral Fellow
Institute of Computational Medicine
Department of Biomedical Engineering
Johns Hopkins University
Hackerman Hall Room 218
Baltimore, MD, 21218
_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users