I forgot to mention something important: the error appears only after some
iterations. For instance, for the file ncg_in_hyperelasticity_tao.py
the error occurs at step 10129.
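
For reference, the structure of the script is roughly as follows. This is
only an illustrative sketch, not the attached file: a simple convex energy
stands in for the hyperelastic one, the TAO method selection is omitted, and
the names are not exactly those used in my code.

    from dolfin import *

    mesh = UnitSquareMesh(100, 100)
    V = FunctionSpace(mesh, "Lagrange", 1)

    u  = Function(V)          # unknown (the minimiser)
    v  = TestFunction(V)
    du = TrialFunction(V)
    g  = Constant(1.0)        # placeholder load

    # Simple convex energy standing in for the hyperelastic energy
    Pi = 0.5*inner(grad(u), grad(u))*dx + 0.5*u*u*dx - g*u*dx

    # First and second variations of the energy
    dPi  = derivative(Pi, u, v)
    ddPi = derivative(dPi, u, du)

    class EnergyProblem(OptimisationProblem):
        def __init__(self):
            OptimisationProblem.__init__(self)

        # Objective value at x
        def f(self, x):
            u.vector()[:] = x
            return assemble(Pi)

        # Gradient of the objective at x, assembled into b
        def F(self, b, x):
            u.vector()[:] = x
            assemble(dPi, tensor=b)

        # Hessian of the objective at x, assembled into A
        def J(self, A, x):
            u.vector()[:] = x
            assemble(ddPi, tensor=A)

    # The real script selects TAO's nonlinear CG method through the solver
    # parameters; the default method is kept here.
    solver = PETScTAOSolver()
    solver.solve(EnergyProblem(), u.vector())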

Thanks for your help

On Mon, Oct 12, 2015 at 8:42 PM, Jan Blechta <[email protected]>
wrote:

> On Mon, 12 Oct 2015 20:25:14 +0300
> Giorgos Grekas <[email protected]> wrote:
>
> > So this function has not been parallelized with PETSc 3.6.1? That's
> > why it isn't reproducible?
>
> No. I mean that it seems to compute something (printing some numbers to
> stdout) and the error does not occur.
>
> Jan
>
> >
> > Thank you
> >
> > On Mon, Oct 12, 2015 at 6:40 PM, Jan Blechta
> > <[email protected]> wrote:
> >
> > > Can't really reproduce using PETSc 3.6.1:
> > >
> > > Process 0: Number of global vertices: 10201
> > > Process 0: Number of global cells: 20000
> > > Expr.geometric_dimension() is deprecated, please use
> > > find_geometric_dimension(expr) instead.
> > > Expr.geometric_dimension() is deprecated, please use
> > > find_geometric_dimension(expr) instead.
> > > Expr.geometric_dimension() is deprecated, please use
> > > find_geometric_dimension(expr) instead.
> > > 10 None n =  100 100
> > > 10 None n =  100 100
> > > 10 None n =  100 100
> > > Process 0: *** Warning: The underlying linear solver cannot be
> > > modified for this specified TAO solver. The options are all ignored.
> > > Process 1: *** Warning: The underlying linear solver cannot be
> > > modified for this specified TAO solver. The options are all ignored.
> > > Process 2: *** Warning: The underlying linear solver cannot be
> > > modified for this specified TAO solver. The options are all ignored.
> > > 1 2499.99999808
> > > 1 2499.99999808
> > > 1 2499.99999808
> > > 2 9.19912577662
> > > 2 9.19912577662
> > > 2 9.19912577662
> > > 3 38082.7834178
> > > 3 38082.7834178
> > > 3 38082.7834178
> > > ^Cmpirun: killing job...
> > >
> > >
> > > Jan
> > >
> > >
> > > On Mon, 12 Oct 2015 18:12:19 +0300
> > > Giorgos Grekas <[email protected]> wrote:
> > >
> > > > A simpler example where the same error happens with
> > > > mpirun -np 4:
> > > >
> > > > You can run the attached code; it is much shorter. I am sorry
> > > > that I sent so many lines of code before.
> > > >
> > > >
> > > >
> > > > On Mon, Oct 12, 2015 at 5:15 PM, Giorgos Grekas
> > > > <[email protected]> wrote:
> > > >
> > > > > I have attached a backtrace in the file bt.txt together with my
> > > > > code. To run my code, run the file runMe.py.
> > > > >
> > > > >
> > > > > On Mon, Oct 12, 2015 at 4:40 PM, Jan Blechta
> > > > > <[email protected]> wrote:
> > > > >
> > > > >> PETSc error code 1 does not seem to indicate an expected
> > > > >> problem, see
> > > > >> http://www.mcs.anl.gov/petsc/petsc-dev/include/petscerror.h.html.
> > > > >> It seems to be an error not handled by PETSc.
> > > > >>
> > > > >> You could provide us with your code, or try investigating the
> > > > >> problem with a debugger:
> > > > >>
> > > > >>   $ mpirun -n 3 xterm -e gdb -ex 'set breakpoint pending on'
> > > > >> -ex 'break PetscError' -ex 'break dolfin::dolfin_error' -ex r
> > > > >> -args python your_script.py
> > > > >>   ...
> > > > >>   Break point hit...
> > > > >>   (gdb) bt
> > > > >>
> > > > >> and post a backtrace here.
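> > > > >>
> > > > >> If xterm is not available (e.g. on a cluster node), a
> > > > >> non-interactive variant along the same lines should also work
> > > > >> (untested sketch, same breakpoints; the backtraces of all ranks
> > > > >> are then interleaved on stdout):
> > > > >>
> > > > >>   $ mpirun -n 3 gdb -batch -ex 'set breakpoint pending on'
> > > > >> -ex 'break PetscError' -ex 'break dolfin::dolfin_error' -ex r
> > > > >> -ex bt --args python your_script.py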
> > > > >>
> > > > >> Jan
> > > > >>
> > > > >>
> > > > >> On Mon, 12 Oct 2015 15:16:48 +0300
> > > > >> Giorgos Grekas <[email protected]> wrote:
> > > > >>
> > > > >> > Hello,
> > > > >> > I am using NCG from the TAO solver, and I wanted to test the
> > > > >> > validity of my code on a PC with 4 processors before executing
> > > > >> > it on a cluster. When I run my code with 2 processes
> > > > >> > (mpirun -np 2), everything seems to work fine, but when I use 3
> > > > >> > or more processes I get the following error:
> > > > >> >
> > > > >> >
> > > > >> > *** Error:   Unable to successfully call PETSc function
> > > > >> >              'VecAssemblyBegin'.
> > > > >> > *** Reason:  PETSc error code is: 1.
> > > > >> > *** Where:   This error was encountered inside
> > > > >> > /home/ggrekas/.hashdist/tmp/dolfin-wphma2jn5fuw/dolfin/la/PETScVector.cpp.
> > > > >> > *** Process: 3
> > > > >> > ***
> > > > >> > *** DOLFIN version: 1.7.0dev
> > > > >> > *** Git changeset:  3fbd47ec249a3e4bd9d055f8a01b28287c5bcf6a
> > > > >> > ***
> > > > >> > -------------------------------------------------------------------------
> > > > >> >
> > > > >> >
> > > > >> > ===================================================================================
> > > > >> > =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> > > > >> > =   EXIT CODE: 134
> > > > >> > =   CLEANING UP REMAINING PROCESSES
> > > > >> > =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> > > > >> > ===================================================================================
> > > > >> > YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted
> > > > >> > (signal 6) This typically refers to a problem with your
> > > > >> > application. Please see the FAQ page for debugging
> > > > >> > suggestions
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > So, is this an issue that I should report to the TAO team?
> > > > >> >
> > > > >> > Thank you in advance.
> > > > >>
> > > > >>
> > > > >
> > >
> > >
>
>