Yes, for instance the demo_cahn_hilliard.py file works with np = 4 or 8.
My machine runs Ubuntu 14.04 with dolfin-dev and petsc-3.6.1, so if I replace
petsc with petsc-dev maybe the problem will be fixed?
I suppose that I must change petsc to petsc-dev in the local.yaml file?
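
For reference, the failing script follows roughly the pattern below. This is
only a trimmed sketch, not the attached code: it assumes DOLFIN's
PETScTAOSolver / OptimisationProblem interface as in the 1.6 API, and the
"method" parameter name and the unconstrained solve(problem, x) overload are
my assumptions, so they may differ in dolfin-dev:

    from dolfin import *

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "Lagrange", 1)
    u = Function(V)

    # A simple convex energy, just to have something to minimise.
    g = Constant(1.0)
    Pi = 0.5*inner(grad(u), grad(u))*dx - g*u*dx
    dPi = derivative(Pi, u)    # first variation (gradient)
    ddPi = derivative(dPi, u)  # second variation (Hessian)

    class EnergyProblem(OptimisationProblem):
        # Objective value
        def f(self, x):
            u.vector()[:] = x
            return assemble(Pi)
        # Gradient of the objective
        def F(self, b, x):
            u.vector()[:] = x
            assemble(dPi, tensor=b)
        # Hessian of the objective
        def J(self, A, x):
            u.vector()[:] = x
            assemble(ddPi, tensor=A)

    solver = PETScTAOSolver()
    # NOTE: parameter name "method" assumed from the DOLFIN 1.6 API.
    solver.parameters["method"] = "ncg"  # nonlinear conjugate gradient from TAO
    solver.solve(EnergyProblem(), u.vector())

In my real code the energy is a hyperelastic one (see the attached
ncg_in_hyperelasticity_tao.py), but the solver setup is the same.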
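To double-check which PETSc my current build actually links against before
rebuilding anything, I run a small script like the one below (just a sketch;
it assumes petsc4py is importable from the same environment as dolfin):

    # Assumes petsc4py is installed alongside dolfin in this environment.
    from dolfin import __version__ as dolfin_version
    from petsc4py import PETSc

    # Report the DOLFIN version and the PETSc version it runs against.
    print("DOLFIN %s" % dolfin_version)
    print("PETSc  %d.%d.%d" % PETSc.Sys.getVersion())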

Thank you.

-Giorgos

On Mon, Oct 12, 2015 at 6:36 PM, CM <[email protected]> wrote:

> Hi,
>
> I tried to run this file and do not get any error message with mpirun -np
> 4 on my machine (macosx 10.11, dolfin-dev, petsc-dev).
>
> Are simpler examples and demos working on your installation?
>
> Best,
>
> Corrado
>
>
> On 12 Oct 2015, at 17:12, Giorgos Grekas <[email protected]> wrote:
>
> Here is a simpler example where the same error happens with
> mpirun -np 4.
>
> You can run the attached code; it is much shorter. I am sorry that I sent
> so many lines of code before.
>
>
>
> On Mon, Oct 12, 2015 at 5:15 PM, Giorgos Grekas <[email protected]>
> wrote:
>
>> I have attached the backtrace in the file bt.txt, together with my code. To
>> run my code you need to run the file runMe.py.
>>
>>
>> On Mon, Oct 12, 2015 at 4:40 PM, Jan Blechta <[email protected]>
>>  wrote:
>>
>>> PETSc error code 1 does not seem to indicate an expected problem,
>>> http://www.mcs.anl.gov/petsc/petsc-dev/include/petscerror.h.html. It
>>> looks like an error that is not handled by PETSc.
>>>
>>> You could provide us with your code or try investigating the problem
>>> with debugger
>>>
>>>   $ mpirun -n 3 xterm -e gdb \
>>>       -ex 'set breakpoint pending on' \
>>>       -ex 'break PetscError' \
>>>       -ex 'break dolfin::dolfin_error' \
>>>       -ex r -args python your_script.py
>>>   ...
>>>   Break point hit...
>>>   (gdb) bt
>>>
>>> and post a backtrace here.
>>>
>>> Jan
>>>
>>>
>>> On Mon, 12 Oct 2015 15:16:48 +0300
>>> Giorgos Grekas <[email protected]> wrote:
>>>
>>> > Hello,
>>> > I am using the NCG method from the TAO solver, and I wanted to test the
>>> > validity of my code on a PC with 4 processors before running it on a
>>> > cluster. When I run my code with 2 processes (mpirun -np 2) everything
>>> > seems to work fine, but when I use 3 or more processes I get the
>>> > following error:
>>> >
>>> >
>>> > *** Error:   Unable to successfully call PETSc function 'VecAssemblyBegin'.
>>> > *** Reason:  PETSc error code is: 1.
>>> > *** Where:   This error was encountered inside /home/ggrekas/.hashdist/tmp/dolfin-wphma2jn5fuw/dolfin/la/PETScVector.cpp.
>>> > *** Process: 3
>>> > ***
>>> > *** DOLFIN version: 1.7.0dev
>>> > *** Git changeset:  3fbd47ec249a3e4bd9d055f8a01b28287c5bcf6a
>>> > ***
>>> > -------------------------------------------------------------------------
>>> >
>>> >
>>> > ===================================================================================
>>> > =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>> > =   EXIT CODE: 134
>>> > =   CLEANING UP REMAINING PROCESSES
>>> > =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>>> > ===================================================================================
>>> > YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6)
>>> > This typically refers to a problem with your application.
>>> > Please see the FAQ page for debugging suggestions
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > So, is this an issue that I should report to the TAO team?
>>> >
>>> > Thank you in advance.
>>>
>>>
>>
> <ncg_in_hyperelasticity_tao.py>
_______________________________________________
fenics-support mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics-support
