On Thu, 18 Sep 2014 12:42:14 +0200
Jan Blechta <[email protected]> wrote:
> Some problems (when running in a clean dir) are avoided using this
> (although incorrect) patch. There are race conditions in the creation
> of the temp dir. It should be done using an atomic operation.

Or maybe mpi_comm_world() is correct.

Jan
>
> Jan
>
>
> ==================================================================
> diff --git a/test/unit/io/python/test_XDMF.py b/test/unit/io/python/test_XDMF.py
> index 9ad65a4..31471f1 100755
> --- a/test/unit/io/python/test_XDMF.py
> +++ b/test/unit/io/python/test_XDMF.py
> @@ -28,8 +28,9 @@ def temppath():
>      filedir = os.path.dirname(os.path.abspath(__file__))
>      basename = os.path.basename(__file__).replace(".py", "_data")
>      temppath = os.path.join(filedir, basename, "")
> -    if not os.path.exists(temppath):
> -        os.mkdir(temppath)
> +    if MPI.rank(mpi_comm_world()) == 0:
> +        if not os.path.exists(temppath):
> +            os.mkdir(temppath)
>      return temppath
> ==================================================================
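
A race-free variant might look something like the sketch below. It is
only a sketch, assuming the same 2014-era dolfin Python API the patch
uses (MPI.rank, mpi_comm_world) plus MPI.barrier: the try/except makes
the directory creation atomic at the filesystem level, and the barrier
stops the other ranks from touching the directory before rank 0 has
created it.

==================================================================
import os
from dolfin import MPI, mpi_comm_world

def temppath():
    filedir = os.path.dirname(os.path.abspath(__file__))
    basename = os.path.basename(__file__).replace(".py", "_data")
    temppath = os.path.join(filedir, basename, "")
    if MPI.rank(mpi_comm_world()) == 0:
        try:
            # atomic: no exists-then-mkdir window; treat OSError as
            # "directory already exists" (Python 2 has no exist_ok)
            os.makedirs(temppath)
        except OSError:
            pass
    # no rank returns before the directory exists
    MPI.barrier(mpi_comm_world())
    return temppath
==================================================================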
>
>
> On Wed, 17 Sep 2014 18:01:08 +0200
> Martin Sandve Alnæs <[email protected]> wrote:
>
> > Here's a clue from gdb; does this ring any bells?
> >
> > (Have to go now, won't be able to reply tonight.)
> >
> >
> > io/python/test_XDMF.py:69: test_save_1d_scalar
> > Building mesh (dist 0a)
> > Number of global vertices: 33
> > Number of global cells: 32
> > Building mesh (dist 1a)
> > ^C
> > Program received signal SIGINT, Interrupt.
> > 0x00007fffee8749a6 in opal_progress () from /usr/lib/libmpi.so.1
> > (gdb) where
> > #0  0x00007fffee8749a6 in opal_progress () from /usr/lib/libmpi.so.1
> > #1  0x00007fffee7c21f5 in ompi_request_default_wait_all () from /usr/lib/libmpi.so.1
> > #2  0x00007fffcab35302 in ompi_coll_tuned_sendrecv_actual () from /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so
> > #3  0x00007fffcab3d2bf in ompi_coll_tuned_barrier_intra_two_procs () from /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so
> > #4  0x00007fffee7cf56b in PMPI_Barrier () from /usr/lib/libmpi.so.1
> > #5  0x00007fffbbf2ec73 in mca_io_romio_dist_MPI_File_close () from /usr/lib/openmpi/lib/openmpi/mca_io_romio.so
> > #6  0x00007fffbbf0c1e0 in mca_io_romio_file_close () from /usr/lib/openmpi/lib/openmpi/mca_io_romio.so
> > #7  0x00007fffee7bb056 in ?? () from /usr/lib/libmpi.so.1
> > #8  0x00007fffee7bb481 in ompi_file_close () from /usr/lib/libmpi.so.1
> > #9  0x00007fffee7eb0c5 in PMPI_File_close () from /usr/lib/libmpi.so.1
> > #10 0x00007fffed554324 in ?? () from /usr/lib/x86_64-linux-gnu/libhdf5.so.7
> > #11 0x00007fffed54b821 in H5FD_close () from /usr/lib/x86_64-linux-gnu/libhdf5.so.7
> > #12 0x00007fffed53b339 in ?? () from /usr/lib/x86_64-linux-gnu/libhdf5.so.7
> > #13 0x00007fffed53c7ed in H5F_try_close () from /usr/lib/x86_64-linux-gnu/libhdf5.so.7
> > #14 0x00007fffed53caf4 in ?? () from /usr/lib/x86_64-linux-gnu/libhdf5.so.7
> > #15 0x00007fffed5abde2 in H5I_dec_ref ()
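
The PMPI_Barrier in frame #4 is a collective call inside
MPI_File_close, so a hang there usually means that not every rank
entered the same collective. The shape of the deadlock is easy to
reproduce in isolation; here is a minimal sketch, assuming mpi4py is
available (not part of the thread above), run with mpirun -np 2:

==================================================================
# Only rank 0 enters the collective, so it waits forever for rank 1,
# which has already moved on: the same pattern as the hang above.
from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.rank == 0:
    comm.Barrier()  # never returns: rank 1 never calls Barrier
print("rank %d past the collective" % comm.rank)
==================================================================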
> >
> >
> >
> >
> > On 17 September 2014 17:17, Martin Sandve Alnæs <[email protected]>
> > wrote:
> >
> > > I'm trying to get the dolfin pytest branch running all tests in
> > > parallel, but I still have some issues that don't really seem
> > > pytest-related.
> > >
> > > Running
> > > cd <dolfin>/test/unit/
> > > mpirun -np 3 python -m pytest -s -v function/ io/
> > >
> > > _sometimes_ hangs in one of the io tests, and it seems always to
> > > be in the construction of a FunctionSpace. I've observed this
> > > both in a test using a UnitIntervalMesh(32) and in another test
> > > using a UnitSquareMesh(16, 16).
> > >
> > > Any pointers?
> > >
> > > Martin
> > >
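
The FunctionSpace construction mentioned above is collective in
parallel (mesh distribution and dofmap building both communicate), so
a hang there is consistent with some ranks having skipped an earlier
collective call. A minimal sketch of the pattern, assuming the
2014-era dolfin Python API:

==================================================================
from dolfin import UnitIntervalMesh, FunctionSpace

# In parallel both lines are collective: every rank must execute
# them, or the ranks that did will block waiting for the ones that
# did not.
mesh = UnitIntervalMesh(32)
V = FunctionSpace(mesh, "CG", 1)
==================================================================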
>
_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics