On Thu, 9 Nov 2017, Boris Boutkov wrote:

Well, I eliminated PETSc and have been linking to MPI using
--with-mpi=$MPI_DIR and playing with the refinement example I had
mentioned earlier to try and eliminate ParMETIS due to the
hang/crash issue. In these configs I attach either the
LinearPartitioner or an SFC in prepare_for_use right before calling
partition(). This trips assertions in MeshCommunication::redistribute()
where elem.proc_id != proc_id while unpacking elems (stack below).
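
For reference, the effect is roughly the sketch below, done from user
code rather than inside prepare_for_use(); the build_square call and
mesh sizes are stand-ins for the actual refinement example, and
MeshBase::partitioner() being the right hook to swap the partitioner in
is an assumption on my part.

#include "libmesh/libmesh.h"
#include "libmesh/distributed_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/linear_partitioner.h"
// #include "libmesh/sfc_partitioner.h"   // or the SFC variant instead

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  DistributedMesh mesh (init.comm());

  // Swap the partitioner in before the mesh is ever partitioned;
  // prepare_for_use() will then call partition() with this one.
  mesh.partitioner().reset(new LinearPartitioner);

  // Stand-in mesh setup; build_square finishes by calling
  // prepare_for_use(), which in turn partitions the mesh.
  MeshTools::Generation::build_square (mesh, 16, 16);

  return 0;
}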

Shoot - I don't think either of those partitioners has been upgraded
to be compatible with DistributedMesh use.  Just glancing at
LinearPartitioner, it looks like it'll do fine for an *initial*
partitioning, but then it'll scramble everything if it's ever asked to
do a *repartitioning* on an already-distributed mesh.
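
The failure mode I'd expect is roughly the sketch below - not the
library's actual loop, just the idea, with MeshBase::element_ptr_range()
and friends used here purely for illustration:

#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

using namespace libMesh;

// A "linear" partition just chops the elements into contiguous
// chunks of equal size, in whatever order they're iterated.
void sketch_linear_partition (MeshBase & mesh)
{
  const processor_id_type n_procs = mesh.n_processors();
  const dof_id_type n_elem        = mesh.n_elem();
  const dof_id_type per_proc      = n_elem / n_procs + 1;

  dof_id_type e = 0;
  for (auto & elem : mesh.element_ptr_range())
    elem->processor_id() = cast_int<processor_id_type>(e++ / per_proc);
}

That's consistent the first time through, when every rank walks the same
full element ordering, but once the mesh is distributed each rank only
iterates its own local plus ghosted elements, so the chunk assignments
disagree from processor to processor - which would show up as exactly
the elem.proc_id != proc_id assertion you're seeing in redistribute().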

I could probably fix that pretty quickly, if you've got a test case I
can replicate.

SFC, on the other hand, I don't know about.  We do distributed
space-filling-curve stuff elsewhere in the library with libHilbert,
and it's not trivial.
---
Roy
