I'm sorry, I just noticed that you replied 6 days ago, but I apparently wasn't 
notified by the Debian bug tracker.  :-(

Ok, so this is an MPI_Alltoall issue.  Does it use MPI_IN_PLACE?
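
For reference, the in-place variant looks like this (a minimal sketch,
not dolfinx code; the one-int64_t-per-rank buffer size is arbitrary):

    #include <mpi.h>
    #include <cstdint>
    #include <vector>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      // One int64_t slot per rank; with MPI_IN_PLACE the send
      // arguments are ignored and buf is both sent and received.
      std::vector<std::int64_t> buf(size, rank);
      MPI_Alltoall(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                   buf.data(), 1, MPI_INT64_T, MPI_COMM_WORLD);

      MPI_Finalize();
      return 0;
    }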


On Wed, 06 Oct 2021 20:15:38 +0200 Drew Parsons <dpars...@debian.org> wrote:
> Source: openmpi
> Followup-For: Bug #995599
> 
> It's not so simple to make a minimal test case, I think.
> 
> all_to_all is defined in cpp/dolfinx/common/MPI.h in the dolfinx
> source, and calls MPI_Alltoall from openmpi.
> 
> It's designed to be used with graph::AdjacencyList<T> from
> graph/AdjacencyList.h, and is called from
> compute_nonlocal_dual_graph() in mesh/graphbuild.cpp, where T is set
> to std::int64_t.
> 
> I tried grabbing dolfinx's all_to_all and using it with a pared-down
> version of AdjacencyList, but it's not triggering the segfault in an
> i386 chroot, possibly because I haven't populated it with an actual
> graph, so there's nothing to send with MPI_Alltoall.
> 
> 
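
If populating the exchange is the missing piece, something along these
lines might get closer: just the raw MPI_Alltoall call with filled
std::int64_t buffers, no AdjacencyList (a sketch; the per-rank count of
8 is arbitrary):

    #include <mpi.h>
    #include <cstdint>
    #include <vector>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      // A few int64_t values per destination rank, so the exchange
      // actually moves data (an empty graph would send nothing).
      const int count = 8;  // arbitrary per-rank payload
      std::vector<std::int64_t> send(count * size), recv(count * size);
      for (std::size_t i = 0; i < send.size(); ++i)
        send[i] = rank * 1000 + static_cast<std::int64_t>(i);

      MPI_Alltoall(send.data(), count, MPI_INT64_T,
                   recv.data(), count, MPI_INT64_T, MPI_COMM_WORLD);

      MPI_Finalize();
      return 0;
    }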



-- 
Jeff Squyres
jsquy...@cisco.com
