Yep, you tracked down the right thing: with SerialMesh and Exodus we do
partition-independent looping on processor 0... so the files always come
out the same.
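
In case a concrete sketch helps, that "partition independent looping" boils
down to something like the loop David describes below.  This is just an
illustration based on his description, not the actual ExodusII_IO source,
and the function name is made up:

  #include "libmesh/mesh_base.h"
  #include "libmesh/node.h"
  #include "libmesh/elem.h"

  using namespace libMesh;

  // Only processor 0 touches the file, and it walks the mesh with the
  // library-wide node/element iterators, so the order it visits things
  // in does not depend on how the mesh is partitioned.
  void sketch_partition_independent_write (const MeshBase & mesh)
  {
    if (mesh.processor_id() != 0)
      return;

    // Nodes come out in the same order on any processor count...
    for (MeshBase::const_node_iterator it = mesh.nodes_begin();
         it != mesh.nodes_end(); ++it)
      {
        const Node * node = *it;
        (void)node; // placeholder for writing node->id() and coordinates
      }

    // ...and so do the active elements.
    for (MeshBase::const_element_iterator it = mesh.active_elements_begin();
         it != mesh.active_elements_end(); ++it)
      {
        const Elem * elem = *it;
        (void)elem; // placeholder for writing elem->id() and connectivity
      }
  }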

However, if you use ParallelMesh with Exodus (yes, that doesn't quite make
sense to do... but WE do it a lot... especially in our test suite), beware
that the Exodus files will then be partition sensitive (i.e. the ordering
will change as the number of processors changes).  I believe this has to do
with the way the mesh is serialized with MeshSerializer.
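
(For anyone not familiar with it: MeshSerializer is the RAII helper that
temporarily gathers a distributed mesh onto every processor.  I haven't
traced exactly where the Exodus writer triggers it, so treat this as a
rough sketch of the pattern rather than the real code path; the function
name and "out.e" are placeholders.)

  #include "libmesh/parallel_mesh.h"
  #include "libmesh/mesh_serializer.h"
  #include "libmesh/exodusII_io.h"
  #include "libmesh/equation_systems.h"

  using namespace libMesh;

  void write_parallel_mesh_to_exodus (ParallelMesh & mesh,
                                      EquationSystems & es)
  {
    // Gather the distributed mesh onto every processor for the duration
    // of this scope; it is re-distributed when 'serializer' is destroyed.
    MeshSerializer serializer(mesh);

    // The ids/ordering the serialized mesh ends up with can depend on the
    // partitioning, which is why the resulting Exodus file is partition
    // sensitive.
    ExodusII_IO(mesh).write_equation_systems("out.e", es);
  }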

Like I mentioned, we do this a lot in our test suite so that we can test
ParallelMesh but still output an Exodus file to compare against the "gold"
standard Exodus file.  Because of the reordering we have to use a special
option to Exodiff (an Exodus utility that diffs two Exodus files).  That
option is "-m" for "map": instead of doing a straight comparison between
elements/nodes with the same IDs, it builds a geometric map between all of
the elements/nodes in the two files and uses that for the comparison.
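
If it helps, the invocation in our tests looks something like this (I'm
going from memory on the exact syntax, and gold.e/new.e are just
placeholder file names, so double-check against "exodiff --help"):

  exodiff -m gold.e new.e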

I know that's a little off-topic, but I thought I would leave it here in
case anyone else is trying to do something similar.

Derek


On Fri, Feb 19, 2016 at 11:11 PM David Knezevic <david.kneze...@akselos.com>
wrote:

> On Fri, Feb 19, 2016 at 6:56 PM, Roy Stogner <royst...@ices.utexas.edu>
> wrote:
>
> >
> > On Fri, 19 Feb 2016, David Knezevic wrote:
> >
> >> I'm using a SerialMesh, though. Even in the SerialMesh case I would have
> >> thought the exodus file would depend on the numbering of nodes and elems,
> >> which is partition dependent, but apparently not?
> >>
> >
> > With SerialMesh we use METIS for partitioning, METIS output is very
> > number-of-partitions dependent, and our numbering depends on our
> > partitioning, so it is kind of surprising to get identical Exodus
> > output on different processor counts.  Is it possible that we do a
> > Hilbert renumbering in Exodus?  I thought that was limited to Xdr.
> > ---
> > Roy
> >
>
>
> I looked into this a bit more. There is no Hilbert renumbering as far as I
> can see.
>
> The Exodus data is written out by doing a loop (on processor 0 only) over
> all nodes and elements, via mesh.nodes_begin()/end() and
> mesh.active_elements_begin()/end(). The key point, though, is that the
> order of these loops is independent of the partitioning, which means
> that we end up with the same Exodus output with different processor counts.
>
> The libMesh solution vector ordering is of course very different with
> different processor counts, but that doesn't affect the data that gets
> written out to the Exodus file (the same is true of GMV and presumably the
> other formats too).
>
> David
>