Hi Denis,
Can you please elaborate?
My naïve understanding would be:
1. before doing interpolation, all ghost values are synchronized
between the MPI processes
2. solution transfer would, roughly, take the local values on each
element, apply the transformation matrices corresponding to each child,
and ship, if needed, the interpolated values to the new MPI owners of
the cells.
3. Each MPI process would go through its cells and set the values of
the vectors at locally active DoFs, which, of course, means that at the
interface between two partitions different MPI processes may be trying
to write to the same global DoF entry in the vector.
In the above, I fail to see what could influence the order of
operations and result in round-off differences between different MPI cores.
The description of the three steps above is correct. The problem appears
in step 2 where the solution gets interpolated separately on the two
cells sharing a face. For an interpolatory finite element, the values on
a shared face should coincide and we know mathematically that they
indeed interpolate to the same values. However, the (different)
transfer matrices on the two sides might perform their additions in a
different order. Thus, round-off can accumulate differently, in
particular if you have some cancellation. Does that make
sense?
This problem would of course also appear in serial, but we do not check
that the value set into a vector is indeed the same as the value that
was there before.
Best,
Martin
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en