I see. It is very surprising that parallel_sync() would affect
rank-interior nodes. A bug?
On Wed, Nov 26, 2014, 23:34 Derek Gaston wrote:
Nope - it doesn't appear to be at the processor boundary only... but I
would have to study it more.
I really think that with a more careful algorithm we could get a much
better estimate (if not the right answer).
Derek
On Thu, Nov 27, 2014 at 12:19 AM, Dmitry Karpeyev wrote:
Presumably, this overestimation happens only at the "boundary" nodes i that
are contained in elements living on other MPI ranks? Those foreign ranks
will count couplings (edges) i-j that are shared by their elements with
the elements on rank p that owns i. Since only edge counts are communicated
back to rank p, rather than the edges themselves, rank p cannot tell which
of those couplings it has already counted locally, so shared edges are
double-counted and the nonzero estimate for row i comes out too high.
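
To illustrate the mechanism described above, here is a minimal sketch
(hypothetical ranks and column indices, not libMesh's actual communication
code) contrasting a count-only exchange with one that exchanges the column
indices themselves:

// Why exchanging only per-row edge *counts* overestimates the sparsity
// at a processor-boundary node i: an edge i-j seen by both the owning
// rank and a neighboring rank is counted twice, while exchanging the
// column indices allows deduplication.
#include <cstddef>
#include <iostream>
#include <set>

int main()
{
  // Couplings of boundary row i as seen by the owning rank p and by a
  // neighboring rank q; the edges to j=4 and j=5 are shared.
  std::set<int> local_cols   = {2, 3, 4, 5}; // counted on rank p
  std::set<int> foreign_cols = {4, 5, 7};    // counted on rank q

  // Count-only "communication": rank q sends just foreign_cols.size(),
  // so the shared edges cannot be recognized as already counted.
  std::size_t count_estimate = local_cols.size() + foreign_cols.size();

  // Index-based communication: rank q sends the indices, and rank p
  // takes the union, dropping duplicates.
  std::set<int> merged = local_cols;
  merged.insert(foreign_cols.begin(), foreign_cols.end());

  std::cout << "count-only estimate: " << count_estimate // 7
            << ", true nonzeros: " << merged.size()      // 5
            << "\n";
  return 0;
}

Communicating the indices costs more bandwidth than communicating counts,
but it is the kind of "more careful algorithm" that could give the right
answer rather than an estimate.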
Ben Spencer (copied on this email) pointed me to a problem he was having
today with some of our sparsity pattern augmentation stuff. It was causing
PETSc to error out, saying that the number of nonzeros in a row was greater
than the number of entries in that row on that processor. The weirdness i
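
For reference, here is a minimal standalone sketch (hypothetical sizes, not
Ben's actual setup) of the class of failure PETSc reports when a row's
preallocated nonzero count exceeds that row's length:

// Provoking PETSc's "nnz cannot be greater than row length" error by
// requesting more nonzeros per row than a row of the matrix can hold,
// e.g. from an over-inflated sparsity estimate.
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscErrorCode ierr;
  PetscInt       nnz[4] = {10, 10, 10, 10}; // 10 > 4 columns per row

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_SELF, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, 4, 4, 4, 4);CHKERRQ(ierr);
  ierr = MatSetType(A, MATSEQAIJ);CHKERRQ(ierr);
  // A 4x4 matrix cannot have 10 nonzeros in a row, so this call
  // errors out at preallocation time.
  ierr = MatSeqAIJSetPreallocation(A, 0, nnz);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}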