Dear Mark,
I thought that all bets would be off with my attempt, so just to
illustrate what I meant by my second approach:
I can start from the second line of the block system and rewrite it to
D dp = f2 - C du
then I tried a pseudoinverse (for lack of alternatives I did this with
an
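To make the rearranged block row concrete, here is a minimal sketch in NumPy. The block system, names (C, D, du, dp, f2), and sizes are hypothetical stand-ins, assuming a 2x2 block structure [A B; C D] [du; dp] = [f1; f2] whose second row gives D dp = f2 - C du; the pseudoinverse step uses numpy.linalg.pinv.

```python
import numpy as np

# Hypothetical stand-in blocks for the second row of the 2x2 block
# system  [A B; C D] [du; dp] = [f1; f2], rearranged to  D dp = f2 - C du.
rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3))
D = rng.standard_normal((3, 3))
du = rng.standard_normal(3)
f2 = rng.standard_normal(3)

# Solve for dp with a pseudoinverse, which also covers a singular or
# rectangular D by returning the least-squares / minimum-norm solution.
dp = np.linalg.pinv(D) @ (f2 - C @ du)

# Residual of the rearranged block row.
residual = np.linalg.norm(D @ dp - (f2 - C @ du))
```

For a well-conditioned square D this reproduces the direct solve; the pseudoinverse only differs when D is rank-deficient.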
Mark McClure writes:
> Thank you, I will try BCGSL.
>
> And good to know that this is worth pursuing, and that it is possible. Step
> 1, I guess I should upgrade to the latest PETSc release.
>
> How can I make sure that I am "using an MPI that follows the suggestion for
> implementers about
In the typical FD implementation, you only set local rows, but with FE and
sometimes FV, you also create values that need to be communicated and
summed on other processors.
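The reason the communication pattern matters for bitwise reproducibility is that floating-point addition is not associative, so the order in which off-process contributions are summed into an entry changes the last bits. A minimal illustration:

```python
# Floating-point addition is not associative: summing the same three
# contributions in two different groupings gives different last bits.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)  # → False
print(left, right)
```

This is why a deterministic reduction/communication order is a prerequisite for getting bitwise-identical assembled matrices across runs.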
Makes sense.
Anyway, in this case, I am certain that I am giving the solver bitwise
identical matrices from each process. I
Hello,
I have been a PETSc user for quite a few years, though I haven't updated
my version in a while, so it's possible that my comments below are
out of date.
Several years ago, I'd asked you guys about reproducibility. I observed
that if I gave an identical matrix to the PETSc
If you use unpreconditioned BCGS and ensure that you assemble the same matrix
(that depends on how you do the communication), I think you'll get bitwise
reproducible results when using an MPI that follows the suggestion for
implementers about determinism. Beyond that, it'll depend somewhat on
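As a sketch, an unpreconditioned BCGS run as described above can be selected at the command line with PETSc's runtime options (the application binary name `./app` and process count are placeholders):

```shell
# Select BiCGStab with no preconditioner via PETSc runtime options;
# -pc_type none disables preconditioning entirely.
mpiexec -n 4 ./app -ksp_type bcgs -pc_type none
```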
Thank you, I will try BCGSL.
And good to know that this is worth pursuing, and that it is possible. Step
1, I guess I should upgrade to the latest PETSc release.
How can I make sure that I am "using an MPI that follows the suggestion for
implementers about determinism"? I am using MPICH