Re: [petsc-users] Preallocation

2023-04-02 Thread Barry Smith

  Yes, but it would be interesting to see a comparison of timing between your 
current code and code with just a call to MatSetUp() and no calls to the 
preallocation routines.
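
For example, a minimal sketch of the no-preallocation variant (the size n is a placeholder; comparing the two builds with -log_view would show the difference in assembly cost):

  Mat A;
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));  /* no MatSeqAIJSetPreallocation()/MatMPIAIJSetPreallocation() calls */
  /* ... the same MatSetValues() loop and MatAssemblyBegin()/MatAssemblyEnd() as before ... */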

  Barry


> On Apr 2, 2023, at 6:27 PM, Sanjay Govindjee  wrote:
> 
> I was looking at the release notes for 3.19.0 and noted the comment:
> Deprecate all MatPreallocate* routines. These are no longer needed since 
> non-preallocated matrices will now be as fast as using them
> 
> My interpretation of this is that I can now comment out all the 
> MatPreallocate* lines in my code and it will run just fine (and be as fast as 
> before) and no other changes are necessary -- with the side benefit of no 
> longer having to maintain my preallocation code.
> 
> Have I read this correctly?
> 
> -sanjay
> 



[petsc-users] Preallocation

2023-04-02 Thread Sanjay Govindjee

I was looking at the release notes for 3.19.0 and noted the comment:

   Deprecate all MatPreallocate* routines. These are no longer needed
   since non-preallocated matrices will now be as fast as using them

My interpretation of this is that I can now comment out all the 
MatPreallocate* lines in my code and it will run just fine (and be as 
fast as before) and no other changes are necessary -- with the side 
benefit of no longer having to maintain my preallocation code.


Have I read this correctly?

-sanjay


[petsc-users] Question about NASM initialization

2023-04-02 Thread Takahashi, Tadanaga
Hello PETSc devs,

I am using SNES NASM with Newton LS on the sub-SNES. I was wondering how
the sub-SNES chooses the initial guess during each NASM iteration. Is it
using the previously computed solution or is it restarting from zero?
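
For context, a minimal sketch of the setup being described, with the subdomain solver chosen through the options database (the -sub_snes_* prefix is an assumption on my part; -help on your build will show the exact prefix NASM gives its subdomain solvers):

  SNES snes;
  PetscCall(SNESCreate(PETSC_COMM_WORLD, &snes));
  PetscCall(SNESSetType(snes, SNESNASM));  /* nonlinear additive Schwarz */
  PetscCall(SNESSetFromOptions(snes));     /* e.g. run with -sub_snes_type newtonls -sub_snes_monitor */
  /* ... set the DM, residual, and Jacobian, then call SNESSolve() ... */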


Re: [petsc-users] MPI linear solver reproducibility question

2023-04-02 Thread Mark McClure
Ok, good to know. I'll update to the latest PETSc, do some testing, and let
you know either way.


On Sun, Apr 2, 2023 at 6:31 AM Jed Brown  wrote:

> Vector communication used a different code path in 3.13. If you have a
> reproducer with current PETSc, I'll have a look. Here's a demo that the
> solution is bitwise identical (the sha256sum is the same every time you run
> it, though it might be different on your computer from mine due to compiler
> version and flags).
>
> $ mpiexec -n 8 ompi/tests/snes/tutorials/ex5 -da_refine 3 -snes_monitor
> -snes_view_solution binary && sha256sum binaryoutput
>   0 SNES Function norm 1.265943996096e+00
>   1 SNES Function norm 2.831564838232e-02
>   2 SNES Function norm 4.456686729809e-04
>   3 SNES Function norm 1.206531765776e-07
>   4 SNES Function norm 1.740255643596e-12
> 5410f84e91a9db3a74a2ac336031fb48e7eaf739614192cfd53344517986  binaryoutput
>
> Mark McClure  writes:
>
> > In the typical FD implementation, you only set local rows, but with FE and
> > sometimes FV, you also create values that need to be communicated and
> > summed on other processors.
> > Makes sense.
> >
> > Anyway, in this case, I am certain that I am giving the solver bitwise
> > identical matrices from each process. I am not using a preconditioner,
> > using BCGS, with Petsc version 3.13.3.
> >
> > So then, how can I make sure that I am "using an MPI that follows the
> > suggestion for implementers about determinism"? I am using MPICH version
> > 3.3a2, didn't do anything special when installing it. Does that sound OK?
> > If so, I could upgrade to the latest Petsc, try again, and if confirmed
> > that it persists, could provide a reproduction scenario.
> >
> >
> >
> > On Sat, Apr 1, 2023 at 9:53 PM Jed Brown  wrote:
> >
> >> Mark McClure  writes:
> >>
> >> > Thank you, I will try BCGSL.
> >> >
> >> > And good to know that this is worth pursuing, and that it is possible.
> >> > Step 1, I guess I should upgrade to the latest release of PETSc.
> >> >
> >> > How can I make sure that I am "using an MPI that follows the suggestion
> >> > for implementers about determinism"? I am using MPICH version 3.3a2.
> >> >
> >> > I am pretty sure that I'm assembling the same matrix every time, but I'm
> >> > not sure how it would depend on 'how you do the communication'. Each
> >> > process is doing a series of MatSetValues with INSERT_VALUES,
> >> > assembling the matrix by rows. My understanding of this process is that
> >> > it'd be deterministic.
> >>
> >> In the typical FD implementation, you only set local rows, but with FE and
> >> sometimes FV, you also create values that need to be communicated and
> >> summed on other processors.
> >>
>


Re: [petsc-users] MPI linear solver reproducibility question

2023-04-02 Thread Jed Brown
Vector communication used a different code path in 3.13. If you have a 
reproducer with current PETSc, I'll have a look. Here's a demo that the 
solution is bitwise identical (the sha256sum is the same every time you run it, 
though it might be different on your computer from mine due to compiler version 
and flags).

$ mpiexec -n 8 ompi/tests/snes/tutorials/ex5 -da_refine 3 -snes_monitor 
-snes_view_solution binary && sha256sum binaryoutput
  0 SNES Function norm 1.265943996096e+00
  1 SNES Function norm 2.831564838232e-02
  2 SNES Function norm 4.456686729809e-04
  3 SNES Function norm 1.206531765776e-07
  4 SNES Function norm 1.740255643596e-12
5410f84e91a9db3a74a2ac336031fb48e7eaf739614192cfd53344517986  binaryoutput

Mark McClure  writes:

> In the typical FD implementation, you only set local rows, but with FE and
> sometimes FV, you also create values that need to be communicated and
> summed on other processors.
> Makes sense.
>
> Anyway, in this case, I am certain that I am giving the solver bitwise
> identical matrices from each process. I am not using a preconditioner,
> using BCGS, with Petsc version 3.13.3.
>
> So then, how can I make sure that I am "using an MPI that follows the
> suggestion for implementers about determinism"? I am using MPICH version
> 3.3a2, didn't do anything special when installing it. Does that sound OK?
> If so, I could upgrade to the latest Petsc, try again, and if confirmed
> that it persists, could provide a reproduction scenario.
>
>
>
> On Sat, Apr 1, 2023 at 9:53 PM Jed Brown  wrote:
>
>> Mark McClure  writes:
>>
>> > Thank you, I will try BCGSL.
>> >
>> > And good to know that this is worth pursuing, and that it is possible.
>> > Step 1, I guess I should upgrade to the latest release of PETSc.
>> >
>> > How can I make sure that I am "using an MPI that follows the suggestion
>> > for implementers about determinism"? I am using MPICH version 3.3a2.
>> >
>> > I am pretty sure that I'm assembling the same matrix every time, but I'm
>> > not sure how it would depend on 'how you do the communication'. Each
>> > process is doing a series of MatSetValues with INSERT_VALUES,
>> > assembling the matrix by rows. My understanding of this process is that
>> > it'd be deterministic.
>>
>> In the typical FD implementation, you only set local rows, but with FE and
>> sometimes FV, you also create values that need to be communicated and
>> summed on other processors.
>>
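
To make the quoted distinction concrete, a minimal sketch of the two assembly patterns (a given assembly uses one or the other; A, cols, vals, Ke, and the sizes are placeholders, not code from this thread):

  PetscInt rstart, rend;

  /* FD-style: each rank inserts only into rows it owns, so no off-process
     values need to be exchanged during assembly. */
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  for (PetscInt i = rstart; i < rend; i++) {
    PetscCall(MatSetValues(A, 1, &i, ncols, cols, vals, INSERT_VALUES));
  }

  /* FE/FV-style: element contributions can touch rows owned by other ranks;
     they are ADDed and then communicated and summed during assembly.
     elem_rows and Ke would be the connectivity and element matrix for element e. */
  for (PetscInt e = 0; e < nelem_local; e++) {
    PetscCall(MatSetValues(A, nen, elem_rows, nen, elem_rows, Ke, ADD_VALUES));
  }

  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));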