Oh sorry, I missed that. That's great!
Thanks,
Myriam
On 05/29/19 at 16:55, Zhang, Hong wrote:
> Myriam:
> This branch is merged to master.
> Thanks for your work and patience. It helps us a lot. The graphs are
> very nice :-)
>
> We plan to re-organise the APIs of mat-mat opts, make them
Hi all,
I tried with version 3.11.1 and Barry's fix. The good scaling is back!
See the green curve in the attached plot. It is even better than PETSc
3.6! And it runs faster (10-15 s instead of 200-300 s with 3.6).
So you were right: it seems that not all the PtAPs used the scalable
version.
I
structures 1
>
> Algorithm 2:
>
> -matptap_via allatonce_merged -mat_freeintermediatedatastructures 1
>
>
> Note that you need to use the current petsc-master, and also please
> put "-snes_view" in your script so that we can confirm these options
> are actually getting used.
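As an illustration of what that confirmation can look like, the option table printed at the end of a run can be searched for the PtAP settings. The log excerpt below is fabricated for the example (the exact wording of PETSc's report may differ between versions):

```shell
# Fake a fragment of the end-of-run option report (illustrative only;
# in practice this text is printed by PETSc itself).
cat > run_excerpt.log <<'EOF'
#PETSc Option Table entries:
-matptap_via allatonce_merged
-mat_freeintermediatedatastructures 1
-snes_view
EOF

# Confirm the PtAP options were actually picked up by the run
grep -E 'matptap_via|freeintermediatedatastructures' run_excerpt.log
```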
Hi,
that's really good news for us, thanks! I will plot the memory
scaling again using these new options and let you know, next week I hope.
Before that, I just need to clarify the situation. Throughout our
discussions, we mentioned a number of options concerning scalability:
-matptap_via
Hi,
you'll find the new scaling attached (green line). I used version
3.11 and the four scalability options:
-matptap_via scalable
-inner_diag_matmatmult_via scalable
-inner_offdiag_matmatmult_via scalable
-mat_freeintermediatedatastructures
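For the record, these options are appended to the run's command line; a hypothetical invocation (the binary name and process count below are invented for the example) would look like:

```shell
# Hypothetical launch command; ./my_solver stands in for the real application.
OPTS="-matptap_via scalable \
  -inner_diag_matmatmult_via scalable \
  -inner_offdiag_matmatmult_via scalable \
  -mat_freeintermediatedatastructures"
echo mpiexec -n 64 ./my_solver $OPTS
```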
The scaling is much better! The code even uses
Hi all,
I used the wrong script, that's why it diverged... Sorry about that.
I tried again with the right script applied on a tiny problem (~200
elements). I can see a small difference in memory usage (a gain of
~1 MB) when adding the -mat_freeintermediatedatastructures option. I
still have to execute
ns on MatPtAP(), which might
> trade memory for speed. It would be helpful to see a complete comparison.
> Hong
>
> On Tue, Apr 9, 2019 at 7:43 AM Myriam Peyrounette via petsc-users
> <petsc-users@mcs.anl.gov> wrote:
>
> Hi,
>
> in my first mail, I provi
processes
>>
>> type: mpiaij
>>
>> rows=363, cols=363, bs=3
>>
>> total: nonzeros=8649, allocated nonzeros=8649
>>
>> total number of mallocs used during MatSetValues calls =0
>>
>>
>> The first
>>>>>> Myriam Peyrounette <myriam.peyroune...@idris.fr> wrote:
>>>>>>>
>>>>>>> I used PCView to display the size of the
09:52, Myriam Peyrounette via petsc-users
> <petsc-users@mcs.anl.gov> wrote:
>
> How can I be sure they are indeed used? Can I print this
> information in some log file?
>
> Yes. Re-run the job with the command line option
>
> -options_left true
>
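To sketch what that check looks like (the report lines below are invented for the example, and the exact wording varies between PETSc versions): any option PETSc never queried, e.g. because its name is misspelled, is flagged at the end of the run.

```shell
# Fabricated excerpt of a run finished with -options_left true.
# Note the deliberately misspelled "-matptatp_via": it is never queried.
cat > options_left_excerpt.log <<'EOF'
#PETSc Option Table entries:
-matptatp_via scalable
-options_left true
#End of PETSc Option Table entries
There is one unused database option. It is:
Option left: name:-matptatp_via value: scalable
EOF

# An "Option left:" line means the option was ignored by the run
grep 'Option left' options_left_excerpt.log
```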
How can I be sure they are indeed used? Can I print this information in
some log file?
Thanks in advance,
Myriam
On 03/25/19 at 18:24, Matthew Knepley wrote:
> On Mon, Mar 25, 2019 at 10:54 AM Myriam Peyrounette via petsc-users
> <petsc-users@mcs.anl.gov> wrote:
> -inner_diag_matmatmult_via
> scalable to select inner scalable algorithms.
>
> (3) -matptap_via nonscalable: Supposed to be even faster, but uses
> more memory. It does dense matrix operations.
>
>
> Thanks,
>
> Fande Kong
>
>
>
>
> On Wed, Mar 20, 2019 at 10:06 A
>>> I plotted the memory scalings using different threshold
>>> values. The two scalings are slightly shifted (from
>>> -22 to -88 MB) but this gain is negligible. The
>>> 3.6-scaling remains robust while the 3.10-scaling
>>> deteriorates.
>>>
>>> Do you have any other suggestion?
>>
>> Mark, what is the option she can give to output all the GAMG data?
>>
>> Also, run using -ksp_view. GAMG will report all the sizes of its
>> grids, so it should be easy to see
>> if the coarse grid size
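As a sketch of how the grid sizes can be pulled out of such a report (the excerpt below just reuses the matrix summary quoted earlier in this thread; real GAMG output prints one such block per multigrid level):

```shell
# Illustrative -ksp_view excerpt; GAMG prints one matrix summary per level.
cat > ksp_view_excerpt.log <<'EOF'
      type: mpiaij
      rows=363, cols=363, bs=3
      total: nonzeros=8649, allocated nonzeros=8649
EOF

# Extract the per-level sizes to watch how the coarse grids evolve
grep -o 'rows=[0-9]*' ksp_view_excerpt.log
```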
Hi,
good point, I changed the 3.10 version so that it is configured with
--with-debugging=0. You'll find attached the output of the new LogView.
The execution time is reduced (although still not as good as 3.6) but I
can't see any improvement with regard to memory.
You'll also find attached the
view. GAMG will report all the sizes of its
> grids, so it should be easy to see
> if the coarse grid sizes are increasing, and also what the effect of
> the threshold value is.
>
> Thanks,
>
> Matt
>
> Thanks
>
> Myriam
>
On 03/02/19 at 02:27, Matthew Knepley wrote:
> On Fri, Mar 1, 2019 at 10:53 AM Myriam Peyrounette via petsc-users
> <petsc-users@mcs.anl.gov> wrote:
>
> Hi,
>
> I used to run my code with PETSc 3.6. Since I upgraded the PETSc
> version
> to 3.10, this
Hi,
I am currently comparing two codes based on PETSc. The first one uses
PETSc 3.6.4 and the other one PETSc 3.10.2.
I am having a look at the use of the function PCMGSetGalerkin(). With
PETSc 3.6, the input is a boolean, while it is either
PC_MG_GALERKIN_MAT, PC_MG_GALERKIN_PMAT or PC_MG_GALERKIN_BOTH.