Re: [petsc-users] DMPlexDistributeField

2019-06-27 Thread Adrian Croucher via petsc-users


On 28/06/19 10:09 AM, Zhang, Junchao wrote:


> Check how the graph is created and then whether the parameters to
> PetscSFSetGraph() are correct.



Yes, unfortunately I don't have a good enough understanding of how 
DMPlexDistribute() works to see what the problem is.
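
For what it's worth, this is roughly how I'm dumping the SFs to look at
them; a sketch only (the serial DMPlex creation and the rest of the setup
are assumed):

#include <petscdmplex.h>
#include <petscsf.h>

/* Distribute a DMPlex with the given overlap and dump both SFs. */
static PetscErrorCode DistributeAndViewSFs(DM dm, PetscInt overlap, DM *dmDist)
{
  PetscSF        migrationSF, pointSF;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexDistribute(dm, overlap, &migrationSF, dmDist);CHKERRQ(ierr);
  if (*dmDist) {
    /* How points migrated from the serial layout */
    ierr = PetscSFView(migrationSF, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    /* Shared points in the distributed mesh (this SF is owned by the DM) */
    ierr = DMGetPointSF(*dmDist, &pointSF);CHKERRQ(ierr);
    ierr = PetscSFView(pointSF, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    ierr = PetscSFDestroy(&migrationSF);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}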


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-users] DMPlexDistributeField

2019-06-27 Thread Zhang, Junchao via petsc-users


On Thu, Jun 27, 2019 at 4:50 PM Adrian Croucher <a.crouc...@auckland.ac.nz> wrote:
> hi
>
> On 28/06/19 3:14 AM, Zhang, Junchao wrote:
>
>> You can dump relevant SFs to make sure their graph is correct.
>
> Yes, I'm doing that, and the graphs don't look correct.

Check how the graph is created and then whether the parameters to 
PetscSFSetGraph() are correct.
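
If it helps, here is a minimal, self-contained example of what a correct
graph specification looks like; purely illustrative, not your code (every
rank creates one leaf that points at root 0 on rank 0):

#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscSF        sf;
  PetscSFNode    remote;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

  remote.rank  = 0;   /* the root lives on rank 0 ...   */
  remote.index = 0;   /* ... at local index 0 there     */

  ierr = PetscSFCreate(PETSC_COMM_WORLD, &sf);CHKERRQ(ierr);
  /* nroots: 1 on rank 0, 0 elsewhere; nleaves: 1 per rank;
     ilocal == NULL means leaves are numbered contiguously from 0 */
  ierr = PetscSFSetGraph(sf, rank ? 0 : 1, 1, NULL, PETSC_COPY_VALUES,
                         &remote, PETSC_COPY_VALUES);CHKERRQ(ierr);
  ierr = PetscSFView(sf, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sf);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}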





Re: [petsc-users] DMPlexDistributeField

2019-06-27 Thread Adrian Croucher via petsc-users

hi

On 28/06/19 3:14 AM, Zhang, Junchao wrote:

> You can dump relevant SFs to make sure their graph is correct.



Yes, I'm doing that, and the graphs don't look correct.

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-users] DMPlexDistributeField

2019-06-27 Thread Zhang, Junchao via petsc-users


On Wed, Jun 26, 2019 at 11:12 PM Adrian Croucher <a.crouc...@auckland.ac.nz> wrote:

> hi
>
> On 27/06/19 4:07 PM, Zhang, Junchao wrote:
>
>> Adrian, I am working on SF but know nothing about DMPlexDistributeField.
>> Do you think SF creation or communication is wrong? If yes, I'd like to
>> know the details. I have a branch, jczhang/sf-more-opts, which adds some
>> optimizations to SF. It probably won't solve your problem, but since it
>> changes SF a lot, it's worth a try.
>
> My suspicion is that there may be a problem in DMPlexDistribute(), so that
> the distribution SF entries for DMPlex faces are not correct when
> overlap > 0.

You can dump relevant SFs to make sure their graph is correct.
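
For reference, the call sequence under discussion looks roughly like this; a
sketch only, not your actual code (the original section/vector and the
migration SF from DMPlexDistribute() are assumed, and names are illustrative):

#include <petscdmplex.h>
#include <petscsf.h>

/* dm is the original (pre-distribution) DMPlex, as in PETSc's own
   internal uses of DMPlexDistributeField(). */
static PetscErrorCode DistributeMyField(DM dm, PetscSF migrationSF,
                                        PetscSection origSection, Vec origVec,
                                        PetscSection *newSection, Vec *newVec)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* Dump the migration SF: this is the graph to verify */
  ierr = PetscSFView(migrationSF, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  /* DMPlexDistributeField() fills a pre-created section and vector */
  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm), newSection);CHKERRQ(ierr);
  ierr = VecCreate(PetscObjectComm((PetscObject)dm), newVec);CHKERRQ(ierr);
  ierr = DMPlexDistributeField(dm, migrationSF, origSection, origVec,
                               *newSection, *newVec);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}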


> So it's probably not a problem with SF as such.
>
> - Adrian




Re: [petsc-users] GAMG scalability for serendipity 20 nodes hexahedra

2019-06-27 Thread TARDIEU Nicolas via petsc-users


Thank you very much for your answer, Mark.
Do you think it is worth playing around with aggregation variants? Plain
aggregation "à la Notay", for instance.

Nicolas

From: mfad...@lbl.gov
Sent: Wednesday, June 26, 2019, 22:37
To: TARDIEU Nicolas
Cc: PETSc users list
Subject: Re: [petsc-users] GAMG scalability for serendipity 20 nodes hexahedra

I see iteration growth with Q2 elements too. I've never seen anyone report
scaling of high-order elements with generic AMG.


First, the discretization is very important for AMG solvers (for all optimal
solvers, really). I've never looked at serendipity elements. It might be a
good idea to try Q2 as well.


SNES ex56 is 3D elasticity on a cube with tensor elements. Below are parameters 
that I have been using. I see some evidence that more smoothing steps 
(-mg_levels_ksp_max_it N) helps "scaling" but not necessarily solve time.


An example of what I see: running ex56 with -cells 8,12,16 -max_conv_its 5 and
the parameters below, I get these iteration counts: 19, 20, 31, 31, 38.


My guess is that you need higher-order interpolation for higher-order elements,
and when you add a new level you get an increase in condition number (i.e., it
is not an optimal MG method). But the original smoothed aggregation paper did
have high-order discretizations, and their theory said it was still optimal, as
I recall.


Mark


-log_view
-max_conv_its 5
-petscspace_degree 2
-snes_max_it 2
-ksp_max_it 100
-ksp_type cg
-ksp_rtol 1.e-11
-ksp_atol 1.e-71
-ksp_norm_type unpreconditioned
-snes_rtol 1.e-10
-pc_type gamg
-pc_gamg_type agg
-pc_gamg_agg_nsmooths 1
-pc_gamg_coarse_eq_limit 1000
-pc_gamg_process_eq_limit 200
-pc_gamg_reuse_interpolation true
-ksp_converged_reason
-snes_monitor_short
-ksp_monitor_short
-snes_converged_reason
-use_mat_nearnullspace true
-mg_levels_ksp_max_it 4
-mg_levels_ksp_type chebyshev
-mg_levels_esteig_ksp_type cg
-gamg_est_ksp_type cg
-gamg_est_ksp_max_it 10
-mg_levels_esteig_ksp_max_it 10
-mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05
-mg_levels_pc_type jacobi
-petscpartitioner_type simple
-mat_block_size 3
-matptap_via scalable
-run_type 1
-pc_gamg_repartition false
-pc_gamg_threshold 0.0
-pc_gamg_threshold_scale .25
-pc_gamg_square_graph 1
-check_pointer_intensity 0
-snes_type ksponly
-ex56_dm_view
-options_left
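
If you would rather set a subset of these from code than on the command line,
one way is PetscOptionsInsertString(). A fragment only (call it before the
solver is set up; the option subset here is just an example):

ierr = PetscOptionsInsertString(NULL, "-pc_type gamg -pc_gamg_type agg "
                                      "-pc_gamg_agg_nsmooths 1 "
                                      "-mg_levels_ksp_max_it 4");CHKERRQ(ierr);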




 


On Wed, Jun 26, 2019 at 8:21 AM TARDIEU Nicolas via petsc-users wrote:

Dear PETSc team,


I have run a simple weak scalability test based on a canonical 3D elasticity
problem: a cube, meshed with 8-node hexahedra, clamped on one of its faces and
subjected to a pressure load on the opposite face.
I am using the FGMRES KSP with GAMG as the preconditioner. I have set the
rigid body modes using MatNullSpaceCreateRigidBody, and it works like a charm.
The solver exhibits perfect scalability up to 800 cores (I haven't tested with
more). The KSP always converges in 11 or 12 iterations. Let me emphasize that
I use the GAMG default options.
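
For reference, the near-nullspace setup amounts to the sketch below (the
function, matrix, and coordinate-vector names are illustrative, not my actual
code):

#include <petscmat.h>

/* Attach rigid-body modes as the near-nullspace GAMG uses to build its
   prolongators (coords: nodal coordinates Vec with block size 3 in 3D;
   A: the assembled stiffness matrix). */
static PetscErrorCode SetElasticityNearNullSpace(Mat A, Vec coords)
{
  MatNullSpace   nearNull;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatNullSpaceCreateRigidBody(coords, &nearNull);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, nearNull);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nearNull);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}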



Nevertheless, if I switch to a quadratic mesh with 20-node serendipity
hexahedra, the weak scalability deteriorates. For instance, the number of KSP
iterations increases from 20 for the smallest problem to 30 for the biggest.
Here is my question: what is the right tuning for GAMG to recover the same
weak scalability as in the linear case? I apologize if this is a stupid
question...





 
I look forward to hearing from you,
Nicolas

  
