Re: [petsc-users] GAMG and Hypre preconditioner

2023-06-27 Thread Zisheng Ye via petsc-users
Hi Jed Thanks for your reply. I have sent the log files to petsc-ma...@mcs.anl.gov. Zisheng From: Jed Brown Sent: Tuesday, June 27, 2023 1:02 PM To: Zisheng Ye ; petsc-users@mcs.anl.gov Subject: Re: [petsc-users] GAMG and Hypre preconditioner [External Sender

Re: [petsc-users] GAMG and Hypre preconditioner

2023-06-27 Thread Jed Brown
Zisheng Ye via petsc-users writes: > Dear PETSc Team > > We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG > and Hypre preconditioners. We have encountered several issues that we would > like to ask for your suggestions. > > First, we have a couple of questions when

Re: [petsc-users] GAMG failure

2023-03-28 Thread Mark Adams
On Tue, Mar 28, 2023 at 12:38 PM Blaise Bourdin wrote: > > > On Mar 27, 2023, at 9:11 PM, Mark Adams wrote: > > Yes, the eigen estimates are converging slowly. > > BTW, have you tried hypre? It is a good solver (lots lots more woman years) > These eigen estimates are conceptually simple, but

Re: [petsc-users] GAMG failure

2023-03-28 Thread Jed Brown
This suite has been good for my solid mechanics solvers. (It's written here as a coarse grid solver because we do matrix-free p-MG first, but you can use it directly.) https://github.com/hypre-space/hypre/issues/601#issuecomment-1069426997 Blaise Bourdin writes: > On Mar 27, 2023, at 9:11

Re: [petsc-users] GAMG failure

2023-03-28 Thread Blaise Bourdin
On Mar 27, 2023, at 9:11 PM, Mark Adams wrote: Yes, the eigen estimates are converging slowly. BTW, have you tried hypre? It is a good solver (lots lots more woman years) These eigen estimates are conceptually simple, but they can lead to problems like this (hypre and an

Re: [petsc-users] GAMG failure

2023-03-27 Thread Mark Adams
Yes, the eigen estimates are converging slowly. BTW, have you tried hypre? It is a good solver (lots lots more woman years) These eigen estimates are conceptually simple, but they can lead to problems like this (hypre and an eigen estimate free smoother). But try this (good to have options

Re: [petsc-users] GAMG failure

2023-03-27 Thread Jed Brown
Try -pc_gamg_reuse_interpolation 0. I thought this was disabled by default, but I see pc_gamg->reuse_prol = PETSC_TRUE in the code. Blaise Bourdin writes: > On Mar 24, 2023, at 3:21 PM, Mark Adams wrote: > > * Do you set: > > PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE)); > >

Re: [petsc-users] GAMG failure

2023-03-27 Thread Blaise Bourdin
On Mar 24, 2023, at 3:21 PM, Mark Adams wrote: * Do you set:     PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE));     PetscCall(MatSetOption(Amat, MAT_SPD_ETERNAL, PETSC_TRUE)); Yes Do that to get CG Eigen estimates.

Re: [petsc-users] GAMG failure

2023-03-24 Thread Mark Adams
* Do you set: PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE)); PetscCall(MatSetOption(Amat, MAT_SPD_ETERNAL, PETSC_TRUE)); Do that to get CG Eigen estimates. Outright failure is usually caused by a bad Eigen estimate. -pc_gamg_esteig_ksp_monitor_singular_value Will print out the
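[Editorial note, not from the thread: a minimal sketch of the calls Mark is quoting, assuming Amat is the assembled system matrix and a recent PETSc that provides the PetscCall() macro.]

    PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE));          /* tell GAMG the operator is SPD          */
    PetscCall(MatSetOption(Amat, MAT_SPD_ETERNAL, PETSC_TRUE));  /* keep the SPD flag across re-assemblies */
    PetscCall(KSPSetOperators(ksp, Amat, Amat));                 /* set before KSPSetUp/KSPSolve           */

With these flags set, GAMG can use CG for its smoother eigenvalue estimates; adding -pc_gamg_esteig_ksp_monitor_singular_value on the command line then prints the estimates so a bad one is easy to spot.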

Re: [petsc-users] GAMG failure

2023-03-24 Thread Jed Brown
You can -pc_gamg_threshold .02 to slow the coarsening and either stronger smoother or increase number of iterations used for estimation (or increase tolerance). I assume your system is SPD and you've set the near-null space. Blaise Bourdin writes: > Hi, > > I am having issue with GAMG for
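[Editorial note: an illustrative spelling of Jed's suggestions on the command line; the numeric values are placeholders to tune, and the esteig prefix assumes a PETSc version that exposes the eigenvalue-estimate KSP options as quoted elsewhere in this thread.]

    -pc_type gamg -pc_gamg_threshold 0.02 \
    -mg_levels_ksp_max_it 4 \
    -pc_gamg_esteig_ksp_max_it 20 -pc_gamg_esteig_ksp_rtol 1e-2

Here the threshold slows coarsening, the extra smoother iterations make each level stronger, and the esteig options spend more effort on the eigenvalue estimate.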

Re: [petsc-users] gamg out of memory with gpu

2022-12-26 Thread Matthew Knepley
On Mon, Dec 26, 2022 at 10:29 AM Edoardo Centofanti < edoardo.centofant...@universitadipavia.it> wrote: > Thank you for your answer. Can you provide me the full path of the example > you have in mind? The one I found does not seem to exploit the algebraic > multigrid, but just the geometric one.

Re: [petsc-users] gamg out of memory with gpu

2022-12-26 Thread Edoardo Centofanti
Thank you for your answer. Can you provide me the full path of the example you have in mind? The one I found does not seem to exploit the algebraic multigrid, but just the geometric one. Thanks, Edoardo Il giorno lun 26 dic 2022 alle ore 15:39 Matthew Knepley ha scritto: > On Mon, Dec 26, 2022

Re: [petsc-users] gamg out of memory with gpu

2022-12-26 Thread Matthew Knepley
On Mon, Dec 26, 2022 at 4:41 AM Edoardo Centofanti < edoardo.centofant...@universitadipavia.it> wrote: > Hi PETSc Users, > > I am experiencing some issues with the GAMG preconditioner when used with > GPU. > In particular, it seems to go out of memory very easily (around 5000 > dofs are enough to

Re: [petsc-users] GAMG and linearized elasticity

2022-12-13 Thread Jed Brown
Do you have slip/symmetry boundary conditions, where some components are constrained? In that case, there is no uniform block size and I think you'll need DMPlexCreateRigidBody() and MatSetNearNullSpace(). The PCSetCoordinates() code won't work for non-constant block size. -pc_type gamg should
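[Editorial note: a rough sketch of the DMPlex route Jed describes, assuming dm is the DMPlex and Amat the assembled operator; the DMPlexCreateRigidBody() signature has changed across releases (the field argument shown here is from newer versions), so check the manual page for the version in use.]

    MatNullSpace nearnull;
    PetscCall(DMPlexCreateRigidBody(dm, 0, &nearnull));  /* rigid-body modes for field 0 */
    PetscCall(MatSetNearNullSpace(Amat, nearnull));      /* GAMG reads this during PCSetUp */
    PetscCall(MatNullSpaceDestroy(&nearnull));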

Re: [petsc-users] GAMG crash during setup when using multiple GPUs

2022-02-11 Thread Sajid Ali Syed
Division Fermi National Accelerator Laboratory s-sajid-ali.github.io From: Mark Adams Sent: Thursday, February 10, 2022 8:47 PM To: Junchao Zhang Cc: Sajid Ali Syed ; petsc-users@mcs.anl.gov Subject: Re: [petsc-users] GAMG

Re: [petsc-users] GAMG crash during setup when using multiple GPUs

2022-02-10 Thread Mark Adams
formation to help with debugging this. >> >> Thank You, >> Sajid Ali (he/him) | Research Associate >> Scientific Computing Division >> Fermi National Accelerator Laboratory >> s-sajid-ali.github.io >> >> ---------- >> *From:* Junc

Re: [petsc-users] GAMG crash during setup when using multiple GPUs

2022-02-10 Thread Junchao Zhang
From:* Junchao Zhang > *Sent:* Thursday, February 10, 2022 1:43 PM > *To:* Sajid Ali Syed > *Cc:* petsc-users@mcs.anl.gov > *Subject:* Re: [petsc-users] GAMG crash during setup when using multiple > GPUs > > Also, try "-use_gpu_aware_mpi 0" to see if there is a dif

Re: [petsc-users] GAMG crash during setup when using multiple GPUs

2022-02-10 Thread Junchao Zhang
Also, try "-use_gpu_aware_mpi 0" to see if there is a difference. --Junchao Zhang On Thu, Feb 10, 2022 at 1:40 PM Junchao Zhang wrote: > Did it fail without GPU at 64 MPI ranks? > > --Junchao Zhang > > > On Thu, Feb 10, 2022 at 1:22 PM Sajid Ali Syed wrote: > >> Hi PETSc-developers, >> >>

Re: [petsc-users] GAMG crash during setup when using multiple GPUs

2022-02-10 Thread Junchao Zhang
Did it fail without GPU at 64 MPI ranks? --Junchao Zhang On Thu, Feb 10, 2022 at 1:22 PM Sajid Ali Syed wrote: > Hi PETSc-developers, > > I’m seeing the following crash that occurs during the setup phase of the > preconditioner when using multiple GPUs. The relevant error trace is shown >

Re: [petsc-users] GAMG memory consumption

2021-11-24 Thread Dave May
I think your run with -pc_type mg is defining a multigrid hierarchy with a only single level. (A single level mg PC would also explain the 100+ iterations required to converge.) The gamg configuration is definitely coarsening your problem and has a deeper hierarchy. A single level hierarchy will

Re: [petsc-users] GAMG memory consumption

2021-11-24 Thread Mark Adams
As Matt said GAMG uses more memory. But these numbers look odd: max == min and total = max + min, for both cases. I would use https://petsc.org/release/docs/manualpages/Sys/PetscMallocDump.html to look at this more closely. On Wed, Nov 24, 2021 at 1:03 PM Matthew Knepley wrote: > On Wed, Nov
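[Editorial note: a hedged sketch of the PetscMallocDump() suggestion; malloc logging must be active (e.g. a debug build or the -malloc_debug option) for the dump to report anything.]

    /* after KSPSolve/PCSetUp: list every outstanding PETSc allocation and where it was made */
    PetscCall(PetscMallocDump(NULL));   /* NULL sends the report to stdout */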

Re: [petsc-users] GAMG memory consumption

2021-11-24 Thread Matthew Knepley
On Wed, Nov 24, 2021 at 12:26 PM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalin...@stfc.ac.uk> wrote: > Hello, > > > > I would like to understand why more memory is consumed by -pc_type gamg > compared to -pc_type mg for the same problem size > > > > ksp/ksp/tutorial: ./ex45

Re: [petsc-users] gamg student questions

2021-10-17 Thread Matthew Knepley
On Sun, Oct 17, 2021 at 9:04 AM Mark Adams wrote: > Hi Daniel, [this is a PETSc users list question so let me move it there] > > The behavior that you are seeing is a bit odd but not surprising. > > First, you should start with simple problems and get AMG (you might want > to try this exercise

Re: [petsc-users] gamg student questions

2021-10-17 Thread Mark Adams
Hi Daniel, [this is a PETSc users list question so let me move it there] The behavior that you are seeing is a bit odd but not surprising. First, you should start with simple problems and get AMG (you might want to try this exercise with hypre as well: --download-hypre and use -pc_type hypre, or

Re: [petsc-users] GAMG preconditioning

2021-04-12 Thread Barry Smith
Please send -log_view for the ilu and GAMG case. Barry > On Apr 12, 2021, at 10:34 AM, Milan Pelletier via petsc-users > wrote: > > Dear all, > > I am currently trying to use PETSc with CG solver and GAMG preconditioner. > I have started with the following set of parameters: >

Re: [petsc-users] GAMG preconditioning

2021-04-12 Thread Mark Adams
Can you briefly describe your application,? AMG usually only works well for straightforward elliptic problems, at least right out of the box. On Mon, Apr 12, 2021 at 11:35 AM Milan Pelletier via petsc-users < petsc-users@mcs.anl.gov> wrote: > Dear all, > > I am currently trying to use PETSc

Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-17 Thread Mark Adams
On Tue, Mar 17, 2020 at 1:42 PM Sajid Ali wrote: > Hi Mark/Jed, > > The problem I'm solving is scalar Helmholtz in 2D, (u_t = A*u_xx + A*u_yy > + F_t*u, with the familiar 5 point central difference as the derivative > approximation, > I assume this is definite Helmholtz. The time integrator

Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-17 Thread Sajid Ali
Hi Mark/Jed, The problem I'm solving is scalar Helmholtz in 2D, (u_t = A*u_xx + A*u_yy + F_t*u, with the familiar 5 point central difference as the derivative approximation, I'm also attaching the result of -info | grep GAMG if that helps). My goal is to get weak and strong scaling results for

Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-16 Thread Jed Brown
Sajid Ali writes: > Hi PETSc-developers, > > As per the manual, the ideal gamg parameters are those which result in > MatPtAP time being roughly similar to (or just slightly larger) than KSP > solve times. The way to adjust this is by changing the threshold for > coarsening and/or squaring the
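[Editorial note: an illustrative example of the knobs under discussion, not taken from the thread; option names and defaults vary by PETSc version, and -pc_gamg_square_graph was later superseded by the aggressive-coarsening options.]

    -pc_type gamg -pc_gamg_threshold 0.01 -pc_gamg_square_graph 1 -log_view

Comparing the MatPtAP* and KSPSolve rows of -log_view after each change shows whether the setup/solve balance is near the ratio the manual recommends.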

Re: [petsc-users] GAMG scalability for serendipity 20 nodes hexahedra

2019-06-27 Thread TARDIEU Nicolas via petsc-users
Objet : Re: [petsc-users] GAMG scalability for serendipity 20 nodes hexahedra   I get growth with Q2 elements also. I've never seen anyone report scaling of high order elements with generic AMG. First, discretizations are very important for AMG solver. All optimal solvers really. I've ne

Re: [petsc-users] GAMG scalability for serendipity 20 nodes hexahedra

2019-06-26 Thread Mark Adams via petsc-users
I get growth with Q2 elements also. I've never seen anyone report scaling of high order elements with generic AMG. First, discretizations are very important for AMG solver. All optimal solvers really. I've never looked at serendipity elements. It might be a good idea to try Q2 as well. SNES ex56

Re: [petsc-users] GAMG parallel convergence sensitivity

2019-03-14 Thread Jed Brown via petsc-users
Mark Lohry writes: > It seems to me with these semi-implicit methods the CFL limit is still so > close to the explicit limit (that paper stops at 30), I don't really see > the purpose unless you're running purely incompressible? That's just my > ignorance speaking though. I'm currently running

Re: [petsc-users] GAMG parallel convergence sensitivity

2019-03-13 Thread Jed Brown via petsc-users
Mark Lohry via petsc-users writes: > For what it's worth, I'm regularly solving much larger problems (1M-100M > unknowns, unsteady) with this discretization and AMG setup on 500+ cores > with impressively great convergence, dramatically better than ILU/ASM. This > just happens to be the first

Re: [petsc-users] GAMG parallel convergence sensitivity

2019-03-13 Thread Mark Adams via petsc-users
> > > > Any thoughts here? Is there anything obviously wrong with my setup? > Fast and robust solvers for NS require specialized methods that are not provided in PETSc and the methods tend to require tighter integration with the meshing and discretization than the algebraic interface supports. I

Re: [petsc-users] GAMG scaling

2018-12-24 Thread Mark Adams via petsc-users
On Tue, Dec 25, 2018 at 12:10 AM Jed Brown wrote: > Mark Adams writes: > > > On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote: > > > >> Mark Adams via petsc-users writes: > >> > >> > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private > in > >> > attached, NB, this is code

Re: [petsc-users] GAMG scaling

2018-12-24 Thread Jed Brown via petsc-users
Mark Adams writes: > On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote: > >> Mark Adams via petsc-users writes: >> >> > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private in >> > attached, NB, this is code that I wrote in grad school). It is memory >> > efficient and simple,

Re: [petsc-users] GAMG scaling

2018-12-24 Thread Mark Adams via petsc-users
On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote: > Mark Adams via petsc-users writes: > > > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private in > > attached, NB, this is code that I wrote in grad school). It is memory > > efficient and simple, just four nested loops i,j,I,J:

Re: [petsc-users] GAMG scaling

2018-12-24 Thread Jed Brown via petsc-users
Mark Adams via petsc-users writes: > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private in > attached, NB, this is code that I wrote in grad school). It is memory > efficient and simple, just four nested loops i,j,I,J: C(I,J) = > P(i,I)*A(i,j)*P(j,J). In eyeballing the numbers

Re: [petsc-users] GAMG scaling

2018-12-22 Thread Mark Adams via petsc-users
Wow, this is an old thread. Sorry if I sound like an old fart talking about the good old days but I originally did RAP in Prometheus, in a non-work-optimal way that might be of interest. Not hard to implement. I bring this up because we continue to struggle with this damn thing. I think this

Re: [petsc-users] GAMG scaling

2018-12-22 Thread Mark Adams via petsc-users
OK, so this thread has drifted, see title :) On Fri, Dec 21, 2018 at 10:01 PM Fande Kong wrote: > Sorry, hit the wrong button. > > > > On Fri, Dec 21, 2018 at 7:56 PM Fande Kong wrote: > >> >> >> On Fri, Dec 21, 2018 at 9:44 AM Mark Adams wrote: >> >>> Also, you mentioned that you are using

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Fande Kong via petsc-users
Sorry, hit the wrong button. On Fri, Dec 21, 2018 at 7:56 PM Fande Kong wrote: > > > On Fri, Dec 21, 2018 at 9:44 AM Mark Adams wrote: > >> Also, you mentioned that you are using 10 levels. This is very strange >> with GAMG. You can run with -info and grep on GAMG to see the sizes and the >>

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Fande Kong via petsc-users
Thanks so much, Hong, If any new finding, please let me know. On Fri, Dec 21, 2018 at 9:36 AM Zhang, Hong wrote: > Fande: > I will explore it and get back to you. > Does anyone know how to profile memory usage? > We are using gperftools

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Matthew Knepley via petsc-users
On Fri, Dec 21, 2018 at 12:55 PM Zhang, Hong wrote: > Matt: > >> Does anyone know how to profile memory usage? >>> >> >> The best serial way is to use Massif, which is part of valgrind. I think >> it might work in parallel if you >> only look at one process at a time. >> > > Can you give an

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Zhang, Hong via petsc-users
Matt: Does anyone know how to profile memory usage? The best serial way is to use Massif, which is part of valgrind. I think it might work in parallel if you only look at one process at a time. Can you give an example of using Massif? For example, how to use it on

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Matthew Knepley via petsc-users
On Fri, Dec 21, 2018 at 11:36 AM Zhang, Hong via petsc-users < petsc-users@mcs.anl.gov> wrote: > Fande: > I will explore it and get back to you. > Does anyone know how to profile memory usage? > The best serial way is to use Massif, which is part of valgrind. I think it might work in parallel if
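[Editorial note: a hedged example of the Massif workflow Matt describes; the executable, options, and output file name are placeholders.]

    mpiexec -n 1 valgrind --tool=massif ./ex45 -da_grid_x 65 -pc_type gamg
    ms_print massif.out.<pid> | less

Massif writes one massif.out.<pid> file per process, so in parallel it is easiest to inspect a single rank at a time, as suggested.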

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Mark Adams via petsc-users
Also, you mentioned that you are using 10 levels. This is very strange with GAMG. You can run with -info and grep on GAMG to see the sizes and the number of non-zeros per level. You should coarsen at a rate of about 2^D to 3^D with GAMG (with 10 levels this would imply a very large fine grid
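[Editorial note: an illustrative command, not from the thread; the executable and process count are placeholders.]

    mpiexec -n 8 ./myapp -pc_type gamg -info 2>&1 | grep GAMG

The GAMG lines of -info report the grid size and nonzeros on each level, which makes it easy to check the 2^D to 3^D coarsening rate Mark describes.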

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Zhang, Hong via petsc-users
Fande: I will explore it and get back to you. Does anyone know how to profile memory usage? Hong Thanks, Hong, I just briefly went through the code. I was wondering if it is possible to destroy "c->ptap" (that caches a lot of intermediate data) to release the memory after the coarse matrix is

Re: [petsc-users] GAMG scaling

2018-12-20 Thread Fande Kong via petsc-users
Thanks, Hong, I just briefly went through the code. I was wondering if it is possible to destroy "c->ptap" (that caches a lot of intermediate data) to release the memory after the coarse matrix is assembled. I understand you may still want to reuse these data structures by default but for my

Re: [petsc-users] GAMG scaling

2018-12-20 Thread Zhang, Hong via petsc-users
We use nonscalable implementation as default, and switch to scalable for matrices over finer grids. You may use option '-matptap_via scalable' to force scalable PtAP implementation for all PtAP. Let me know if it works. Hong On Thu, Dec 20, 2018 at 8:16 PM Smith, Barry F.
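[Editorial note: illustrative placement of the option Hong mentions; the executable and other options are placeholders.]

    mpiexec -n 1024 ./myapp -pc_type gamg -matptap_via scalable -log_view

The MatPtAP rows and the memory summary of -log_view then show whether forcing the scalable algorithm helps at this scale.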

Re: [petsc-users] GAMG scaling

2018-12-20 Thread Smith, Barry F. via petsc-users
See MatPtAP_MPIAIJ_MPIAIJ(). It switches to scalable automatically for "large" problems, which is determined by some heuristic. Barry > On Dec 20, 2018, at 6:46 PM, Fande Kong via petsc-users > wrote: > > > > On Thu, Dec 20, 2018 at 4:43 PM Zhang, Hong wrote: > Fande: > Hong, >

Re: [petsc-users] GAMG scaling

2018-12-20 Thread Smith, Barry F. via petsc-users
> On Dec 20, 2018, at 5:51 PM, Zhang, Hong via petsc-users > wrote: > > Fande: > Hong, > Thanks for your improvements on PtAP that is critical for MG-type algorithms. > > On Wed, May 3, 2017 at 10:17 AM Hong wrote: > Mark, > Below is the copy of my email sent to you on Feb 27: > > I

Re: [petsc-users] GAMG scaling

2018-12-20 Thread Zhang, Hong via petsc-users
Fande: Hong, Thanks for your improvements on PtAP that is critical for MG-type algorithms. On Wed, May 3, 2017 at 10:17 AM Hong <hzh...@mcs.anl.gov> wrote: Mark, Below is the copy of my email sent to you on Feb 27: I implemented scalable MatPtAP and did comparisons of three

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Smith, Barry F. via petsc-users
> On Nov 15, 2018, at 1:02 PM, Mark Adams wrote: > > There is a lot of load imbalance in VecMAXPY also. The partitioning could be > bad and if not its the machine. > > On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users > wrote: > > Something is odd about your

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Mark Adams via petsc-users
There is a lot of load imbalance in VecMAXPY also. The partitioning could be bad and if not its the machine. On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users < petsc-users@mcs.anl.gov> wrote: > > Something is odd about your configuration. Just consider the time for > VecMAXPY

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Smith, Barry F. via petsc-users
Something is odd about your configuration. Just consider the time for VecMAXPY which is an embarrassingly parallel operation. On 1000 MPI processes it produces Time

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Matthew Knepley via petsc-users
On Thu, Nov 15, 2018 at 11:52 AM Karin via petsc-users < petsc-users@mcs.anl.gov> wrote: > Dear PETSc team, > > I am solving a linear transient dynamic problem, based on a discretization > with finite elements. To do that, I am using FGMRES with GAMG as a > preconditioner. I consider here 10 time

Re: [petsc-users] GAMG advice

2017-11-10 Thread Mark Adams
On Thu, Nov 9, 2017 at 2:19 PM, David Nolte wrote: > Hi Mark, > > thanks for clarifying. > When I wrote the initial question I had somehow overlooked the fact that > the GAMG standard smoother was Chebychev while ML uses SOR. All the other > comments concerning threshold

Re: [petsc-users] GAMG advice

2017-11-09 Thread David Nolte
Hi Mark, thanks for clarifying. When I wrote the initial question I had somehow overlooked the fact that the GAMG standard smoother was Chebychev while ML uses SOR. All the other comments concerning threshold etc were based on this mistake. The following settings work quite well, of course LU is

Re: [petsc-users] GAMG advice

2017-11-08 Thread Mark Adams
On Wed, Nov 1, 2017 at 5:45 PM, David Nolte wrote: > Thanks Barry. > By simply replacing chebychev by richardson I get similar performance > with GAMG and ML That too (I assumed you were using the same, I could not see cheby in your view data). I guess SOR works for the

Re: [petsc-users] GAMG advice

2017-11-08 Thread Mark Adams
On Fri, Oct 20, 2017 at 11:10 PM, Barry Smith wrote: > > David, > >GAMG picks the number of levels based on how the coarsening process etc > proceeds. You cannot hardwire it to a particular value. Yes you can. GAMG will respect -pc_mg_levels N, but we don't recommend

Re: [petsc-users] GAMG advice

2017-11-08 Thread Mark Adams
> > > Now I'd like to try GAMG instead of ML. However, I don't know how to set > it up to get similar performance. > The obvious/naive > > -pc_type gamg > -pc_gamg_type agg > > # with and without > -pc_gamg_threshold 0.03 > -pc_mg_levels 3 > > This looks fine. I would not set the

Re: [petsc-users] GAMG advice

2017-11-01 Thread David Nolte
Thanks Barry. By simply replacing chebychev by richardson I get similar performance with GAMG and ML (GAMG even slightly faster): -pc_type gamg    

Re: [petsc-users] GAMG advice

2017-10-20 Thread Barry Smith
David, GAMG picks the number of levels based on how the coarsening process etc proceeds. You cannot hardwire it to a particular value. You can run with -info to get more info potentially on the decisions GAMG is making. Barry > On Oct 20, 2017, at 2:06 PM, David Nolte

Re: [petsc-users] GAMG advice

2017-10-20 Thread David Nolte
PS: I didn't realize at first, it looks as if the -pc_mg_levels 3 option was not taken into account:   type: gamg     MG: type is MULTIPLICATIVE, levels=1 cycles=v On 10/20/2017 03:32 PM, David Nolte wrote: > Dear all, > > I have some problems using GAMG as a preconditioner for (F)GMRES. >

Re: [petsc-users] GAMG scaling

2017-05-04 Thread Hong
Mark, Fixed https://bitbucket.org/petsc/petsc/commits/68eacb73b84ae7f3fd7363217d47f23a8f967155 Run ex56 gives mpiexec -n 8 ./ex56 -ne 13 ... -h |grep via -mattransposematmult_via Algorithmic approach (choose one of) scalable nonscalable matmatmult (MatTransposeMatMult) -matmatmult_via

Re: [petsc-users] GAMG scaling

2017-05-04 Thread Mark Adams
Thanks Hong, I am not seeing these options with -help ... On Wed, May 3, 2017 at 10:05 PM, Hong wrote: > I basically used 'runex56' and set '-ne' be compatible with np. > Then I used option > '-matptap_via scalable' > '-matptap_via hypre' > '-matptap_via nonscalable' > > I

Re: [petsc-users] GAMG scaling

2017-05-03 Thread Hong
I basically used 'runex56' and set '-ne' be compatible with np. Then I used option '-matptap_via scalable' '-matptap_via hypre' '-matptap_via nonscalable' I attached a job script below. In master branch, I set default as 'nonscalable' for small - medium size matrices, and automatically switch to

Re: [petsc-users] GAMG scaling

2017-05-03 Thread Mark Adams
Hong, the input files do not seem to be accessible. What are the command line options? (I don't see a "rap" or "scale" in the source). On Wed, May 3, 2017 at 12:17 PM, Hong wrote: > Mark, > Below is the copy of my email sent to you on Feb 27: > > I implemented scalable

Re: [petsc-users] GAMG scaling

2017-05-03 Thread Hong
Mark, Below is the copy of my email sent to you on Feb 27: I implemented scalable MatPtAP and did comparisons of three implementations using ex56.c on alcf cetus machine (this machine has small memory, 1GB/core): - nonscalable PtAP: use an array of length PN to do dense axpy - scalable PtAP:

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-19 Thread Kong, Fande
Thanks, Mark, Now, the total compute time using GAMG is competitive with ASM. Looks like I could not use something like: "-mg_level_1_ksp_type gmres" because this option makes the compute time much worse. Fande, On Thu, Apr 13, 2017 at 9:14 AM, Mark Adams wrote: > > > On

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-13 Thread Mark Adams
On Wed, Apr 12, 2017 at 7:04 PM, Kong, Fande wrote: > > > On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams wrote: > >> You seem to have two levels here and 3M eqs on the fine grid and 37 on >> the coarse grid. I don't understand that. >> >> You are also calling

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-13 Thread Mark Adams
On Wed, Apr 12, 2017 at 1:31 PM, Kong, Fande wrote: > Hi Mark, > > Thanks for your reply. > > On Wed, Apr 12, 2017 at 9:16 AM, Mark Adams wrote: > >> The problem comes from setting the number of MG levels (-pc_mg_levels 2). >> Not your fault, it looks like

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-12 Thread Kong, Fande
On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams wrote: > You seem to have two levels here and 3M eqs on the fine grid and 37 on > the coarse grid. I don't understand that. > > You are also calling the AMG setup a lot, but not spending much time > in it. Try running with -info and

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-12 Thread Kong, Fande
Hi Mark, Thanks for your reply. On Wed, Apr 12, 2017 at 9:16 AM, Mark Adams wrote: > The problem comes from setting the number of MG levels (-pc_mg_levels 2). > Not your fault, it looks like the GAMG logic is faulty, in your version at > least. > What I want is that GAMG

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-12 Thread Mark Adams
The problem comes from setting the number of MG levels (-pc_mg_levels 2). Not your fault, it looks like the GAMG logic is faulty, in your version at least. GAMG will force the coarsest grid to one processor by default, in newer versions. You can override the default with:

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-10 Thread Kong, Fande
On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams wrote: > You seem to have two levels here and 3M eqs on the fine grid and 37 on > the coarse grid. 37 is on the sub domain. rows=18145, cols=18145 on the entire coarse grid. > I don't understand that. > > You are also calling

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-09 Thread Mark Adams
You seem to have two levels here and 3M eqs on the fine grid and 37 on the coarse grid. I don't understand that. You are also calling the AMG setup a lot, but not spending much time in it. Try running with -info and grep on "GAMG". On Fri, Apr 7, 2017 at 5:29 PM, Kong, Fande

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Kong, Fande
On Fri, Apr 7, 2017 at 3:52 PM, Barry Smith wrote: > > > On Apr 7, 2017, at 4:46 PM, Kong, Fande wrote: > > > > > > > > On Fri, Apr 7, 2017 at 3:39 PM, Barry Smith wrote: > > > > Using Petsc Release Version 3.7.5, unknown > > > >

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Barry Smith
> On Apr 7, 2017, at 4:46 PM, Kong, Fande wrote: > > > > On Fri, Apr 7, 2017 at 3:39 PM, Barry Smith wrote: > > Using Petsc Release Version 3.7.5, unknown > >So are you using the release or are you using master branch? > > I am working on the

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Barry Smith
Using Petsc Release Version 3.7.5, unknown So are you using the release or are you using master branch? If you use master the ASM will be even faster. > On Apr 7, 2017, at 4:29 PM, Kong, Fande wrote: > > Thanks, Barry. > > It works. > > GAMG is three times

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Kong, Fande
Thanks, Barry. It works. GAMG is three times better than ASM in terms of the number of linear iterations, but it is five times slower than ASM. Any suggestions to improve the performance of GAMG? Log files are attached. Fande, On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-06 Thread Barry Smith
> On Apr 6, 2017, at 9:39 AM, Kong, Fande wrote: > > Thanks, Mark and Barry, > > It works pretty wells in terms of the number of linear iterations (using > "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am > using the two-level method via

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-06 Thread Mark Adams
On Thu, Apr 6, 2017 at 7:39 AM, Kong, Fande wrote: > Thanks, Mark and Barry, > > It works pretty wells in terms of the number of linear iterations (using > "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am > using the two-level method via "-pc_mg_levels

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-06 Thread Kong, Fande
Thanks, Mark and Barry, It works pretty wells in terms of the number of linear iterations (using "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am using the two-level method via "-pc_mg_levels 2". The reason why the compute time is larger than other preconditioning options

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-06 Thread Mark Adams
On Tue, Apr 4, 2017 at 10:10 AM, Barry Smith wrote: > >> Does this mean that GAMG works for the symmetrical matrix only? > > No, it means that for non symmetric nonzero structure you need the extra > flag. So use the extra flag. The reason we don't always use the flag is >

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-04 Thread Barry Smith
> Does this mean that GAMG works for the symmetrical matrix only? No, it means that for non symmetric nonzero structure you need the extra flag. So use the extra flag. The reason we don't always use the flag is because it adds extra cost and isn't needed if the matrix already has a symmetric
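[Editorial note: an illustrative example of the flag Barry is referring to, as quoted earlier in this thread.]

    -pc_type gamg -pc_gamg_sym_graph true

The flag symmetrizes the graph used for aggregation, which adds some cost but is needed when the nonzero structure itself is unsymmetric.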

Re: [petsc-users] GAMG huge hash being requested

2017-02-21 Thread Lawrence Mitchell
Hi Justin, On 21/02/17 06:01, Justin Chang wrote: > Okay thanks Now done. Cheers, Lawrence

Re: [petsc-users] GAMG huge hash being requested

2017-02-20 Thread Justin Chang
Okay thanks On Sun, Feb 19, 2017 at 2:32 PM, Lawrence Mitchell < lawrence.mitch...@imperial.ac.uk> wrote: > > > > On 19 Feb 2017, at 18:55, Justin Chang wrote: > > > > Okay, it doesn't seem like the Firedrake fork (which is what I am using) > has this latest fix. Lawrence,

Re: [petsc-users] GAMG huge hash being requested

2017-02-19 Thread Lawrence Mitchell
> On 19 Feb 2017, at 18:55, Justin Chang wrote: > > Okay, it doesn't seem like the Firedrake fork (which is what I am using) has > this latest fix. Lawrence, when do you think it's possible you folks can > incorporate these fixes I'll fast forward our branch pointer on

Re: [petsc-users] GAMG huge hash being requested

2017-02-19 Thread Justin Chang
Okay, it doesn't seem like the Firedrake fork (which is what I am using) has this latest fix. Lawrence, when do you think it's possible you folks can incorporate these fixes? On Sun, Feb 19, 2017 at 8:56 AM, Matthew Knepley wrote: > Satish fixed this error. I believe the fix

Re: [petsc-users] GAMG huge hash being requested

2017-02-19 Thread Matthew Knepley
Satish fixed this error. I believe the fix is now in master. Thanks, Matt On Sun, Feb 19, 2017 at 3:05 AM, Justin Chang wrote: > Hi all, > > So I am attempting to employ the DG1 finite element method on the poisson > equation using GAMG. When I attempt to solve a

Re: [petsc-users] GAMG

2016-11-01 Thread Mark Adams
> > > The labeling is right, I re-checked. That's the funny part, I can't get > GAMG to work with PCSetCoordinates (which BTW, I think its documentation > does not address the issue of DOF ordering). > Yep, this needs to be made clear. I guess people do actually look at the manual so I will add

Re: [petsc-users] GAMG

2016-10-31 Thread Jeremy Theler
On Mon, 2016-10-31 at 08:44 -0600, Jed Brown wrote: > > After understanding Matt's point about the near nullspace (and reading > > some interesting comments from Jed on scicomp stackexchange) I did build > > my own vectors (I had to take a look at MatNullSpaceCreateRigidBody() > > because I found

Re: [petsc-users] GAMG

2016-10-31 Thread Jed Brown
"Kong, Fande" writes: > If the boundary values are not zero, no way to maintain symmetry unless we > reduce the extra part of the matrix. Not updating the columns is better in > this situation. The inhomogeneity of the boundary condition has nothing to do with operator

Re: [petsc-users] GAMG

2016-10-31 Thread Matthew Knepley
On Mon, Oct 31, 2016 at 10:29 AM, Kong, Fande wrote: > On Mon, Oct 31, 2016 at 8:44 AM, Jed Brown wrote: > >> Jeremy Theler writes: >> >> > Hi again >> > >> > I have been wokring on these issues. Long story short: it is about the >> >

Re: [petsc-users] GAMG

2016-10-31 Thread Kong, Fande
On Mon, Oct 31, 2016 at 8:44 AM, Jed Brown wrote: > Jeremy Theler writes: > > > Hi again > > > > I have been working on these issues. Long story short: it is about the > > ordering of the unknown fields in the vector. > > > > Long story: > > The physics

Re: [petsc-users] GAMG

2016-10-31 Thread Jed Brown
Jeremy Theler writes: > Hi again > > I have been working on these issues. Long story short: it is about the > ordering of the unknown fields in the vector. > > Long story: > The physics is a linear elastic problem, you can see it does work with LU > over a simple cube (warp

Re: [petsc-users] GAMG

2016-10-28 Thread Mark Adams
> > >> > AMG (the agglomeration kind) needs to know the near null space of your > operator in order > to work. You have an elasticity problem (I think), and if you take that > operator without boundary > conditions, the energy is invariant to translations and rotations. The > space of translations
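[Editorial note: a sketch of how the near null space is usually attached for 3D elasticity, assuming the nodal coordinates are in a Vec with block size 3 and A is the assembled stiffness matrix; names are illustrative.]

    MatNullSpace rigid;
    PetscCall(MatNullSpaceCreateRigidBody(coords, &rigid));  /* 3 translations + 3 rotations */
    PetscCall(MatSetNearNullSpace(A, rigid));
    PetscCall(MatNullSpaceDestroy(&rigid));

PCSetCoordinates() builds essentially the same space internally, which is why either route works when every node carries the full displacement block.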

Re: [petsc-users] GAMG

2016-10-28 Thread Matthew Knepley
On Fri, Oct 28, 2016 at 8:38 AM, Jeremy Theler wrote: > > > > > If I do not call PCSetCoordinates() the error goes away but > > convergence > > is slow. > > Is it possible that your coordinates lie on a 2D surface? All this > > does is make the 6

Re: [petsc-users] GAMG

2016-10-28 Thread Mark Adams
So this is a fully 3D problem, or is it a very flat disc? What is the worst aspect ratio (or whatever) of an element, approximately. That is, is this a bad mesh? You might want to start with a simple problem like a cube. The eigen estimates (Smooth P0: max eigen=1.09e+01) are huge and they
