Hi Jed
Thanks for your reply. I have sent the log files to petsc-ma...@mcs.anl.gov.
Zisheng
From: Jed Brown
Sent: Tuesday, June 27, 2023 1:02 PM
To: Zisheng Ye ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] GAMG and Hypre preconditioner
Zisheng Ye via petsc-users writes:
> Dear PETSc Team
>
> We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG
> and Hypre preconditioners. We have encountered several issues on which we
> would like to ask for your suggestions.
>
> First, we have a couple of questions when
Dear PETSc Team
We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG and
Hypre preconditioners. We have encountered several issues on which we would
like to ask for your suggestions.
First, we have a couple of questions when working with a single MPI rank:
1. We have
On Tue, Mar 28, 2023 at 12:38 PM Blaise Bourdin wrote:
>
>
> On Mar 27, 2023, at 9:11 PM, Mark Adams wrote:
>
> Yes, the eigen estimates are converging slowly.
>
> BTW, have you tried hypre? It is a good solver (lots and lots more woman-years)
> These eigen estimates are conceptually simple, but
This suite has been good for my solid mechanics solvers. (It's written here as
a coarse grid solver because we do matrix-free p-MG first, but you can use it
directly.)
https://github.com/hypre-space/hypre/issues/601#issuecomment-1069426997
Blaise Bourdin writes:
> On Mar 27, 2023, at 9:11
On Mar 27, 2023, at 9:11 PM, Mark Adams wrote:
Yes, the eigen estimates are converging slowly.
BTW, have you tried hypre? It is a good solver (lots and lots more woman-years)
These eigen estimates are conceptually simple, but they can lead to problems like this (hypre and an
Yes, the eigen estimates are converging slowly.
BTW, have you tried hypre? It is a good solver (lots and lots more woman-years)
These eigen estimates are conceptually simple, but they can lead to
problems like this (hypre and an eigen-estimate-free smoother).
But try this (good to have options
Try -pc_gamg_reuse_interpolation 0. I thought this was disabled by default, but
I see pc_gamg->reuse_prol = PETSC_TRUE in the code.
Blaise Bourdin writes:
> On Mar 24, 2023, at 3:21 PM, Mark Adams wrote:
>
> * Do you set:
>
> PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE));
>
>
On Mar 24, 2023, at 3:21 PM, Mark Adams wrote:
* Do you set:
PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE));
PetscCall(MatSetOption(Amat, MAT_SPD_ETERNAL, PETSC_TRUE));
Yes
Do that to get CG Eigen estimates.
* Do you set:
PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE));
PetscCall(MatSetOption(Amat, MAT_SPD_ETERNAL, PETSC_TRUE));
Do that to get CG Eigen estimates. Outright failure is usually caused by a
bad Eigen estimate.
-pc_gamg_esteig_ksp_monitor_singular_value
Will print out the
You can use -pc_gamg_threshold .02 to slow the coarsening, and either use a
stronger smoother or increase the number of iterations used for estimation (or
increase the tolerance). I assume your system is SPD and you've set the
near-null space.
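To make that last point concrete, here is a minimal sketch of attaching a
rigid-body near-null space for a 3D elasticity problem (my own illustration,
not from the thread; "coords" is a hypothetical Vec of interlaced nodal
coordinates and "Amat" the assembled operator):

#include <petscmat.h>

/* Sketch: build rigid-body modes (translations + rotations) from nodal
   coordinates and attach them as the near-null space so GAMG can construct
   good aggregates for elasticity. */
static PetscErrorCode AttachElasticityNearNullSpace(Mat Amat, Vec coords)
{
  MatNullSpace nearnull;

  PetscFunctionBeginUser;
  PetscCall(MatNullSpaceCreateRigidBody(coords, &nearnull));
  PetscCall(MatSetNearNullSpace(Amat, nearnull));
  PetscCall(MatNullSpaceDestroy(&nearnull));
  PetscFunctionReturn(0);
}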
Blaise Bourdin writes:
> Hi,
>
> I am having issues with GAMG for
Hi,
I am having issues with GAMG for some very ill-conditioned 2D linearized
elasticity problems (sharp variation of elastic moduli with thin regions of
nearly incompressible material). I use snes_type newtonls, linesearch_type cp,
and pc_type gamg without any further options. pc_type Jacobi
On Mon, Dec 26, 2022 at 10:29 AM Edoardo Centofanti <
edoardo.centofant...@universitadipavia.it> wrote:
> Thank you for your answer. Can you provide me the full path of the example
> you have in mind? The one I found does not seem to exploit the algebraic
> multigrid, but just the geometric one.
Thank you for your answer. Can you provide me the full path of the example
you have in mind? The one I found does not seem to exploit the algebraic
multigrid, but just the geometric one.
Thanks,
Edoardo
Il giorno lun 26 dic 2022 alle ore 15:39 Matthew Knepley
ha scritto:
> On Mon, Dec 26, 2022
On Mon, Dec 26, 2022 at 4:41 AM Edoardo Centofanti <
edoardo.centofant...@universitadipavia.it> wrote:
> Hi PETSc Users,
>
> I am experiencing some issues with the GAMG preconditioner when used with
> GPU.
> In particular, it seems to go out of memory very easily (around 5000
> dofs are enough to
Hi PETSc Users,
I am experiencing some issues with the GAMG preconditioner when used with
GPU.
In particular, it seems to go out of memory very easily (around 5000
dofs are enough to make it throw the "[0]PETSC ERROR: cuda error 2
(cudaErrorMemoryAllocation) : out of memory" error).
I have these
Do you have slip/symmetry boundary conditions, where some components are
constrained? In that case, there is no uniform block size and I think you'll
need DMPlexCreateRigidBody() and MatSetNearNullSpace().
The PCSetCoordinates() code won't work for non-constant block size.
-pc_type gamg should
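A hedged sketch of what the DMPlexCreateRigidBody()/MatSetNearNullSpace()
suggestion could look like ("dm" and "A" are placeholder names; in recent
PETSc DMPlexCreateRigidBody takes a field argument, older versions omit it):

#include <petscdmplex.h>

/* Sketch: build rigid-body modes from the DMPlex and attach them as the
   near-null space of the assembled operator A. */
static PetscErrorCode AttachRigidBodyFromPlex(DM dm, Mat A)
{
  MatNullSpace rigid;

  PetscFunctionBeginUser;
  PetscCall(DMPlexCreateRigidBody(dm, 0, &rigid)); /* field 0; drop the 0 on older PETSc */
  PetscCall(MatSetNearNullSpace(A, rigid));
  PetscCall(MatNullSpaceDestroy(&rigid));
  PetscFunctionReturn(0);
}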
Hi,
I am getting close to finishing the port of a code from petsc 3.3 / sieve to main / dmplex, but am now encountering difficulties
I am reasonably sure that the Jacobian and residual are correct. The codes handle boundary conditions differently (MatZeroRowCols vs dmplex constraints) so it
Division
Fermi National Accelerator Laboratory
s-sajid-ali.github.io
From: Mark Adams
Sent: Thursday, February 10, 2022 8:47 PM
To: Junchao Zhang
Cc: Sajid Ali Syed ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] GAMG
formation to help with debugging this.
>>
>> Thank You,
>> Sajid Ali (he/him) | Research Associate
>> Scientific Computing Division
>> Fermi National Accelerator Laboratory
>> s-sajid-ali.github.io
>>
>> ------
>> From: Junc
From: Junchao Zhang
> Sent: Thursday, February 10, 2022 1:43 PM
> To: Sajid Ali Syed
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] GAMG crash during setup when using multiple
> GPUs
>
> Also, try "-use_gpu_aware_mpi 0" to see if there is a dif
Also, try "-use_gpu_aware_mpi 0" to see if there is a difference.
--Junchao Zhang
On Thu, Feb 10, 2022 at 1:40 PM Junchao Zhang
wrote:
> Did it fail without GPU at 64 MPI ranks?
>
> --Junchao Zhang
>
>
> On Thu, Feb 10, 2022 at 1:22 PM Sajid Ali Syed wrote:
>
>> Hi PETSc-developers,
>>
>>
Did it fail without GPU at 64 MPI ranks?
--Junchao Zhang
On Thu, Feb 10, 2022 at 1:22 PM Sajid Ali Syed wrote:
> Hi PETSc-developers,
>
> I’m seeing the following crash that occurs during the setup phase of the
> preconditioner when using multiple GPUs. The relevant error trace is shown
>
I think your run with -pc_type mg is defining a multigrid hierarchy with
only a single level. (A single-level mg PC would also explain the 100+
iterations required to converge.) The gamg configuration is definitely
coarsening your problem and has a deeper hierarchy. A single level
hierarchy will
As Matt said GAMG uses more memory.
But these numbers look odd: max == min and total = max + min, for both
cases.
I would use
https://petsc.org/release/docs/manualpages/Sys/PetscMallocDump.html to look
at this more closely.
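For reference, a minimal way to call it from code (my sketch; assumes
petscsys.h is included and that PETSc's malloc logging is active, e.g. via
-malloc_debug, otherwise nothing is reported):

/* Sketch: dump all memory currently allocated through PetscMalloc to stdout.
   Call it, e.g., right after PCSetUp() to see what GAMG is holding. */
PetscCall(PetscMallocDump(PETSC_STDOUT));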
On Wed, Nov 24, 2021 at 1:03 PM Matthew Knepley wrote:
> On Wed, Nov
On Wed, Nov 24, 2021 at 12:26 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:
> Hello,
>
>
>
> I would like to understand why more memory is consumed by -pc_type gamg
> compared to -pc_type mg for the same problem size
>
>
>
> ksp/ksp/tutorial: ./ex45
Hello,
I would like to understand why more memory is consumed by -pc_type gamg
compared to -pc_type mg for the same problem size
ksp/ksp/tutorial: ./ex45 -da_grid_x 368 -da_grid_x 368 -da_grid_x 368 -ksp_type cg -pc_type mg
Maximum (over computational time) process memory: total
On Sun, Oct 17, 2021 at 9:04 AM Mark Adams wrote:
> Hi Daniel, [this is a PETSc users list question so let me move it there]
>
> The behavior that you are seeing is a bit odd but not surprising.
>
> First, you should start with simple problems and get AMG (you might want
> to try this exercise
Hi Daniel, [this is a PETSc users list question so let me move it there]
The behavior that you are seeing is a bit odd but not surprising.
First, you should start with simple problems and get AMG (you might want to
try this exercise with hypre as well: --download-hypre and use -pc_type
hypre, or
Please send -log_view for the ilu and GAMG case.
Barry
> On Apr 12, 2021, at 10:34 AM, Milan Pelletier via petsc-users
> wrote:
>
> Dear all,
>
> I am currently trying to use PETSc with CG solver and GAMG preconditioner.
> I have started with the following set of parameters:
>
Can you briefly describe your application?
AMG usually only works well for straightforward elliptic problems, at least
right out of the box.
On Mon, Apr 12, 2021 at 11:35 AM Milan Pelletier via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Dear all,
>
> I am currently trying to use PETSc
Dear all,
I am currently trying to use PETSc with CG solver and GAMG preconditioner.
I have started with the following set of parameters:
-ksp_type cg
-pc_type gamg
-pc_gamg_agg_nsmooths 1
-pc_gamg_threshold 0.02
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type sor
-mg_levels_ksp_max_it 2
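For completeness, a hedged sketch of setting the same kind of configuration
from code rather than the command line ("ksp" is a hypothetical, already
created KSP inside a program that includes petscksp.h; option names as above):

/* Sketch: CG + GAMG configured programmatically; the remaining GAMG/level
   options are pushed into the options database and picked up by
   KSPSetFromOptions(). */
PC pc;
PetscCall(KSPSetType(ksp, KSPCG));
PetscCall(KSPGetPC(ksp, &pc));
PetscCall(PCSetType(pc, PCGAMG));
PetscCall(PetscOptionsSetValue(NULL, "-pc_gamg_agg_nsmooths", "1"));
PetscCall(PetscOptionsSetValue(NULL, "-pc_gamg_threshold", "0.02"));
PetscCall(PetscOptionsSetValue(NULL, "-mg_levels_ksp_type", "chebyshev"));
PetscCall(PetscOptionsSetValue(NULL, "-mg_levels_pc_type", "sor"));
PetscCall(PetscOptionsSetValue(NULL, "-mg_levels_ksp_max_it", "2"));
PetscCall(KSPSetFromOptions(ksp));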
On Tue, Mar 17, 2020 at 1:42 PM Sajid Ali
wrote:
> Hi Mark/Jed,
>
> The problem I'm solving is scalar Helmholtz in 2D, (u_t = A*u_xx + A*u_yy
> + F_t*u, with the familiar 5-point central difference as the derivative
> approximation,
>
I assume this is definite Helmholtz. The time integrator
Hi Mark/Jed,
The problem I'm solving is scalar Helmholtz in 2D, (u_t = A*u_xx + A*u_yy +
F_t*u, with the familiar 5-point central difference as the derivative
approximation, I'm also attaching the result of -info | grep GAMG if that
helps). My goal is to get weak and strong scaling results for
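For readers less familiar with it, the "5-point central difference" referred
to above is the standard textbook approximation on a uniform grid with
spacing h (my restatement, not from the message):

u_xx + u_yy ~ (u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1) - 4*u(i,j)) / h^2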
Sajid Ali writes:
> Hi PETSc-developers,
>
> As per the manual, the ideal gamg parameters are those which result in
> MatPtAP time being roughly similar to (or just slightly larger) than KSP
> solve times. The way to adjust this is by changing the threshold for
> coarsening and/or squaring the
Subject: Re: [petsc-users] GAMG scalability for serendipity 20-node hexahedra
I get growth with Q2 elements also. I've never seen anyone report scaling of
high order elements with generic AMG.
First, discretizations are very important for AMG solvers. All optimal solvers
really. I've ne
I get growth with Q2 elements also. I've never seen anyone report scaling
of high order elements with generic AMG.
First, discretizations are very important for AMG solvers. All optimal
solvers really. I've never looked at serendipity elements. It might be a
good idea to try Q2 as well.
SNES ex56
Dear PETSc team,
I have run a simple weak scalability test based on a canonical 3D elasticity
problem: a cube, meshed with 8-node hexahedra, clamped on one of its faces and
subjected to a pressure load on the opposite face.
I am using the FGMRES ksp with GAMG as preconditioner. I have set the
Mark Lohry writes:
> It seems to me with these semi-implicit methods the CFL limit is still so
> close to the explicit limit (that paper stops at 30), I don't really see
> the purpose unless you're running purely incompressible? That's just my
> ignorance speaking though. I'm currently running
Mark Lohry via petsc-users writes:
> For what it's worth, I'm regularly solving much larger problems (1M-100M
> unknowns, unsteady) with this discretization and AMG setup on 500+ cores
> with impressively great convergence, dramatically better than ILU/ASM. This
> just happens to be the first
>
>
>
> Any thoughts here? Is there anything obviously wrong with my setup?
>
Fast and robust solvers for NS require specialized methods that are not
provided in PETSc and the methods tend to require tighter integration with
the meshing and discretization than the algebraic interface supports.
I
On Tue, Dec 25, 2018 at 12:10 AM Jed Brown wrote:
> Mark Adams writes:
>
> > On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote:
> >
> >> Mark Adams via petsc-users writes:
> >>
> >> > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private
> in
> >> > attached, NB, this is code
Mark Adams writes:
> On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote:
>
>> Mark Adams via petsc-users writes:
>>
>> > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private in
>> > attached, NB, this is code that I wrote in grad school). It is memory
>> > efficient and simple,
On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote:
> Mark Adams via petsc-users writes:
>
> > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private in
> > attached, NB, this is code that I wrote in grad school). It is memory
> > efficient and simple, just four nested loops i,j,I,J:
Mark Adams via petsc-users writes:
> Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private in
> attached, NB, this is code that I wrote in grad school). It is memory
> efficient and simple, just four nested loops i,j,I,J: C(I,J) =
> P(i,I)*A(i,j)*P(j,J). In eyeballing the numbers
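A hedged, dense-array illustration of the four nested loops described above
(real RAP/PtAP code operates on sparse, distributed matrices; all names here
are hypothetical):

/* C(I,J) = sum_{i,j} P(i,I) * A(i,j) * P(j,J) -- the Galerkin coarse operator,
   written as the plain four-loop triple product for clarity.
   n = fine-grid size, N = coarse-grid size. */
void galerkin_rap_dense(int n, int N, const double A[n][n],
                        const double P[n][N], double C[N][N])
{
  for (int I = 0; I < N; I++)
    for (int J = 0; J < N; J++) {
      C[I][J] = 0.0;
      for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
          C[I][J] += P[i][I] * A[i][j] * P[j][J];
    }
}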
Wow, this is an old thread.
Sorry if I sound like an old fart talking about the good old days but I
originally did RAP in Prometheus, in a non-work-optimal way that might be
of interest. Not hard to implement. I bring this up because we continue to
struggle with this damn thing. I think this
OK, so this thread has drifted, see title :)
On Fri, Dec 21, 2018 at 10:01 PM Fande Kong wrote:
> Sorry, hit the wrong button.
>
>
>
> On Fri, Dec 21, 2018 at 7:56 PM Fande Kong wrote:
>
>>
>>
>> On Fri, Dec 21, 2018 at 9:44 AM Mark Adams wrote:
>>
>>> Also, you mentioned that you are using
Sorry, hit the wrong button.
On Fri, Dec 21, 2018 at 7:56 PM Fande Kong wrote:
>
>
> On Fri, Dec 21, 2018 at 9:44 AM Mark Adams wrote:
>
>> Also, you mentioned that you are using 10 levels. This is very strange
>> with GAMG. You can run with -info and grep on GAMG to see the sizes and the
>>
Thanks so much, Hong,
If there are any new findings, please let me know.
On Fri, Dec 21, 2018 at 9:36 AM Zhang, Hong wrote:
> Fande:
> I will explore it and get back to you.
> Does anyone know how to profile memory usage?
>
We are using gperftools
On Fri, Dec 21, 2018 at 12:55 PM Zhang, Hong wrote:
> Matt:
>
>> Does anyone know how to profile memory usage?
>>>
>>
>> The best serial way is to use Massif, which is part of valgrind. I think
>> it might work in parallel if you
>> only look at one process at a time.
>>
>
> Can you give an
Matt:
Does anyone know how to profile memory usage?
The best serial way is to use Massif, which is part of valgrind. I think it
might work in parallel if you
only look at one process at a time.
Can you give an example of using Massif?
For example, how to use it on
On Fri, Dec 21, 2018 at 11:36 AM Zhang, Hong via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Fande:
> I will explore it and get back to you.
> Does anyone know how to profile memory usage?
>
The best serial way is to use Massif, which is part of valgrind. I think it
might work in parallel if
Also, you mentioned that you are using 10 levels. This is very strange with
GAMG. You can run with -info and grep on GAMG to see the sizes and the
number of non-zeros per level. You should coarsen at a rate of about 2^D to
3^D with GAMG (with 10 levels this would imply a very large fine grid
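As an illustrative back-of-envelope check (my numbers, not from the message):
with a typical 3D coarsening factor of about 2^3 = 8 per level, ten levels
would require a fine grid roughly

n_fine ~ n_coarse * 8^9 ~ n_coarse * 1.3e8

times larger than the coarsest grid, which is why ten GAMG levels usually
signal that coarsening is proceeding far too slowly.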
Fande:
I will explore it and get back to you.
Does anyone know how to profile memory usage?
Hong
Thanks, Hong,
I just briefly went through the code. I was wondering if it is possible to
destroy "c->ptap" (that caches a lot of intermediate data) to release the
memory after the coarse matrix is
Thanks, Hong,
I just briefly went through the code. I was wondering if it is possible to
destroy "c->ptap" (that caches a lot of intermediate data) to release the
memory after the coarse matrix is assembled. I understand you may still
want to reuse these data structures by default but for my
We use the nonscalable implementation as default, and switch to scalable for
matrices on finer grids. You may use the option '-matptap_via scalable' to force
the scalable PtAP implementation for all PtAP. Let me know if it works.
Hong
On Thu, Dec 20, 2018 at 8:16 PM Smith, Barry F.
See MatPtAP_MPIAIJ_MPIAIJ(). It switches to scalable automatically for
"large" problems, which is determined by some heuristic.
Barry
> On Dec 20, 2018, at 6:46 PM, Fande Kong via petsc-users
> wrote:
>
>
>
> On Thu, Dec 20, 2018 at 4:43 PM Zhang, Hong wrote:
> Fande:
> Hong,
>
> On Dec 20, 2018, at 5:51 PM, Zhang, Hong via petsc-users
> wrote:
>
> Fande:
> Hong,
> Thanks for your improvements on PtAP that is critical for MG-type algorithms.
>
> On Wed, May 3, 2017 at 10:17 AM Hong wrote:
> Mark,
> Below is the copy of my email sent to you on Feb 27:
>
> I
Fande:
Hong,
Thanks for your improvements on PtAP that is critical for MG-type algorithms.
On Wed, May 3, 2017 at 10:17 AM Hong wrote:
Mark,
Below is the copy of my email sent to you on Feb 27:
I implemented scalable MatPtAP and did comparisons of three
> On Nov 15, 2018, at 1:02 PM, Mark Adams wrote:
>
> There is a lot of load imbalance in VecMAXPY also. The partitioning could be
> bad and if not it's the machine.
>
> On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users
> wrote:
>
> Something is odd about your
There is a lot of load imbalance in VecMAXPY also. The partitioning could
be bad and if not it's the machine.
On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users <
petsc-users@mcs.anl.gov> wrote:
>
> Something is odd about your configuration. Just consider the time for
> VecMAXPY
Something is odd about your configuration. Just consider the time for
VecMAXPY which is an embarrassingly parallel operation. On 1000 MPI processes
it produces
Time
On Thu, Nov 15, 2018 at 11:52 AM Karin via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Dear PETSc team,
>
> I am solving a linear transient dynamic problem, based on a discretization
> with finite elements. To do that, I am using FGMRES with GAMG as a
> preconditioner. I consider here 10 time
Dear PETSc team,
I am solving a linear transient dynamic problem, based on a discretization
with finite elements. To do that, I am using FGMRES with GAMG as a
preconditioner. I consider here 10 time steps.
The problem has around 118e6 dof and I am running on 1000, 1500 and 2000
procs. So I have
On Thu, Nov 9, 2017 at 2:19 PM, David Nolte wrote:
> Hi Mark,
>
> thanks for clarifying.
> When I wrote the initial question I had somehow overlooked the fact that
> the GAMG standard smoother was Chebyshev while ML uses SOR. All the other
> comments concerning threshold
Hi Mark,
thanks for clarifying.
When I wrote the initial question I had somehow overlooked the fact that
the GAMG standard smoother was Chebyshev while ML uses SOR. All the
other comments concerning threshold etc were based on this mistake.
The following settings work quite well, of course LU is
On Wed, Nov 1, 2017 at 5:45 PM, David Nolte wrote:
> Thanks Barry.
> By simply replacing chebyshev by richardson I get similar performance
> with GAMG and ML
That too (I assumed you were using the same, I could not see cheby in your
view data).
I guess SOR works for the
On Fri, Oct 20, 2017 at 11:10 PM, Barry Smith wrote:
>
> David,
>
>GAMG picks the number of levels based on how the coarsening process etc
> proceeds. You cannot hardwire it to a particular value.
Yes you can. GAMG will respect -pc_mg_levels N, but we don't recommend
>
>
> Now I'd like to try GAMG instead of ML. However, I don't know how to set
> it up to get similar performance.
> The obvious/naive
>
> -pc_type gamg
> -pc_gamg_type agg
>
> # with and without
> -pc_gamg_threshold 0.03
> -pc_mg_levels 3
>
>
This looks fine. I would not set the
Thanks Barry.
By simply replacing chebyshev by richardson I get similar performance
with GAMG and ML (GAMG even slightly faster):
-pc_type gamg
David,
GAMG picks the number of levels based on how the coarsening process etc
proceeds. You cannot hardwire it to a particular value. You can run with -info
to get more info potentially on the decisions GAMG is making.
Barry
> On Oct 20, 2017, at 2:06 PM, David Nolte
PS: I didn't realize at first, it looks as if the -pc_mg_levels 3 option
was not taken into account:
type: gamg
MG: type is MULTIPLICATIVE, levels=1 cycles=v
On 10/20/2017 03:32 PM, David Nolte wrote:
> Dear all,
>
> I have some problems using GAMG as a preconditioner for (F)GMRES.
>
Dear all,
I have some problems using GAMG as a preconditioner for (F)GMRES.
Background: I am solving the incompressible, unsteady Navier-Stokes
equations with a coupled mixed FEM approach, using P1/P1 elements for
velocity and pressure on an unstructured tetrahedron mesh with about
2 million DOFs (and
Mark,
Fixed
https://bitbucket.org/petsc/petsc/commits/68eacb73b84ae7f3fd7363217d47f23a8f967155
Run ex56 gives
mpiexec -n 8 ./ex56 -ne 13 ... -h |grep via
-mattransposematmult_via Algorithmic approach (choose one of)
scalable nonscalable matmatmult (MatTransposeMatMult)
-matmatmult_via
Thanks Hong,
I am not seeing these options with -help ...
On Wed, May 3, 2017 at 10:05 PM, Hong wrote:
> I basically used 'runex56' and set '-ne' to be compatible with np.
> Then I used option
> '-matptap_via scalable'
> '-matptap_via hypre'
> '-matptap_via nonscalable'
>
> I
I basically used 'runex56' and set '-ne' to be compatible with np.
Then I used option
'-matptap_via scalable'
'-matptap_via hypre'
'-matptap_via nonscalable'
I attached a job script below.
In the master branch, I set the default to 'nonscalable' for small to medium size
matrices, and automatically switch to
Hong, the input files do not seem to be accessible. What are the command
line options? (I don't see a "rap" or "scale" in the source).
On Wed, May 3, 2017 at 12:17 PM, Hong wrote:
> Mark,
> Below is the copy of my email sent to you on Feb 27:
>
> I implemented scalable
Mark,
Below is the copy of my email sent to you on Feb 27:
I implemented scalable MatPtAP and did comparisons of three implementations
using ex56.c on alcf cetus machine (this machine has small memory,
1GB/core):
- nonscalable PtAP: use an array of length PN to do dense axpy
- scalable PtAP:
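A hedged sketch of the "dense axpy" idea in the nonscalable variant mentioned
above (hypothetical names; the real MatPtAP works on distributed CSR data):
each sparse row of the product is accumulated into a dense work array whose
length is the global number of coarse columns PN, which is simple and fast but
costs O(PN) memory per process.

/* Accumulate one scaled sparse row (cols/vals, nz entries) into a dense work
   array of length PN: dense[cols[k]] += alpha * vals[k].  The O(PN) work
   array is what makes this variant nonscalable in memory. */
static void dense_axpy_row(int PN, double *dense, double alpha,
                           int nz, const int *cols, const double *vals)
{
  (void)PN; /* length of dense[]; kept only for documentation */
  for (int k = 0; k < nz; k++) dense[cols[k]] += alpha * vals[k];
}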
(Hong), what is the current state of optimizing RAP for scaling?
Nate is driving 3D elasticity problems at scale with GAMG and we are
working out performance problems. They are hitting problems at ~1.5B dof
on a basic Cray (XC30 I think).
Thanks,
Mark
Thanks, Mark,
Now, the total compute time using GAMG is competitive with ASM. Looks like
I could not use something like: "-mg_level_1_ksp_type gmres" because this
option makes the compute time much worse.
Fande,
On Thu, Apr 13, 2017 at 9:14 AM, Mark Adams wrote:
>
>
> On
On Wed, Apr 12, 2017 at 7:04 PM, Kong, Fande wrote:
>
>
> On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams wrote:
>
>> You seem to have two levels here and 3M eqs on the fine grid and 37 on
>> the coarse grid. I don't understand that.
>>
>> You are also calling
On Wed, Apr 12, 2017 at 1:31 PM, Kong, Fande wrote:
> Hi Mark,
>
> Thanks for your reply.
>
> On Wed, Apr 12, 2017 at 9:16 AM, Mark Adams wrote:
>
>> The problem comes from setting the number of MG levels (-pc_mg_levels 2).
>> Not your fault, it looks like
On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams wrote:
> You seem to have two levels here and 3M eqs on the fine grid and 37 on
> the coarse grid. I don't understand that.
>
> You are also calling the AMG setup a lot, but not spending much time
> in it. Try running with -info and
Hi Mark,
Thanks for your reply.
On Wed, Apr 12, 2017 at 9:16 AM, Mark Adams wrote:
> The problem comes from setting the number of MG levels (-pc_mg_levels 2).
> Not your fault, it looks like the GAMG logic is faulty, in your version at
> least.
>
What I want is that GAMG
The problem comes from setting the number of MG levels (-pc_mg_levels 2).
Not your fault, it looks like the GAMG logic is faulty, in your version at
least.
GAMG will force the coarsest grid to one processor by default, in newer
versions. You can override the default with:
On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams wrote:
> You seem to have two levels here and 3M eqs on the fine grid and 37 on
> the coarse grid.
37 is on the sub domain.
rows=18145, cols=18145 on the entire coarse grid.
> I don't understand that.
>
> You are also calling
You seem to have two levels here and 3M eqs on the fine grid and 37 on
the coarse grid. I don't understand that.
You are also calling the AMG setup a lot, but not spending much time
in it. Try running with -info and grep on "GAMG".
On Fri, Apr 7, 2017 at 5:29 PM, Kong, Fande
On Fri, Apr 7, 2017 at 3:52 PM, Barry Smith wrote:
>
> > On Apr 7, 2017, at 4:46 PM, Kong, Fande wrote:
> >
> >
> >
> > On Fri, Apr 7, 2017 at 3:39 PM, Barry Smith wrote:
> >
> > Using Petsc Release Version 3.7.5, unknown
> >
> >
> On Apr 7, 2017, at 4:46 PM, Kong, Fande wrote:
>
>
>
> On Fri, Apr 7, 2017 at 3:39 PM, Barry Smith wrote:
>
> Using Petsc Release Version 3.7.5, unknown
>
>So are you using the release or are you using master branch?
>
> I am working on the
Using Petsc Release Version 3.7.5, unknown
So are you using the release or are you using master branch?
If you use master the ASM will be even faster.
> On Apr 7, 2017, at 4:29 PM, Kong, Fande wrote:
>
> Thanks, Barry.
>
> It works.
>
> GAMG is three times
Thanks, Barry.
It works.
GAMG is three times better than ASM in terms of the number of linear
iterations, but it is five times slower than ASM. Any suggestions to
improve the performance of GAMG? Log files are attached.
Fande,
On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith
> On Apr 6, 2017, at 9:39 AM, Kong, Fande wrote:
>
> Thanks, Mark and Barry,
>
> It works pretty well in terms of the number of linear iterations (using
> "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am
> using the two-level method via
On Thu, Apr 6, 2017 at 7:39 AM, Kong, Fande wrote:
> Thanks, Mark and Barry,
>
> It works pretty well in terms of the number of linear iterations (using
> "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am
> using the two-level method via "-pc_mg_levels
Thanks, Mark and Barry,
It works pretty well in terms of the number of linear iterations (using
"-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am
using the two-level method via "-pc_mg_levels 2". The reason why the
compute time is larger than other preconditioning options
On Tue, Apr 4, 2017 at 10:10 AM, Barry Smith wrote:
>
>> Does this mean that GAMG works for symmetric matrices only?
>
> No, it means that for non symmetric nonzero structure you need the extra
> flag. So use the extra flag. The reason we don't always use the flag is
>
> Does this mean that GAMG works for symmetric matrices only?
No, it means that for non symmetric nonzero structure you need the extra
flag. So use the extra flag. The reason we don't always use the flag is because
it adds extra cost and isn't needed if the matrix already has a symmetric
Hi All,
I am using GAMG to solve a group of coupled diffusion equations, but the
resulting matrix is not symmetric. I got the following error messages:
[0]PETSC ERROR: Petsc has generated inconsistent data
[0]PETSC ERROR: Have un-symmetric graph (apparently). Use
Hi Justin,
On 21/02/17 06:01, Justin Chang wrote:
> Okay thanks
Now done.
Cheers,
Lawrence
Okay thanks
On Sun, Feb 19, 2017 at 2:32 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:
>
>
> > On 19 Feb 2017, at 18:55, Justin Chang wrote:
> >
> > Okay, it doesn't seem like the Firedrake fork (which is what I am using)
> has this latest fix. Lawrence,
> On 19 Feb 2017, at 18:55, Justin Chang wrote:
>
> Okay, it doesn't seem like the Firedrake fork (which is what I am using) has
> this latest fix. Lawrence, when do you think it's possible you folks can
> incorporate these fixes
I'll fast forward our branch pointer on