Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-12 Thread Kong, Fande
On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams  wrote:

> You seem to have two levels here and 3M eqs on the fine grid and 37 on
> the coarse grid. I don't understand that.
>
> You are also calling the AMG setup a lot, but not spending much time
> in it. Try running with -info and grep on "GAMG".
>

I got the following output:

[0] PCSetUp_GAMG(): level 0) N=3020875, n data rows=1, n data cols=1,
nnz/row (ave)=71, np=384
[0] PCGAMGFilterGraph():  100.% nnz after filtering, with threshold 0.,
73.6364 nnz ave. (N=3020875)
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
[0] PCGAMGProlongator_AGG(): New grid 18162 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.978702e+00
min=2.559747e-02 PC=jacobi
[0] PCGAMGCreateLevel_GAMG(): Aggregate processors noop: new_size=384,
neq(loc)=40
[0] PCSetUp_GAMG(): 1) N=18162, n data cols=1, nnz/row (ave)=94, 384 active
pes
[0] PCSetUp_GAMG(): 2 levels, grid complexity = 1.00795
[0] PCSetUp_GAMG(): level 0) N=3020875, n data rows=1, n data cols=1,
nnz/row (ave)=71, np=384
[0] PCGAMGFilterGraph():  100.% nnz after filtering, with threshold 0.,
73.6364 nnz ave. (N=3020875)
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
[0] PCGAMGProlongator_AGG(): New grid 18145 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.978584e+00
min=2.557887e-02 PC=jacobi
[0] PCGAMGCreateLevel_GAMG(): Aggregate processors noop: new_size=384,
neq(loc)=37
[0] PCSetUp_GAMG(): 1) N=18145, n data cols=1, nnz/row (ave)=94, 384 active
pes
[0] PCSetUp_GAMG(): 2 levels, grid complexity = 1.00792
GAMG specific options
PCGAMGGraph_AGG        40 1.0 8.0759e+00 1.0 3.56e+07 2.3 1.6e+06 1.9e+04 7.6e+02  2  0  2  4  2   2  0  2  4  2  1170
PCGAMGCoarse_AGG       40 1.0 7.1698e+01 1.0 4.05e+09 2.3 4.0e+06 5.1e+04 1.2e+03 18 37  5 27  3  18 37  5 27  3 14632
PCGAMGProl_AGG         40 1.0 9.2650e-01 1.2 0.00e+00 0.0 9.8e+05 2.9e+03 9.6e+02  0  0  1  0  2   0  0  1  0  2     0
PCGAMGPOpt_AGG         40 1.0 2.4484e+00 1.0 4.72e+08 2.3 3.1e+06 2.3e+03 1.9e+03  1  4  4  1  4   1  4  4  1  4 51328
GAMG: createProl       40 1.0 8.3786e+01 1.0 4.56e+09 2.3 9.6e+06 2.5e+04 4.8e+03 21 42 12 32 10  21 42 12 32 10 14134
GAMG: partLevel        40 1.0 6.7755e+00 1.1 2.59e+08 2.3 2.9e+06 2.5e+03 1.5e+03  2  2  4  1  3   2  2  4  1  3  9431

>
>
> On Fri, Apr 7, 2017 at 5:29 PM, Kong, Fande  wrote:
> > Thanks, Barry.
> >
> > It works.
> >
> > GAMG is three times better than ASM in terms of the number of linear
> > iterations, but it is five times slower than ASM. Any suggestions to
> improve
> > the performance of GAMG? Log files are attached.
> >
> > Fande,
> >
> > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith  wrote:
> >>
> >>
> >> > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
> >> >
> >> > Thanks, Mark and Barry,
> >> >
> >> > It works pretty well in terms of the number of linear iterations
> (using
> >> > "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I
> am
> >> > using the two-level method via "-pc_mg_levels 2". The reason why the
> compute
> >> > time is larger than other preconditioning options is that a matrix
> free
> >> > method is used in the fine level and in my particular problem the
> function
> >> > evaluation is expensive.
> >> >
> >> > I am using "-snes_mf_operator 1" to turn on the Jacobian-free Newton,
> >> > but I do not think I want to make the preconditioning part
> matrix-free.  Do
> >> > you guys know how to turn off the matrix-free method for GAMG?
> >>
> >>-pc_use_amat false
> >>
> >> >
> >> > Here is the detailed solver:
> >> >
> >> > SNES Object: 384 MPI processes
> >> >   type: newtonls
> >> >   maximum iterations=200, maximum function evaluations=1
> >> >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
> >> >   total number of linear solver iterations=20
> >> >   total number of function evaluations=166
> >> >   norm schedule ALWAYS
> >> >   SNESLineSearch Object:   384 MPI processes
> >> > type: bt
> >> >   interpolation: cubic
> >> >   alpha=1.00e-04
> >> > maxstep=1.00e+08, minlambda=1.00e-12
> >> > tolerances: relative=1.00e-08, absolute=1.00e-15,
> >> > lambda=1.00e-08
> >> > maximum iterations=40
> >> >   KSP Object:   384 MPI processes
> >> > type: gmres
> >> >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt
> >> > Orthogonalization with no iterative refinement
> >> >   GMRES: happy breakdown tolerance 1e-30
> >> > maximum iterations=100, initial guess is zero
> >> > tolerances:  relative=0.001, absolute=1e-50, divergence=1.
> >> > right preconditioning
> >> > using UNPRECONDITIONED norm type for convergence test
> >> >   PC Object:   384 MPI processes
> >> > type: gamg
> >> >   MG: type is MULTIPLICATIVE, levels=2 cycles=v
> >> > Cycles per PCApply=1
> >> > Using Galerkin computed coarse 

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-12 Thread Kong, Fande
Hi Mark,

Thanks for your reply.

On Wed, Apr 12, 2017 at 9:16 AM, Mark Adams  wrote:

> The problem comes from setting the number of MG levels (-pc_mg_levels 2).
> Not your fault, it looks like the GAMG logic is faulty, in your version at
> least.
>

What I want is for GAMG to coarsen the fine matrix once and then stop. I do
not see any benefit to having more levels when the number of processors is
small.


>
> GAMG will force the coarsest grid to one processor by default, in newer
> versions. You can override the default with:
>
> -pc_gamg_use_parallel_coarse_grid_solver
>
> Your coarse grid solver is ASM with 37 equations per process and 512
> processes. That is bad.
>

Why is this bad? Is it because the subdomain problem is too small?


> Note, you could run this on one process to see the proper convergence
> rate.
>

Convergence rate for which part? The coarse solver or the subdomain solver?


> You can fix this with parameters:
>
> >   -pc_gamg_process_eq_limit <50>: Limit (goal) on number of equations
> per process on coarse grids (PCGAMGSetProcEqLim)
> >   -pc_gamg_coarse_eq_limit <50>: Limit on number of equations for the
> coarse grid (PCGAMGSetCoarseEqLim)
>
> If you really want two levels then set something like
> -pc_gamg_coarse_eq_limit 18145 (or higher).
>


Maybe something like making the coarse problem 1/8 as large as the original
problem? Otherwise, this number is just problem-dependent.



> You can run with -info and grep on GAMG and you will see meta-data for each
> level. You should see "npe=1" for the coarsest, last, grid. Or use a
> parallel direct solver.
>

I will try.


>
> Note, you should not see much degradation as you increase the number of
> levels. 18145 eqs on a 3D problem will probably be noticeable. I generally
> aim for about 3000.
>

It should be fine as long as the coarse problem is solved by a parallel
solver.
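
For reference, a rough sketch of what a parallel coarse solve might look like
on the command line (illustrative only: the mg_coarse_ prefix is taken from
the solver output above, and the SuperLU_DIST entry assumes that package is
installed and that this PETSc version still spells the option
-pc_factor_mat_solver_package):

  -pc_gamg_use_parallel_coarse_grid_solver \
  -mg_coarse_ksp_type preonly \
  -mg_coarse_pc_type lu \
  -mg_coarse_pc_factor_mat_solver_package superlu_dist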


Fande,


>
>
> On Mon, Apr 10, 2017 at 12:17 PM, Kong, Fande  wrote:
>
>>
>>
>> On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams  wrote:
>>
>>> You seem to have two levels here and 3M eqs on the fine grid and 37 on
>>> the coarse grid.
>>
>>
>> 37 is on the sub domain.
>>
>>  rows=18145, cols=18145 on the entire coarse grid.
>>
>>
>>
>>
>>
>>> I don't understand that.
>>>
>>> You are also calling the AMG setup a lot, but not spending much time
>>> in it. Try running with -info and grep on "GAMG".
>>>
>>>
>>> On Fri, Apr 7, 2017 at 5:29 PM, Kong, Fande  wrote:
>>> > Thanks, Barry.
>>> >
>>> > It works.
>>> >
>>> > GAMG is three times better than ASM in terms of the number of linear
>>> > iterations, but it is five times slower than ASM. Any suggestions to
>>> improve
>>> > the performance of GAMG? Log files are attached.
>>> >
>>> > Fande,
>>> >
>>> > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith 
>>> wrote:
>>> >>
>>> >>
>>> >> > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
>>> >> >
>>> >> > Thanks, Mark and Barry,
>>> >> >
>>> >> > It works pretty well in terms of the number of linear iterations
>>> (using
>>> >> > "-pc_gamg_sym_graph true"), but it is horrible in the compute time.
>>> I am
>>> >> > using the two-level method via "-pc_mg_levels 2". The reason why
>>> the compute
>>> >> > time is larger than other preconditioning options is that a matrix
>>> free
>>> >> > method is used in the fine level and in my particular problem the
>>> function
>>> >> > evaluation is expensive.
>>> >> >
>>> >> > I am using "-snes_mf_operator 1" to turn on the Jacobian-free
>>> Newton,
>>> >> > but I do not think I want to make the preconditioning part
>>> matrix-free.  Do
>>> >> > you guys know how to turn off the matrix-free method for GAMG?
>>> >>
>>> >>-pc_use_amat false
>>> >>
>>> >> >
>>> >> > Here is the detailed solver:
>>> >> >
>>> >> > SNES Object: 384 MPI processes
>>> >> >   type: newtonls
>>> >> >   maximum iterations=200, maximum function evaluations=1
>>> >> >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
>>> >> >   total number of linear solver iterations=20
>>> >> >   total number of function evaluations=166
>>> >> >   norm schedule ALWAYS
>>> >> >   SNESLineSearch Object:   384 MPI processes
>>> >> > type: bt
>>> >> >   interpolation: cubic
>>> >> >   alpha=1.00e-04
>>> >> > maxstep=1.00e+08, minlambda=1.00e-12
>>> >> > tolerances: relative=1.00e-08, absolute=1.00e-15,
>>> >> > lambda=1.00e-08
>>> >> > maximum iterations=40
>>> >> >   KSP Object:   384 MPI processes
>>> >> > type: gmres
>>> >> >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt
>>> >> > Orthogonalization with no iterative refinement
>>> >> >   GMRES: happy breakdown tolerance 1e-30
>>> >> > maximum iterations=100, initial guess is zero
>>> >> > tolerances:  relative=0.001, absolute=1e-50, divergence=1.
>>> >> > right preconditioning

Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

2017-04-12 Thread Rodrigo Felicio
Thanks, Gaetan, for your suggestions and for running the code… Now I know for
sure there is something wrong with my installation!
Cheers
Rodrigo

From: Gaetan Kenway [mailto:gaet...@gmail.com]
Sent: Wednesday, April 12, 2017 12:20 PM
To: Rodrigo Felicio
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

Maybe try doing
pComm=MPI.COMM_WORLD instead of PETSc.COMM_WORLD. I know it shouldn't
matter, but it's worth a shot. Also then you won't need the tompi4py() I guess.

Gaetan

On Wed, Apr 12, 2017 at 10:17 AM, Gaetan Kenway 
> wrote:
Hi Rodrigo

I just ran your example on Nasa's Pleiades system. Here's what I got:

PBS r459i4n11:~> time mpiexec -n 5 python3.5 another_split_ex.py
number of subcomms = 2.5
petsc rank=2, petsc size=5
sub rank 1/3, color:0
petsc rank=4, petsc size=5
sub rank 2/3, color:0
petsc rank=0, petsc size=5
sub rank 0/3, color:0
KSP Object: 2 MPI processes
  type: cg
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 2 MPI processes
  type: none
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
type: mpidense
rows=100, cols=100
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
petsc rank=1, petsc size=5
sub rank 0/2, color:1
creating A in subcomm 1= 2, 0
petsc rank=3, petsc size=5
sub rank 1/2, color:1
creating A in subcomm 1= 2, 1

real    0m1.236s
user    0m0.088s
sys     0m0.008s

So everything looks like it went through fine. I know this doesn't help you 
directly, but we can confirm at least the python code itself is fine.

Gaetan

On Wed, Apr 12, 2017 at 10:10 AM, Rodrigo Felicio 
> wrote:
Going over my older codes I found out that I have already tried the approach of 
splitting PETSc.COMM_WORLD, but whenever I try to create a matrix using a 
subcommunicator, the program fails. For example, executing the following python
code attached to this msg,  I get the following output

time mpirun -n 5 python another_split_ex.py
petsc rank=2, petsc size=5
petsc rank=3, petsc size=5
petsc rank=0, petsc size=5
petsc rank=1, petsc size=5
petsc rank=4, petsc size=5
number of subcomms = 2
sub rank 0/3, color:0
sub rank 0/2, color:1
sub rank 1/3, color:0
sub rank 1/2, color:1
sub rank 2/3, color:0
creating A in subcomm 1= 2, 1
creating A in subcomm 1= 2, 0
Traceback (most recent call last):
  File "another_split_ex.py", line 43, in 
Traceback (most recent call last):
  File "another_split_ex.py", line 43, in 
A = PETSc.Mat().createDense([n,n], comm=subcomm)
  File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense 
(src/petsc4py.PETSc.c:113792)
A = PETSc.Mat().createDense([n,n], comm=subcomm)
  File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense 
(src/petsc4py.PETSc.c:113792)
  File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create 
(src/petsc4py.PETSc.c:25274)
  File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create 
(src/petsc4py.PETSc.c:25274)
  File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout 
(src/petsc4py.PETSc.c:13666)
  File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout 
(src/petsc4py.PETSc.c:13666)
petsc4py.PETSc.Error: petsc4py.PETSc.Errorerror code 608517
[1] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/psplit.c
: error code 134826245
[3] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/psplit.c


Checking the traceback, all I can say is that when the subcommunicator object 
reaches psplit.c code it gets somehow corrupted, because PetscSplitOwnership() 
fails to retrieve the size of the subcommunicator ... :-(

regards
Rodrigo





Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

2017-04-12 Thread Gaetan Kenway
Maybe try doing
pComm=MPI.COMM_WORLD instead of PETSc.COMM_WORLD. I know it shouldn't
matter, but it's worth a shot. Also then you won't need the tompi4py() I
guess.

Gaetan
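
A minimal sketch of that suggestion (untested; it assumes petsc4py accepts an
mpi4py communicator directly in the comm= argument):

import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc
from mpi4py import MPI

pComm = MPI.COMM_WORLD                # plain mpi4py world communicator
color = pComm.rank % 2                # same coloring as in the example script
subcomm = pComm.Split(color)          # no tompi4py() needed
A = PETSc.Mat().createDense([100, 100], comm=subcomm)  # matrix lives on the subcommunicator
A.setUp()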

On Wed, Apr 12, 2017 at 10:17 AM, Gaetan Kenway  wrote:

> Hi Rodrigo
>
> I just ran your example on Nasa's Pleiades system. Here's what I got:
>
> PBS r459i4n11:~> time mpiexec -n 5 python3.5 another_split_ex.py
> number of subcomms = 2.5
> petsc rank=2, petsc size=5
> sub rank 1/3, color:0
> petsc rank=4, petsc size=5
> sub rank 2/3, color:0
> petsc rank=0, petsc size=5
> sub rank 0/3, color:0
> KSP Object: 2 MPI processes
>   type: cg
>   maximum iterations=1, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>   left preconditioning
>   using DEFAULT norm type for convergence test
> PC Object: 2 MPI processes
>   type: none
>   PC has not been set up so information may be incomplete
>   linear system matrix = precond matrix:
>   Mat Object:   2 MPI processes
> type: mpidense
> rows=100, cols=100
> total: nonzeros=1, allocated nonzeros=1
> total number of mallocs used during MatSetValues calls =0
> petsc rank=1, petsc size=5
> sub rank 0/2, color:1
> creating A in subcomm 1= 2, 0
> petsc rank=3, petsc size=5
> sub rank 1/2, color:1
> creating A in subcomm 1= 2, 1
>
> real    0m1.236s
> user    0m0.088s
> sys     0m0.008s
>
> So everything looks like it went through fine. I know this doesn't help
> you directly, but we can confirm at least the python code itself is fine.
>
> Gaetan
>
> On Wed, Apr 12, 2017 at 10:10 AM, Rodrigo Felicio <
> rodrigo.feli...@iongeo.com> wrote:
>
>> Going over my older codes I found out that I have already tried the
>> approach of splitting PETSc.COMM_WORLD, but whenever I try to create a
>> matrix using a subcommunicator, the program fails. For example, executing
>> the following python code attached to this msg,  I get the following output
>>
>> time mpirun -n 5 python another_split_ex.py
>> petsc rank=2, petsc size=5
>> petsc rank=3, petsc size=5
>> petsc rank=0, petsc size=5
>> petsc rank=1, petsc size=5
>> petsc rank=4, petsc size=5
>> number of subcomms = 2
>> sub rank 0/3, color:0
>> sub rank 0/2, color:1
>> sub rank 1/3, color:0
>> sub rank 1/2, color:1
>> sub rank 2/3, color:0
>> creating A in subcomm 1= 2, 1
>> creating A in subcomm 1= 2, 0
>> Traceback (most recent call last):
>>   File "another_split_ex.py", line 43, in 
>> Traceback (most recent call last):
>>   File "another_split_ex.py", line 43, in 
>> A = PETSc.Mat().createDense([n,n], comm=subcomm)
>>   File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense
>> (src/petsc4py.PETSc.c:113792)
>> A = PETSc.Mat().createDense([n,n], comm=subcomm)
>>   File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense
>> (src/petsc4py.PETSc.c:113792)
>>   File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create
>> (src/petsc4py.PETSc.c:25274)
>>   File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create
>> (src/petsc4py.PETSc.c:25274)
>>   File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout
>> (src/petsc4py.PETSc.c:13666)
>>   File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout
>> (src/petsc4py.PETSc.c:13666)
>> petsc4py.PETSc.Error: petsc4py.PETSc.Errorerror code 608517
>> [1] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/
>> psplit.c
>> : error code 134826245
>> [3] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/
>> psplit.c
>>
>>
>> Checking the traceback, all I can say is that when the subcommunicator
>> object reaches psplit.c code it gets somehow corrupted, because
>> PetscSplitOwnership() fails to retrieve the size of the subcommunicator ...
>> :-(
>>
>> regards
>> Rodrigo
>>
>> 
>>
>>
>>
>>

Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

2017-04-12 Thread Gaetan Kenway
Hi Rodrigo

I just ran your example on Nasa's Pleiades system. Here's what I got:

PBS r459i4n11:~> time mpiexec -n 5 python3.5 another_split_ex.py
number of subcomms = 2.5
petsc rank=2, petsc size=5
sub rank 1/3, color:0
petsc rank=4, petsc size=5
sub rank 2/3, color:0
petsc rank=0, petsc size=5
sub rank 0/3, color:0
KSP Object: 2 MPI processes
  type: cg
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 2 MPI processes
  type: none
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
type: mpidense
rows=100, cols=100
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
petsc rank=1, petsc size=5
sub rank 0/2, color:1
creating A in subcomm 1= 2, 0
petsc rank=3, petsc size=5
sub rank 1/2, color:1
creating A in subcomm 1= 2, 1

real    0m1.236s
user    0m0.088s
sys     0m0.008s

So everything looks like it went through fine. I know this doesn't help you
directly, but we can confirm at least the python code itself is fine.

Gaetan

On Wed, Apr 12, 2017 at 10:10 AM, Rodrigo Felicio <
rodrigo.feli...@iongeo.com> wrote:

> Going over my older codes I found out that I have already tried the
> approach of splitting PETSc.COMM_WORLD, but whenever I try to create a
> matrix using a subcommunicator, the program fails. For example, executing
> the following python code attached to this msg,  I get the following output
>
> time mpirun -n 5 python another_split_ex.py
> petsc rank=2, petsc size=5
> petsc rank=3, petsc size=5
> petsc rank=0, petsc size=5
> petsc rank=1, petsc size=5
> petsc rank=4, petsc size=5
> number of subcomms = 2
> sub rank 0/3, color:0
> sub rank 0/2, color:1
> sub rank 1/3, color:0
> sub rank 1/2, color:1
> sub rank 2/3, color:0
> creating A in subcomm 1= 2, 1
> creating A in subcomm 1= 2, 0
> Traceback (most recent call last):
>   File "another_split_ex.py", line 43, in 
> Traceback (most recent call last):
>   File "another_split_ex.py", line 43, in 
> A = PETSc.Mat().createDense([n,n], comm=subcomm)
>   File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense
> (src/petsc4py.PETSc.c:113792)
> A = PETSc.Mat().createDense([n,n], comm=subcomm)
>   File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense
> (src/petsc4py.PETSc.c:113792)
>   File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create
> (src/petsc4py.PETSc.c:25274)
>   File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create
> (src/petsc4py.PETSc.c:25274)
>   File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout
> (src/petsc4py.PETSc.c:13666)
>   File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout
> (src/petsc4py.PETSc.c:13666)
> petsc4py.PETSc.Error: petsc4py.PETSc.Errorerror code 608517
> [1] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/
> psplit.c
> : error code 134826245
> [3] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/
> psplit.c
>
>
> Checking the traceback, all I can say is that when the subcommunicator
> object reaches psplit.c code it gets somehow corrupted, because
> PetscSplitOwnership() fails to retrieve the size of the subcommunicator ...
> :-(
>
> regards
> Rodrigo
>
> 
>
>
>
>


Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

2017-04-12 Thread Rodrigo Felicio
Going over my older codes I found out that I have already tried the approach of 
splitting PETSc.COMM_WORLD, but whenever I try to create a matrix using a 
subcommunicator, the program fails. For example, executing the following python
code attached to this msg,  I get the following output

time mpirun -n 5 python another_split_ex.py
petsc rank=2, petsc size=5
petsc rank=3, petsc size=5
petsc rank=0, petsc size=5
petsc rank=1, petsc size=5
petsc rank=4, petsc size=5
number of subcomms = 2
sub rank 0/3, color:0
sub rank 0/2, color:1
sub rank 1/3, color:0
sub rank 1/2, color:1
sub rank 2/3, color:0
creating A in subcomm 1= 2, 1
creating A in subcomm 1= 2, 0
Traceback (most recent call last):
  File "another_split_ex.py", line 43, in 
Traceback (most recent call last):
  File "another_split_ex.py", line 43, in 
A = PETSc.Mat().createDense([n,n], comm=subcomm)
  File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense 
(src/petsc4py.PETSc.c:113792)
A = PETSc.Mat().createDense([n,n], comm=subcomm)
  File "PETSc/Mat.pyx", line 390, in petsc4py.PETSc.Mat.createDense 
(src/petsc4py.PETSc.c:113792)
  File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create 
(src/petsc4py.PETSc.c:25274)
  File "PETSc/petscmat.pxi", line 602, in petsc4py.PETSc.Mat_Create 
(src/petsc4py.PETSc.c:25274)
  File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout 
(src/petsc4py.PETSc.c:13666)
  File "PETSc/petscsys.pxi", line 104, in petsc4py.PETSc.Sys_Layout 
(src/petsc4py.PETSc.c:13666)
petsc4py.PETSc.Error: petsc4py.PETSc.Errorerror code 608517
[1] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/psplit.c
: error code 134826245
[3] PetscSplitOwnership() line 86 in ~/mylocal/petsc/src/sys/utils/psplit.c


Checking the traceback, all I can say is that when the subcommunicator object 
reaches psplit.c code it gets somehow corrupted, because PetscSplitOwnership() 
fails to retrieve the size of the subcommunicator ... :-(

regards
Rodrigo





import petsc4py, sys
petsc4py.init(sys.argv)
from petsc4py import PETSc
import mpi4py.rc
mpi4py.rc.finalize=False
#from mpi4py import MPI

import numpy as np


pComm = PETSc.COMM_WORLD
pRank = pComm.getRank()
pSize = pComm.Get_size()
#PETSc.Sys.Print('petsc rank={}, petsc size={}'.format(pRank, pSize))
print('petsc rank={}, petsc size={}'.format(pRank, pSize))

# break into communicators

NumProcsPerSubComm = 2 
color = pRank % NumProcsPerSubComm 
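# note: under Python 3 the division below is true division, which is why the python3.5 run above prints 2.5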
NumSubComms = pSize/NumProcsPerSubComm 
#subcomm = PETSc.Comm(MPI.COMM_WORLD.Split(color))
#subcomm = MPI.COMM_WORLD.Split(color)
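# split the PETSc world communicator (via its mpi4py view) into one subcommunicator per color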
subcomm = pComm.tompi4py().Split(color)
subRank = subcomm.Get_rank()
subSize = subcomm.Get_size()

PETSc.Sys.Print('number of subcomms = {}'.format(NumSubComms))
for i in range(pSize):
    pComm.barrier()
    #PETSc.Sys.Print('sub rank {}, psub size{}'.format(pRank, pSize))
    if (pComm.rank == i):
        print('sub rank {}/{}, color:{}'.format(subRank, subSize, color))


n =100
a = np.random.randn(n, n)
asym = a + a.T

# if subcomm != MPI.COMM_NULL:
if color == 1:
    print('creating A in subcomm {}= {}, {}'.format(color, subSize, subRank))
    A = PETSc.Mat().createDense([n,n], comm=subcomm)
    A.setUp()
    row1, row2 = A.getOwnershipRange()
    PETSc.Sys.Print('rows={},{}'.format(row1, row2))
    for ip in range(row1, row2):
        for k in range(n):
            A.setValues(ip, k, asym[ip, k])

    A.assemble()

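    # set up a CG solve with no preconditioner on the subcommunicator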
    ksp = PETSc.KSP()
    ksp.create(subcomm)

    ksp.setType('cg')
    ksp.getPC().setType('none')

    x, b = A.getVecs()
    x.set(0)
    b.set(1)

    ksp.setOperators(A)
    ksp.setFromOptions()
    ksp.view()
    ksp.solve(b, x)
    PETSc.Sys.Print('||x|| = {}'.format(x.norm()))


Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

2017-04-12 Thread Gaetan Kenway
One other quick note:

Sometimes it appears that mpi4py and petsc4py do not always play nicely
together. I think you want to do the petsc4py import first and then the
mpi4py import. Then you can split MPI.COMM_WORLD all you want and create any
petsc4py objects on the resulting subcommunicators. Or if that doesn't work,
swap the import order. If I recall correctly, you could get a warning on exit
that something in mpi4py wasn't cleaned up correctly.

Gaetan
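
In other words, something along these lines (a sketch, not tested; swap the
two imports if this order still gives the cleanup warning mentioned above):

import sys
import petsc4py                      # petsc4py first ...
petsc4py.init(sys.argv)
from petsc4py import PETSc
from mpi4py import MPI               # ... then mpi4py

color = MPI.COMM_WORLD.rank % 2
subcomm = MPI.COMM_WORLD.Split(color)
v = PETSc.Vec().createMPI(10, comm=subcomm)   # petsc4py object on the split communicator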

On Wed, Apr 12, 2017 at 7:30 AM, Rodrigo Felicio  wrote:

> Thanks Jed and Gaetan.
> I will try that approach of splitting PETSc.COMM_WORLD,  but I still need
> to load mpi4py (probably after PETSc), because PETSc.Comm is very limited,
> i.e., it does not have the split function, for example. My goal is to be
> able to set different matrices and vectors for each subcommunicator, and I
> am guessing that I can create them using something like
> PETSc.Mat().createAij(comm=subcomm)
>
> Kind regards
> Rodrigo
>
> 
>
>
>
>


[petsc-users] dmplex face normals orientation

2017-04-12 Thread Ingo Gaertner
Hello,
I have problems determining the orientation of the face normals of a DMPlex.

I create a DMPlex, for example with DMPlexCreateHexBoxMesh().
Next, I get the face normals using DMPlexComputeGeometryFVM(DM dm, Vec
*cellgeom, Vec *facegeom). facegeom gives the correct normals, but I don't
know how inside/outside is defined with respect to the adjacent cells.

Finally, I iterate over all cells. For each cell I iterate over the
bounding faces (obtained from DMPlexGetCone) and try to obtain their
orientation with respect to the current cell using
DMPlexGetConeOrientation(). However, the six integers for the orientation
are the same for each cell. I expect them to flip between neighbour cells,
because if a face normal is pointing outside for any cell, the same normal
is pointing inside for its neighbour. Apparently I have a misunderstanding
here.

How can I make use of the face normals in facegeom and the orientation
values from DMPlexGetConeOrientation() to get the outside face normals for
each cell?

Thank you
Ingo
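
For reference, a geometric fallback sketched with plain numpy (illustrative
only; it assumes the cell centroid and the face centroid and normal have
already been extracted from the cellgeom/facegeom vectors mentioned above):
compare the stored normal with the vector from the cell centroid to the face
centroid and flip it whenever it points inward.

import numpy as np

def outward_normal(cell_centroid, face_centroid, face_normal):
    # flip the stored face normal if it points toward the cell centroid,
    # so the returned normal always points out of the given cell
    n = np.asarray(face_normal, dtype=float)
    d = np.asarray(face_centroid, dtype=float) - np.asarray(cell_centroid, dtype=float)
    return -n if np.dot(n, d) < 0.0 else n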




Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-12 Thread Mark Adams
The problem comes from setting the number of MG levels (-pc_mg_levels 2).
Not your fault, it looks like the GAMG logic is faulty, in your version at
least.

GAMG will force the coarsest grid to one processor by default, in newer
versions. You can override the default with:

-pc_gamg_use_parallel_coarse_grid_solver

Your coarse grid solver is ASM with 37 equations per process and 512
processes. That is bad. Note, you could run this on one process to see the
proper convergence rate. You can fix this with parameters:

>   -pc_gamg_process_eq_limit <50>: Limit (goal) on number of equations per
process on coarse grids (PCGAMGSetProcEqLim)
>   -pc_gamg_coarse_eq_limit <50>: Limit on number of equations for the
coarse grid (PCGAMGSetCoarseEqLim)

If you really want two levels then set something like
-pc_gamg_coarse_eq_limit 18145 (or higher). You can run with -info and grep on
GAMG and you will see meta-data for each level. You should see "npe=1" for the
coarsest, last, grid. Or use a parallel direct solver.

Note, you should not see much degradation as you increase the number of
levels. 18145 eqs on a 3D problem will probably be noticeable. I generally
aim for about 3000.
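
For illustration only, the kind of option combination being described here
might look like the following (the numeric values are placeholders, not
recommendations, and should be adapted to the problem):

  -pc_type gamg -pc_gamg_sym_graph true \
  -pc_gamg_process_eq_limit 200 \
  -pc_gamg_coarse_eq_limit 3000 \
  -pc_gamg_use_parallel_coarse_grid_solver \
  -info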


On Mon, Apr 10, 2017 at 12:17 PM, Kong, Fande  wrote:

>
>
> On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams  wrote:
>
>> You seem to have two levels here and 3M eqs on the fine grid and 37 on
>> the coarse grid.
>
>
> 37 is on the sub domain.
>
>  rows=18145, cols=18145 on the entire coarse grid.
>
>
>
>
>
>> I don't understand that.
>>
>> You are also calling the AMG setup a lot, but not spending much time
>> in it. Try running with -info and grep on "GAMG".
>>
>>
>> On Fri, Apr 7, 2017 at 5:29 PM, Kong, Fande  wrote:
>> > Thanks, Barry.
>> >
>> > It works.
>> >
>> > GAMG is three times better than ASM in terms of the number of linear
>> > iterations, but it is five times slower than ASM. Any suggestions to
>> improve
>> > the performance of GAMG? Log files are attached.
>> >
>> > Fande,
>> >
>> > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith  wrote:
>> >>
>> >>
>> >> > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
>> >> >
>> >> > Thanks, Mark and Barry,
>> >> >
>> >> > It works pretty well in terms of the number of linear iterations
>> (using
>> >> > "-pc_gamg_sym_graph true"), but it is horrible in the compute time.
>> I am
>> >> > using the two-level method via "-pc_mg_levels 2". The reason why the
>> compute
>> >> > time is larger than other preconditioning options is that a matrix
>> free
>> >> > method is used in the fine level and in my particular problem the
>> function
>> >> > evaluation is expensive.
>> >> >
>> >> > I am using "-snes_mf_operator 1" to turn on the Jacobian-free Newton,
>> >> > but I do not think I want to make the preconditioning part
>> matrix-free.  Do
>> >> > you guys know how to turn off the matrix-free method for GAMG?
>> >>
>> >>-pc_use_amat false
>> >>
>> >> >
>> >> > Here is the detailed solver:
>> >> >
>> >> > SNES Object: 384 MPI processes
>> >> >   type: newtonls
>> >> >   maximum iterations=200, maximum function evaluations=1
>> >> >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
>> >> >   total number of linear solver iterations=20
>> >> >   total number of function evaluations=166
>> >> >   norm schedule ALWAYS
>> >> >   SNESLineSearch Object:   384 MPI processes
>> >> > type: bt
>> >> >   interpolation: cubic
>> >> >   alpha=1.00e-04
>> >> > maxstep=1.00e+08, minlambda=1.00e-12
>> >> > tolerances: relative=1.00e-08, absolute=1.00e-15,
>> >> > lambda=1.00e-08
>> >> > maximum iterations=40
>> >> >   KSP Object:   384 MPI processes
>> >> > type: gmres
>> >> >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt
>> >> > Orthogonalization with no iterative refinement
>> >> >   GMRES: happy breakdown tolerance 1e-30
>> >> > maximum iterations=100, initial guess is zero
>> >> > tolerances:  relative=0.001, absolute=1e-50, divergence=1.
>> >> > right preconditioning
>> >> > using UNPRECONDITIONED norm type for convergence test
>> >> >   PC Object:   384 MPI processes
>> >> > type: gamg
>> >> >   MG: type is MULTIPLICATIVE, levels=2 cycles=v
>> >> > Cycles per PCApply=1
>> >> > Using Galerkin computed coarse grid matrices
>> >> > GAMG specific options
>> >> >   Threshold for dropping small values from graph 0.
>> >> >   AGG specific options
>> >> > Symmetric graph true
>> >> > Coarse grid solver -- level ---
>> >> >   KSP Object:  (mg_coarse_)   384 MPI processes
>> >> > type: preonly
>> >> > maximum iterations=1, initial guess is zero
>> >> > tolerances:  relative=1e-05, absolute=1e-50,
>> divergence=1.
>> >> > left 

Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

2017-04-12 Thread Rodrigo Felicio
Thanks Jed and Gaetan.
I will try that approach of splitting PETSc.COMM_WORLD,  but I still need to 
load mpi4py (probably after PETSc), because PETSc.Comm is very limited, i.e., 
it does not have the split function, for example. My goal is to be able to set 
different matrices and vectors for each subcommunicator, and I am guessing that 
I can create them using something like PETSc.Mat().createAij(comm=subcomm)

Kind regards
Rodrigo
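
A rough sketch of that idea (untested; note that petsc4py spells the call
createAIJ, and the split goes through tompi4py() as discussed earlier in the
thread):

import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

wcomm = PETSc.COMM_WORLD
color = wcomm.getRank() % 2                  # two subcommunicators by rank parity
subcomm = wcomm.tompi4py().Split(color)

n = 100
A = PETSc.Mat().createAIJ([n, n], comm=subcomm)   # sparse matrix on the subcommunicator
A.setUp()
x, b = A.getVecs()                                # matching vectors on the same communicator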







[petsc-users] -log_summary: User Manual outdated

2017-04-12 Thread Joachim Wuttke

pp. 174, 183 in the current User Manual describe option -log_summary.

Running code with this option yields

 WARNING:   -log_summary is being deprecated; switch to -log_view

By the way, neither form of the option appears in the Index.



