Re: [petsc-users] Field data in KSPSetCompute[RHS/Operators]

2017-04-13 Thread Barry Smith

  Store the information in the context argument for these two functions (and 
use the same context argument for both, for simplicity).
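
  For example, something along these lines (a sketch only; the struct and field
names are placeholders, not taken from ex50.c):

    #include <petscksp.h>

    typedef struct {          /* one context shared by both callbacks */
      DM  da;                 /* grid the scalar fields live on */
      Vec density;            /* field gathered from the particles each step */
    } AppCtx;

    static PetscErrorCode ComputeRHS(KSP ksp, Vec b, void *ctx)
    {
      AppCtx *user = (AppCtx*)ctx;
      /* ... fill b using user->density ... */
      return 0;
    }

    static PetscErrorCode ComputeOperators(KSP ksp, Mat J, Mat Jpre, void *ctx)
    {
      AppCtx *user = (AppCtx*)ctx;
      /* ... assemble Jpre (and J, if different) using user->density ... */
      return 0;
    }

    /* inside the time loop, after gathering the particles onto the grid:  */
    /*   user.density = ...;                                               */
    /*   KSPSetComputeRHS(ksp, ComputeRHS, &user);                         */
    /*   KSPSetComputeOperators(ksp, ComputeOperators, &user);             */
    /*   KSPSolve(ksp, NULL, NULL);   then KSPGetSolution() for x          */

  Because the same pointer is handed to both calls, refreshing user.density before
each solve is all the callbacks need.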

   Barry

> On Apr 13, 2017, at 9:13 PM, Young, Matthew, Adam  wrote:
> 
> I'd like to develop a hybrid fluid/PIC code based on 
> petsc/petsc-3.7/src/ksp/ksp/examples/tutorials/ex50.c.html, in which 
> KSPSolve() solves an equation for the electrostatic potential at each time 
> step. To do so, I need KSPSetComputeOperators() and KSPSetComputeRHS() to 
> know about scalar fields (e.g. density) that I compute by gathering the 
> particles before solving for the potential. Should I pass them via an 
> application context, store them in a DM dof, or something else?
> 
> --Matt
> 
> 
> Matthew Young
> PhD Candidate
> Astronomy Department
> Boston University



Re: [petsc-users] including petsc Mat and Vec in a user-defined structure in FORTRAN

2017-04-13 Thread Barry Smith

  VecCreate() does not take a size argument (perhaps you meant VecCreateSeq()?), so 
the Vec in new_str%x is never actually created and VecGetSize() is then handed a null object.
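
   For reference, the two standard ways to create a Vec of length 10, sketched here 
in C (the Fortran calls take the same arguments, with the Vec passed directly and a 
trailing ierr):

      Vec x;
      VecCreateSeq(PETSC_COMM_SELF, 10, &x);   /* communicator, local size, new Vec */

      /* or, equivalently */
      VecCreate(PETSC_COMM_SELF, &x);          /* note: no size argument here */
      VecSetSizes(x, PETSC_DECIDE, 10);
      VecSetFromOptions(x);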

   Barry



> On Apr 13, 2017, at 8:29 PM, Lailai Zhu  wrote:
> 
> Dear petsc developers and users,
> 
> I am currently using the Fortran version of PETSc 3.7.*.
> I tried to define Mat or Vec variables in a user-defined structure like below,
> 
> module myMOD
>type, public :: myStr
>   Mat A
>   Vec x,b
>end type myStr
> 
> interface myStr !! user-defined constructor
>   module procedure new_Str
> end interface myStr
> 
> contains
>function new_Str()
> type(myStr) :: new_Str
> call  VecCreate(petsc_comm_self,10,new_str%x,ierr)
> call  vecgetsize(new_str%x, size, ierr)
>end function new_Str
> 
> end module myMOD
> 
> 
> Then I define an instance of myStr in another file like below:
> 
> type(myStr),save :: mystr1
> mystr1 = myStr()
> 
> It compiles and VecCreate executes without problem; however, an error occurs 
> in the VecGetSize call, telling me:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Object: Parameter # 1
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017
> [0]PETSC ERROR: ./nek5000 on a pet3.7.5-mpich-intel named quququ by user Thu 
> Apr 13 21:11:05 2017
> [0]PETSC ERROR: Configure options --with-c++-support 
> --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-batch=0 
> --with-mpi=1 --with-debugging=1 -download-fblaslapack=1 --download-blacs=1 
> --download-scalapack=1 --download-plapack=1 --with-cc=mpicc --with-cxx=mpicxx 
> --with-fc=mpifort --download-dscpack=1
> [0]PETSC ERROR: #1 VecGetSize() line 667 in 
> /home/user/petsc/3.7.5/mpich_intel/src/vec/vec/interface/vector.c
> 
> It seems to me that the PETSc vector x belonging to the derived-type variable 
> is recognized by the VecCreate subroutine but not by the VecGetSize one. Is 
> there a way to work around this, or can one not use PETSc objects in this way?
> Thanks in advance,
> 
> best,
> lailai



[petsc-users] Field data in KSPSetCompute[RHS/Operators]

2017-04-13 Thread Young, Matthew, Adam
I'd like to develop a hybrid fluid/PIC code based on 
petsc/petsc-3.7/src/ksp/ksp/examples/tutorials/ex50.c.html, in which KSPSolve() 
solves an equation for the electrostatic potential at each time step. To do so, 
I need KSPSetComputeOperators() and KSPSetComputeRHS() to know about scalar 
fields (e.g. density) that I compute by gathering the particles before solving 
for the potential. Should I pass them via an application context, store them in 
a DM dof, or something else?


--Matt



Matthew Young
PhD Candidate
Astronomy Department
Boston University




[petsc-users] including petsc Mat and Vec in a user-defined structure in FORTRAN

2017-04-13 Thread Lailai Zhu

Dear petsc developers and users,

I am currently using the Fortran version of PETSc 3.7.*.
I tried to define Mat or Vec variables in a user-defined structure like 
below,


module myMOD
type, public :: myStr
   Mat A
   Vec x,b
end type myStr

interface myStr !! user-defined constructor
   module procedure new_Str
end interface myStr

contains
function new_Str()
 type(myStr) :: new_Str
 call  VecCreate(petsc_comm_self,10,new_str%x,ierr)
 call  vecgetsize(new_str%x, size, ierr)
end function new_Str

end module myMOD


Then I define an instance of myStr in another file like below:

type(myStr),save :: mystr1
mystr1 = myStr()

It compiles and VecCreate executes without problem; however, an error 
occurs in the VecGetSize call, telling me:

[0]PETSC ERROR: - Error Message 
--

[0]PETSC ERROR: Null argument, when expecting valid pointer
[0]PETSC ERROR: Null Object: Parameter # 1
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017
[0]PETSC ERROR: ./nek5000 on a pet3.7.5-mpich-intel named quququ by user 
Thu Apr 13 21:11:05 2017
[0]PETSC ERROR: Configure options --with-c++-support 
--with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-batch=0 
--with-mpi=1 --with-debugging=1 -download-fblaslapack=1 
--download-blacs=1 --download-scalapack=1 --download-plapack=1 
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort --download-dscpack=1
[0]PETSC ERROR: #1 VecGetSize() line 667 in 
/home/user/petsc/3.7.5/mpich_intel/src/vec/vec/interface/vector.c


It seems to me that the PETSc vector x belonging to the derived-type 
variable is recognized by the VecCreate subroutine but not by the 
VecGetSize one. Is there a way to work around this, or can one not use 
PETSc objects in this way?

Thanks in advance,

best,
lailai


Re: [petsc-users] specifying vertex coordinates using DMPlexCreateFromCellListParallel

2017-04-13 Thread Barletta, Ivano
I'm interested in this topic as well

Thanks
Ivano

2017-04-11 16:21 GMT+02:00 Hassan Raiesi 
:

> Hello,
>
>
>
> I’m trying to use DMPlexCreateFromCellListParallel to create a DM from an
> already partitioned mesh,
>
> It requires an array of numVertices*spaceDim numbers, but how should one
> order the coordinates of the vertices?
>
> We only pass the global vertex numbers using ‘const int cells[]’ to define
> the cell connectivity, so passing the vertex coordinates in local ordering
> wouldn’t make sense?
>
> If it needs to be in global ordering, should I sort the global indices of
> the node numbers owned by each rank (as they won't be contiguous)?
>
>
>
>
>
> Thank you
>
>
>
> Hassan Raiesi,
>
> Bombardier Aerospace
>
> www.bombardier.com
>
>
>


Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-13 Thread Mark Adams
On Wed, Apr 12, 2017 at 7:04 PM, Kong, Fande  wrote:

>
>
> On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams  wrote:
>
>> You seem to have two levels here and 3M eqs on the fine grid and 37 on
>> the coarse grid. I don't understand that.
>>
>> You are also calling the AMG setup a lot, but not spending much time
>> in it. Try running with -info and grep on "GAMG".
>>
>
> I got the following output:
>
> [0] PCSetUp_GAMG(): level 0) N=3020875, n data rows=1, n data cols=1,
> nnz/row (ave)=71, np=384
> [0] PCGAMGFilterGraph():  100.% nnz after filtering, with threshold
> 0., 73.6364 nnz ave. (N=3020875)
> [0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
> [0] PCGAMGProlongator_AGG(): New grid 18162 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.978702e+00
> min=2.559747e-02 PC=jacobi
> [0] PCGAMGCreateLevel_GAMG(): Aggregate processors noop: new_size=384,
> neq(loc)=40
> [0] PCSetUp_GAMG(): 1) N=18162, n data cols=1, nnz/row (ave)=94, 384
> active pes
> [0] PCSetUp_GAMG(): 2 levels, grid complexity = 1.00795
> [0] PCSetUp_GAMG(): level 0) N=3020875, n data rows=1, n data cols=1,
> nnz/row (ave)=71, np=384
> [0] PCGAMGFilterGraph():  100.% nnz after filtering, with threshold
> 0., 73.6364 nnz ave. (N=3020875)
> [0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
> [0] PCGAMGProlongator_AGG(): New grid 18145 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.978584e+00
> min=2.557887e-02 PC=jacobi
> [0] PCGAMGCreateLevel_GAMG(): Aggregate processors noop: new_size=384,
> neq(loc)=37
> [0] PCSetUp_GAMG(): 1) N=18145, n data cols=1, nnz/row (ave)=94, 384
> active pes
>

You are still doing two levels. Just use the parameters that I told you and
you should see that 1) this coarsest (last) grid has "1 active pes" and 2)
the overall solve time and overall convergence rate are much better.
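
One way to check, following the earlier suggestion to grep the -info output (a
sketch; the executable name and the rest of the command line are placeholders for
your own run):

    mpiexec -n 384 ./your_app <your usual options, without -pc_mg_levels> -info 2>&1 | grep GAMG

The line for the coarsest (last) level should then report "1 active pes".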


> [0] PCSetUp_GAMG(): 2 levels, grid complexity = 1.00792
> GAMG specific options
> PCGAMGGraph_AGG   40 1.0 8.0759e+00 1.0 3.56e+07 2.3 1.6e+06 1.9e+04
> 7.6e+02  2  0  2  4  2   2  0  2  4  2  1170
> PCGAMGCoarse_AGG  40 1.0 7.1698e+01 1.0 4.05e+09 2.3 4.0e+06 5.1e+04
> 1.2e+03 18 37  5 27  3  18 37  5 27  3 14632
> PCGAMGProl_AGG40 1.0 9.2650e-01 1.2 0.00e+00 0.0 9.8e+05 2.9e+03
> 9.6e+02  0  0  1  0  2   0  0  1  0  2 0
> PCGAMGPOpt_AGG40 1.0 2.4484e+00 1.0 4.72e+08 2.3 3.1e+06 2.3e+03
> 1.9e+03  1  4  4  1  4   1  4  4  1  4 51328
> GAMG: createProl  40 1.0 8.3786e+01 1.0 4.56e+09 2.3 9.6e+06 2.5e+04
> 4.8e+03 21 42 12 32 10  21 42 12 32 10 14134
> GAMG: partLevel   40 1.0 6.7755e+00 1.1 2.59e+08 2.3 2.9e+06 2.5e+03
> 1.5e+03  2  2  4  1  3   2  2  4  1  3  9431
>
>
>
>
>
>
>
>
>>
>>
>> On Fri, Apr 7, 2017 at 5:29 PM, Kong, Fande  wrote:
>> > Thanks, Barry.
>> >
>> > It works.
>> >
>> > GAMG is three times better than ASM in terms of the number of linear
>> > iterations, but it is five times slower than ASM. Any suggestions to
>> improve
>> > the performance of GAMG? Log files are attached.
>> >
>> > Fande,
>> >
>> > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith  wrote:
>> >>
>> >>
>> >> > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
>> >> >
>> >> > Thanks, Mark and Barry,
>> >> >
>> >> > It works pretty wells in terms of the number of linear iterations
>> (using
>> >> > "-pc_gamg_sym_graph true"), but it is horrible in the compute time.
>> I am
>> >> > using the two-level method via "-pc_mg_levels 2". The reason why the
>> compute
>> >> > time is larger than other preconditioning options is that a matrix
>> free
>> >> > method is used in the fine level and in my particular problem the
>> function
>> >> > evaluation is expensive.
>> >> >
>> >> > I am using "-snes_mf_operator 1" to turn on the Jacobian-free Newton,
>> >> > but I do not think I want to make the preconditioning part
>> matrix-free.  Do
>> >> > you guys know how to turn off the matrix-free method for GAMG?
>> >>
>> >>-pc_use_amat false
>> >>
>> >> >
>> >> > Here is the detailed solver:
>> >> >
>> >> > SNES Object: 384 MPI processes
>> >> >   type: newtonls
>> >> >   maximum iterations=200, maximum function evaluations=1
>> >> >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
>> >> >   total number of linear solver iterations=20
>> >> >   total number of function evaluations=166
>> >> >   norm schedule ALWAYS
>> >> >   SNESLineSearch Object:   384 MPI processes
>> >> > type: bt
>> >> >   interpolation: cubic
>> >> >   alpha=1.00e-04
>> >> > maxstep=1.00e+08, minlambda=1.00e-12
>> >> > tolerances: relative=1.00e-08, absolute=1.00e-15,
>> >> > lambda=1.00e-08
>> >> > maximum iterations=40
>> >> >   KSP Object:   384 MPI processes
>> >> > type: gmres
>> >> >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt
>> >> > Orthogonalization with no iterative refinement
>> >> >   GMRES: 

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-13 Thread Mark Adams
On Wed, Apr 12, 2017 at 1:31 PM, Kong, Fande  wrote:

> Hi Mark,
>
> Thanks for your reply.
>
> On Wed, Apr 12, 2017 at 9:16 AM, Mark Adams  wrote:
>
>> The problem comes from setting the number of MG levels (-pc_mg_levels 2).
>> Not your fault, it looks like the GAMG logic is faulty, in your version at
>> least.
>>
>
> What I want is that GAMG coarsens the fine matrix once and then stops
> doing anything.  I did not see any benefits to have more levels if the
> number of processors is small.
>

The number of levels is a math issue and has nothing to do with
parallelism. If you use just one level, your coarse grid is very large and
expensive to solve, so you want to keep coarsening. There is rarely a need
to use -pc_mg_levels.


>
>
>>
>> GAMG will force the coarsest grid to one processor by default, in newer
>> versions. You can override the default with:
>>
>> -pc_gamg_use_parallel_coarse_grid_solver
>>
>> Your coarse grid solver is ASM with these 37 equation per process and 512
>> processes. That is bad.
>>
>
> Why this is bad? The subdomain problem is too small?
>

Because ASM with 512 blocks is a weak solver. You want the coarse grid to
be solved exactly.


>
>
>> Note, you could run this on one process to see the proper convergence
>> rate.
>>
>
> Convergence rate for which part? coarse solver, subdomain solver?
>

The overall convergence rate.


>
>
>> You can fix this with parameters:
>>
>> >   -pc_gamg_process_eq_limit <50>: Limit (goal) on number of equations
>> per process on coarse grids (PCGAMGSetProcEqLim)
>> >   -pc_gamg_coarse_eq_limit <50>: Limit on number of equations for the
>> coarse grid (PCGAMGSetCoarseEqLim)
>>
>> If you really want two levels then set something like
>> -pc_gamg_coarse_eq_limit 18145 (or higher).
>>
>
>
> Maybe have something like: make the coarse problem 1/8 as large as the original
> problem? Otherwise, this number is just problem dependent.
>

GAMG will stop automatically, so you do not need problem-dependent
parameters.
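
Concretely, something along these lines should be enough (a sketch that uses only
options already mentioned in this thread; note the absence of -pc_mg_levels):

    -pc_type gamg -pc_gamg_sym_graph true -pc_use_amat false

GAMG then picks the number of levels itself and, in newer versions, reduces the
coarsest grid to a single process by default.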


>
>
>
>> You can run with -info and grep on GAMG and you will see metadata for each
>> level. You should see "npe=1" for the coarsest (last) grid. Or use a
>> parallel direct solver.
>>
>
> I will try.
>
>
>>
>> Note, you should not see much degradation as you increase the number of
>> levels. 18145 eqs on a 3D problem will probably be noticeable. I generally
>> aim for about 3000.
>>
>
> It should be fine as long as the coarse problem is solved by a parallel
> solver.
>

>
> Fande,
>
>
>>
>>
>> On Mon, Apr 10, 2017 at 12:17 PM, Kong, Fande  wrote:
>>
>>>
>>>
>>> On Sun, Apr 9, 2017 at 6:04 AM, Mark Adams  wrote:
>>>
 You seem to have two levels here and 3M eqs on the fine grid and 37 on
 the coarse grid.
>>>
>>>
>>> 37 is on the sub domain.
>>>
>>>  rows=18145, cols=18145 on the entire coarse grid.
>>>
>>>
>>>
>>>
>>>
 I don't understand that.

 You are also calling the AMG setup a lot, but not spending much time
 in it. Try running with -info and grep on "GAMG".


 On Fri, Apr 7, 2017 at 5:29 PM, Kong, Fande  wrote:
 > Thanks, Barry.
 >
 > It works.
 >
 > GAMG is three times better than ASM in terms of the number of linear
 > iterations, but it is five times slower than ASM. Any suggestions to
 improve
 > the performance of GAMG? Log files are attached.
 >
 > Fande,
 >
 > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith 
 wrote:
 >>
 >>
 >> > On Apr 6, 2017, at 9:39 AM, Kong, Fande 
 wrote:
 >> >
 >> > Thanks, Mark and Barry,
 >> >
 >> > It works pretty wells in terms of the number of linear iterations
 (using
 >> > "-pc_gamg_sym_graph true"), but it is horrible in the compute
 time. I am
 >> > using the two-level method via "-pc_mg_levels 2". The reason why
 the compute
 >> > time is larger than other preconditioning options is that a matrix
 free
 >> > method is used in the fine level and in my particular problem the
 function
 >> > evaluation is expensive.
 >> >
 >> > I am using "-snes_mf_operator 1" to turn on the Jacobian-free
 Newton,
 >> > but I do not think I want to make the preconditioning part
 matrix-free.  Do
 >> > you guys know how to turn off the matrix-free method for GAMG?
 >>
 >>-pc_use_amat false
 >>
 >> >
 >> > Here is the detailed solver:
 >> >
 >> > SNES Object: 384 MPI processes
 >> >   type: newtonls
 >> >   maximum iterations=200, maximum function evaluations=1
 >> >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
 >> >   total number of linear solver iterations=20
 >> >   total number of function evaluations=166
 >> >   norm schedule ALWAYS
 >> >   SNESLineSearch Object:   384 MPI processes
 >> 

Re: [petsc-users] how to use petsc4py with mpi subcommunicators?

2017-04-13 Thread Lisandro Dalcin
On 11 April 2017 at 17:31, Rodrigo Felicio 
wrote:

> Thanks, Jed, but using color == 0 lead to the same error msg. Is there no
> way to set PETSc.COMM_WORLD to a subcomm instead of MPI.COMM_WORLD in
> python?
>
>

You can do it, but it is a bit tricky, and related to the fact that
PETSC_COMM_WORLD has to be set to subcomm before calling PetscInitialize(),
which happens automatically the first time Python executes "from petsc4py
import PETSc". The way to do it would be the following:

from mpi4py import MPI
subcomm = MPI.COMM_WORLD.Split(color=0)  # e.g. the color-0 subcommunicator mentioned earlier

import sys, petsc4py
petsc4py.init(sys.argv, comm=subcomm)
from petsc4py import PETSc



-- 
Lisandro Dalcin

Research Scientist
Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
Extreme Computing Research Center (ECRC)
King Abdullah University of Science and Technology (KAUST)
http://ecrc.kaust.edu.sa/

4700 King Abdullah University of Science and Technology
al-Khawarizmi Bldg (Bldg 1), Office # 0109
Thuwal 23955-6900, Kingdom of Saudi Arabia
http://www.kaust.edu.sa

Office Phone: +966 12 808-0459