Re: [petsc-users] (no subject)

2023-12-11 Thread Barry Smith

   The snes options are not relevant: the splits of a PCFIELDSPLIT are always 
linear problems, so the -fieldsplit_*_snes_* options are ignored.

By default PCFIELDSPLIT uses a KSP type of preonly on each split (that is, 
it applies the split preconditioner exactly once inside PCApply_FieldSplit()), 
hence the -fieldsplit_*_ksp_* options are also not relevant. You can use 
-fieldsplit_ksp_type gmres, for example, to have it run GMRES on each of the 
splits, but note that then you should use -ksp_type fgmres for the outer solve, 
since using GMRES inside a preconditioner results in a nonlinear preconditioner.

You can always run with -ksp_view to see the solver being used and the 
prefixes that currently make sense.
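
A minimal sketch of the equivalent programmatic setup (assuming ksp is the 
outer solver already configured with -pc_type fieldsplit, and that these calls 
run before KSPSetFromOptions()):

#include <petscksp.h>

/* Sketch only: flexible outer Krylov method with GMRES run inside each split,
   mirroring the command-line options named above. */
PetscErrorCode UseGMRESOnSplits(KSP ksp)
{
  PetscErrorCode ierr;
  ierr = PetscOptionsSetValue(NULL, "-ksp_type", "fgmres");CHKERRQ(ierr);           /* outer solve */
  ierr = PetscOptionsSetValue(NULL, "-fieldsplit_ksp_type", "gmres");CHKERRQ(ierr); /* every split */
  /* once the split KSPs actually iterate, options such as -fieldsplit_0_ksp_rtol
     are used instead of being reported as unused */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  return 0;
}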

  Barry


> On Dec 11, 2023, at 2:51 AM, 1807580692 <1807580...@qq.com> wrote:
> 
> Hello, I have encountered some problems. Here are some of my configurations.
> OS Version and Type:  Linux daihuanhe-Aspire-A315-55G 5.15.0-89-generic 
> #99~20.04.1-Ubuntu SMP Thu Nov 2 15:16:47 UTC 2023 x86_64 x86_64 x86_64 
> GNU/Linux
> PETSc Version: #define PETSC_VERSION_RELEASE  1
>   #define PETSC_VERSION_MAJOR  3
>   #define PETSC_VERSION_MINOR  19
>   #define PETSC_VERSION_SUBMINOR   0
>   #define PETSC_RELEASE_DATE   "Mar 30, 2023"
>   #define PETSC_VERSION_DATE   "unknown"
> MPI implementation: MPICH 
> Compiler and version: Gnu C
> The problem is when I type 
> “mpiexec -n 4 ./ex19 -lidvelocity 100 -prandtl 0.72 -grashof 1 -da_grid_x 
> 64 -da_grid_y 64 -snes_type newtonls -ksp_type gmres -pc_type fieldsplit 
> -pc_fieldsplit_type symmetric_multiplicative -pc_fieldsplit_block_size 4 
> -pc_fieldsplit_0_fields 0,1,2,3 -pc_fieldsplit_1_fields 0,1,2,3 
> -fieldsplit_0_pc_type asm -fieldsplit_0_pc_asm_type restrict 
> -fieldsplit_0_pc_asm_overlap 5 -fieldsplit_0_sub_pc_type lu 
> -fieldsplit_1_pc_type asm -fieldsplit_1_pc_asm_type restrict 
> -fieldsplit_1_pc_asm_overlap 5 -fieldsplit_1_sub_pc_type lu  -snes_monitor 
> -snes_converged_reason -fieldsplit_0_ksp_atol 1e-10  -fieldsplit_1_ksp_atol 
> 1e-10  -fieldsplit_0_ksp_rtol 1e-6  -fieldsplit_1_ksp_rtol 1e-6 
> -fieldsplit_0_snes_atol 1e-10  -fieldsplit_1_snes_atol 1e-10  
> -fieldsplit_0_snes_rtol 1e-6  -fieldsplit_1_snes_rtol 1e-6”
> in the command line, where my path is /petsc/src/snes/tutorials.
> 
> It returns 
> “WARNING! There are options you set that were not used!
> WARNING! could be spelling mistake, etc!
> There are 8 unused database options. They are:
> Option left: name:-fieldsplit_0_ksp_atol value: 1e-10 source: command line
> Option left: name:-fieldsplit_0_ksp_rtol value: 1e-6 source: command line
> Option left: name:-fieldsplit_0_snes_atol value: 1e-10 source: command line
> Option left: name:-fieldsplit_0_snes_rtol value: 1e-6 source: command line
> Option left: name:-fieldsplit_1_ksp_atol value: 1e-10 source: command line
> Option left: name:-fieldsplit_1_ksp_rtol value: 1e-6 source: command line
> Option left: name:-fieldsplit_1_snes_atol value: 1e-10 source: command line
> Option left: name:-fieldsplit_1_snes_rtol value: 1e-6 source: command line”.
> Please tell me, what should I do? Thank you very much.
>   
> 1807580692
> 1807580...@qq.com
>  
> 
>  



Re: [petsc-users] (no subject)

2023-07-24 Thread Barry Smith


 Perhaps you need

>  make PETSC_DIR=~/asd/petsc-3.19.3 PETSC_ARCH=arch-mswin-c-opt all


> On Jul 24, 2023, at 1:11 PM, Константин via petsc-users 
>  wrote:
> 
> Good evening. After configuring PETSc I had to run this command in Cygwin64.
> $ make PETSC_DIR=/home/itugr/asd/petsc-3.19.3 PETSC_ARCH=arch-mswin-c-opt all
> But I have such problem
> makefile:26: /home/itugr/asd/petsc-3.19.3/lib/petsc/conf/rules.utils: No such 
> file or directory
> make[1]: *** No rule to make target 
> '/home/itugr/asd/petsc-3.19.3/lib/petsc/conf/rules.utils'.  Stop.
> make: *** [GNUmakefile:9: all] Error 2
> So, I do have this directory and the file, but make still fails.
> itugr@LAPTOP-UJI8JB1K ~/asd/petsc-3.19.3/lib/petsc/conf
> $ dir
> bfort-base.txt  bfort-mpi.txt  bfort-petsc.txt  petscvariables  rules  
> rules.doc  rules.utils  test  uncrustify.cfg  variables
> --
> Константин



Re: [petsc-users] (no subject)

2023-07-11 Thread Matthew Knepley
On Tue, Jul 11, 2023 at 3:58 PM Константин via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hello, I'm trying to build PETSc on Windows, and when I run make I get
> the following problem:
>
>
Did you run configure first?

  Thanks,

 Matt


> --
> Константин
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] (no subject)

2022-02-17 Thread Mark Adams
Please keep on list,

On Thu, Feb 17, 2022 at 12:36 PM Bojan Niceno <
bojan.niceno.scient...@gmail.com> wrote:

> Dear Mark,
>
> Sorry for mistakenly calling you Adam before.
>
> I was thinking about the o_nnz as you suggested, but then something else
> occurred to me.  So, I determine the d_nnz and o_nnz based on METIS domain
> decomposition which I perform outside of PETSc, before I even call PETSc
> initialize.  Hence, if PETSc works out its own domain decomposition
>

PETSc does not work out its own decomposition. You specify the
decomposition completely with
https://petsc.org/main/docs/manualpages/Mat/MatCreateAIJ.html#MatCreateAIJ
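
A minimal sketch of such a call, assuming m is the number of rows your METIS
partition assigns to this rank and d_nnz/o_nnz are your per-row counts:

#include <petscmat.h>

/* Sketch: create a parallel AIJ matrix whose row distribution matches the
   caller's own (METIS) decomposition.  m = locally owned rows; d_nnz[i]/o_nnz[i]
   = nonzeros of local row i inside / outside the block of columns owned by
   this rank. */
PetscErrorCode CreatePartitionedMatrix(MPI_Comm comm, PetscInt m,
                                       const PetscInt d_nnz[], const PetscInt o_nnz[],
                                       Mat *A)
{
  PetscErrorCode ierr;
  ierr = MatCreateAIJ(comm, m, m, PETSC_DETERMINE, PETSC_DETERMINE,
                      0, d_nnz, 0, o_nnz, A);CHKERRQ(ierr);
  /* rows are assigned contiguously: rank 0 owns the first m_0 global rows,
     rank 1 the next m_1, and so on; MatGetOwnershipRange() reports the range */
  return 0;
}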



> and communication patterns, they might be different from mine, and
> therefore MatSetValue misses some entries.  Will PETSc follow the same
> domain decomposition which it works from calls to MatSetValue from
> different processors, or will it re-shuffle the matrix entries?
>
> Cheers,
>
> Bojan
>
> On Thu, Feb 17, 2022 at 6:14 PM Bojan Niceno <
> bojan.niceno.scient...@gmail.com> wrote:
>
>> Thanks a lot for the hints Adam :-)
>>
>> Cheers,
>>
>> Bojan
>>
>> On Thu, Feb 17, 2022 at 6:05 PM Mark Adams  wrote:
>>
>>>
>>>
>>> On Thu, Feb 17, 2022 at 11:46 AM Bojan Niceno <
>>> bojan.niceno.scient...@gmail.com> wrote:
>>>
 Dear all,


 I am experiencing difficulties when using PETSc in parallel in an
 unstructured CFD code.  It uses CRS format to store its matrices.  I use
 the following sequence of PETSc call in the hope to get PETSc solving my
 linear systems in parallel.  Before I continue, I would just like to say
 that the code has been MPI parallel for a long time, and performs its own
 domain decomposition through METIS, and it works out its communication
 patterns which work with its home-grown (non-PETSc) linear solvers.
 Anyhow, I issue the following calls:

 err = PetscInitialize(0, NULL, (char*)0, help);

 err = MatCreate(MPI_COMM_WORLD, A);
 In the above, I use MPI_COMM_WORLD instead of PETSC_COMM_SELF because
 the call to MPI_Init is invoked outside of PETSc, from the main program.

 err = MatSetSizes(A, m, m, M, M);
 Since my matrices are exclusively square, M is set to the total number
 of computational cells, while m is equal to the number of computational
 cells within each subdomain/processor.  (Not all processors necessarily
 have the same m, it depends on domain decomposition.)  I do not distinguish
 between m (M) and n (N) since matrices are all square.  Am I wrong to
 assume that?

 err = MatSetType(A, MATAIJ);
 I set the matrix to be of type MATAIJ, to cover runs on one and on more
 processors.  By the way, on one processor everything works fine.

 err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
 err = MatSeqAIJSetPreallocation(A, 0, d_nnz);
 The two lines above specify matrix preallocation.  Since d_nz and o_nz
 vary from cell to cell (row to row), I set them to zero and provide arrays
 with the number of diagonal and off-diagonal nonzeros instead.  To my
 understanding, that is legit since d_nz and o_nz are neglected if d_nnz and
 o_nnz are provided.  Am I wrong?

 Finally, inside a loop through rows and columns I call:

 err = MatSetValue(A, row, col, value, INSERT_VALUES);
 Here I make sure that row and col point to global cell (unknown)
 numbers.

 Yet, when I run the code on more than one processor, I get the error:

 [3]PETSC ERROR: - Error Message
 --
 [3]PETSC ERROR: Argument out of range
 [3]PETSC ERROR: New nonzero at (21,356) caused a malloc
 Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to
 turn off this check

 [3]PETSC ERROR: #1 MatSetValues_MPIAIJ() at
 /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
 [3]PETSC ERROR: #2 MatSetValues() at
 /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398
 [3]PETSC ERROR: #3 MatSetValues_MPIAIJ() at
 /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
 [3]PETSC ERROR: #4 MatSetValues() at
 /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398

 and so forth, for roughly 10% of all matrix entries.  I checked if
 these errors occur only for off-diagonal parts of the matrix entries, but
 that is not the case.

 Error code is 63; PETSC_ERR_ARG_OUTOFRANGE

 Does anyone have an idea what I am doing wrong?  Are any of my
 assumptions above (like thinking n (N) is always m (M) for square matrices,
 or that I can pass zeros as d_nz and o_nz if I provide the arrays d_nnz[] and
 o_nnz[]) wrong?

>>>
>>> That is correct.
>>>
>>>
 Any idea how to debug it, where to look for an error?


>>> I would guess that you are counting your o_nnz 

Re: [petsc-users] (no subject)

2022-02-17 Thread Mark Adams
On Thu, Feb 17, 2022 at 11:46 AM Bojan Niceno <
bojan.niceno.scient...@gmail.com> wrote:

> Dear all,
>
>
> I am experiencing difficulties when using PETSc in parallel in an
> unstructured CFD code.  It uses CRS format to store its matrices.  I use
> the following sequence of PETSc call in the hope to get PETSc solving my
> linear systems in parallel.  Before I continue, I would just like to say
> that the code has been MPI parallel for a long time, and performs its own
> domain decomposition through METIS, and it works out its communication
> patterns which work with its home-grown (non-PETSc) linear solvers.
> Anyhow, I issue the following calls:
>
> err = PetscInitialize(0, NULL, (char*)0, help);
>
> err = MatCreate(MPI_COMM_WORLD, A);
> In the above, I use MPI_COMM_WORLD instead of PETSC_COMM_SELF because the
> call to MPI_Init is invoked outside of PETSc, from the main program.
>
> err = MatSetSizes(A, m, m, M, M);
> Since my matrices are exclusively square, M is set to the total number of
> computational cells, while m is equal to the number of computational cells
> within each subdomain/processor.  (Not all processors necessarily have the
> same m, it depends on domain decomposition.)  I do not distinguish between
> m (M) and n (N) since matrices are all square.  Am I wrong to assume that?
>
> err = MatSetType(A, MATAIJ);
> I set the matrix to be of type MATAIJ, to cover runs on one and on more
> processors.  By the way, on one processor everything works fine.
>
> err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
> err = MatSeqAIJSetPreallocation(A, 0, d_nnz);
> The two lines above specify matrix preallocation.  Since d_nz and o_nz
> vary from cell to cell (row to row), I set them to zero and provide arrays
> with the number of diagonal and off-diagonal nonzeros instead.  To my
> understanding, that is legit since d_nz and o_nz are neglected if d_nnz and
> o_nnz are provided.  Am I wrong?
>
> Finally, inside a loop through rows and columns I call:
>
> err = MatSetValue(A, row, col, value, INSERT_VALUES);
> Here I make sure that row and col point to global cell (unknown) numbers.
>
> Yet, when I run the code on more than one processor, I get the error:
>
> [3]PETSC ERROR: - Error Message
> --
> [3]PETSC ERROR: Argument out of range
> [3]PETSC ERROR: New nonzero at (21,356) caused a malloc
> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
> off this check
>
> [3]PETSC ERROR: #1 MatSetValues_MPIAIJ() at
> /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
> [3]PETSC ERROR: #2 MatSetValues() at
> /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398
> [3]PETSC ERROR: #3 MatSetValues_MPIAIJ() at
> /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
> [3]PETSC ERROR: #4 MatSetValues() at
> /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398
>
> and so forth, for roughly 10% of all matrix entries.  I checked if these
> errors occur only for off-diagonal parts of the matrix entries, but that is
> not the case.
>
> Error code is 63; PETSC_ERR_ARG_OUTOFRANGE
>
> Does anyone have an idea what I am doing wrong?  Are any of my assumptions
> above (like thinking n (N) is always m (M) for square matrices, or that I can
> pass zeros as d_nz and o_nz if I provide the arrays d_nnz[] and o_nnz[]) wrong?
>

That is correct.


> Any idea how to debug it, where to look for an error?
>
>
I would guess that you are counting your o_nnz  incorrectly. It looks like
a small number of equations per process because the 4th process has row 21,
apparently. Does that sound right?

And column 356 is going to be in the off-diagonal block (ie, "o").  I would
start with a serial matrix and run with -info. This will be noisy, but you
will see things like "number of unneeded ..." from which you can verify that
you have set d_nnz perfectly (there should be 0 unneeded).
Then try two processors. If it fails, you could add a print statement every
time that row (eg, 21) is added to, and check what your code for computing
o_nnz is doing.
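
An untested sketch of that counting rule, with row_ptr/col_ind standing in for
your own CRS arrays (holding global column indices of the locally owned rows):

#include <petscmat.h>

/* Hypothetical helper for a square matrix whose sizes are already set:
   columns inside the ownership range [rstart, rend) belong to the diagonal
   block (d_nnz), everything else to the off-diagonal block (o_nnz). */
PetscErrorCode CountPreallocation(Mat A, PetscInt m, const PetscInt row_ptr[],
                                  const PetscInt col_ind[],
                                  PetscInt d_nnz[], PetscInt o_nnz[])
{
  PetscErrorCode ierr;
  PetscInt       rstart, rend, i, k;

  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = 0; i < m; i++) {
    d_nnz[i] = o_nnz[i] = 0;
    for (k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
      if (col_ind[k] >= rstart && col_ind[k] < rend) d_nnz[i]++;
      else                                           o_nnz[i]++;
    }
  }
  return 0;
}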

I am carefully checking all the data I send to PETSc functions and looks
> correct to me, but maybe I lack some fundamental understanding of what
> should be provided to PETSc, as I write above?
>

It is a bit confusing at first. The man page gives a concrete example
https://petsc.org/main/docs/manualpages/Mat/MatCreateAIJ.html#MatCreateAIJ


>
>
> Best regards,
>
>
> Bojan Niceno
>
>


Re: [petsc-users] (no subject)

2019-12-03 Thread Li Luo
Thank you, I'll try that.

Best,
Li

On Wed, Dec 4, 2019 at 5:34 AM Smith, Barry F.  wrote:

>
>   From the code:
>
>   if (snes->lagjacobian == -2) {
> snes->lagjacobian = -1;
>
> ierr = PetscInfo(snes,"Recomputing Jacobian/preconditioner because lag
> is -2 (means compute Jacobian, but then never again) \n");CHKERRQ(ierr);
>   } else if (snes->lagjacobian == -1) {
> ierr = PetscInfo(snes,"Reusing Jacobian/preconditioner because lag is
> -1\n");CHKERRQ(ierr);
> ierr =
> PetscObjectTypeCompare((PetscObject)A,MATMFFD,&flag);CHKERRQ(ierr);
> if (flag) {
>   ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>   ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
> }
> PetscFunctionReturn(0);
>
>  So it does what the manual page says. If you use -2 it will compute the
> Jacobian the next time it is needed but then will never compute it again,.
> This means it reuses the one computed for all time-steps. It can make sense
> depending on how much the Jacobian changes to use with -snes_mf_operator.
>
>  To compute just once at the beginning of each new linear solver you can
> use for example 100 (this assumes that one nonlinear solver never needs
> more than 100 linear solvers.) This can also be used with -snes_mf_operator
> (or without).
>
>
>   Barry
>
>
> > On Dec 3, 2019, at 6:33 AM, Li Luo  wrote:
> >
> > If I use -snes_mf, the linear solver just never converges. And at the
> second time step, it reports an error as:
> > [0]PETSC ERROR: No support for this operation for this object type!
> > [0]PETSC ERROR: Mat type mffd!
> >
> > I still don't understand "-snes_lag_jacobian -2", from the manual, it
> seems Jacobian is computed only once by setting -1, but it is recomputed
> once by -2.
> > For a time-stepping scheme where each step calls SNESSolve, if using
> -snes_lag_jacobian -2, will the following time steps reuse the Jacobian
> built at the first time step?
> >
> > Best,
> > Li
> >
> >
> >
> > On Tue, Dec 3, 2019 at 12:10 AM Smith, Barry F. 
> wrote:
> >
> >
> > > On Dec 2, 2019, at 2:30 PM, Li Luo  wrote:
> > >
> > >  -snes_mf  fails to converge in my case, but  -ds_snes_mf_operator
> works, when the original analytic matrix is still used as the
> preconditioner.
> > > The timing is several times greater than using the analytic matrix for
> both Jacobian and preconditioner.
> >
> >ok, how does -snes_mf fail to converge? -ksp_monitor  ? does the
> linear solver just never converge?
> >
> >Using -snes_mf_operator will also build the Jacobian so in your case
> doesn't make much sense by itself since it is very expensive
> >
> > >
> > > For an implicit time-stepping scheme, if using -snes_lag_jacobian -2,
> is the Jacobian built only twice at the first time step then it is used for
> all later time steps? Or it is built twice at every time step?
> >
> >Check the manual page for SNESSolve it should just compute the
> Jacobian once and reuse it forever.
> >
> >You can also try -snes_mf  -snes_lag_jacobian -2 which should compute
> the Jacobian once, use that original one to build the preconditioner once
> and reuse the same preconditioner but use the matrix free to define the
> operator.
> >
> >
> >
> >Barry
> >
> > >
> > > Regards,
> > > Li
> > >
> > > On Mon, Dec 2, 2019 at 6:02 PM Smith, Barry F. 
> wrote:
> > >
> > > Ok it is spending 99+ percent of the time computing the Jacobians.
> > >
> > >
> > > MatFDColorApply  106 1.0 8.7622e+03 1.0 1.31e+08 1.1 6.9e+07
> 2.6e+03 1.1e+06 99  0 97 81 99  99  0 97 81 99 1
> > > MatFDColorFunc 60950 1.0 8.7560e+03 1.0 0.00e+00 0.0 6.9e+07
> 2.6e+03 1.1e+06 99  0 97 81 99  99  0 97 81 99 0
> > >
> > > It is requiring on average 12 KSP iterations per linear solve so the
> resulting linear system appears well conditioned, this means even if you
> compute the Jacobian analytically likely most of the time in the run will
> still be computing Jacobians.
> > >
> > > Try using -snes_mf with the logging and see what happens.   You can
> also try -snes_lag_jacobian_persists -snes_lag_jacobian -2
> > >
> > > Note there may be other ways of avoiding the costly computation of the
> Jacobian at each Newton step.
> > >
> > > Barry
> > >
> > >
> > > > On Dec 2, 2019, at 6:38 AM, Li Luo  wrote:
> > > >
> > > > Dear Barry,
> > > >
> > > > Here is my log.
> > > > Because I am using libMesh built on PETSc, there is more information
> from libMesh in the log file.
> > > > I ran 13 time steps for the simulation so there are repeated
> snes_view info.
> > > > The algorithm is simply NKS.
> > > >
> > > > Cheers,
> > > > Li
> > > >
> > > > On Mon, Dec 2, 2019 at 4:55 PM Smith, Barry F. 
> wrote:
> > > >
> > > >   Please send a run with optimization turned on (--with-debugging=0
> in ./configure) and -log_view  without the actual timing information we are
> just guessing where the time is spent.
> > > >
> > > >   If your problem has a natural block size then using baij should be
> a bit faster than aij, but not 

Re: [petsc-users] (no subject)

2019-12-03 Thread Smith, Barry F.


  From the code:

  if (snes->lagjacobian == -2) {
snes->lagjacobian = -1;

ierr = PetscInfo(snes,"Recomputing Jacobian/preconditioner because lag is 
-2 (means compute Jacobian, but then never again) \n");CHKERRQ(ierr);
  } else if (snes->lagjacobian == -1) {
ierr = PetscInfo(snes,"Reusing Jacobian/preconditioner because lag is 
-1\n");CHKERRQ(ierr);
ierr = PetscObjectTypeCompare((PetscObject)A,MATMFFD,&flag);CHKERRQ(ierr);
if (flag) {
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
}
PetscFunctionReturn(0);

 So it does what the manual page says. If you use -2 it will compute the 
Jacobian the next time it is needed but then never compute it again. This 
means the one Jacobian it computes is reused for all time steps. Depending on 
how much the Jacobian changes, it can make sense to use this together with 
-snes_mf_operator.

 To recompute it just once at the beginning of each new nonlinear solve you can 
use, for example, 100 (this assumes that one nonlinear solve never needs more 
than 100 linear solves). This can also be used with -snes_mf_operator (or without).
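
A minimal sketch of the same thing done in code rather than on the command line
(snes is the already-created SNES; the calls mirror -snes_lag_jacobian and
-snes_lag_jacobian_persists):

#include <petscsnes.h>

/* Sketch: build the Jacobian once at the next request and never again; with
   "persists" the lag state also carries across repeated SNESSolve() calls
   (i.e. across time steps). */
PetscErrorCode ReuseJacobianForever(SNES snes)
{
  PetscErrorCode ierr;
  ierr = SNESSetLagJacobian(snes, -2);CHKERRQ(ierr);
  ierr = SNESSetLagJacobianPersists(snes, PETSC_TRUE);CHKERRQ(ierr);
  return 0;
}

/* Alternative sketch: rebuild once per nonlinear solve, assuming no solve
   takes more than 100 Newton steps. */
PetscErrorCode RebuildJacobianOncePerSolve(SNES snes)
{
  PetscErrorCode ierr;
  ierr = SNESSetLagJacobian(snes, 100);CHKERRQ(ierr);
  return 0;
}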


  Barry


> On Dec 3, 2019, at 6:33 AM, Li Luo  wrote:
> 
> If I use -snes_mf, the linear solver just never converges. And at the second 
> time step, it reports an error as:
> [0]PETSC ERROR: No support for this operation for this object type!
> [0]PETSC ERROR: Mat type mffd!
> 
> I still don't understand "-snes_lag_jacobian -2", from the manual, it seems 
> Jacobian is computed only once by setting -1, but it is recomputed once by -2.
> For a time-stepping scheme where each step calls SNESSolve, if using 
> -snes_lag_jacobian -2, will the following time steps reuse the Jacobian built 
> at the first time step?
> 
> Best,
> Li
> 
>  
> 
> On Tue, Dec 3, 2019 at 12:10 AM Smith, Barry F.  wrote:
> 
> 
> > On Dec 2, 2019, at 2:30 PM, Li Luo  wrote:
> > 
> >  -snes_mf  fails to converge in my case, but  -ds_snes_mf_operator works, 
> > when the original analytic matrix is still used as the preconditioner. 
> > The timing is several times greater than using the analytic matrix for both 
> > Jacobian and preconditioner.
> 
>ok, how does -snes_mf fail to converge? -ksp_monitor  ? does the linear 
> solver just never converge?  
> 
>Using -snes_mf_operator will also build the Jacobian so in your case 
> doesn't make much sense by itself since it is very expensive
> 
> > 
> > For an implicit time-stepping scheme, if using -snes_lag_jacobian -2, is 
> > the Jacobian built only twice at the first time step then it is used for 
> > all later time steps? Or it is built twice at every time step?
> 
>Check the manual page for SNESSolve it should just compute the Jacobian 
> once and reuse it forever.  
> 
>You can also try -snes_mf  -snes_lag_jacobian -2 which should compute the 
> Jacobian once, use that original one to build the preconditioner once and 
> reuse the same preconditioner but use the matrix free to define the operator.
> 
> 
> 
>Barry
> 
> > 
> > Regards,
> > Li
> > 
> > On Mon, Dec 2, 2019 at 6:02 PM Smith, Barry F.  wrote:
> > 
> > Ok it is spending 99+ percent of the time computing the Jacobians.
> > 
> > 
> > MatFDColorApply  106 1.0 8.7622e+03 1.0 1.31e+08 1.1 6.9e+07 2.6e+03 
> > 1.1e+06 99  0 97 81 99  99  0 97 81 99 1
> > MatFDColorFunc 60950 1.0 8.7560e+03 1.0 0.00e+00 0.0 6.9e+07 2.6e+03 
> > 1.1e+06 99  0 97 81 99  99  0 97 81 99 0
> > 
> > It is requiring on average 12 KSP iterations per linear solve so the 
> > resulting linear system appears well conditioned, this means even if you 
> > compute the Jacobian analytically likely most of the time in the run will 
> > still be computing Jacobians.
> > 
> > Try using -snes_mf with the logging and see what happens.   You can also 
> > try -snes_lag_jacobian_persists -snes_lag_jacobian -2
> > 
> > Note there may be other ways of avoiding the costly computation of the 
> > Jacobian at each Newton step.
> > 
> > Barry
> > 
> > 
> > > On Dec 2, 2019, at 6:38 AM, Li Luo  wrote:
> > > 
> > > Dear Barry,
> > > 
> > > Here is my log.
> > > Because I am using libMesh built on PETSc, there is more information from 
> > > libMesh in the log file.
> > > I ran 13 time steps for the simulation so there are repeated snes_view 
> > > info.
> > > The algorithm is simply NKS.
> > > 
> > > Cheers,
> > > Li
> > > 
> > > On Mon, Dec 2, 2019 at 4:55 PM Smith, Barry F.  wrote:
> > > 
> > >   Please send a run with optimization turned on (--with-debugging=0 in 
> > > ./configure) and -log_view  without the actual timing information we are 
> > > just guessing where the time is spent.
> > > 
> > >   If your problem has a natural block size then using baij should be a 
> > > bit faster than aij, but not dramatically better
> > > 
> > >Barry
> > > 
> > > 
> > > > On Dec 2, 2019, at 4:30 AM, Li Luo  wrote:
> > > > 
> > > > Thank you very much! It looks forming an analytic Jacobian is the 

Re: [petsc-users] (no subject)

2019-12-02 Thread Smith, Barry F.



> On Dec 2, 2019, at 2:30 PM, Li Luo  wrote:
> 
>  -snes_mf  fails to converge in my case, but  -ds_snes_mf_operator works, 
> when the original analytic matrix is still used as the preconditioner. 
> The timing is several times greater than using the analytic matrix for both 
> Jacobian and preconditioner.

   OK, how does -snes_mf fail to converge (what does -ksp_monitor show)? Does 
the linear solver just never converge?

   Using -snes_mf_operator will also build the Jacobian, so in your case it 
doesn't make much sense by itself, since building the Jacobian is very expensive.

> 
> For an implicit time-stepping scheme, if using -snes_lag_jacobian -2, is the 
> Jacobian built only twice at the first time step then it is used for all 
> later time steps? Or it is built twice at every time step?

   Check the manual page for SNESSolve; it should just compute the Jacobian once 
and reuse it forever.

   You can also try -snes_mf  -snes_lag_jacobian -2 which should compute the 
Jacobian once, use that original one to build the preconditioner once and reuse 
the same preconditioner but use the matrix free to define the operator.



   Barry

> 
> Regards,
> Li
> 
> On Mon, Dec 2, 2019 at 6:02 PM Smith, Barry F.  wrote:
> 
> Ok it is spending 99+ percent of the time computing the Jacobians.
> 
> 
> MatFDColorApply  106 1.0 8.7622e+03 1.0 1.31e+08 1.1 6.9e+07 2.6e+03 
> 1.1e+06 99  0 97 81 99  99  0 97 81 99 1
> MatFDColorFunc 60950 1.0 8.7560e+03 1.0 0.00e+00 0.0 6.9e+07 2.6e+03 
> 1.1e+06 99  0 97 81 99  99  0 97 81 99 0
> 
> It is requiring on average 12 KSP iterations per linear solve so the 
> resulting linear system appears well conditioned, this means even if you 
> compute the Jacobian analytically likely most of the time in the run will 
> still be computing Jacobians.
> 
> Try using -snes_mf with the logging and see what happens.   You can also try 
> -snes_lag_jacobian_persists -snes_lag_jacobian -2
> 
> Note there may be other ways of avoiding the costly computation of the 
> Jacobian at each Newton step.
> 
> Barry
> 
> 
> > On Dec 2, 2019, at 6:38 AM, Li Luo  wrote:
> > 
> > Dear Barry,
> > 
> > Here is my log.
> > Because I am using libMesh built on PETSc, there is more information from 
> > libMesh in the log file.
> > I ran 13 time steps for the simulation so there are repeated snes_view info.
> > The algorithm is simply NKS.
> > 
> > Cheers,
> > Li
> > 
> > On Mon, Dec 2, 2019 at 4:55 PM Smith, Barry F.  wrote:
> > 
> >   Please send a run with optimization turned on (--with-debugging=0 in 
> > ./configure) and -log_view  without the actual timing information we are 
> > just guessing where the time is spent.
> > 
> >   If your problem has a natural block size then using baij should be a bit 
> > faster than aij, but not dramatically better
> > 
> >Barry
> > 
> > 
> > > On Dec 2, 2019, at 4:30 AM, Li Luo  wrote:
> > > 
> > > Thank you very much! It looks forming an analytic Jacobian is the only 
> > > choice.
> > > 
> > > Best,
> > > Li
> > > 
> > > On Mon, Dec 2, 2019 at 3:21 PM Matthew Knepley  wrote:
> > > On Mon, Dec 2, 2019 at 4:04 AM Li Luo  wrote:
> > > Thank you for your reply.
> > > 
> > > The matrix is small with only 67500 rows, but is relatively dense since a 
> > > second-order discontinuous Galerkin FEM is used, nonzeros=23,036,400.
> > > 
> > > This is very dense, 0.5% fill or 340 nonzeros per row.
> > >  
> > > The number of colors is 539 as shown by using -mat_fd_coloring_view:
> > > 
> > > Coloring is not appropriate for this matrix since you have enormous dense 
> > > blocks (I am guessing). It could work if you statically
> > > condense them out or had a fast analytic Jacobian. With 540 colors, it 
> > > takes 540 matvecs to generate the action of the Jacobian.
> > > 
> > >   Thanks,
> > > 
> > >  Matt
> > >  
> > > MatFDColoring Object: 64 MPI processes
> > >   type not yet set
> > >   Error tolerance=1.49012e-08
> > >   Umin=1.49012e-06
> > >   Number of colors=539
> > >   Information for color 0
> > > Number of columns 1
> > >   378
> > > Number of rows 756
> > >   0 1188
> > >   1 1188
> > >   2 1188
> > >   3 1188
> > >   4 1188
> > >   5 1188
> > >   ...
> > > 
> > > Is this normal?
> > > When using MCFD, is there any difference using mpiaij and mpibaij?
> > > 
> > > Best,
> > > Li
> > > 
> > > On Mon, Dec 2, 2019 at 10:03 AM Smith, Barry F.  
> > > wrote:
> > > 
> > >   How many colors is it requiring?   And how long is the MatGetColoring() 
> > > taking? Are you running in parallel?  The MatGetColoring() MATCOLORINGSL 
> > > uses a sequential coloring algorithm so if your matrix is large and 
> > > parallel the coloring will take a long time. The parallel colorings are 
> > > MATCOLORINGGREEDY and MATCOLORINGJP
> > > 
> > >   Barry
> > > 
> > > 
> > > > On Dec 1, 2019, at 12:56 AM, Li Luo  wrote:
> > > > 
> > > > Dear Developers,
> > > > 
> > > > I tried to use the multi-color finite-difference (MC-FD) method for 
> > > > 

Re: [petsc-users] (no subject)

2019-12-02 Thread Smith, Barry F.


  Please send a run with optimization turned on (--with-debugging=0 in 
./configure) and -log_view; without the actual timing information we are just 
guessing where the time is spent.

  If your problem has a natural block size then using baij should be a bit 
faster than aij, but not dramatically better

   Barry


> On Dec 2, 2019, at 4:30 AM, Li Luo  wrote:
> 
> Thank you very much! It looks forming an analytic Jacobian is the only choice.
> 
> Best,
> Li
> 
> On Mon, Dec 2, 2019 at 3:21 PM Matthew Knepley  wrote:
> On Mon, Dec 2, 2019 at 4:04 AM Li Luo  wrote:
> Thank you for your reply.
> 
> The matrix is small with only 67500 rows, but is relatively dense since a 
> second-order discontinuous Galerkin FEM is used, nonzeros=23,036,400.
> 
> This is very dense, 0.5% fill or 340 nonzeros per row.
>  
> The number of colors is 539 as shown by using -mat_fd_coloring_view:
> 
> Coloring is not appropriate for this matrix since you have enormous dense 
> blocks (I am guessing). It could work if you statically
> condense them out or had a fast analytic Jacobian. With 540 colors, it takes 
> 540 matvecs to generate the action of the Jacobian.
> 
>   Thanks,
> 
>  Matt
>  
> MatFDColoring Object: 64 MPI processes
>   type not yet set
>   Error tolerance=1.49012e-08
>   Umin=1.49012e-06
>   Number of colors=539
>   Information for color 0
> Number of columns 1
>   378
> Number of rows 756
>   0 1188
>   1 1188
>   2 1188
>   3 1188
>   4 1188
>   5 1188
>   ...
> 
> Is this normal?
> When using MCFD, is there any difference using mpiaij and mpibaij?
> 
> Best,
> Li
> 
> On Mon, Dec 2, 2019 at 10:03 AM Smith, Barry F.  wrote:
> 
>   How many colors is it requiring?   And how long is the MatGetColoring() 
> taking? Are you running in parallel?  The MatGetColoring() MATCOLORINGSL uses 
> a sequential coloring algorithm so if your matrix is large and parallel the 
> coloring will take a long time. The parallel colorings are MATCOLORINGGREEDY 
> and MATCOLORINGJP
> 
>   Barry
> 
> 
> > On Dec 1, 2019, at 12:56 AM, Li Luo  wrote:
> > 
> > Dear Developers,
> > 
> > I tried to use the multi-color finite-difference (MC-FD) method for 
> > constructing the Jacobians. However, I find it is very slow compared to the 
> > exact Jacobian. 
> > My implementation of MC-FD Jacobian is posted below, would you please check 
> > whether I am correct? Anything missed? Thank you!
> > 
> > // Setup phase:
> >   MatStructure flag;
> >   ISColoring   iscoloring;
> >   ierr = MatGetColoring(Jac,MATCOLORINGSL,&iscoloring);
> >   ierr = MatFDColoringCreate(Jac,iscoloring,&this->matfdcoloring);
> >   ierr = 
> > MatFDColoringSetFunction(this->matfdcoloring,(PetscErrorCode 
> > (*)(void))__libmesh_petsc_snes_residual,(void *)this);
> >   ierr = MatFDColoringSetFromOptions(this->matfdcoloring);
> >   ierr = ISColoringDestroy(&iscoloring);
> > 
> >  Apply:
> >   ierr = MatZeroEntries(*jac);CHKERRQ(ierr);
> >   ierr = 
> > MatFDColoringApply(*jac,solver->matfdcoloring,x,msflag,snes);
> > 
> > Best regards,
> > Li Luo
> > 
> > This message and its contents, including attachments are intended solely 
> > for the original recipient. If you are not the intended recipient or have 
> > received this message in error, please notify me immediately and delete 
> > this message from your computer system. Any unauthorized use or 
> > distribution is prohibited. Please consider the environment before printing 
> > this email.
> 
> 
> 
> -- 
> Postdoctoral Fellow
> Extreme Computing Research Center
> King Abdullah University of Science & Technology
> https://sites.google.com/site/rolyliluo/
> 
> This message and its contents, including attachments are intended solely for 
> the original recipient. If you are not the intended recipient or have 
> received this message in error, please notify me immediately and delete this 
> message from your computer system. Any unauthorized use or distribution is 
> prohibited. Please consider the environment before printing this email.
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
> 
> 
> -- 
> Postdoctoral Fellow
> Extreme Computing Research Center
> King Abdullah University of Science & Technology
> https://sites.google.com/site/rolyliluo/
> 
> This message and its contents, including attachments are intended solely for 
> the original recipient. If you are not the intended recipient or have 
> received this message in error, please notify me immediately and delete this 
> message from your computer system. Any unauthorized use or distribution is 
> prohibited. Please consider the environment before printing this email.



Re: [petsc-users] (no subject)

2019-12-02 Thread Li Luo
Thank you very much! It looks forming an analytic Jacobian is the only
choice.

Best,
Li

On Mon, Dec 2, 2019 at 3:21 PM Matthew Knepley  wrote:

> On Mon, Dec 2, 2019 at 4:04 AM Li Luo  wrote:
>
>> Thank you for your reply.
>>
>> The matrix is small with only 67500 rows, but is relatively dense since a
>> second-order discontinuous Galerkin FEM is used, nonzeros=23,036,400.
>>
>
> This is very dense, 0.5% fill or 340 nonzeros per row.
>
>
>> The number of colors is 539 as shown by using -mat_fd_coloring_view:
>>
>
> Coloring is not appropriate for this matrix since you have enormous dense
> blocks (I am guessing). It could work if you statically
> condense them out or had a fast analytic Jacobian. With 540 colors, it
> takes 540 matvecs to generate the action of the Jacobian.
>
>   Thanks,
>
>  Matt
>
>
>> MatFDColoring Object: 64 MPI processes
>>   type not yet set
>>   Error tolerance=1.49012e-08
>>   Umin=1.49012e-06
>>   Number of colors=539
>>   Information for color 0
>> Number of columns 1
>>   378
>> Number of rows 756
>>   0 1188
>>   1 1188
>>   2 1188
>>   3 1188
>>   4 1188
>>   5 1188
>>   ...
>>
>> Is this normal?
>> When using MCFD, is there any difference using mpiaij and mpibaij?
>>
>> Best,
>> Li
>>
>> On Mon, Dec 2, 2019 at 10:03 AM Smith, Barry F. 
>> wrote:
>>
>>>
>>>   How many colors is it requiring?   And how long is the
>>> MatGetColoring() taking? Are you running in parallel?  The MatGetColoring()
>>> MATCOLORINGSL uses a sequential coloring algorithm so if your matrix is
>>> large and parallel the coloring will take a long time. The parallel
>>> colorings are MATCOLORINGGREEDY and MATCOLORINGJP
>>>
>>>   Barry
>>>
>>>
>>> > On Dec 1, 2019, at 12:56 AM, Li Luo  wrote:
>>> >
>>> > Dear Developers,
>>> >
>>> > I tried to use the multi-color finite-difference (MC-FD) method for
>>> constructing the Jacobians. However, I find it is very slow compared to the
>>> exact Jacobian.
>>> > My implementation of MC-FD Jacobian is posted below, would you please
>>> check whether I am correct? Anything missed? Thank you!
>>> >
>>> > // Setup phase:
>>> >   MatStructure flag;
>>> >   ISColoring   iscoloring;
>>> >   ierr = MatGetColoring(Jac,MATCOLORINGSL,&iscoloring);
>>> >   ierr =
>>> MatFDColoringCreate(Jac,iscoloring,&this->matfdcoloring);
>>> >   ierr =
>>> MatFDColoringSetFunction(this->matfdcoloring,(PetscErrorCode
>>> (*)(void))__libmesh_petsc_snes_residual,(void *)this);
>>> >   ierr = MatFDColoringSetFromOptions(this->matfdcoloring);
>>> >   ierr = ISColoringDestroy(&iscoloring);
>>> >
>>> >  Apply:
>>> >   ierr = MatZeroEntries(*jac);CHKERRQ(ierr);
>>> >   ierr =
>>> MatFDColoringApply(*jac,solver->matfdcoloring,x,msflag,snes);
>>> >
>>> > Best regards,
>>> > Li Luo
>>> >
>>> > This message and its contents, including attachments are intended
>>> solely for the original recipient. If you are not the intended recipient or
>>> have received this message in error, please notify me immediately and
>>> delete this message from your computer system. Any unauthorized use or
>>> distribution is prohibited. Please consider the environment before printing
>>> this email.
>>>
>>>
>>
>> --
>>
>> Postdoctoral Fellow
>> Extreme Computing Research Center
>> King Abdullah University of Science & Technology
>> https://sites.google.com/site/rolyliluo/
>>
>> --
>> This message and its contents, including attachments are intended solely
>> for the original recipient. If you are not the intended recipient or have
>> received this message in error, please notify me immediately and delete
>> this message from your computer system. Any unauthorized use or
>> distribution is prohibited. Please consider the environment before printing
>> this email.
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


-- 

Postdoctoral Fellow
Extreme Computing Research Center
King Abdullah University of Science & Technology
https://sites.google.com/site/rolyliluo/

-- 

This message and its contents, including attachments are intended solely 
for the original recipient. If you are not the intended recipient or have 
received this message in error, please notify me immediately and delete 
this message from your computer system. Any unauthorized use or 
distribution is prohibited. Please consider the environment before printing 
this email.


Re: [petsc-users] (no subject)

2019-12-02 Thread Matthew Knepley
On Mon, Dec 2, 2019 at 4:04 AM Li Luo  wrote:

> Thank you for your reply.
>
> The matrix is small with only 67500 rows, but is relatively dense since a
> second-order discontinuous Galerkin FEM is used, nonzeros=23,036,400.
>

This is very dense, 0.5% fill or 340 nonzeros per row.


> The number of colors is 539 as shown by using -mat_fd_coloring_view:
>

Coloring is not appropriate for this matrix since you have enormous dense
blocks (I am guessing). It could work if you statically
condense them out or had a fast analytic Jacobian. With 540 colors, it
takes 540 matvecs to generate the action of the Jacobian.

  Thanks,

 Matt


> MatFDColoring Object: 64 MPI processes
>   type not yet set
>   Error tolerance=1.49012e-08
>   Umin=1.49012e-06
>   Number of colors=539
>   Information for color 0
> Number of columns 1
>   378
> Number of rows 756
>   0 1188
>   1 1188
>   2 1188
>   3 1188
>   4 1188
>   5 1188
>   ...
>
> Is this normal?
> When using MCFD, is there any difference using mpiaij and mpibaij?
>
> Best,
> Li
>
> On Mon, Dec 2, 2019 at 10:03 AM Smith, Barry F. 
> wrote:
>
>>
>>   How many colors is it requiring?   And how long is the MatGetColoring()
>> taking? Are you running in parallel?  The MatGetColoring() MATCOLORINGSL
>> uses a sequential coloring algorithm so if your matrix is large and
>> parallel the coloring will take a long time. The parallel colorings are
>> MATCOLORINGGREEDY and MATCOLORINGJP
>>
>>   Barry
>>
>>
>> > On Dec 1, 2019, at 12:56 AM, Li Luo  wrote:
>> >
>> > Dear Developers,
>> >
>> > I tried to use the multi-color finite-difference (MC-FD) method for
>> constructing the Jacobians. However, I find it is very slow compared to the
>> exact Jacobian.
>> > My implementation of MC-FD Jacobian is posted below, would you please
>> check whether I am correct? Anything missed? Thank you!
>> >
>> > // Setup phase:
>> >   MatStructure flag;
>> >   ISColoring   iscoloring;
>> >   ierr = MatGetColoring(Jac,MATCOLORINGSL,&iscoloring);
>> >   ierr =
>> MatFDColoringCreate(Jac,iscoloring,&this->matfdcoloring);
>> >   ierr =
>> MatFDColoringSetFunction(this->matfdcoloring,(PetscErrorCode
>> (*)(void))__libmesh_petsc_snes_residual,(void *)this);
>> >   ierr = MatFDColoringSetFromOptions(this->matfdcoloring);
>> >   ierr = ISColoringDestroy(&iscoloring);
>> >
>> >  Apply:
>> >   ierr = MatZeroEntries(*jac);CHKERRQ(ierr);
>> >   ierr =
>> MatFDColoringApply(*jac,solver->matfdcoloring,x,msflag,snes);
>> >
>> > Best regards,
>> > Li Luo
>> >
>> > This message and its contents, including attachments are intended
>> solely for the original recipient. If you are not the intended recipient or
>> have received this message in error, please notify me immediately and
>> delete this message from your computer system. Any unauthorized use or
>> distribution is prohibited. Please consider the environment before printing
>> this email.
>>
>>
>
> --
>
> Postdoctoral Fellow
> Extreme Computing Research Center
> King Abdullah University of Science & Technology
> https://sites.google.com/site/rolyliluo/
>
> --
> This message and its contents, including attachments are intended solely
> for the original recipient. If you are not the intended recipient or have
> received this message in error, please notify me immediately and delete
> this message from your computer system. Any unauthorized use or
> distribution is prohibited. Please consider the environment before printing
> this email.



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] (no subject)

2019-12-02 Thread Li Luo
Thank you for your reply.

The matrix is small with only 67500 rows, but is relatively dense since a
second-order discontinuous Galerkin FEM is used, nonzeros=23,036,400.
The number of colors is 539 as shown by using -mat_fd_coloring_view:

MatFDColoring Object: 64 MPI processes
  type not yet set
  Error tolerance=1.49012e-08
  Umin=1.49012e-06
  Number of colors=539
  Information for color 0
Number of columns 1
  378
Number of rows 756
  0 1188
  1 1188
  2 1188
  3 1188
  4 1188
  5 1188
  ...

Is this normal?
When using MCFD, is there any difference using mpiaij and mpibaij?

Best,
Li

On Mon, Dec 2, 2019 at 10:03 AM Smith, Barry F.  wrote:

>
>   How many colors is it requiring?   And how long is the MatGetColoring()
> taking? Are you running in parallel?  The MatGetColoring() MATCOLORINGSL
> uses a sequential coloring algorithm so if your matrix is large and
> parallel the coloring will take a long time. The parallel colorings are
> MATCOLORINGGREEDY and MATCOLORINGJP
>
>   Barry
>
>
> > On Dec 1, 2019, at 12:56 AM, Li Luo  wrote:
> >
> > Dear Developers,
> >
> > I tried to use the multi-color finite-difference (MC-FD) method for
> constructing the Jacobians. However, I find it is very slow compared to the
> exact Jacobian.
> > My implementation of MC-FD Jacobian is posted below, would you please
> check whether I am correct? Anything missed? Thank you!
> >
> > // Setup phase:
> >   MatStructure flag;
> >   ISColoring   iscoloring;
> >   ierr = MatGetColoring(Jac,MATCOLORINGSL,&iscoloring);
> >   ierr =
> MatFDColoringCreate(Jac,iscoloring,&this->matfdcoloring);
> >   ierr =
> MatFDColoringSetFunction(this->matfdcoloring,(PetscErrorCode
> (*)(void))__libmesh_petsc_snes_residual,(void *)this);
> >   ierr = MatFDColoringSetFromOptions(this->matfdcoloring);
> >   ierr = ISColoringDestroy(&iscoloring);
> >
> >  Apply:
> >   ierr = MatZeroEntries(*jac);CHKERRQ(ierr);
> >   ierr =
> MatFDColoringApply(*jac,solver->matfdcoloring,x,msflag,snes);
> >
> > Best regards,
> > Li Luo
> >
> > This message and its contents, including attachments are intended solely
> for the original recipient. If you are not the intended recipient or have
> received this message in error, please notify me immediately and delete
> this message from your computer system. Any unauthorized use or
> distribution is prohibited. Please consider the environment before printing
> this email.
>
>

-- 

Postdoctoral Fellow
Extreme Computing Research Center
King Abdullah University of Science & Technology
https://sites.google.com/site/rolyliluo/

-- 

This message and its contents, including attachments are intended solely 
for the original recipient. If you are not the intended recipient or have 
received this message in error, please notify me immediately and delete 
this message from your computer system. Any unauthorized use or 
distribution is prohibited. Please consider the environment before printing 
this email.


Re: [petsc-users] (no subject)

2019-12-01 Thread Smith, Barry F.


  How many colors is it requiring?   And how long is the MatGetColoring() 
taking? Are you running in parallel?  The MatGetColoring() MATCOLORINGSL uses a 
sequential coloring algorithm so if your matrix is large and parallel the 
coloring will take a long time. The parallel colorings are MATCOLORINGGREEDY 
and MATCOLORINGJP
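
A rough sketch of using one of those parallel colorings through the newer
MatColoring interface (assuming a PETSc version that provides it):

#include <petscmat.h>

/* Sketch: color the Jacobian with a parallel algorithm instead of the
   sequential MATCOLORINGSL used above. */
PetscErrorCode ColorJacobianInParallel(Mat Jac, ISColoring *iscoloring)
{
  PetscErrorCode ierr;
  MatColoring    mc;

  ierr = MatColoringCreate(Jac, &mc);CHKERRQ(ierr);
  ierr = MatColoringSetType(mc, MATCOLORINGGREEDY);CHKERRQ(ierr); /* or MATCOLORINGJP */
  ierr = MatColoringSetFromOptions(mc);CHKERRQ(ierr);             /* honors -mat_coloring_type */
  ierr = MatColoringApply(mc, iscoloring);CHKERRQ(ierr);
  ierr = MatColoringDestroy(&mc);CHKERRQ(ierr);
  return 0;
}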

  Barry


> On Dec 1, 2019, at 12:56 AM, Li Luo  wrote:
> 
> Dear Developers,
> 
> I tried to use the multi-color finite-difference (MC-FD) method for 
> constructing the Jacobians. However, I find it is very slow compared to the 
> exact Jacobian. 
> My implementation of MC-FD Jacobian is posted below, would you please check 
> whether I am correct? Anything missed? Thank you!
> 
> // Setup phase:
>   MatStructure flag;
>   ISColoring   iscoloring;
>   ierr = MatGetColoring(Jac,MATCOLORINGSL,&iscoloring);
>   ierr = MatFDColoringCreate(Jac,iscoloring,&this->matfdcoloring);
>   ierr = MatFDColoringSetFunction(this->matfdcoloring,(PetscErrorCode 
> (*)(void))__libmesh_petsc_snes_residual,(void *)this);
>   ierr = MatFDColoringSetFromOptions(this->matfdcoloring);
>   ierr = ISColoringDestroy(&iscoloring);
> 
>  Apply:
>   ierr = MatZeroEntries(*jac);CHKERRQ(ierr);
>   ierr = MatFDColoringApply(*jac,solver->matfdcoloring,x,msflag,snes);
> 
> Best regards,
> Li Luo
> 
> This message and its contents, including attachments are intended solely for 
> the original recipient. If you are not the intended recipient or have 
> received this message in error, please notify me immediately and delete this 
> message from your computer system. Any unauthorized use or distribution is 
> prohibited. Please consider the environment before printing this email.



Re: [petsc-users] (no subject)

2019-12-01 Thread Jed Brown
Mark Adams  writes:

> FD matrices are slow and meant mostly for debugging (I thought,
> although the docs only give this warning if coloring is not available).
>
> I would check the timings from -log_view and verify that the time is spent
> in MatFDColoringApply. Running with -info should print the number of colors
> (C). The cost of an FD matrix is about C x cost of an exact Jacobian.
> Roughly. You could check that.

This depends a lot on how efficient the residual and Jacobian evaluation
is.  We've seen examples where a colored Jacobian is less expensive than
analytic, even when the analytic is not "poorly written".


Re: [petsc-users] (no subject)

2019-12-01 Thread Mark Adams
FD matrices are slow and meant mostly for debugging (I thought,
although the docs only give this warning if coloring is not available).

I would check the timings from -log_view and verify that the time is spent
in MatFDColoringApply. Running with -info should print the number of colors
(C). The cost of an FD matrix is about C x cost of an exact Jacobian.
Roughly. You could check that.

Mark

On Sun, Dec 1, 2019 at 3:58 AM Li Luo  wrote:

> Dear Developers,
>
> I tried to use the multi-color finite-difference (MC-FD) method for
> constructing the Jacobians. However, I find it is very slow compared to the
> exact Jacobian.
> My implementation of MC-FD Jacobian is posted below, would you please
> check whether I am correct? Anything missed? Thank you!
>
> // Setup phase:
>   MatStructure flag;
>   ISColoring   iscoloring;
>   ierr = MatGetColoring(Jac,MATCOLORINGSL,&iscoloring);
>   ierr = MatFDColoringCreate(Jac,iscoloring,&this->matfdcoloring);
>   ierr =
> MatFDColoringSetFunction(this->matfdcoloring,(PetscErrorCode
> (*)(void))__libmesh_petsc_snes_residual,(void *)this);
>   ierr = MatFDColoringSetFromOptions(this->matfdcoloring);
>   ierr = ISColoringDestroy(&iscoloring);
>
>  Apply:
>   ierr = MatZeroEntries(*jac);CHKERRQ(ierr);
>   ierr =
> MatFDColoringApply(*jac,solver->matfdcoloring,x,msflag,snes);
>
> Best regards,
> Li Luo
>
> --
> This message and its contents, including attachments are intended solely
> for the original recipient. If you are not the intended recipient or have
> received this message in error, please notify me immediately and delete
> this message from your computer system. Any unauthorized use or
> distribution is prohibited. Please consider the environment before printing
> this email.


Re: [petsc-users] (no subject)

2019-02-27 Thread Balay, Satish via petsc-users


Can you send configure.log from this build?

Satish

On Thu, 28 Feb 2019, DAFNAKIS PANAGIOTIS via petsc-users wrote:

> Hi everybody,
> 
> I am trying to install PETSc version 3.10.3 on OSX Sierra 10.13.6 with the
> following configure options
> ./configure --CC=mpicc --CXX=mpicxx --FC=mpif90 --PETSC_ARCH=sierra-dbg
> --with-debugging=1 --download-hypre=1 --with-x=0
> 
> however I am getting the following error messages when I do 'make check'. See
> below the resulting message. Any suggestions?
> 
> Thanks,
> 
> --Panos
> 
> panos@Sierra-iMac:~/Softwares/PETSc-Bitbucket/PETSc$ make
> PETSC_DIR=/Users/panos/Softwares/PETSc-Bitbucket/PETSc PETSC_ARCH=sierra-dbg
> check
> Running test examples to verify correct installation
> Using PETSC_DIR=/Users/panos/Softwares/PETSc-Bitbucket/PETSc and
> PETSC_ARCH=sierra-dbg
> make[2]: [ex19.PETSc] Error 2 (ignored)
> ***Error detected during compile or link!***
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> /Users/panos/Softwares/PETSc-Bitbucket/PETSc/src/snes/examples/tutorials ex19
> *
> mpicc -o ex19.o -c -Wall -Wwrite-strings -Wno-strict-aliasing
> -Wno-unknown-pragmas -Qunused-arguments -fvisibility=hidden -g3
> -I/Users/panos/Softwares/PETSc-Bitbucket/PETSc/include
> -I/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/include
> `pwd`/ex19.c
> mpicc -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress
> -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind
> -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas
> -Qunused-arguments -fvisibility=hidden -g3  -o ex19 ex19.o
> -Wl,-rpath,/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -L/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -Wl,-rpath,/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -L/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -Wl,-rpath,/usr/local/Cellar/mpich/3.3/lib -L/usr/local/Cellar/mpich/3.3/lib
> -Wl,-rpath,/usr/local/Cellar/gcc/8.3.0/lib/gcc/8/gcc/x86_64-apple-darwin17.7.0/8.3.0
> -L/usr/local/Cellar/gcc/8.3.0/lib/gcc/8/gcc/x86_64-apple-darwin17.7.0/8.3.0
> -Wl,-rpath,/usr/local/Cellar/gcc/8.3.0/lib/gcc/8
> -L/usr/local/Cellar/gcc/8.3.0/lib/gcc/8
> -Wl,-rpath,/System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries
> -L/System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries
> -Wl,-rpath,/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources
> -L/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources
> -Wl,-rpath,/System/Library/Frameworks/ImageIO.framework/Versions/A/Resources
> -L/System/Library/Frameworks/ImageIO.framework/Versions/A/Resources -lpetsc
> -lHYPRE -ldl -lmpifort -lmpi -lpmpi -lgfortran -lquadmath -lm -lGFXShared
> -lGLU -lGL -lGLImage -lCVMSPluginSupport -lFontParser -lFontRegistry -lJPEG
> -lTIFF -lPng -lGIF -lJP2 -lRadiance -lCoreVMClient -ldl
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.tbd and
> library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCVMSPluginSupport.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCVMSPluginSupport.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontParser.tbd
> and library file
> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontParser.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> 

Re: [petsc-users] (no subject)

2018-10-31 Thread Smith, Barry F. via petsc-users
https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind


> On Oct 31, 2018, at 9:18 PM, Wenjin Xing  wrote:
> 
> Hi Barry
>  
> As you said, I have set the mat option to Aij. (MATAIJ = "aij" - A matrix 
> type to be used for sparse matrices. This matrix type is identical to 
> MATSEQAIJ when constructed with a single process communicator.) However, a 
> new error pops up.  By the way, I am using a single processor, not in 
> parallel mode.
>  
>  
>  
> 
>  
> Kind regards
> Wenjin
>  
>  
>  
>  
> -Original Message-
> From: Smith, Barry F. [mailto:bsm...@mcs.anl.gov] 
> Sent: Thursday, 1 November 2018 9:28 AM
> To: Wenjin Xing 
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] (no subject)
>  
>  
>This option only works with AIJ matrices; you must be using either BAIJ or 
> SBAIJ matrices? (or a shell matrix)
>  
>Barry
>  
>  
> > On Oct 31, 2018, at 5:45 AM, Wenjin Xing via petsc-users 
> >  wrote:
> > 
> > My issue is summarized in the picture and posted in the link 
> > https://scicomp.stackexchange.com/questions/30458/what-does-the-error-this-matrix-type-does-not-have-a-find-zero-diagonals-define?noredirect=1#comment56074_30458
> >  
> > 
> >  
> > Kind regards
> > Wenjin



Re: [petsc-users] (no subject)

2018-10-31 Thread Smith, Barry F. via petsc-users


   This option only works with AIJ matrices; you must be using either BAIJ or 
SBAIJ matrices? (or a shell matrix)
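
If the matrix really is BAIJ or SBAIJ, a minimal sketch of one possible
workaround (an assumption about the situation, using an in-place conversion)
would be:

#include <petscmat.h>

/* Sketch: convert an assembled BAIJ/SBAIJ matrix to AIJ in place so that
   AIJ-only operations become available. */
PetscErrorCode ConvertToAIJ(Mat A)
{
  PetscErrorCode ierr;
  ierr = MatConvert(A, MATAIJ, MAT_INPLACE_MATRIX, &A);CHKERRQ(ierr);
  return 0;
}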

   Barry


> On Oct 31, 2018, at 5:45 AM, Wenjin Xing via petsc-users 
>  wrote:
> 
> My issue is summarized in the picture and posted in the link 
> https://scicomp.stackexchange.com/questions/30458/what-does-the-error-this-matrix-type-does-not-have-a-find-zero-diagonals-define?noredirect=1#comment56074_30458
>  
> 
>  
> Kind regards
> Wenjin



Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
Thanks. I think I found the right way.
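
For reference, a minimal sketch (in C; the petsc4py calls mirror these, and the
rows/cols/block arguments are hypothetical) of what Barry suggests below, i.e.
keeping each m_ij as a plain dense 2-D array and inserting it into the big
matrix with global indices:

#include <petscmat.h>

/* Sketch: insert one dense sub-block (row-major array "block") into the
   global matrix M at the given GLOBAL row/column indices. */
PetscErrorCode InsertDenseBlock(Mat M, PetscInt nr, const PetscInt rows[],
                                PetscInt nc, const PetscInt cols[],
                                const PetscScalar block[])
{
  PetscErrorCode ierr;
  ierr = MatSetValues(M, nr, rows, nc, cols, block, INSERT_VALUES);CHKERRQ(ierr);
  return 0;  /* after all blocks: MatAssemblyBegin/End(M, MAT_FINAL_ASSEMBLY) */
}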

Wayne

On Fri, Sep 16, 2016 at 11:33 AM, Ji Zhang  wrote:

> Thanks for your warm help. Could you please show me some necessary
> functions or a simple demo code?
>
>
> Wayne
>
> On Fri, Sep 16, 2016 at 10:32 AM, Barry Smith  wrote:
>
>>
>>   You should create your small m_ij matrices as just dense two
>> dimensional arrays and then set them into the big M matrix. Do not create
>> the small dense matrices as PETSc matrices.
>>
>>   Barry
>>
>>
>> > On Sep 15, 2016, at 9:21 PM, Ji Zhang  wrote:
>> >
>> > I'm so apologize for the ambiguity. Let me clarify it.
>> >
>> > I'm trying to simulation interactions among different bodies. Now I
>> have calculated the interaction between two of them and stored in the
>> sub-matrix m_ij. What I want to do is to consider the whole interaction and
>> construct all sub-matrices m_ij into a big matrix M, just like this,
>> imaging the problem contain 3 bodies,
>> >
>> >  [  m11  m12  m13  ]
>> >  M =  |  m21  m22  m23  |   ,
>> >  [  m31  m32  m33  ]
>> >
>> > The system is so huge that I have to use MPI and a lot of CPUs. An MCVE
>> code is shown below; I'm using a Python wrapper of PETSc (petsc4py), whose
>> syntax is similar.
>> >
>> > import numpy as np
>> > from petsc4py import PETSc
>> >
>> > mSizes = (5, 8, 6)
>> > mij = []
>> >
>> > # create sub-matrices mij
>> > for i in range(len(mSizes)):
>> > for j in range(len(mSizes)):
>> > temp_m = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
>> > temp_m.setSizes(((None, mSizes[i]), (None, mSizes[j])))
>> > temp_m.setType('mpidense')
>> > temp_m.setFromOptions()
>> > temp_m.setUp()
>> > temp_m[:, :] = np.random.random_sample((mSizes[i], mSizes[j]))
>> > temp_m.assemble()
>> > mij.append(temp_m)
>> >
>> > # Now we have four sub-matrices. I would like to construct them into a
>> big matrix M.
>> > M = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
>> > M.setSizes(((None, np.sum(mSizes)), (None, np.sum(mSizes
>> > M.setType('mpidense')
>> > M.setFromOptions()
>> > M.setUp()
>> > mLocations = np.insert(np.cumsum(mSizes), 0, 0)# mLocations = [0,
>> mSizes]
>> > for i in range(len(mSizes)):
>> > for j in range(len(mSizes)):
>> > M[mLocations[i]:mLocations[i+1],
>> mLocations[j]:mLocations[j+1]] = mij[i*len(mSizes)+j][:, :]
>> > M.assemble()
>> >
>> > Thanks.
>> >
>> >
>> > 2016-09-16
>> > Best,
>> > Regards,
>> > Zhang Ji
>> > Beijing Computational Science Research Center
>> > E-mail: got...@gmail.com
>> >
>> >
>> >
>> >
>> >
>> > Wayne
>> >
>> > On Thu, Sep 15, 2016 at 8:58 PM, Matthew Knepley 
>> wrote:
>> > On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang  wrote:
>> > Thanks Matt. It works well for signal core. But is there any solution
>> if I need a MPI program?
>> >
>> > It unclear what the stuff below would mean in parallel.
>> >
>> > If you want to assemble several blocks of a parallel matrix that looks
>> like serial matrices, then use
>> >
>> >   http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages
>> /Mat/MatGetLocalSubMatrix.html
>> >
>> >   Thanks,
>> >
>> >  Matt
>> >
>> > Thanks.
>> >
>> > Wayne
>> >
>> > On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley 
>> wrote:
>> > On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang  wrote:
>> > Dear all,
>> >
>> > I'm using petsc4py and now face some problems.
>> > I have a number of small petsc dense matrices mij, and I want to
>> construct them to a big matrix M like this:
>> >
>> >  [  m11  m12  m13  ]
>> > M =  |  m21  m22  m23  |   ,
>> >  [  m31  m32  m33  ]
>> > How could I do it effectively?
>> >
>> > Now I'm using the code below:
>> >
>> > # get indexes of matrix mij
>> > index1_begin, index1_end = getindex_i( )
>> > index2_begin, index2_end = getindex_j( )
>> > M[index1_begin:index1_end, index2_begin:index2_end] = mij[:, :]
>> > which report such error messages:
>> >
>> > petsc4py.PETSc.Error: error code 56
>> > [0] MatGetValues() line 1818 in /home/zhangji/PycharmProjects/
>> petsc-petsc-31a1859eaff6/src/mat/interface/matrix.c
>> > [0] MatGetValues_MPIDense() line 154 in
>> /home/zhangji/PycharmProjects/petsc-petsc-31a1859eaff6/src/m
>> at/impls/dense/mpi/mpidense.c
>> >
>> > Make M a sequential dense matrix.
>> >
>> >Matt
>> >
>> > [0] No support for this operation for this object type
>> > [0] Only local values currently supported
>> >
>> > Thanks.
>> >
>> >
>> > 2016-09-13
>> > Best,
>> > Regards,
>> > Zhang Ji
>> > Beijing Computational Science Research Center
>> > E-mail: got...@gmail.com
>> >
>> >
>> >
>> >
>> >
>> > --
>> > What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> > -- Norbert Wiener
>> >
>> >
>> >
>> >
>> > --
>> > What 

Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
Thanks for your warm help. Could you please show me some necessary
functions or a simple demo code?


Wayne

On Fri, Sep 16, 2016 at 10:32 AM, Barry Smith  wrote:

>
>   You should create your small m_ij matrices as just dense two dimensional
> arrays and then set them into the big M matrix. Do not create the small
> dense matrices as PETSc matrices.
>
>   Barry
>
>
> > On Sep 15, 2016, at 9:21 PM, Ji Zhang  wrote:
> >
> > I'm so apologize for the ambiguity. Let me clarify it.
> >
> > I'm trying to simulation interactions among different bodies. Now I have
> calculated the interaction between two of them and stored in the sub-matrix
> m_ij. What I want to do is to consider the whole interaction and construct
> all sub-matrices m_ij into a big matrix M, just like this, imaging the
> problem contain 3 bodies,
> >
> >  [  m11  m12  m13  ]
> >  M =  |  m21  m22  m23  |   ,
> >  [  m31  m32  m33  ]
> >
> > The system is huge that I have to use MPI and a lot of cups. A mcve code
> is showing below, and I'm using a python wrap of PETSc, however, their
> grammar is similar.
> >
> > import numpy as np
> > from petsc4py import PETSc
> >
> > mSizes = (5, 8, 6)
> > mij = []
> >
> > # create sub-matrices mij
> > for i in range(len(mSizes)):
> > for j in range(len(mSizes)):
> > temp_m = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> > temp_m.setSizes(((None, mSizes[i]), (None, mSizes[j])))
> > temp_m.setType('mpidense')
> > temp_m.setFromOptions()
> > temp_m.setUp()
> > temp_m[:, :] = np.random.random_sample((mSizes[i], mSizes[j]))
> > temp_m.assemble()
> > mij.append(temp_m)
> >
> > # Now we have four sub-matrices. I would like to construct them into a
> big matrix M.
> > M = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> > M.setSizes(((None, np.sum(mSizes)), (None, np.sum(mSizes
> > M.setType('mpidense')
> > M.setFromOptions()
> > M.setUp()
> > mLocations = np.insert(np.cumsum(mSizes), 0, 0)# mLocations = [0,
> mSizes]
> > for i in range(len(mSizes)):
> > for j in range(len(mSizes)):
> > M[mLocations[i]:mLocations[i+1], mLocations[j]:mLocations[j+1]]
> = mij[i*len(mSizes)+j][:, :]
> > M.assemble()
> >
> > Thanks.
> >
> >
> > 2016-09-16
> > Best,
> > Regards,
> > Zhang Ji
> > Beijing Computational Science Research Center
> > E-mail: got...@gmail.com
> >
> >
> >
> >
> >
> > Wayne
> >
> > On Thu, Sep 15, 2016 at 8:58 PM, Matthew Knepley 
> wrote:
> > On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang  wrote:
> > Thanks Matt. It works well for signal core. But is there any solution if
> I need a MPI program?
> >
> > It unclear what the stuff below would mean in parallel.
> >
> > If you want to assemble several blocks of a parallel matrix that looks
> like serial matrices, then use
> >
> >   http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/
> MatGetLocalSubMatrix.html
> >
> >   Thanks,
> >
> >  Matt
> >
> > Thanks.
> >
> > Wayne
> >
> > On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley 
> wrote:
> > On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang  wrote:
> > Dear all,
> >
> > I'm using petsc4py and now face some problems.
> > I have a number of small petsc dense matrices mij, and I want to
> construct them to a big matrix M like this:
> >
> >  [  m11  m12  m13  ]
> > M =  |  m21  m22  m23  |   ,
> >  [  m31  m32  m33  ]
> > How could I do it effectively?
> >
> > Now I'm using the code below:
> >
> > # get indexes of matrix mij
> > index1_begin, index1_end = getindex_i( )
> > index2_begin, index2_end = getindex_j( )
> > M[index1_begin:index1_end, index2_begin:index2_end] = mij[:, :]
> > which report such error messages:
> >
> > petsc4py.PETSc.Error: error code 56
> > [0] MatGetValues() line 1818 in /home/zhangji/PycharmProjects/
> petsc-petsc-31a1859eaff6/src/mat/interface/matrix.c
> > [0] MatGetValues_MPIDense() line 154 in
> /home/zhangji/PycharmProjects/petsc-petsc-31a1859eaff6/src/
> mat/impls/dense/mpi/mpidense.c
> >
> > Make M a sequential dense matrix.
> >
> >Matt
> >
> > [0] No support for this operation for this object type
> > [0] Only local values currently supported
> >
> > Thanks.
> >
> >
> > 2016-09-13
> > Best,
> > Regards,
> > Zhang Ji
> > Beijing Computational Science Research Center
> > E-mail: got...@gmail.com
> >
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
>
>


Re: [petsc-users] (no subject)

2016-09-15 Thread Barry Smith

  You should create your small m_ij matrices as just dense two dimensional 
arrays and then set them into the big M matrix. Do not create the small dense 
matrices as PETSc matrices.
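
A minimal petsc4py sketch of what this can look like (block sizes reused from
the code below; for simplicity rank 0 inserts every block and the stash ships
off-process values during assembly):

import numpy as np
from petsc4py import PETSc

comm = PETSc.COMM_WORLD
mSizes = (5, 8, 6)                                # block sizes, as below
offsets = np.insert(np.cumsum(mSizes), 0, 0)
N = int(offsets[-1])

M = PETSc.Mat().createDense(((None, N), (None, N)), comm=comm)  # mpidense in parallel
M.setUp()

if comm.getRank() == 0:
    for i in range(len(mSizes)):
        for j in range(len(mSizes)):
            # the small blocks are plain numpy arrays, not PETSc matrices
            m_ij = np.random.random_sample((mSizes[i], mSizes[j]))
            rows = np.arange(offsets[i], offsets[i+1], dtype=PETSc.IntType)
            cols = np.arange(offsets[j], offsets[j+1], dtype=PETSc.IntType)
            M.setValues(rows, cols, m_ij)
M.assemble()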

  Barry


> On Sep 15, 2016, at 9:21 PM, Ji Zhang  wrote:
> 
> I'm so apologize for the ambiguity. Let me clarify it. 
> 
> I'm trying to simulation interactions among different bodies. Now I have 
> calculated the interaction between two of them and stored in the sub-matrix 
> m_ij. What I want to do is to consider the whole interaction and construct 
> all sub-matrices m_ij into a big matrix M, just like this, imaging the 
> problem contain 3 bodies,  
> 
>  [  m11  m12  m13  ]
>  M =  |  m21  m22  m23  |   ,
>  [  m31  m32  m33  ]
> 
> The system is huge that I have to use MPI and a lot of cups. A mcve code is 
> showing below, and I'm using a python wrap of PETSc, however, their grammar 
> is similar. 
> 
> import numpy as np
> from petsc4py import PETSc
> 
> mSizes = (5, 8, 6)
> mij = []
> 
> # create sub-matrices mij
> for i in range(len(mSizes)):
> for j in range(len(mSizes)):
> temp_m = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> temp_m.setSizes(((None, mSizes[i]), (None, mSizes[j])))
> temp_m.setType('mpidense')
> temp_m.setFromOptions()
> temp_m.setUp()
> temp_m[:, :] = np.random.random_sample((mSizes[i], mSizes[j]))
> temp_m.assemble()
> mij.append(temp_m)
> 
> # Now we have four sub-matrices. I would like to construct them into a big 
> matrix M.
> M = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> M.setSizes(((None, np.sum(mSizes)), (None, np.sum(mSizes
> M.setType('mpidense')
> M.setFromOptions()
> M.setUp()
> mLocations = np.insert(np.cumsum(mSizes), 0, 0)# mLocations = [0, mSizes]
> for i in range(len(mSizes)):
> for j in range(len(mSizes)):
> M[mLocations[i]:mLocations[i+1], mLocations[j]:mLocations[j+1]] = 
> mij[i*len(mSizes)+j][:, :]
> M.assemble()
> 
> Thanks. 
> 
> 
> 2016-09-16
> Best, 
> Regards,
> Zhang Ji 
> Beijing Computational Science Research Center 
> E-mail: got...@gmail.com
> 
> 
> 
> 
> 
> Wayne
> 
> On Thu, Sep 15, 2016 at 8:58 PM, Matthew Knepley  wrote:
> On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang  wrote:
> Thanks Matt. It works well for signal core. But is there any solution if I 
> need a MPI program?
> 
> It unclear what the stuff below would mean in parallel.
> 
> If you want to assemble several blocks of a parallel matrix that looks like 
> serial matrices, then use
> 
>   
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSubMatrix.html
> 
>   Thanks,
> 
>  Matt
>  
> Thanks. 
> 
> Wayne
> 
> On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley  wrote:
> On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang  wrote:
> Dear all, 
> 
> I'm using petsc4py and now face some problems.
> I have a number of small petsc dense matrices mij, and I want to construct 
> them to a big matrix M like this:
> 
>  [  m11  m12  m13  ]
> M =  |  m21  m22  m23  |   ,
>  [  m31  m32  m33  ]
> How could I do it effectively?
> 
> Now I'm using the code below:
> 
> # get indexes of matrix mij
> index1_begin, index1_end = getindex_i( )
> index2_begin, index2_end = getindex_j( )
> M[index1_begin:index1_end, index2_begin:index2_end] = mij[:, :]
> which report such error messages:
> 
> petsc4py.PETSc.Error: error code 56
> [0] MatGetValues() line 1818 in 
> /home/zhangji/PycharmProjects/petsc-petsc-31a1859eaff6/src/mat/interface/matrix.c
> [0] MatGetValues_MPIDense() line 154 in 
> /home/zhangji/PycharmProjects/petsc-petsc-31a1859eaff6/src/mat/impls/dense/mpi/mpidense.c
> 
> Make M a sequential dense matrix.
> 
>Matt
>  
> [0] No support for this operation for this object type
> [0] Only local values currently supported
> 
> Thanks. 
> 
> 
> 2016-09-13
> Best, 
> Regards,
> Zhang Ji 
> Beijing Computational Science Research Center 
> E-mail: got...@gmail.com
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 



Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
I apologize for the ambiguity. Let me clarify.

I'm trying to simulate interactions among different bodies. I have
calculated the interaction between two of them and stored it in the
sub-matrix m_ij. What I want to do is to consider the whole interaction and
assemble all sub-matrices m_ij into a big matrix M, like this, imagining
that the problem contains 3 bodies:

      [  m11  m12  m13  ]
  M = |  m21  m22  m23  |  ,
      [  m31  m32  m33  ]

The system is so huge that I have to use MPI and a lot of CPUs. An MCVE
code is shown below; I'm using a Python wrapper of PETSc, but the syntax
is similar.

import numpy as np
from petsc4py import PETSc

mSizes = (5, 8, 6)
mij = []

# create sub-matrices mij
for i in range(len(mSizes)):
    for j in range(len(mSizes)):
        temp_m = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
        temp_m.setSizes(((None, mSizes[i]), (None, mSizes[j])))
        temp_m.setType('mpidense')
        temp_m.setFromOptions()
        temp_m.setUp()
        temp_m[:, :] = np.random.random_sample((mSizes[i], mSizes[j]))
        temp_m.assemble()
        mij.append(temp_m)

# Now we have nine sub-matrices. I would like to assemble them into a big matrix M.
M = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
M.setSizes(((None, np.sum(mSizes)), (None, np.sum(mSizes))))
M.setType('mpidense')
M.setFromOptions()
M.setUp()
mLocations = np.insert(np.cumsum(mSizes), 0, 0)  # mLocations = [0, 5, 13, 19]
for i in range(len(mSizes)):
    for j in range(len(mSizes)):
        M[mLocations[i]:mLocations[i+1], mLocations[j]:mLocations[j+1]] = mij[i*len(mSizes)+j][:, :]
M.assemble()


Thanks.


2016-09-16
Best,
Regards,
Zhang Ji
Beijing Computational Science Research Center
E-mail: got...@gmail.com





Wayne

On Thu, Sep 15, 2016 at 8:58 PM, Matthew Knepley  wrote:

> On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang  wrote:
>
>> Thanks Matt. It works well for signal core. But is there any solution if
>> I need a MPI program?
>>
>
> It unclear what the stuff below would mean in parallel.
>
> If you want to assemble several blocks of a parallel matrix that looks
> like serial matrices, then use
>
>   http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/
> MatGetLocalSubMatrix.html
>
>   Thanks,
>
>  Matt
>
>
>> Thanks.
>>
>> Wayne
>>
>> On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley 
>> wrote:
>>
>>> On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang  wrote:
>>>
 Dear all,

 I'm using petsc4py and now face some problems.
 I have a number of small petsc dense matrices mij, and I want to
 construct them to a big matrix M like this:

  [  m11  m12  m13  ]
 M =  |  m21  m22  m23  |   ,
  [  m31  m32  m33  ]
 How could I do it effectively?

 Now I'm using the code below:

 # get indexes of matrix mij
 index1_begin, index1_end = getindex_i( )
 index2_begin, index2_end = getindex_j( )
 M[index1_begin:index1_end, index2_begin:index2_end] = mij[:, :]
 which report such error messages:

 petsc4py.PETSc.Error: error code 56
 [0] MatGetValues() line 1818 in /home/zhangji/PycharmProjects/
 petsc-petsc-31a1859eaff6/src/mat/interface/matrix.c
 [0] MatGetValues_MPIDense() line 154 in
 /home/zhangji/PycharmProjects/petsc-petsc-31a1859eaff6/src/m
 at/impls/dense/mpi/mpidense.c

>>>
>>> Make M a sequential dense matrix.
>>>
>>>Matt
>>>
>>>
 [0] No support for this operation for this object type
 [0] Only local values currently supported

 Thanks.


 2016-09-13
 Best,
 Regards,
 Zhang Ji
 Beijing Computational Science Research Center
 E-mail: got...@gmail.com



>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] (no subject)

2016-09-15 Thread Matthew Knepley
On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang  wrote:

> Thanks Matt. It works well for signal core. But is there any solution if I
> need a MPI program?
>

It is unclear what the stuff below would mean in parallel.

If you want to assemble several blocks of a parallel matrix that looks like
serial matrices, then use


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSubMatrix.html
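
(A different route from the MatGetLocalSubMatrix pattern pointed to above,
offered here only as an alternative sketch: MATNEST presents existing
sub-matrices as one parallel operator without copying them. Block sizes
below are made up.)

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
sizes = (5, 8, 6)                                 # made-up block sizes

# each block is its own (parallel) matrix; dense blocks used here for brevity
blocks = [[PETSc.Mat().createDense(((None, m), (None, n)), comm=comm)
           for n in sizes] for m in sizes]
for row in blocks:
    for b in row:
        b.setUp()
        b.assemble()

# wrap the 3x3 block layout as a single matrix; values still live in the blocks
M = PETSc.Mat().createNest(blocks, comm=comm)
# M can now be handed to KSP.setOperators(M) or used for matvecs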

  Thanks,

 Matt


> Thanks.
>
> Wayne
>
> On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley 
> wrote:
>
>> On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang  wrote:
>>
>>> Dear all,
>>>
>>> I'm using petsc4py and now face some problems.
>>> I have a number of small petsc dense matrices mij, and I want to
>>> construct them to a big matrix M like this:
>>>
>>>  [  m11  m12  m13  ]
>>> M =  |  m21  m22  m23  |   ,
>>>  [  m31  m32  m33  ]
>>> How could I do it effectively?
>>>
>>> Now I'm using the code below:
>>>
>>> # get indexes of matrix mij
>>> index1_begin, index1_end = getindex_i( )
>>> index2_begin, index2_end = getindex_j( )
>>> M[index1_begin:index1_end, index2_begin:index2_end] = mij[:, :]
>>> which report such error messages:
>>>
>>> petsc4py.PETSc.Error: error code 56
>>> [0] MatGetValues() line 1818 in /home/zhangji/PycharmProjects/
>>> petsc-petsc-31a1859eaff6/src/mat/interface/matrix.c
>>> [0] MatGetValues_MPIDense() line 154 in
>>> /home/zhangji/PycharmProjects/petsc-petsc-31a1859eaff6/src/m
>>> at/impls/dense/mpi/mpidense.c
>>>
>>
>> Make M a sequential dense matrix.
>>
>>Matt
>>
>>
>>> [0] No support for this operation for this object type
>>> [0] Only local values currently supported
>>>
>>> Thanks.
>>>
>>>
>>> 2016-09-13
>>> Best,
>>> Regards,
>>> Zhang Ji
>>> Beijing Computational Science Research Center
>>> E-mail: got...@gmail.com
>>>
>>>
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
Thanks, Matt. It works well for a single core. But is there any solution if I
need an MPI program?

Thanks.

Wayne

On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley  wrote:

> On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang  wrote:
>
>> Dear all,
>>
>> I'm using petsc4py and now face some problems.
>> I have a number of small petsc dense matrices mij, and I want to
>> construct them to a big matrix M like this:
>>
>>  [  m11  m12  m13  ]
>> M =  |  m21  m22  m23  |   ,
>>  [  m31  m32  m33  ]
>> How could I do it effectively?
>>
>> Now I'm using the code below:
>>
>> # get indexes of matrix mij
>> index1_begin, index1_end = getindex_i( )
>> index2_begin, index2_end = getindex_j( )
>> M[index1_begin:index1_end, index2_begin:index2_end] = mij[:, :]
>> which report such error messages:
>>
>> petsc4py.PETSc.Error: error code 56
>> [0] MatGetValues() line 1818 in /home/zhangji/PycharmProjects/
>> petsc-petsc-31a1859eaff6/src/mat/interface/matrix.c
>> [0] MatGetValues_MPIDense() line 154 in /home/zhangji/PycharmProjects/
>> petsc-petsc-31a1859eaff6/src/mat/impls/dense/mpi/mpidense.c
>>
>
> Make M a sequential dense matrix.
>
>Matt
>
>
>> [0] No support for this operation for this object type
>> [0] Only local values currently supported
>>
>> Thanks.
>>
>>
>> 2016-09-13
>> Best,
>> Regards,
>> Zhang Ji
>> Beijing Computational Science Research Center
>> E-mail: got...@gmail.com
>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] (no subject)

2016-09-12 Thread Matthew Knepley
On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang  wrote:

> Dear all,
>
> I'm using petsc4py and now face some problems.
> I have a number of small petsc dense matrices mij, and I want to construct
> them to a big matrix M like this:
>
>  [  m11  m12  m13  ]
> M =  |  m21  m22  m23  |   ,
>  [  m31  m32  m33  ]
> How could I do it effectively?
>
> Now I'm using the code below:
>
> # get indexes of matrix mij
> index1_begin, index1_end = getindex_i( )
> index2_begin, index2_end = getindex_j( )
> M[index1_begin:index1_end, index2_begin:index2_end] = mij[:, :]
> which report such error messages:
>
> petsc4py.PETSc.Error: error code 56
> [0] MatGetValues() line 1818 in /home/zhangji/PycharmProjects/
> petsc-petsc-31a1859eaff6/src/mat/interface/matrix.c
> [0] MatGetValues_MPIDense() line 154 in /home/zhangji/PycharmProjects/
> petsc-petsc-31a1859eaff6/src/mat/impls/dense/mpi/mpidense.c
>

Make M a sequential dense matrix.
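
A short sketch of that change, keeping the rest of the assembly code as in
the snippet above ('seqdense' is the uniprocessor dense type; N stands for the
total size, e.g. sum(mSizes)):

from petsc4py import PETSc

N = 19                                            # e.g. 5 + 8 + 6
M = PETSc.Mat().createDense([N, N], comm=PETSc.COMM_SELF)   # sequential dense
M.setUp()
# every entry of a sequential matrix is local, so slice reads and writes such as
#   M[i0:i1, j0:j1] = block
# no longer hit "Only local values currently supported"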

   Matt


> [0] No support for this operation for this object type
> [0] Only local values currently supported
>
> Thanks.
>
>
> 2016-09-13
> Best,
> Regards,
> Zhang Ji
> Beijing Computational Science Research Center
> E-mail: got...@gmail.com
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] (no subject)

2016-06-02 Thread neok m4700
Re,

It makes sense to read the documentation; I will try other
preconditioners.

Thanks for the support.

2016-06-02 18:15 GMT+02:00 Matthew Knepley :

> On Thu, Jun 2, 2016 at 11:10 AM, neok m4700  wrote:
>
>> Hi Satish,
>>
>> Thanks for the correction.
>>
>> The error message is now slightly different, but the result is the same
>> (serial runs fine, parallel with mpirun fails with following error):
>>
>
> Now the error is correct. You are asking to run ICC in parallel, which we
> do not support. It is telling you
> to look at the table of available solvers.
>
>   Thanks,
>
> Matt
>
>
>> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
>> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
>> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
>> [0] PCSetUp_ICC() line 21 in<...>/src/ksp/pc/impls/factor/icc/icc.c
>> [0] MatGetFactor() line 4291 in <...>/src/mat/interface/matrix.c
>> [0] See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html
>> for possible LU and Cholesky solvers
>> [0] Could not locate a solver package. Perhaps you must ./configure with
>> --download-
>>
>>
>>
>>
>> 2016-06-02 17:58 GMT+02:00 Satish Balay :
>>
>>> with petsc-master - you would have to use petsc4py-master.
>>>
>>> i.e try petsc-eab7b92 with petsc4py-6e8e093
>>>
>>> Satish
>>>
>>>
>>> On Thu, 2 Jun 2016, neok m4700 wrote:
>>>
>>> > Hi Matthew,
>>> >
>>> > I've rebuilt petsc // petsc4py with following versions:
>>> >
>>> > 3.7.0 // 3.7.0 => same runtime error
>>> > 00c67f3 // 3.7.1 => fails to build petsc4py (error below)
>>> > 00c67f3 // 6e8e093 => same as above
>>> > f1b0812 (latest commit) // 6e8e093 (latest commit) => same as above
>>> >
>>> > In file included from src/PETSc.c:3:0:
>>> > src/petsc4py.PETSc.c: In function
>>> > ‘__pyx_pf_8petsc4py_5PETSc_6DMPlex_4createBoxMesh’:
>>> > src/petsc4py.PETSc.c:214629:112: error: incompatible type for argument
>>> 4 of
>>> > ‘DMPlexCreateBoxMesh’
>>> >__pyx_t_4 =
>>> > __pyx_f_8petsc4py_5PETSc_CHKERR(DMPlexCreateBoxMesh(__pyx_v_ccomm,
>>> > __pyx_v_cdim, __pyx_v_interp, (&__pyx_v_newdm))); if
>>> (unlikely(__pyx_t_4 ==
>>> > -1)) __PYX_ERR(42, 49, __pyx_L1_error)
>>> >
>>> > using
>>> > - numpy 1.11.0
>>> > - openblas 0.2.18
>>> > - openmpi 1.10.2
>>> >
>>> > Thanks
>>> >
>>> > 2016-06-02 16:39 GMT+02:00 Matthew Knepley :
>>> >
>>> > > On Thu, Jun 2, 2016 at 9:12 AM, neok m4700 
>>> wrote:
>>> > >
>>> > >> Hi,
>>> > >>
>>> > >> I built petsc 3.7.1 and petsc4py 3.7.0 (with openmpi 1.10.2) and
>>> ran the
>>> > >> examples in the demo directory.
>>> > >>
>>> > >
>>> > > I believe this was fixed in 'master':
>>> > >
>>> https://bitbucket.org/petsc/petsc/commits/00c67f3b09c0bcda06af5ed306d845d9138e5003
>>> > >
>>> > > Is it possible to try this?
>>> > >
>>> > >   Thanks,
>>> > >
>>> > > Matt
>>> > >
>>> > >
>>> > >> $ python test_mat_ksp.py
>>> > >> => runs as expected (serial)
>>> > >>
>>> > >> $ mpiexec -np 2 python test_mat_ksp.py
>>> > >> => fails with the following output:
>>> > >>
>>> > >> Traceback (most recent call last):
>>> > >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 15, in 
>>> > >> execfile('petsc-ksp.py')
>>> > >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 6, in execfile
>>> > >> try: exec(fh.read()+"\n", globals, locals)
>>> > >>   File "", line 15, in 
>>> > >>   File "PETSc/KSP.pyx", line 384, in petsc4py.PETSc.KSP.solve
>>> > >> (src/petsc4py.PETSc.c:153555)
>>> > >> petsc4py.PETSc.Error: error code 92
>>> > >> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
>>> > >> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
>>> > >> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
>>> > >> [0] PCSetUp_ICC() line 21 in <...>/src/ksp/pc/impls/factor/icc/icc.c
>>> > >> [0] MatGetFactor() line 4240 in<...>/src/mat/interface/matrix.c
>>> > >> [0] You cannot overwrite this option since that will conflict with
>>> other
>>> > >> previously set options
>>> > >> [0] Could not locate solver package (null). Perhaps you must
>>> ./configure
>>> > >> with --download-(null)
>>> > >>
>>> > >> <...>
>>> > >> ---
>>> > >> Primary job  terminated normally, but 1 process returned
>>> > >> a non-zero exit code.. Per user-direction, the job has been aborted.
>>> > >> ---
>>> > >>
>>> --
>>> > >> mpirun detected that one or more processes exited with non-zero
>>> status,
>>> > >> thus causing
>>> > >> the job to be terminated. The first process to do so was:
>>> > >>
>>> > >>   Process name: [[23110,1],0]
>>> > >>   Exit code:1
>>> > >>
>>> --
>>> > >>
>>> > >>
>>> > >> What have I done wrong ?
>>> > >>
>>> 

Re: [petsc-users] (no subject)

2016-06-02 Thread Matthew Knepley
On Thu, Jun 2, 2016 at 11:10 AM, neok m4700  wrote:

> Hi Satish,
>
> Thanks for the correction.
>
> The error message is now slightly different, but the result is the same
> (serial runs fine, parallel with mpirun fails with following error):
>

Now the error is correct. You are asking to run ICC in parallel, which we
do not support. It is telling you
to look at the table of available solvers.
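
A sketch of the usual workaround (not from this thread): keep ICC, but apply
it per process underneath block Jacobi, which does run in parallel. The matrix
below is a made-up SPD example just so the snippet is complete:

from petsc4py import PETSc

n = 100                                           # made-up 1-D Laplacian
A = PETSc.Mat().createAIJ([n, n], nnz=3, comm=PETSc.COMM_WORLD)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

b = A.createVecRight(); b.set(1.0)
x = b.duplicate()

ksp = PETSc.KSP().create(comm=PETSc.COMM_WORLD)
ksp.setOperators(A)
ksp.setType('cg')
ksp.getPC().setType('bjacobi')                    # one block per process ...
PETSc.Options()['sub_pc_type'] = 'icc'            # ... with ICC inside each block
ksp.setFromOptions()
ksp.solve(b, x)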

  Thanks,

Matt


> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
> [0] PCSetUp_ICC() line 21 in<...>/src/ksp/pc/impls/factor/icc/icc.c
> [0] MatGetFactor() line 4291 in <...>/src/mat/interface/matrix.c
> [0] See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html
> for possible LU and Cholesky solvers
> [0] Could not locate a solver package. Perhaps you must ./configure with
> --download-
>
>
>
>
> 2016-06-02 17:58 GMT+02:00 Satish Balay :
>
>> with petsc-master - you would have to use petsc4py-master.
>>
>> i.e try petsc-eab7b92 with petsc4py-6e8e093
>>
>> Satish
>>
>>
>> On Thu, 2 Jun 2016, neok m4700 wrote:
>>
>> > Hi Matthew,
>> >
>> > I've rebuilt petsc // petsc4py with following versions:
>> >
>> > 3.7.0 // 3.7.0 => same runtime error
>> > 00c67f3 // 3.7.1 => fails to build petsc4py (error below)
>> > 00c67f3 // 6e8e093 => same as above
>> > f1b0812 (latest commit) // 6e8e093 (latest commit) => same as above
>> >
>> > In file included from src/PETSc.c:3:0:
>> > src/petsc4py.PETSc.c: In function
>> > ‘__pyx_pf_8petsc4py_5PETSc_6DMPlex_4createBoxMesh’:
>> > src/petsc4py.PETSc.c:214629:112: error: incompatible type for argument
>> 4 of
>> > ‘DMPlexCreateBoxMesh’
>> >__pyx_t_4 =
>> > __pyx_f_8petsc4py_5PETSc_CHKERR(DMPlexCreateBoxMesh(__pyx_v_ccomm,
>> > __pyx_v_cdim, __pyx_v_interp, (&__pyx_v_newdm))); if
>> (unlikely(__pyx_t_4 ==
>> > -1)) __PYX_ERR(42, 49, __pyx_L1_error)
>> >
>> > using
>> > - numpy 1.11.0
>> > - openblas 0.2.18
>> > - openmpi 1.10.2
>> >
>> > Thanks
>> >
>> > 2016-06-02 16:39 GMT+02:00 Matthew Knepley :
>> >
>> > > On Thu, Jun 2, 2016 at 9:12 AM, neok m4700 
>> wrote:
>> > >
>> > >> Hi,
>> > >>
>> > >> I built petsc 3.7.1 and petsc4py 3.7.0 (with openmpi 1.10.2) and ran
>> the
>> > >> examples in the demo directory.
>> > >>
>> > >
>> > > I believe this was fixed in 'master':
>> > >
>> https://bitbucket.org/petsc/petsc/commits/00c67f3b09c0bcda06af5ed306d845d9138e5003
>> > >
>> > > Is it possible to try this?
>> > >
>> > >   Thanks,
>> > >
>> > > Matt
>> > >
>> > >
>> > >> $ python test_mat_ksp.py
>> > >> => runs as expected (serial)
>> > >>
>> > >> $ mpiexec -np 2 python test_mat_ksp.py
>> > >> => fails with the following output:
>> > >>
>> > >> Traceback (most recent call last):
>> > >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 15, in 
>> > >> execfile('petsc-ksp.py')
>> > >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 6, in execfile
>> > >> try: exec(fh.read()+"\n", globals, locals)
>> > >>   File "", line 15, in 
>> > >>   File "PETSc/KSP.pyx", line 384, in petsc4py.PETSc.KSP.solve
>> > >> (src/petsc4py.PETSc.c:153555)
>> > >> petsc4py.PETSc.Error: error code 92
>> > >> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
>> > >> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
>> > >> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
>> > >> [0] PCSetUp_ICC() line 21 in <...>/src/ksp/pc/impls/factor/icc/icc.c
>> > >> [0] MatGetFactor() line 4240 in<...>/src/mat/interface/matrix.c
>> > >> [0] You cannot overwrite this option since that will conflict with
>> other
>> > >> previously set options
>> > >> [0] Could not locate solver package (null). Perhaps you must
>> ./configure
>> > >> with --download-(null)
>> > >>
>> > >> <...>
>> > >> ---
>> > >> Primary job  terminated normally, but 1 process returned
>> > >> a non-zero exit code.. Per user-direction, the job has been aborted.
>> > >> ---
>> > >>
>> --
>> > >> mpirun detected that one or more processes exited with non-zero
>> status,
>> > >> thus causing
>> > >> the job to be terminated. The first process to do so was:
>> > >>
>> > >>   Process name: [[23110,1],0]
>> > >>   Exit code:1
>> > >>
>> --
>> > >>
>> > >>
>> > >> What have I done wrong ?
>> > >>
>> > >>
>> > >>
>> > >>
>> > >
>> > >
>> > > --
>> > > What most experimenters take for granted before they begin their
>> > > experiments is infinitely more interesting than any results to which
>> their
>> > > experiments lead.
>> > > -- Norbert Wiener
>> > >
>> >
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

Re: [petsc-users] (no subject)

2016-06-02 Thread neok m4700
Hi Satish,

Thanks for the correction.

The error message is now slightly different, but the result is the same
(serial runs fine; parallel with mpirun fails with the following error):

[0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
[0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
[0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
[0] PCSetUp_ICC() line 21 in<...>/src/ksp/pc/impls/factor/icc/icc.c
[0] MatGetFactor() line 4291 in <...>/src/mat/interface/matrix.c
[0] See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html
for possible LU and Cholesky solvers
[0] Could not locate a solver package. Perhaps you must ./configure with
--download-




2016-06-02 17:58 GMT+02:00 Satish Balay :

> with petsc-master - you would have to use petsc4py-master.
>
> i.e try petsc-eab7b92 with petsc4py-6e8e093
>
> Satish
>
>
> On Thu, 2 Jun 2016, neok m4700 wrote:
>
> > Hi Matthew,
> >
> > I've rebuilt petsc // petsc4py with following versions:
> >
> > 3.7.0 // 3.7.0 => same runtime error
> > 00c67f3 // 3.7.1 => fails to build petsc4py (error below)
> > 00c67f3 // 6e8e093 => same as above
> > f1b0812 (latest commit) // 6e8e093 (latest commit) => same as above
> >
> > In file included from src/PETSc.c:3:0:
> > src/petsc4py.PETSc.c: In function
> > ‘__pyx_pf_8petsc4py_5PETSc_6DMPlex_4createBoxMesh’:
> > src/petsc4py.PETSc.c:214629:112: error: incompatible type for argument 4
> of
> > ‘DMPlexCreateBoxMesh’
> >__pyx_t_4 =
> > __pyx_f_8petsc4py_5PETSc_CHKERR(DMPlexCreateBoxMesh(__pyx_v_ccomm,
> > __pyx_v_cdim, __pyx_v_interp, (&__pyx_v_newdm))); if (unlikely(__pyx_t_4
> ==
> > -1)) __PYX_ERR(42, 49, __pyx_L1_error)
> >
> > using
> > - numpy 1.11.0
> > - openblas 0.2.18
> > - openmpi 1.10.2
> >
> > Thanks
> >
> > 2016-06-02 16:39 GMT+02:00 Matthew Knepley :
> >
> > > On Thu, Jun 2, 2016 at 9:12 AM, neok m4700 
> wrote:
> > >
> > >> Hi,
> > >>
> > >> I built petsc 3.7.1 and petsc4py 3.7.0 (with openmpi 1.10.2) and ran
> the
> > >> examples in the demo directory.
> > >>
> > >
> > > I believe this was fixed in 'master':
> > >
> https://bitbucket.org/petsc/petsc/commits/00c67f3b09c0bcda06af5ed306d845d9138e5003
> > >
> > > Is it possible to try this?
> > >
> > >   Thanks,
> > >
> > > Matt
> > >
> > >
> > >> $ python test_mat_ksp.py
> > >> => runs as expected (serial)
> > >>
> > >> $ mpiexec -np 2 python test_mat_ksp.py
> > >> => fails with the following output:
> > >>
> > >> Traceback (most recent call last):
> > >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 15, in 
> > >> execfile('petsc-ksp.py')
> > >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 6, in execfile
> > >> try: exec(fh.read()+"\n", globals, locals)
> > >>   File "", line 15, in 
> > >>   File "PETSc/KSP.pyx", line 384, in petsc4py.PETSc.KSP.solve
> > >> (src/petsc4py.PETSc.c:153555)
> > >> petsc4py.PETSc.Error: error code 92
> > >> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
> > >> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
> > >> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
> > >> [0] PCSetUp_ICC() line 21 in <...>/src/ksp/pc/impls/factor/icc/icc.c
> > >> [0] MatGetFactor() line 4240 in<...>/src/mat/interface/matrix.c
> > >> [0] You cannot overwrite this option since that will conflict with
> other
> > >> previously set options
> > >> [0] Could not locate solver package (null). Perhaps you must
> ./configure
> > >> with --download-(null)
> > >>
> > >> <...>
> > >> ---
> > >> Primary job  terminated normally, but 1 process returned
> > >> a non-zero exit code.. Per user-direction, the job has been aborted.
> > >> ---
> > >>
> --
> > >> mpirun detected that one or more processes exited with non-zero
> status,
> > >> thus causing
> > >> the job to be terminated. The first process to do so was:
> > >>
> > >>   Process name: [[23110,1],0]
> > >>   Exit code:1
> > >>
> --
> > >>
> > >>
> > >> What have I done wrong ?
> > >>
> > >>
> > >>
> > >>
> > >
> > >
> > > --
> > > What most experimenters take for granted before they begin their
> > > experiments is infinitely more interesting than any results to which
> their
> > > experiments lead.
> > > -- Norbert Wiener
> > >
> >
>


Re: [petsc-users] (no subject)

2016-06-02 Thread Satish Balay
with petsc-master - you would have to use petsc4py-master.

i.e try petsc-eab7b92 with petsc4py-6e8e093

Satish


On Thu, 2 Jun 2016, neok m4700 wrote:

> Hi Matthew,
> 
> I've rebuilt petsc // petsc4py with following versions:
> 
> 3.7.0 // 3.7.0 => same runtime error
> 00c67f3 // 3.7.1 => fails to build petsc4py (error below)
> 00c67f3 // 6e8e093 => same as above
> f1b0812 (latest commit) // 6e8e093 (latest commit) => same as above
> 
> In file included from src/PETSc.c:3:0:
> src/petsc4py.PETSc.c: In function
> ‘__pyx_pf_8petsc4py_5PETSc_6DMPlex_4createBoxMesh’:
> src/petsc4py.PETSc.c:214629:112: error: incompatible type for argument 4 of
> ‘DMPlexCreateBoxMesh’
>__pyx_t_4 =
> __pyx_f_8petsc4py_5PETSc_CHKERR(DMPlexCreateBoxMesh(__pyx_v_ccomm,
> __pyx_v_cdim, __pyx_v_interp, (&__pyx_v_newdm))); if (unlikely(__pyx_t_4 ==
> -1)) __PYX_ERR(42, 49, __pyx_L1_error)
> 
> using
> - numpy 1.11.0
> - openblas 0.2.18
> - openmpi 1.10.2
> 
> Thanks
> 
> 2016-06-02 16:39 GMT+02:00 Matthew Knepley :
> 
> > On Thu, Jun 2, 2016 at 9:12 AM, neok m4700  wrote:
> >
> >> Hi,
> >>
> >> I built petsc 3.7.1 and petsc4py 3.7.0 (with openmpi 1.10.2) and ran the
> >> examples in the demo directory.
> >>
> >
> > I believe this was fixed in 'master':
> > https://bitbucket.org/petsc/petsc/commits/00c67f3b09c0bcda06af5ed306d845d9138e5003
> >
> > Is it possible to try this?
> >
> >   Thanks,
> >
> > Matt
> >
> >
> >> $ python test_mat_ksp.py
> >> => runs as expected (serial)
> >>
> >> $ mpiexec -np 2 python test_mat_ksp.py
> >> => fails with the following output:
> >>
> >> Traceback (most recent call last):
> >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 15, in 
> >> execfile('petsc-ksp.py')
> >>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 6, in execfile
> >> try: exec(fh.read()+"\n", globals, locals)
> >>   File "", line 15, in 
> >>   File "PETSc/KSP.pyx", line 384, in petsc4py.PETSc.KSP.solve
> >> (src/petsc4py.PETSc.c:153555)
> >> petsc4py.PETSc.Error: error code 92
> >> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
> >> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
> >> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
> >> [0] PCSetUp_ICC() line 21 in <...>/src/ksp/pc/impls/factor/icc/icc.c
> >> [0] MatGetFactor() line 4240 in<...>/src/mat/interface/matrix.c
> >> [0] You cannot overwrite this option since that will conflict with other
> >> previously set options
> >> [0] Could not locate solver package (null). Perhaps you must ./configure
> >> with --download-(null)
> >>
> >> <...>
> >> ---
> >> Primary job  terminated normally, but 1 process returned
> >> a non-zero exit code.. Per user-direction, the job has been aborted.
> >> ---
> >> --
> >> mpirun detected that one or more processes exited with non-zero status,
> >> thus causing
> >> the job to be terminated. The first process to do so was:
> >>
> >>   Process name: [[23110,1],0]
> >>   Exit code:1
> >> --
> >>
> >>
> >> What have I done wrong ?
> >>
> >>
> >>
> >>
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which their
> > experiments lead.
> > -- Norbert Wiener
> >
> 


Re: [petsc-users] (no subject)

2016-06-02 Thread neok m4700
Hi Matthew,

I've rebuilt petsc // petsc4py with the following versions:

3.7.0 // 3.7.0 => same runtime error
00c67f3 // 3.7.1 => fails to build petsc4py (error below)
00c67f3 // 6e8e093 => same as above
f1b0812 (latest commit) // 6e8e093 (latest commit) => same as above

In file included from src/PETSc.c:3:0:
src/petsc4py.PETSc.c: In function
‘__pyx_pf_8petsc4py_5PETSc_6DMPlex_4createBoxMesh’:
src/petsc4py.PETSc.c:214629:112: error: incompatible type for argument 4 of
‘DMPlexCreateBoxMesh’
   __pyx_t_4 =
__pyx_f_8petsc4py_5PETSc_CHKERR(DMPlexCreateBoxMesh(__pyx_v_ccomm,
__pyx_v_cdim, __pyx_v_interp, (&__pyx_v_newdm))); if (unlikely(__pyx_t_4 ==
-1)) __PYX_ERR(42, 49, __pyx_L1_error)

using
- numpy 1.11.0
- openblas 0.2.18
- openmpi 1.10.2

Thanks

2016-06-02 16:39 GMT+02:00 Matthew Knepley :

> On Thu, Jun 2, 2016 at 9:12 AM, neok m4700  wrote:
>
>> Hi,
>>
>> I built petsc 3.7.1 and petsc4py 3.7.0 (with openmpi 1.10.2) and ran the
>> examples in the demo directory.
>>
>
> I believe this was fixed in 'master':
> https://bitbucket.org/petsc/petsc/commits/00c67f3b09c0bcda06af5ed306d845d9138e5003
>
> Is it possible to try this?
>
>   Thanks,
>
> Matt
>
>
>> $ python test_mat_ksp.py
>> => runs as expected (serial)
>>
>> $ mpiexec -np 2 python test_mat_ksp.py
>> => fails with the following output:
>>
>> Traceback (most recent call last):
>>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 15, in 
>> execfile('petsc-ksp.py')
>>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 6, in execfile
>> try: exec(fh.read()+"\n", globals, locals)
>>   File "", line 15, in 
>>   File "PETSc/KSP.pyx", line 384, in petsc4py.PETSc.KSP.solve
>> (src/petsc4py.PETSc.c:153555)
>> petsc4py.PETSc.Error: error code 92
>> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
>> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
>> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
>> [0] PCSetUp_ICC() line 21 in <...>/src/ksp/pc/impls/factor/icc/icc.c
>> [0] MatGetFactor() line 4240 in<...>/src/mat/interface/matrix.c
>> [0] You cannot overwrite this option since that will conflict with other
>> previously set options
>> [0] Could not locate solver package (null). Perhaps you must ./configure
>> with --download-(null)
>>
>> <...>
>> ---
>> Primary job  terminated normally, but 1 process returned
>> a non-zero exit code.. Per user-direction, the job has been aborted.
>> ---
>> --
>> mpirun detected that one or more processes exited with non-zero status,
>> thus causing
>> the job to be terminated. The first process to do so was:
>>
>>   Process name: [[23110,1],0]
>>   Exit code:1
>> --
>>
>>
>> What have I done wrong ?
>>
>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] (no subject)

2016-06-02 Thread Matthew Knepley
On Thu, Jun 2, 2016 at 9:12 AM, neok m4700  wrote:

> Hi,
>
> I built petsc 3.7.1 and petsc4py 3.7.0 (with openmpi 1.10.2) and ran the
> examples in the demo directory.
>

I believe this was fixed in 'master':
https://bitbucket.org/petsc/petsc/commits/00c67f3b09c0bcda06af5ed306d845d9138e5003

Is it possible to try this?

  Thanks,

Matt


> $ python test_mat_ksp.py
> => runs as expected (serial)
>
> $ mpiexec -np 2 python test_mat_ksp.py
> => fails with the following output:
>
> Traceback (most recent call last):
>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 15, in 
> execfile('petsc-ksp.py')
>   File "<...>/demo/kspsolve/test_mat_ksp.py", line 6, in execfile
> try: exec(fh.read()+"\n", globals, locals)
>   File "", line 15, in 
>   File "PETSc/KSP.pyx", line 384, in petsc4py.PETSc.KSP.solve
> (src/petsc4py.PETSc.c:153555)
> petsc4py.PETSc.Error: error code 92
> [0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
> [0] KSPSetUp() line 390 in <...>/src/ksp/ksp/interface/itfunc.c
> [0] PCSetUp() line 968 in <...>/src/ksp/pc/interface/precon.c
> [0] PCSetUp_ICC() line 21 in <...>/src/ksp/pc/impls/factor/icc/icc.c
> [0] MatGetFactor() line 4240 in<...>/src/mat/interface/matrix.c
> [0] You cannot overwrite this option since that will conflict with other
> previously set options
> [0] Could not locate solver package (null). Perhaps you must ./configure
> with --download-(null)
>
> <...>
> ---
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code.. Per user-direction, the job has been aborted.
> ---
> --
> mpirun detected that one or more processes exited with non-zero status,
> thus causing
> the job to be terminated. The first process to do so was:
>
>   Process name: [[23110,1],0]
>   Exit code:1
> --
>
>
> What have I done wrong ?
>
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] (no subject)

2015-05-04 Thread Matthew Knepley
On Mon, May 4, 2015 at 5:57 PM, Reza Yaghmaie reza.yaghma...@gmail.com
wrote:


 Dear Matt,

 The initial guess was zero for all cases of SNES solvers. The initial
 jacobian was identity for all cases. The system is small and is ran
 sequentially.
 I have to add that I use FDColoring routine for the jacobian as well.


So, the initial Jacobian was the identity, but we have no idea how close
that is to the true Jacobian. You could use the
coloring routines (the default) combined with


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESQNSetScaleType.html

using the Jacobian scale type, which sets the initial approximation to the
true Jacobian. That would give us an idea whether the initial guess
impacted QN convergence.
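
A hedged sketch of setting that up from the options database (option names
as in current PETSc; the function/Jacobian wiring is whatever the application
already does, e.g. the FD coloring mentioned above):

from petsc4py import PETSc

opts = PETSc.Options()
opts['snes_type'] = 'qn'
opts['snes_qn_scale_type'] = 'jacobian'   # seed QN scaling with the true (FD-colored) Jacobian
opts['snes_monitor'] = ''                 # watch whether this changes the convergence

snes = PETSc.SNES().create(comm=PETSc.COMM_WORLD)
# ... snes.setFunction(...) and Jacobian/coloring setup as in the existing code ...
snes.setFromOptions()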

  Thanks,

 Matt


 Regards,
 Ray





 On Monday, May 4, 2015, Matthew Knepley knep...@gmail.com wrote:

 On Mon, May 4, 2015 at 5:41 PM, Reza Yaghmaie reza.yaghma...@gmail.com
 wrote:


 Dear Matt,

 Actually the initial jacobian was identity. Regular SNES converges in 48
 iterations, GMRES in 19, NCG in 67,...
 Do you think SNESQN with the basiclineseach was the problem for
 divergence?
 If I use SNESQN by default should not it converge with initial identity
 jacobian?


 Do you mean that you used an initial guess of the identity, or that the
 Actual Jacobian was the identity at your
 initial guess?

   Matt


 Best regards,
 Reza




 On Monday, May 4, 2015, Matthew Knepley knep...@gmail.com wrote:

 On Mon, May 4, 2015 at 2:11 PM, Reza Yaghmaie reza.yaghma...@gmail.com
  wrote:


 Dear PETSC representatives,

 I am solving a nonlinear problem with SNESNGMRES and it converges
 faster with less iterations compared to otehr SNES methods. Any idea why
 that is the case?


 It is impossible to tell with this information.


 Also SNESQN diverges quickly. I tried to use SNESLINESEARCHBASIC for
 the linesearch option and nothing changes.


 This can happen, especially if your matrix is far from the identity.


 I presume it is using the default SNESQN method. Btw, there are three
 options for QN, as SNES_QN_LBFGS, SNES_QN_BROYDEN, SNES_QN_BADBROYEN
 in teh manual. I tried to associate them with SNES however it seems these
 hyphened names don't work there. What am I missing?


 -snes_qn_scale_type lbfgs,broyden,badbroyden

 from
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESQNSetType.html

Thanks,

 Matt

 Best regards,
 Ray




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] (no subject)

2015-05-04 Thread Reza Yaghmaie
Dear Matt,

The initial guess was zero for all of the SNES solvers. The initial
Jacobian was the identity in all cases. The system is small and is run
sequentially.
I should add that I use the FDColoring routine for the Jacobian as well.

Regards,
Ray





On Monday, May 4, 2015, Matthew Knepley knep...@gmail.com wrote:

 On Mon, May 4, 2015 at 5:41 PM, Reza Yaghmaie reza.yaghma...@gmail.com wrote:


 Dear Matt,

 Actually the initial jacobian was identity. Regular SNES converges in 48
 iterations, GMRES in 19, NCG in 67,...
 Do you think SNESQN with the basiclineseach was the problem for
 divergence?
 If I use SNESQN by default should not it converge with initial identity
 jacobian?


 Do you mean that you used an initial guess of the identity, or that the
 Actual Jacobian was the identity at your
 initial guess?

   Matt


 Best regards,
 Reza




 On Monday, May 4, 2015, Matthew Knepley knep...@gmail.com wrote:

 On Mon, May 4, 2015 at 2:11 PM, Reza Yaghmaie reza.yaghma...@gmail.com
 wrote:


 Dear PETSC representatives,

 I am solving a nonlinear problem with SNESNGMRES and it converges
 faster with less iterations compared to otehr SNES methods. Any idea why
 that is the case?


 It is impossible to tell with this information.


 Also SNESQN diverges quickly. I tried to use SNESLINESEARCHBASIC for
 the linesearch option and nothing changes.


 This can happen, especially if your matrix is far from the identity.


 I presume it is using the default SNESQN method. Btw, there are three
 options for QN, as SNES_QN_LBFGS, SNES_QN_BROYDEN, SNES_QN_BADBROYEN
 in teh manual. I tried to associate them with SNES however it seems these
 hyphened names don't work there. What am I missing?


 -snes_qn_scale_type lbfgs,broyden,badbroyden

 from
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESQNSetType.html

Thanks,

 Matt

 Best regards,
 Ray




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



Re: [petsc-users] (no subject)

2015-05-04 Thread Matthew Knepley
On Mon, May 4, 2015 at 2:11 PM, Reza Yaghmaie reza.yaghma...@gmail.com
wrote:


 Dear PETSC representatives,

 I am solving a nonlinear problem with SNESNGMRES and it converges faster
 with fewer iterations compared to other SNES methods. Any idea why that is
 the case?


It is impossible to tell with this information.


 Also SNESQN diverges quickly. I tried to use SNESLINESEARCHBASIC for the
 linesearch option and nothing changes.


This can happen, especially if your matrix is far from the identity.


 I presume it is using the default SNESQN method. Btw, there are three
 options for QN, as SNES_QN_LBFGS, SNES_QN_BROYDEN, SNES_QN_BADBROYDEN, in
 the manual. I tried to associate them with SNES, however it seems these
 hyphenated names don't work there. What am I missing?


-snes_qn_scale_type lbfgs,broyden,badbroyden

from
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESQNSetType.html
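
(A side note, hedged: in current PETSc the QN variant itself is usually
selected with -snes_qn_type (SNESQNSetType), while -snes_qn_scale_type
(SNESQNSetScaleType) controls the Jacobian scaling. A minimal sketch:)

from petsc4py import PETSc

opts = PETSc.Options()
opts['snes_type'] = 'qn'
opts['snes_qn_type'] = 'lbfgs'            # or 'broyden' / 'badbroyden'
opts['snes_qn_scale_type'] = 'jacobian'

snes = PETSc.SNES().create(comm=PETSc.COMM_WORLD)
snes.setFromOptions()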

   Thanks,

Matt

Best regards,
 Ray




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] (no subject)

2015-05-04 Thread Reza Yaghmaie
Dear Matt,

Actually, the initial Jacobian was the identity. Regular SNES converges in 48
iterations, GMRES in 19, NCG in 67, ...
Do you think SNESQN with the basic line search was the problem for the divergence?
If I use SNESQN with the defaults, should it not converge with an initial identity
Jacobian?

Best regards,
Reza




On Monday, May 4, 2015, Matthew Knepley knep...@gmail.com wrote:

 On Mon, May 4, 2015 at 2:11 PM, Reza Yaghmaie reza.yaghma...@gmail.com wrote:


 Dear PETSC representatives,

 I am solving a nonlinear problem with SNESNGMRES and it converges faster
 with less iterations compared to otehr SNES methods. Any idea why that is
 the case?


 It is impossible to tell with this information.


 Also SNESQN diverges quickly. I tried to use SNESLINESEARCHBASIC for the
 linesearch option and nothing changes.


 This can happen, especially if your matrix is far from the identity.


 I presume it is using the default SNESQN method. Btw, there are three
 options for QN, as SNES_QN_LBFGS, SNES_QN_BROYDEN, SNES_QN_BADBROYEN
 in teh manual. I tried to associate them with SNES however it seems these
 hyphened names don't work there. What am I missing?


 -snes_qn_scale_type lbfgs,broyden,badbroyden

 from
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESQNSetType.html

Thanks,

 Matt

 Best regards,
 Ray




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



Re: [petsc-users] (no subject)

2015-05-04 Thread Matthew Knepley
On Mon, May 4, 2015 at 5:41 PM, Reza Yaghmaie reza.yaghma...@gmail.com
wrote:


 Dear Matt,

 Actually the initial jacobian was identity. Regular SNES converges in 48
 iterations, GMRES in 19, NCG in 67,...
 Do you think SNESQN with the basiclineseach was the problem for divergence?
 If I use SNESQN by default should not it converge with initial identity
 jacobian?


Do you mean that you used an initial guess of the identity, or that the
actual Jacobian was the identity at your
initial guess?

  Matt


 Best regards,
 Reza




 On Monday, May 4, 2015, Matthew Knepley knep...@gmail.com wrote:

 On Mon, May 4, 2015 at 2:11 PM, Reza Yaghmaie reza.yaghma...@gmail.com
 wrote:


 Dear PETSC representatives,

 I am solving a nonlinear problem with SNESNGMRES and it converges faster
 with less iterations compared to otehr SNES methods. Any idea why that is
 the case?


 It is impossible to tell with this information.


 Also SNESQN diverges quickly. I tried to use SNESLINESEARCHBASIC for
 the linesearch option and nothing changes.


 This can happen, especially if your matrix is far from the identity.


 I presume it is using the default SNESQN method. Btw, there are three
 options for QN, as SNES_QN_LBFGS, SNES_QN_BROYDEN, SNES_QN_BADBROYEN
 in teh manual. I tried to associate them with SNES however it seems these
 hyphened names don't work there. What am I missing?


 -snes_qn_scale_type lbfgs,broyden,badbroyden

 from
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESQNSetType.html

Thanks,

 Matt

 Best regards,
 Ray




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] (no subject)

2014-06-23 Thread Dave May
Yes, just assemble the same sequential matrices on each rank. To do this,
create the matrix using the communicator PETSC_COMM_SELF
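
A petsc4py sketch of that layout (sizes and values are made up; E and C stand
for the matrices from the question, and each rank forms its own A_k = omega_k*E - C):

from petsc4py import PETSc

n = 50                                            # made-up problem size
comm_self = PETSc.COMM_SELF

# every rank assembles its own full copy of E and C as sequential AIJ matrices
E = PETSc.Mat().createAIJ([n, n], nnz=1, comm=comm_self)
C = PETSc.Mat().createAIJ([n, n], nnz=1, comm=comm_self)
for i in range(n):
    E[i, i] = 1.0
    C[i, i] = 2.0
E.assemble(); C.assemble()

# split the frequency list across the ranks of the parallel job
rank = PETSc.COMM_WORLD.getRank()
nprocs = PETSc.COMM_WORLD.getSize()
freqs = [1.0 + k for k in range(rank, 8, nprocs)]  # stand-in frequencies

for omega in freqs:
    A_k = E.duplicate(copy=True)
    A_k.scale(omega)                               # A_k = omega*E
    A_k.axpy(-1.0, C)                              # A_k = omega*E - C
    # ... factor and solve with A_k on this rank (see the KSP sketch below) ...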

Cheers
  Dave

On Monday, 23 June 2014, Bogdan Dita bog...@lmn.pub.ro wrote:



 Hello,

 I wanted to see how well UMFPACK performs in PETSc, but I encountered a
 problem regarding the matrix type used and the way it is distributed. I'm
 trying to solve for multiple frequencies in parallel, where the system matrix
 A = omega*E - C, and omega is a frequency-dependent scalar. Can I send
 matrices E and C to each process, plus a set of frequencies to compute omega
 and form A_k = omega_k * E - C, and then solve for every omega? Is this
 even possible given that UMFPACK only works with SeqAIJ?


 Best regards,
 Bogdan


 
 Bogdan DITA, PhD Student
 UPB - EE Dept.- CIEAC-LMN
 Splaiul Independentei 313
 060042 Bucharest, Romania
 email: bog...@lmn.pub.ro



Re: [petsc-users] (no subject)

2014-06-23 Thread Dave May
And use the same sequential communicator when you create the KSP on each
rank
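
And a per-rank solve to go with the sketch above; setFactorSolverType('umfpack')
assumes a PETSc build with UMFPACK/SuiteSparse available (older petsc4py spells
it setFactorSolverPackage):

from petsc4py import PETSc

n = 50
A_k = PETSc.Mat().createAIJ([n, n], nnz=1, comm=PETSc.COMM_SELF)
for i in range(n):
    A_k[i, i] = 1.0 + i                           # stand-in for omega_k*E - C
A_k.assemble()

ksp = PETSc.KSP().create(comm=PETSc.COMM_SELF)    # same sequential communicator
ksp.setOperators(A_k)
ksp.setType('preonly')
pc = ksp.getPC()
pc.setType('lu')
pc.setFactorSolverType('umfpack')                 # UMFPACK factors the SeqAIJ A_k
ksp.setFromOptions()

b = A_k.createVecRight(); b.set(1.0)              # stand-in right-hand side
x = b.duplicate()
ksp.solve(b, x)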

On Monday, 23 June 2014, Dave May dave.mayhe...@gmail.com wrote:

 Yes, just assemble the same sequential matrices on each rank. To do this,
 create the matrix using the communicator PETSC_COMM_SELF

 Cheers
   Dave

 On Monday, 23 June 2014, Bogdan Dita bog...@lmn.pub.ro wrote:



 Hello,

 I wanted to see how well Umfpack performs in PETSc but i encountered a
 problem regarding the matrix type used and the way is distributed. I'm
 trying to solve for multiple frequencies in parallel, where system matrix
 A = omega*E-C, and omega is a frequency dependent scalar. Can i send
 matrix E and C to each process, plus a set of frequencies to compute omega
 and form A_k = omega_k * E - C and then solve for every omega? Is this
 even possible given that Umfpack only works with SeqAij?


 Best regards,
 Bogdan


 
 Bogdan DITA, PhD Student
 UPB - EE Dept.- CIEAC-LMN
 Splaiul Independentei 313
 060042 Bucharest, Romania
 email: bog...@lmn.pub.ro




Re: [petsc-users] (no subject)

2014-06-12 Thread Sai Rajeshwar
OK,

so, considering performance on MIC:

can the library MAGMA be used as an alternative to ViennaCL for PETSc or
FEniCS?

http://www.nics.tennessee.edu/files/pdf/hpcss/04_03_LinearAlgebraPar.pdf
(from slide 37 onwards)

MAGMA seems to have a sparse version, MAGMA-sparse, which I think does all
that any sparse nonlinear solver can do.

Will this be helpful for use with MIC?

*with regards..*

*M. Sai Rajeswar*
*M-tech  Computer Technology*


*IIT Delhi--Cogito Ergo Sum-*


On Wed, Jun 11, 2014 at 8:34 PM, Karl Rupp r...@iue.tuwien.ac.at wrote:

 Hi,

 I'm a master's student from the Indian Institute of Technology Delhi. I'm
 working on PETSc for performance, which is my area of interest. Can
 you please help me in knowing how to run PETSc on MIC? That would
 be of great help to me.


 my experience is that 'performance' and 'MIC' for bandwidth-limited
 operations don't go together. Regardless, you can use ViennaCL by building
 via
  --download-viennacl
 for using the MIC via OpenCL, but you are usually much better off with a
 proper multi-socket CPU node.

 Feel free to have a look at my recent slides from the Intl. OpenCL
 Workshop here:
 http://iwocl.org/wp-content/uploads/iwocl-2014-tech-presentation-Karl-Rupp.pdf
 PDF page 32 shows that in the OpenCL case one achieves only up to 20% of
 peak bandwidth for 1900 different kernel configurations even for simple
 kernels such as vector copy, vector addition, dot products, or dense
 matrix-vector products. With some tricks one can probably get 30%, but
 that's it.

 PETSc does not provide any 'native' OpenMP execution on MIC for similar
 reasons.

 Best regards,
 Karli




Re: [petsc-users] (no subject)

2014-06-12 Thread Karl Rupp

Hi,

 So, considering performance on MIC:


Can the library MAGMA be used as an alternative to ViennaCL for PETSc or
FEniCS?


No, there is no interface to MAGMA in PETSc yet. Contributions are 
always welcome, yet it is not our priority to come up with an interface 
of our own. I don't think it will provide any substantial benefits, 
though, because there is no magic one can apply to overcome the memory wall.




http://www.nics.tennessee.edu/files/pdf/hpcss/04_03_LinearAlgebraPar.pdf
(from slide 37 onwards)

MAGMA seems to have a sparse version, MAGMA-sparse, which I think does
everything a sparse nonlinear solver can do.

Will this be helpful for use with MIC?


This depends on what you are looking for. If you are looking at 
maximizing FLOP rates for a fixed algorithm, then MAGMA may help you if 
it happens to provide an implementation for this particular algorithm. 
However, if you're looking for a way to minimize time-to-solution for a 
given problem, then it's usually better to build a good preconditioner 
with the many options PETSc provides, such as field-split and multigrid 
preconditioners. Purely CPU-based implementations usually still beat 
accelerator-based approaches on larger scale, simply because it allows 
you to use better algorithms rather than throwing massive parallelism at 
it, which severely restricts your options. If you really want to play 
with accelerators in PETSc, use GPUs (higher memory bandwidth), not MIC.


Best regards,
Karli



Re: [petsc-users] (no subject)

2014-06-11 Thread Karl Rupp

Hi,

 I'm a master's student from the Indian Institute of Technology Delhi. I'm
 working on PETSc for performance, which is my area of interest. Can
 you please help me in knowing how to run PETSc on MIC? That would
 be of great help to me.


my experience is that 'performance' and 'MIC' for bandwidth-limited 
operations don't go together. Regardless, you can use ViennaCL by 
building via

 --download-viennacl
for using the MIC via OpenCL, but you are usually much better off with a 
proper multi-socket CPU node.
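
If you do want to try it, a minimal sketch of selecting the ViennaCL back end at
runtime (assuming a PETSc build configured with --download-viennacl and a recent
release with the PetscCall() macro; the same choice can be made on the command
line with -vec_type viennacl -mat_type aijviennacl):

  #include <petscmat.h>

  /* Create a vector and matrix that use the ViennaCL (OpenCL) back end. */
  static PetscErrorCode CreateViennaCLObjects(MPI_Comm comm, PetscInt n, Vec *x, Mat *A)
  {
    PetscFunctionBeginUser;
    PetscCall(VecCreate(comm, x));
    PetscCall(VecSetSizes(*x, PETSC_DECIDE, n));
    PetscCall(VecSetType(*x, VECVIENNACL));        /* or -vec_type viennacl    */

    PetscCall(MatCreate(comm, A));
    PetscCall(MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, n, n));
    PetscCall(MatSetType(*A, MATAIJVIENNACL));     /* or -mat_type aijviennacl */
    PetscCall(MatSetUp(*A));
    PetscFunctionReturn(PETSC_SUCCESS);
  }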


Feel free to have a look at my recent slides from the Intl. OpenCL 
Workshop here:

http://iwocl.org/wp-content/uploads/iwocl-2014-tech-presentation-Karl-Rupp.pdf
PDF page 32 shows that in the OpenCL case one achieves only up to 20% of 
peak bandwidth for 1900 different kernel configurations even for simple 
kernels such as vector copy, vector addition, dot products, or dense 
matrix-vector products. With some tricks one can probably get 30%, but 
that's it.


PETSc does not provide any 'native' OpenMP execution on MIC for similar 
reasons.


Best regards,
Karli



Re: [petsc-users] (no subject)

2014-02-21 Thread Matthew Knepley
On Fri, Feb 21, 2014 at 12:34 PM, Chung-Kan Huang ckhua...@gmail.com wrote:

 Hello,

 In my application I would like to use
 PetscErrorCode KSPSetOperators(KSP ksp, Mat Amat, Mat Pmat, MatStructure flag)
 and have Pmat different from Amat.

 If Amat = L + D + U,
 then Pmat = Amat - L* - U* + rowsum(L* + U*),
 where L* and U* are portions of L and U, and rowsum(L* + U*) is added into D.

 I know I can explicitly construct Pmat step by step, but I wonder what would
 be the easiest way to do this in PETSc?


It sounds like you should just use MatSetValues() since we have no idea
what L* and U* are.
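
For instance, a hedged sketch of that construction (the DropEntry() predicate below
is hypothetical; it stands in for whatever criterion selects the entries of L* and
U*, which only you know; a recent PETSc with the PetscCall() macro is assumed):

  #include <petscmat.h>

  /* Hypothetical user-supplied test: PETSC_TRUE if entry (i,j) belongs to L* or U*. */
  extern PetscBool DropEntry(PetscInt i, PetscInt j);

  /* Build Pmat = Amat - L* - U* + rowsum(L* + U*), lumping the dropped entries
     onto the diagonal. */
  static PetscErrorCode BuildLumpedPmat(Mat Amat, Mat *Pmat)
  {
    PetscInt           rstart, rend, i, k, ncols;
    const PetscInt    *cols;
    const PetscScalar *vals;

    PetscFunctionBeginUser;
    PetscCall(MatDuplicate(Amat, MAT_COPY_VALUES, Pmat));
    PetscCall(MatGetOwnershipRange(Amat, &rstart, &rend));
    for (i = rstart; i < rend; i++) {
      PetscScalar lumped = 0.0;
      PetscCall(MatGetRow(Amat, i, &ncols, &cols, &vals));
      for (k = 0; k < ncols; k++) {
        if (cols[k] != i && DropEntry(i, cols[k])) {
          PetscScalar minus = -vals[k];
          lumped += vals[k];
          /* cancel the dropped off-diagonal entry in Pmat */
          PetscCall(MatSetValues(*Pmat, 1, &i, 1, &cols[k], &minus, ADD_VALUES));
        }
      }
      /* add the row sum of the dropped entries into the diagonal */
      PetscCall(MatSetValues(*Pmat, 1, &i, 1, &i, &lumped, ADD_VALUES));
      PetscCall(MatRestoreRow(Amat, i, &ncols, &cols, &vals));
    }
    PetscCall(MatAssemblyBegin(*Pmat, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(*Pmat, MAT_FINAL_ASSEMBLY));
    PetscFunctionReturn(PETSC_SUCCESS);
  }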

   Matt


 Thanks,


 Kan




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener