Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Kong, Fande
On Fri, Apr 7, 2017 at 3:52 PM, Barry Smith  wrote:

>
> > On Apr 7, 2017, at 4:46 PM, Kong, Fande  wrote:
> >
> >
> >
> > On Fri, Apr 7, 2017 at 3:39 PM, Barry Smith  wrote:
> >
> >   Using Petsc Release Version 3.7.5, unknown
> >
> >So are you using the release or are you using master branch?
> >
> > I am working on the maint branch.
> >
> > I did something two months ago:
> >
> >  git clone -b maint https://bitbucket.org/petsc/petsc petsc.
> >
> >
> > I am interested in improving the GAMG performance.
>
>   Why, why not use the best solver for your problem?
>

I am just curious. I want to understand the potential of these interesting
preconditioners.



>
> > Is it possible? Can it not beat ASM at all? The multilevel method should
> be better than the one-level method if the number of processor cores is large.
>
>The ASM is taking 30 iterations; this is fantastic, and it is really going
> to be tough to get GAMG to be faster (the setup time for GAMG is high).
>
>What happens to both with 10 times as many processes? 100 times as many?
>


I have not tried more processes yet.

Fande,



>
>
>Barry
>
> >
> > Fande,
> >
> >
> >If you use master the ASM will be even faster.
> >
> > What's new in master?
> >
> >
> > Fande,
> >
> >
> >
> > > On Apr 7, 2017, at 4:29 PM, Kong, Fande  wrote:
> > >
> > > Thanks, Barry.
> > >
> > > It works.
> > >
> > > GAMG is three times better than ASM in terms of the number of linear
> iterations, but it is five times slower than ASM. Any suggestions to
> improve the performance of GAMG? Log files are attached.
> > >
> > > Fande,
> > >
> > > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith 
> wrote:
> > >
> > > > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
> > > >
> > > > Thanks, Mark and Barry,
> > > >
> > > > It works pretty well in terms of the number of linear iterations
> (using "-pc_gamg_sym_graph true"), but it is horrible in the compute time.
> I am using the two-level method via "-pc_mg_levels 2". The reason why the
> compute time is larger than with other preconditioning options is that a
> matrix-free method is used on the fine level and in my particular problem
> the function evaluation is expensive.
> > > >
> > > > I am using "-snes_mf_operator 1" to turn on the Jacobian-free
> Newton, but I do not think I want to make the preconditioning part
> matrix-free.  Do you guys know how to turn off the matrix-free method for
> GAMG?
> > >
> > >-pc_use_amat false
> > >
> > > >
> > > > Here is the detailed solver:
> > > >
> > > > SNES Object: 384 MPI processes
> > > >   type: newtonls
> > > >   maximum iterations=200, maximum function evaluations=1
> > > >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
> > > >   total number of linear solver iterations=20
> > > >   total number of function evaluations=166
> > > >   norm schedule ALWAYS
> > > >   SNESLineSearch Object:   384 MPI processes
> > > > type: bt
> > > >   interpolation: cubic
> > > >   alpha=1.00e-04
> > > > maxstep=1.00e+08, minlambda=1.00e-12
> > > > tolerances: relative=1.00e-08, absolute=1.00e-15,
> lambda=1.00e-08
> > > > maximum iterations=40
> > > >   KSP Object:   384 MPI processes
> > > > type: gmres
> > > >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> > > >   GMRES: happy breakdown tolerance 1e-30
> > > > maximum iterations=100, initial guess is zero
> > > > tolerances:  relative=0.001, absolute=1e-50, divergence=1.
> > > > right preconditioning
> > > > using UNPRECONDITIONED norm type for convergence test
> > > >   PC Object:   384 MPI processes
> > > > type: gamg
> > > >   MG: type is MULTIPLICATIVE, levels=2 cycles=v
> > > > Cycles per PCApply=1
> > > > Using Galerkin computed coarse grid matrices
> > > > GAMG specific options
> > > >   Threshold for dropping small values from graph 0.
> > > >   AGG specific options
> > > > Symmetric graph true
> > > > Coarse grid solver -- level ---
> > > >   KSP Object:  (mg_coarse_)   384 MPI processes
> > > > type: preonly
> > > > maximum iterations=1, initial guess is zero
> > > > tolerances:  relative=1e-05, absolute=1e-50,
> divergence=1.
> > > > left preconditioning
> > > > using NONE norm type for convergence test
> > > >   PC Object:  (mg_coarse_)   384 MPI processes
> > > > type: bjacobi
> > > >   block Jacobi: number of blocks = 384
> > > >   Local solve is 

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Barry Smith

> On Apr 7, 2017, at 4:46 PM, Kong, Fande  wrote:
> 
> 
> 
> On Fri, Apr 7, 2017 at 3:39 PM, Barry Smith  wrote:
> 
>   Using Petsc Release Version 3.7.5, unknown
> 
>So are you using the release or are you using master branch?
> 
> I am working on the maint branch. 
> 
> I did something two months ago:
> 
>  git clone -b maint https://bitbucket.org/petsc/petsc petsc.
> 
> 
> I am interested in improving the GAMG performance.

  Why, why not use the best solver for your problem?

> Is it possible? Can it not beat ASM at all? The multilevel method should be
> better than the one-level method if the number of processor cores is large.

   The ASM is taking 30 iterations; this is fantastic, and it is really going to be
tough to get GAMG to be faster (the setup time for GAMG is high).

   What happens to both with 10 times as many processes? 100 times as many?


   Barry
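
[Editor's note: one way to act on both suggestions above, quantifying GAMG's
setup cost and testing larger process counts, is a small scaling sweep with
PETSc's built-in profiling. This is only a sketch: the application binary
./myapp and the exact launcher invocation are assumptions, not from the thread.]

```shell
# Hypothetical scaling sweep: compare ASM and GAMG at increasing
# process counts. -log_view splits the time into PCSetUp vs PCApply,
# which is where GAMG's high setup cost would show up.
for np in 384 3840; do
  for pc in asm gamg; do
    CMD="mpiexec -n $np ./myapp -pc_type $pc -log_view"
    echo "$CMD"   # sketch only; ./myapp is an assumed binary name
  done
done
```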

> 
> Fande,
>  
> 
>If you use master the ASM will be even faster.
> 
> What's new in master?
> 
> 
> Fande,
>  
> 
> 
> > On Apr 7, 2017, at 4:29 PM, Kong, Fande  wrote:
> >
> > Thanks, Barry.
> >
> > It works.
> >
> > GAMG is three times better than ASM in terms of the number of linear 
> > iterations, but it is five times slower than ASM. Any suggestions to 
> > improve the performance of GAMG? Log files are attached.
> >
> > Fande,
> >
> > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith  wrote:
> >
> > > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
> > >
> > > Thanks, Mark and Barry,
> > >
> > > It works pretty well in terms of the number of linear iterations (using 
> > > "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am 
> > > using the two-level method via "-pc_mg_levels 2". The reason why the 
> > > compute time is larger than with other preconditioning options is that a 
> > > matrix-free method is used on the fine level and in my particular problem 
> > > the function evaluation is expensive.
> > >
> > > I am using "-snes_mf_operator 1" to turn on the Jacobian-free Newton, but 
> > > I do not think I want to make the preconditioning part matrix-free.  Do 
> > > you guys know how to turn off the matrix-free method for GAMG?
> >
> >-pc_use_amat false
> >
> > >
> > > Here is the detailed solver:
> > >
> > > SNES Object: 384 MPI processes
> > >   type: newtonls
> > >   maximum iterations=200, maximum function evaluations=1
> > >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
> > >   total number of linear solver iterations=20
> > >   total number of function evaluations=166
> > >   norm schedule ALWAYS
> > >   SNESLineSearch Object:   384 MPI processes
> > > type: bt
> > >   interpolation: cubic
> > >   alpha=1.00e-04
> > > maxstep=1.00e+08, minlambda=1.00e-12
> > > tolerances: relative=1.00e-08, absolute=1.00e-15, 
> > > lambda=1.00e-08
> > > maximum iterations=40
> > >   KSP Object:   384 MPI processes
> > > type: gmres
> > >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt 
> > > Orthogonalization with no iterative refinement
> > >   GMRES: happy breakdown tolerance 1e-30
> > > maximum iterations=100, initial guess is zero
> > > tolerances:  relative=0.001, absolute=1e-50, divergence=1.
> > > right preconditioning
> > > using UNPRECONDITIONED norm type for convergence test
> > >   PC Object:   384 MPI processes
> > > type: gamg
> > >   MG: type is MULTIPLICATIVE, levels=2 cycles=v
> > > Cycles per PCApply=1
> > > Using Galerkin computed coarse grid matrices
> > > GAMG specific options
> > >   Threshold for dropping small values from graph 0.
> > >   AGG specific options
> > > Symmetric graph true
> > > Coarse grid solver -- level ---
> > >   KSP Object:  (mg_coarse_)   384 MPI processes
> > > type: preonly
> > > maximum iterations=1, initial guess is zero
> > > tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> > > left preconditioning
> > > using NONE norm type for convergence test
> > >   PC Object:  (mg_coarse_)   384 MPI processes
> > > type: bjacobi
> > >   block Jacobi: number of blocks = 384
> > >   Local solve is same for all blocks, in the following KSP and PC 
> > > objects:
> > > KSP Object:(mg_coarse_sub_) 1 MPI processes
> > >   type: preonly
> > >   maximum iterations=1, initial guess is zero
> > >   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> > >   left preconditioning
> > >   using NONE norm type for convergence test
> > > PC Object:(mg_coarse_sub_) 1 MPI processes
> > >   type: lu
> > > LU: out-of-place factorization
> > > tolerance for zero pivot 

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Barry Smith

  Using Petsc Release Version 3.7.5, unknown 

   So are you using the release or are you using master branch?

   If you use master the ASM will be even faster.


> On Apr 7, 2017, at 4:29 PM, Kong, Fande  wrote:
> 
> Thanks, Barry.
> 
> It works.
> 
> GAMG is three times better than ASM in terms of the number of linear 
> iterations, but it is five times slower than ASM. Any suggestions to improve 
> the performance of GAMG? Log files are attached.
> 
> Fande,
> 
> On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith  wrote:
> 
> > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
> >
> > Thanks, Mark and Barry,
> >
> > It works pretty well in terms of the number of linear iterations (using 
> > "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am 
> > using the two-level method via "-pc_mg_levels 2". The reason why the 
> > compute time is larger than with other preconditioning options is that a 
> > matrix-free method is used on the fine level and in my particular problem 
> > the function evaluation is expensive.
> >
> > I am using "-snes_mf_operator 1" to turn on the Jacobian-free Newton, but I 
> > do not think I want to make the preconditioning part matrix-free.  Do you 
> > guys know how to turn off the matrix-free method for GAMG?
> 
>-pc_use_amat false
> 
> >
> > Here is the detailed solver:
> >
> > SNES Object: 384 MPI processes
> >   type: newtonls
> >   maximum iterations=200, maximum function evaluations=1
> >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
> >   total number of linear solver iterations=20
> >   total number of function evaluations=166
> >   norm schedule ALWAYS
> >   SNESLineSearch Object:   384 MPI processes
> > type: bt
> >   interpolation: cubic
> >   alpha=1.00e-04
> > maxstep=1.00e+08, minlambda=1.00e-12
> > tolerances: relative=1.00e-08, absolute=1.00e-15, 
> > lambda=1.00e-08
> > maximum iterations=40
> >   KSP Object:   384 MPI processes
> > type: gmres
> >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt 
> > Orthogonalization with no iterative refinement
> >   GMRES: happy breakdown tolerance 1e-30
> > maximum iterations=100, initial guess is zero
> > tolerances:  relative=0.001, absolute=1e-50, divergence=1.
> > right preconditioning
> > using UNPRECONDITIONED norm type for convergence test
> >   PC Object:   384 MPI processes
> > type: gamg
> >   MG: type is MULTIPLICATIVE, levels=2 cycles=v
> > Cycles per PCApply=1
> > Using Galerkin computed coarse grid matrices
> > GAMG specific options
> >   Threshold for dropping small values from graph 0.
> >   AGG specific options
> > Symmetric graph true
> > Coarse grid solver -- level ---
> >   KSP Object:  (mg_coarse_)   384 MPI processes
> > type: preonly
> > maximum iterations=1, initial guess is zero
> > tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> > left preconditioning
> > using NONE norm type for convergence test
> >   PC Object:  (mg_coarse_)   384 MPI processes
> > type: bjacobi
> >   block Jacobi: number of blocks = 384
> >   Local solve is same for all blocks, in the following KSP and PC 
> > objects:
> > KSP Object:(mg_coarse_sub_) 1 MPI processes
> >   type: preonly
> >   maximum iterations=1, initial guess is zero
> >   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> >   left preconditioning
> >   using NONE norm type for convergence test
> > PC Object:(mg_coarse_sub_) 1 MPI processes
> >   type: lu
> > LU: out-of-place factorization
> > tolerance for zero pivot 2.22045e-14
> > using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> > matrix ordering: nd
> > factor fill ratio given 5., needed 1.31367
> >   Factored matrix follows:
> > Mat Object: 1 MPI processes
> >   type: seqaij
> >   rows=37, cols=37
> >   package used to perform factorization: petsc
> >   total: nonzeros=913, allocated nonzeros=913
> >   total number of mallocs used during MatSetValues calls =0
> > not using I-node routines
> >   linear system matrix = precond matrix:
> >   Mat Object:   1 MPI processes
> > type: seqaij
> > rows=37, cols=37
> > total: nonzeros=695, allocated nonzeros=695
> > total number of mallocs used during MatSetValues calls =0
> >   not using I-node routines
> > linear system matrix = precond matrix:
> > Mat Object: 

Re: [petsc-users] odd behavior when using lapack's dgeev with petsc

2017-04-07 Thread Barry Smith

> On Apr 7, 2017, at 3:34 PM, Manav Bhatia  wrote:
> 
> Yes, I printed the data in both cases and they look the same. 
> 
> I also used “set step-mode on” to show the system lapack info, and they both 
> are using the same lapack routine. 
> 
> This is still baffling me. 

   Is the alignment of the input arrays the same in both cases?

   I don't know why this is happening; what if you use your standalone code but
link against all the libraries that the PETSc case links against?
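
[Editor's note: a concrete way to follow this suggestion, sketched under
assumptions: the file name test_dgeev.c and the /opt/local/lib path are taken
from context and the ldd output below; the exact link line depends on the build.]

```shell
# Relink the standalone dgeev test against the same libraries the PETSc
# application pulls in, then check which LAPACK/BLAS actually resolve at
# run time. File names and paths here are assumptions.
LINK="gcc -g -o test_dgeev test_dgeev.c -L/opt/local/lib -lpetsc -llapack -lblas -lgfortran -lm"
echo "$LINK"
echo "ldd ./test_dgeev | grep -E 'lapack|blas'"   # compare with the app's ldd
```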




> 
> -Manav
> 
> 
>> On Apr 7, 2017, at 3:22 PM, Barry Smith  wrote:
>> 
>> 
>>> On Apr 7, 2017, at 2:57 PM, Manav Bhatia  wrote:
>>> 
>>> Hi Barry, 
>>> 
>>> Thanks for the inputs. 
>>> 
>>> I did try that, but the debugger (gdb) stepped right over the dgeev_ call, 
>>> without getting inside the function. 
>> 
>> Did it at least stop at the function so you can do an 'up' and print all the 
>> arguments passed in?
>> 
>>> 
>>> I am wondering if this has anything to do with the fact that the system 
>>> lapack library might not have any debugging info in it. 
>> 
>>  Yeah I forgot it might not have them.
>> 
>> Barry
>> 
>>> 
>>> Thoughts? 
>>> 
>>> Regards,
>>> Manav
>>> 
 On Apr 7, 2017, at 2:40 PM, Barry Smith  wrote:
 
 
> On Apr 7, 2017, at 1:46 PM, Manav Bhatia  wrote:
> 
> Hi, 
> 
> I have compiled PETSc on my Ubuntu machine (and separately on Mac OS 10.12) 
> to link to the system lapack and blas libraries (shown below). 
> 
> I have created an interface class to dgeev in lapack to calculate the 
> eigenvalues of a matrix. 
> 
> My application code links to multiple libraries: libMesh, petsc, slepc, 
> hdf5, etc. 
> 
> If I test my interface inside this application code, I get junk results. 
 
 This is easy to debug because you have a version that works.
 
 Run both versions in separate windows, each in a debugger, and put a break 
 point in the dgeev_ function. When it gets there, check that it is the same 
 dgeev_ function in both cases and that the inputs are the same, then 
 step through both to see when things start to change between the two.
 
> 
> However, on the same machine, if I use the interface in a separate main() 
> function without linking to any of the libraries except lapack and blas, 
> then I get expected results. 
> 
> Also, this problem does not show up on Mac. 
> 
> I am not sure what could be causing this and don’t quite know where to 
> start. Could PETSc have anything to do with this? 
> 
> Any insight would be greatly appreciated. 
> 
> Regards,
> Manav
> 
> manav@manav1:~/test$ ldd /opt/local/lib/libpetsc.so
>   linux-vdso.so.1 =>  (0x7fff3e7a8000)
>   libsuperlu_dist.so.5 => /opt/local/lib/libsuperlu_dist.so.5 
> (0x7f721fbd1000)
>   libparmetis.so => /opt/local/lib/libparmetis.so (0x7f721f99)
>   libmetis.so => /opt/local/lib/libmetis.so (0x7f721f718000)
>   libsuperlu.so.5 => /opt/local/lib/libsuperlu.so.5 (0x7f721f4a7000)
>   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
> (0x7f721f124000)
>   liblapack.so.3 => /usr/lib/liblapack.so.3 (0x7f721e92c000)
>   libblas.so.3 => /usr/lib/libblas.so.3 (0x7f721e6bd000)
>   libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 
> (0x7f721e382000)
>   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f721e079000)
>   libmpi_mpifh.so.12 => /usr/lib/libmpi_mpifh.so.12 (0x7f721de2)
>   libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 
> (0x7f721daf4000)
>   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f721d8f)
>   libmpi.so.12 => /usr/lib/libmpi.so.12 (0x7f721d61a000)
>   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
> (0x7f721d403000)
>   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
> (0x7f721d1e6000)
>   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f721ce1d000)
>   /lib64/ld-linux-x86-64.so.2 (0x55d739f1b000)
>   libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 
> (0x7f721cbcf000)
>   libopen-pal.so.13 => /usr/lib/libopen-pal.so.13 (0x7f721c932000)
>   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
> (0x7f721c6f2000)
>   libibverbs.so.1 => /usr/lib/libibverbs.so.1 (0x7f721c4e3000)
>   libopen-rte.so.12 => /usr/lib/libopen-rte.so.12 (0x7f721c269000)
>   libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 
> (0x7f721c064000)
>   libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 
> (0x7f721be5e000)
>   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f721bc56000)
>   libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7f721ba52000)
>   

Re: [petsc-users] GAMG for the unsymmetrical matrix

2017-04-07 Thread Kong, Fande
Thanks, Barry.

It works.

GAMG is three times better than ASM in terms of the number of linear
iterations, but it is five times slower than ASM. Any suggestions to
improve the performance of GAMG? Log files are attached.

Fande,

On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith  wrote:

>
> > On Apr 6, 2017, at 9:39 AM, Kong, Fande  wrote:
> >
> > Thanks, Mark and Barry,
> >
> > It works pretty well in terms of the number of linear iterations (using
> "-pc_gamg_sym_graph true"), but it is horrible in the compute time. I am
> using the two-level method via "-pc_mg_levels 2". The reason why the
> compute time is larger than with other preconditioning options is that a
> matrix-free method is used on the fine level and in my particular problem
> the function evaluation is expensive.
> >
> > I am using "-snes_mf_operator 1" to turn on the Jacobian-free Newton,
> but I do not think I want to make the preconditioning part matrix-free.  Do
> you guys know how to turn off the matrix-free method for GAMG?
>
>-pc_use_amat false
>
> >
> > Here is the detailed solver:
> >
> > SNES Object: 384 MPI processes
> >   type: newtonls
> >   maximum iterations=200, maximum function evaluations=1
> >   tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
> >   total number of linear solver iterations=20
> >   total number of function evaluations=166
> >   norm schedule ALWAYS
> >   SNESLineSearch Object:   384 MPI processes
> > type: bt
> >   interpolation: cubic
> >   alpha=1.00e-04
> > maxstep=1.00e+08, minlambda=1.00e-12
> > tolerances: relative=1.00e-08, absolute=1.00e-15,
> lambda=1.00e-08
> > maximum iterations=40
> >   KSP Object:   384 MPI processes
> > type: gmres
> >   GMRES: restart=100, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> >   GMRES: happy breakdown tolerance 1e-30
> > maximum iterations=100, initial guess is zero
> > tolerances:  relative=0.001, absolute=1e-50, divergence=1.
> > right preconditioning
> > using UNPRECONDITIONED norm type for convergence test
> >   PC Object:   384 MPI processes
> > type: gamg
> >   MG: type is MULTIPLICATIVE, levels=2 cycles=v
> > Cycles per PCApply=1
> > Using Galerkin computed coarse grid matrices
> > GAMG specific options
> >   Threshold for dropping small values from graph 0.
> >   AGG specific options
> > Symmetric graph true
> > Coarse grid solver -- level ---
> >   KSP Object:  (mg_coarse_)   384 MPI processes
> > type: preonly
> > maximum iterations=1, initial guess is zero
> > tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> > left preconditioning
> > using NONE norm type for convergence test
> >   PC Object:  (mg_coarse_)   384 MPI processes
> > type: bjacobi
> >   block Jacobi: number of blocks = 384
> >   Local solve is same for all blocks, in the following KSP and
> PC objects:
> > KSP Object:(mg_coarse_sub_) 1 MPI processes
> >   type: preonly
> >   maximum iterations=1, initial guess is zero
> >   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> >   left preconditioning
> >   using NONE norm type for convergence test
> > PC Object:(mg_coarse_sub_) 1 MPI processes
> >   type: lu
> > LU: out-of-place factorization
> > tolerance for zero pivot 2.22045e-14
> > using diagonal shift on blocks to prevent zero pivot
> [INBLOCKS]
> > matrix ordering: nd
> > factor fill ratio given 5., needed 1.31367
> >   Factored matrix follows:
> > Mat Object: 1 MPI processes
> >   type: seqaij
> >   rows=37, cols=37
> >   package used to perform factorization: petsc
> >   total: nonzeros=913, allocated nonzeros=913
> >   total number of mallocs used during MatSetValues calls
> =0
> > not using I-node routines
> >   linear system matrix = precond matrix:
> >   Mat Object:   1 MPI processes
> > type: seqaij
> > rows=37, cols=37
> > total: nonzeros=695, allocated nonzeros=695
> > total number of mallocs used during MatSetValues calls =0
> >   not using I-node routines
> > linear system matrix = precond matrix:
> > Mat Object: 384 MPI processes
> >   type: mpiaij
> >   rows=18145, cols=18145
> >   total: nonzeros=1709115, allocated nonzeros=1709115
> >   total number of mallocs used during MatSetValues calls =0
> > not using I-node (on process 0) routines
> >  
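
[Editor's note: pulling together the flags exchanged in this thread, a single
command line might look like the following sketch. The binary name ./myapp is
an assumption; the options are exactly the ones named by the participants.]

```shell
# Options from the thread: Jacobian-free Newton on the fine level,
# two-level GAMG with a symmetrized graph for the unsymmetric matrix,
# and the preconditioner built from the assembled Pmat rather than the
# matrix-free Amat. ./myapp is a hypothetical application binary.
OPTS="-snes_mf_operator \
 -pc_type gamg -pc_gamg_sym_graph true -pc_mg_levels 2 \
 -pc_use_amat false"
echo "mpiexec -n 384 ./myapp $OPTS"   # sketch only, not run here
```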

Re: [petsc-users] odd behavior when using lapack's dgeev with petsc

2017-04-07 Thread Manav Bhatia
Yes, I printed the data in both cases and they look the same. 

I also used “set step-mode on” to show the system lapack info, and they both 
are using the same lapack routine. 

This is still baffling me. 

-Manav


> On Apr 7, 2017, at 3:22 PM, Barry Smith  wrote:
> 
> 
>> On Apr 7, 2017, at 2:57 PM, Manav Bhatia  wrote:
>> 
>> Hi Barry, 
>> 
>> Thanks for the inputs. 
>> 
>> I did try that, but the debugger (gdb) stepped right over the dgeev_ call, 
>> without getting inside the function. 
> 
> Did it at least stop at the function so you can do an 'up' and print all the 
> arguments passed in?
> 
>> 
>> I am wondering if this has anything to do with the fact that the system 
>> lapack library might not have any debugging info in it. 
> 
>   Yeah I forgot it might not have them.
> 
>  Barry
> 
>> 
>> Thoughts? 
>> 
>> Regards,
>> Manav
>> 
>>> On Apr 7, 2017, at 2:40 PM, Barry Smith  wrote:
>>> 
>>> 
 On Apr 7, 2017, at 1:46 PM, Manav Bhatia  wrote:
 
 Hi, 
 
 I have compiled PETSc on my Ubuntu machine (and separately on Mac OS 10.12) 
 to link to the system lapack and blas libraries (shown below). 
 
 I have created an interface class to dgeev in lapack to calculate the 
 eigenvalues of a matrix. 
 
 My application code links to multiple libraries: libMesh, petsc, slepc, 
 hdf5, etc. 
 
 If I test my interface inside this application code, I get junk results. 
>>> 
>>> This is easy to debug because you have a version that works.
>>> 
>>> Run both versions in separate windows, each in a debugger, and put a break 
>>> point in the dgeev_ function. When it gets there, check that it is the same 
>>> dgeev_ function in both cases and that the inputs are the same, then 
>>> step through both to see when things start to change between the two.
>>> 
 
 However, on the same machine, if I use the interface in a separate main() 
 function without linking to any of the libraries except lapack and blas, 
 then I get expected results. 
 
 Also, this problem does not show up on Mac. 
 
 I am not sure what could be causing this and don’t quite know where to 
 start. Could PETSc have anything to do with this? 
 
 Any insight would be greatly appreciated. 
 
 Regards,
 Manav
 
 manav@manav1:~/test$ ldd /opt/local/lib/libpetsc.so
linux-vdso.so.1 =>  (0x7fff3e7a8000)
libsuperlu_dist.so.5 => /opt/local/lib/libsuperlu_dist.so.5 
 (0x7f721fbd1000)
libparmetis.so => /opt/local/lib/libparmetis.so (0x7f721f99)
libmetis.so => /opt/local/lib/libmetis.so (0x7f721f718000)
libsuperlu.so.5 => /opt/local/lib/libsuperlu.so.5 (0x7f721f4a7000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
 (0x7f721f124000)
liblapack.so.3 => /usr/lib/liblapack.so.3 (0x7f721e92c000)
libblas.so.3 => /usr/lib/libblas.so.3 (0x7f721e6bd000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 
 (0x7f721e382000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f721e079000)
libmpi_mpifh.so.12 => /usr/lib/libmpi_mpifh.so.12 (0x7f721de2)
libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 
 (0x7f721daf4000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f721d8f)
libmpi.so.12 => /usr/lib/libmpi.so.12 (0x7f721d61a000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
 (0x7f721d403000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
 (0x7f721d1e6000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f721ce1d000)
/lib64/ld-linux-x86-64.so.2 (0x55d739f1b000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 
 (0x7f721cbcf000)
libopen-pal.so.13 => /usr/lib/libopen-pal.so.13 (0x7f721c932000)
libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
 (0x7f721c6f2000)
libibverbs.so.1 => /usr/lib/libibverbs.so.1 (0x7f721c4e3000)
libopen-rte.so.12 => /usr/lib/libopen-rte.so.12 (0x7f721c269000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 
 (0x7f721c064000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 
 (0x7f721be5e000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f721bc56000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7f721ba52000)
libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5 
 (0x7f721b818000)
libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 
 (0x7f721b60c000)
libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 
 (0x7f721b402000)
 
>>> 
>> 
> 



Re: [petsc-users] odd behavior when using lapack's dgeev with petsc

2017-04-07 Thread Barry Smith

> On Apr 7, 2017, at 2:57 PM, Manav Bhatia  wrote:
> 
> Hi Barry, 
> 
>  Thanks for the inputs. 
> 
>  I did try that, but the debugger (gdb) stepped right over the dgeev_ call, 
> without getting inside the function. 

Did it at least stop at the function so you can do an 'up' and print all the 
arguments passed in?
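
[Editor's note: even if the system LAPACK lacks debug info, gdb can still stop
on the exported symbol and inspect the call boundary. A hedged sketch of such a
session; the binary name ./app and file name dgeev.gdb are assumptions.]

```shell
# Break at the Fortran symbol dgeev_ without LAPACK debug info: gdb
# stops on the symbol, reports which shared object provides it, and
# dumps the raw argument registers. ./app is a hypothetical binary.
cat > dgeev.gdb <<'EOF'
break dgeev_
run
info symbol $pc
info registers
EOF
echo "gdb -x dgeev.gdb ./app"   # run this in both builds and compare
```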

> 
>  I am wondering if this has anything to do with the fact that the system 
> lapack library might not have any debugging info in it. 

   Yeah I forgot it might not have them.

  Barry

> 
>  Thoughts? 
> 
> Regards,
> Manav
> 
>> On Apr 7, 2017, at 2:40 PM, Barry Smith  wrote:
>> 
>> 
>>> On Apr 7, 2017, at 1:46 PM, Manav Bhatia  wrote:
>>> 
>>> Hi, 
>>> 
>>>  I have compiled PETSc on my Ubuntu machine (and separately on Mac OS 10.12) 
>>> to link to the system lapack and blas libraries (shown below). 
>>> 
>>>  I have created an interface class to dgeev in lapack to calculate the 
>>> eigenvalues of a matrix. 
>>> 
>>>  My application code links to multiple libraries: libMesh, petsc, slepc, 
>>> hdf5, etc. 
>>> 
>>>  If I test my interface inside this application code, I get junk results. 
>> 
>>  This is easy to debug because you have a version that works.
>> 
>> Run both versions in separate windows, each in a debugger, and put a break 
>> point in the dgeev_ function. When it gets there, check that it is the same 
>> dgeev_ function in both cases and that the inputs are the same, then 
>> step through both to see when things start to change between the two.
>> 
>>> 
>>>  However, on the same machine, if I use the interface in a separate main() 
>>> function without linking to any of the libraries except lapack and blas, 
>>> then I get expected results. 
>>> 
>>>  Also, this problem does not show up on Mac. 
>>> 
>>>  I am not sure what could be causing this and don’t quite know where to 
>>> start. Could PETSc have anything to do with this? 
>>> 
>>>  Any insight would be greatly appreciated. 
>>> 
>>> Regards,
>>> Manav
>>> 
>>> manav@manav1:~/test$ ldd /opt/local/lib/libpetsc.so
>>> linux-vdso.so.1 =>  (0x7fff3e7a8000)
>>> libsuperlu_dist.so.5 => /opt/local/lib/libsuperlu_dist.so.5 
>>> (0x7f721fbd1000)
>>> libparmetis.so => /opt/local/lib/libparmetis.so (0x7f721f99)
>>> libmetis.so => /opt/local/lib/libmetis.so (0x7f721f718000)
>>> libsuperlu.so.5 => /opt/local/lib/libsuperlu.so.5 (0x7f721f4a7000)
>>> libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
>>> (0x7f721f124000)
>>> liblapack.so.3 => /usr/lib/liblapack.so.3 (0x7f721e92c000)
>>> libblas.so.3 => /usr/lib/libblas.so.3 (0x7f721e6bd000)
>>> libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 
>>> (0x7f721e382000)
>>> libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f721e079000)
>>> libmpi_mpifh.so.12 => /usr/lib/libmpi_mpifh.so.12 (0x7f721de2)
>>> libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 
>>> (0x7f721daf4000)
>>> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f721d8f)
>>> libmpi.so.12 => /usr/lib/libmpi.so.12 (0x7f721d61a000)
>>> libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
>>> (0x7f721d403000)
>>> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
>>> (0x7f721d1e6000)
>>> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f721ce1d000)
>>> /lib64/ld-linux-x86-64.so.2 (0x55d739f1b000)
>>> libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 
>>> (0x7f721cbcf000)
>>> libopen-pal.so.13 => /usr/lib/libopen-pal.so.13 (0x7f721c932000)
>>> libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
>>> (0x7f721c6f2000)
>>> libibverbs.so.1 => /usr/lib/libibverbs.so.1 (0x7f721c4e3000)
>>> libopen-rte.so.12 => /usr/lib/libopen-rte.so.12 (0x7f721c269000)
>>> libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 
>>> (0x7f721c064000)
>>> libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 
>>> (0x7f721be5e000)
>>> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f721bc56000)
>>> libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7f721ba52000)
>>> libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5 
>>> (0x7f721b818000)
>>> libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 
>>> (0x7f721b60c000)
>>> libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 
>>> (0x7f721b402000)
>>> 
>> 
> 



Re: [petsc-users] odd behavior when using lapack's dgeev with petsc

2017-04-07 Thread Manav Bhatia
Hi Barry, 

  Thanks for the inputs. 

  I did try that, but the debugger (gdb) stepped right over the dgeev_ call, 
without getting inside the function. 

  I am wondering if this has anything to do with the fact that the system 
lapack library might not have any debugging info in it. 

  Thoughts? 

Regards,
Manav

> On Apr 7, 2017, at 2:40 PM, Barry Smith  wrote:
> 
> 
>> On Apr 7, 2017, at 1:46 PM, Manav Bhatia  wrote:
>> 
>> Hi, 
>> 
>>   I have compiled PETSc on my Ubuntu machine (and separately on Mac OS 10.12) 
>> to link to the system lapack and blas libraries (shown below). 
>> 
>>   I have created an interface class to dgeev in lapack to calculate the 
>> eigenvalues of a matrix. 
>> 
>>   My application code links to multiple libraries: libMesh, petsc, slepc, 
>> hdf5, etc. 
>> 
>>   If I test my interface inside this application code, I get junk results. 
> 
>   This is easy to debug because you have a version that works.
> 
>   Run both versions in separate windows, each in a debugger, and put a break 
> point in the dgeev_ function. When it gets there, check that it is the same 
> dgeev_ function in both cases and that the inputs are the same, then 
> step through both to see when things start to change between the two.
> 
>> 
>>   However, on the same machine, if I use the interface in a separate main() 
>> function without linking to any of the libraries except lapack and blas, 
>> then I get expected results. 
>> 
>>   Also, this problem does not show up on Mac. 
>> 
>>   I am not sure what could be causing this and don’t quite know where to 
>> start. Could Petsc have anything to do with this? 
>> 
>>   Any insight would be greatly appreciated. 
>> 
>> Regards,
>> Manav
>> 
>> manav@manav1:~/test$ ldd /opt/local/lib/libpetsc.so
>>  linux-vdso.so.1 =>  (0x7fff3e7a8000)
>>  libsuperlu_dist.so.5 => /opt/local/lib/libsuperlu_dist.so.5 
>> (0x7f721fbd1000)
>>  libparmetis.so => /opt/local/lib/libparmetis.so (0x7f721f99)
>>  libmetis.so => /opt/local/lib/libmetis.so (0x7f721f718000)
>>  libsuperlu.so.5 => /opt/local/lib/libsuperlu.so.5 (0x7f721f4a7000)
>>  libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
>> (0x7f721f124000)
>>  liblapack.so.3 => /usr/lib/liblapack.so.3 (0x7f721e92c000)
>>  libblas.so.3 => /usr/lib/libblas.so.3 (0x7f721e6bd000)
>>  libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 
>> (0x7f721e382000)
>>  libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f721e079000)
>>  libmpi_mpifh.so.12 => /usr/lib/libmpi_mpifh.so.12 (0x7f721de2)
>>  libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 
>> (0x7f721daf4000)
>>  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f721d8f)
>>  libmpi.so.12 => /usr/lib/libmpi.so.12 (0x7f721d61a000)
>>  libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
>> (0x7f721d403000)
>>  libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
>> (0x7f721d1e6000)
>>  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f721ce1d000)
>>  /lib64/ld-linux-x86-64.so.2 (0x55d739f1b000)
>>  libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 
>> (0x7f721cbcf000)
>>  libopen-pal.so.13 => /usr/lib/libopen-pal.so.13 (0x7f721c932000)
>>  libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
>> (0x7f721c6f2000)
>>  libibverbs.so.1 => /usr/lib/libibverbs.so.1 (0x7f721c4e3000)
>>  libopen-rte.so.12 => /usr/lib/libopen-rte.so.12 (0x7f721c269000)
>>  libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 
>> (0x7f721c064000)
>>  libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 
>> (0x7f721be5e000)
>>  librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f721bc56000)
>>  libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7f721ba52000)
>>  libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5 
>> (0x7f721b818000)
>>  libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 
>> (0x7f721b60c000)
>>  libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 
>> (0x7f721b402000)
>> 
> 



Re: [petsc-users] odd behavior when using lapack's dgeev with petsc

2017-04-07 Thread Barry Smith

> On Apr 7, 2017, at 1:46 PM, Manav Bhatia  wrote:
> 
> Hi, 
>  
>    I have compiled PETSc on my Ubuntu machine (also Mac OS 10.12 separately) 
> to link to the system lapack and blas libraries (shown below). 
> 
>I have created an interface class to dgeev in lapack to calculate the 
> eigenvalues of a matrix. 
> 
>My application code links to multiple libraries: libMesh, petsc, slepc, 
> hdf5, etc. 
> 
>If I test my interface inside this application code, I get junk results. 

   This is easy to debug because you have a version that works.

   Run both versions in separate windows, each in a debugger, and put a break 
point in the dgeev_ function. When it gets there, check that it is the same 
dgeev_ function in both cases and that the inputs are the same, then step 
through both to see when things start to change between the two.

> 
>However, on the same machine, if I use the interface in a separate main() 
> function without linking to any of the libraries except lapack and blas, then 
> I get expected results. 
> 
>Also, this problem does not show up on Mac. 
> 
>I am not sure what could be causing this and don’t quite know where to 
> start. Could Petsc have anything to do with this? 
> 
>Any insight would be greatly appreciated. 
> 
> Regards,
> Manav
> 
> manav@manav1:~/test$ ldd /opt/local/lib/libpetsc.so
>   linux-vdso.so.1 =>  (0x7fff3e7a8000)
>   libsuperlu_dist.so.5 => /opt/local/lib/libsuperlu_dist.so.5 
> (0x7f721fbd1000)
>   libparmetis.so => /opt/local/lib/libparmetis.so (0x7f721f99)
>   libmetis.so => /opt/local/lib/libmetis.so (0x7f721f718000)
>   libsuperlu.so.5 => /opt/local/lib/libsuperlu.so.5 (0x7f721f4a7000)
>   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
> (0x7f721f124000)
>   liblapack.so.3 => /usr/lib/liblapack.so.3 (0x7f721e92c000)
>   libblas.so.3 => /usr/lib/libblas.so.3 (0x7f721e6bd000)
>   libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 
> (0x7f721e382000)
>   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f721e079000)
>   libmpi_mpifh.so.12 => /usr/lib/libmpi_mpifh.so.12 (0x7f721de2)
>   libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 
> (0x7f721daf4000)
>   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f721d8f)
>   libmpi.so.12 => /usr/lib/libmpi.so.12 (0x7f721d61a000)
>   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
> (0x7f721d403000)
>   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
> (0x7f721d1e6000)
>   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f721ce1d000)
>   /lib64/ld-linux-x86-64.so.2 (0x55d739f1b000)
>   libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 
> (0x7f721cbcf000)
>   libopen-pal.so.13 => /usr/lib/libopen-pal.so.13 (0x7f721c932000)
>   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
> (0x7f721c6f2000)
>   libibverbs.so.1 => /usr/lib/libibverbs.so.1 (0x7f721c4e3000)
>   libopen-rte.so.12 => /usr/lib/libopen-rte.so.12 (0x7f721c269000)
>   libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 
> (0x7f721c064000)
>   libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 
> (0x7f721be5e000)
>   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f721bc56000)
>   libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7f721ba52000)
>   libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5 
> (0x7f721b818000)
>   libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 
> (0x7f721b60c000)
>   libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 
> (0x7f721b402000)
> 



[petsc-users] odd behavior when using lapack's dgeev with petsc

2017-04-07 Thread Manav Bhatia
Hi, 
 
   I have compiled PETSc on my Ubuntu machine (also Mac OS 10.12 separately) to 
link to the system lapack and blas libraries (shown below). 

   I have created an interface class to dgeev in lapack to calculate the 
eigenvalues of a matrix. 

   My application code links to multiple libraries: libMesh, petsc, slepc, 
hdf5, etc. 

   If I test my interface inside this application code, I get junk results. 

   However, on the same machine, if I use the interface in a separate main() 
function without linking to any of the libraries except lapack and blas, then I 
get expected results. 

   Also, this problem does not show up on Mac. 

   I am not sure what could be causing this and don’t quite know where to 
start. Could Petsc have anything to do with this? 

   Any insight would be greatly appreciated. 

Regards,
Manav

manav@manav1:~/test$ ldd /opt/local/lib/libpetsc.so
linux-vdso.so.1 =>  (0x7fff3e7a8000)
libsuperlu_dist.so.5 => /opt/local/lib/libsuperlu_dist.so.5 
(0x7f721fbd1000)
libparmetis.so => /opt/local/lib/libparmetis.so (0x7f721f99)
libmetis.so => /opt/local/lib/libmetis.so (0x7f721f718000)
libsuperlu.so.5 => /opt/local/lib/libsuperlu.so.5 (0x7f721f4a7000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7f721f124000)
liblapack.so.3 => /usr/lib/liblapack.so.3 (0x7f721e92c000)
libblas.so.3 => /usr/lib/libblas.so.3 (0x7f721e6bd000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 
(0x7f721e382000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f721e079000)
libmpi_mpifh.so.12 => /usr/lib/libmpi_mpifh.so.12 (0x7f721de2)
libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 
(0x7f721daf4000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f721d8f)
libmpi.so.12 => /usr/lib/libmpi.so.12 (0x7f721d61a000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7f721d403000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f721d1e6000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f721ce1d000)
/lib64/ld-linux-x86-64.so.2 (0x55d739f1b000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 
(0x7f721cbcf000)
libopen-pal.so.13 => /usr/lib/libopen-pal.so.13 (0x7f721c932000)
libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7f721c6f2000)
libibverbs.so.1 => /usr/lib/libibverbs.so.1 (0x7f721c4e3000)
libopen-rte.so.12 => /usr/lib/libopen-rte.so.12 (0x7f721c269000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 
(0x7f721c064000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 
(0x7f721be5e000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f721bc56000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7f721ba52000)
libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5 
(0x7f721b818000)
libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 
(0x7f721b60c000)
libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 
(0x7f721b402000)



Re: [petsc-users] Symmetric matrix: Setting entries below diagonal

2017-04-07 Thread Matthew Knepley
On Fri, Apr 7, 2017 at 11:23 AM, Barletta, Ivano  wrote:

> So, as far as I understand, the only benefit of PETSc with symmetric
> matrices is when matrix values are set, reducing the overhead of
> MatSetValue calls?
>

It halves the storage. There is a slight advantage from not having to load
the lower triangle.

   Matt


> Thanks,
> Ivano
>
> 2017-04-07 17:19 GMT+02:00 Barry Smith :
>
>>
>> > On Apr 7, 2017, at 6:40 AM, Florian Lindner 
>> wrote:
>> >
>> > Hello,
>> >
>> > two questions about symmetric (MATSBAIJ) matrices.
>> >
>> > + Entries set with MatSetValue below the main diagonal are ignored. Is
>> that by design?
>>
>>Yes
>>
>> > I rather expected setting A_ij to
>> > have the same effect as setting A_ji.
>>
>>You need to check the relationship between i and j and swap them if
>> needed before the call.
>>
>> >
>> > + Does MatSetOption with MAT_SYMMETRIC and MAT_SYMMETRY_ETERNAL give any gain
>> on MATSBAIJ matrices?
>>
>>   No
>>
>> >
>> > Thanks,
>> > Florian
>> >
>> > Test program:
>> >
>> >
>> > #include "petscmat.h"
>> > #include "petscviewer.h"
>> >
>> > int main(int argc, char **argv)
>> > {
>> >  PetscInitialize(&argc, &argv, "", NULL);
>> >  PetscErrorCode ierr = 0;
>> >
>> >  Mat A;
>> >  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
>> >  MatSetType(A, MATSBAIJ); CHKERRQ(ierr);
>> >  ierr = MatSetSizes(A, 4, 4, PETSC_DECIDE, PETSC_DECIDE); CHKERRQ(ierr);
>> >  ierr = MatSetUp(A); CHKERRQ(ierr);
>> >  ierr = MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE); CHKERRQ(ierr);
>> >  ierr = MatSetOption(A, MAT_SYMMETRY_ETERNAL, PETSC_TRUE);
>> CHKERRQ(ierr);
>> >
>> >  // Stored
>> >  ierr = MatSetValue(A, 1, 2, 21, INSERT_VALUES); CHKERRQ(ierr);
>> >  ierr = MatSetValue(A, 1, 1, 11, INSERT_VALUES); CHKERRQ(ierr);
>> >
>> >  // Ignored
>> >  ierr = MatSetValue(A, 2, 1, 22, INSERT_VALUES); CHKERRQ(ierr);
>> >  ierr = MatSetValue(A, 3, 2, 32, INSERT_VALUES); CHKERRQ(ierr);
>> >  ierr = MatSetValue(A, 3, 1, 31, INSERT_VALUES); CHKERRQ(ierr);
>> >
>> >  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>> >  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>> >
>> >  PetscViewer viewer;
>> >  ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer); CHKERRQ(ierr);
>> >  ierr = PetscViewerSetType(viewer, PETSCVIEWERASCII); CHKERRQ(ierr);
>> >  ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_DENSE);
>> CHKERRQ(ierr);
>> >  ierr = MatView(A, viewer); CHKERRQ(ierr);
>> >  ierr = PetscViewerPopFormat(viewer); CHKERRQ(ierr);
>> >  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
>> >
>> >  PetscFinalize();
>> >  return 0;
>> > }
>>
>>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Symmetric matrix: Setting entries below diagonal

2017-04-07 Thread Barry Smith

> On Apr 7, 2017, at 11:23 AM, Barletta, Ivano  wrote:
> 
> So, as far as I understand, the only benefit of PETSc with symmetric matrices
> is when matrix values are set, reducing the overhead of MatSetValue 
> calls?

The benefits of using SBAIJ matrices are:

1) You don't need to compute or set values below the diagonal.

2) The matrix storage requires about half the memory, since the lower triangular 
part is not stored.

   If you use AIJ or BAIJ matrices and MatSetOption() to indicate they are 
symmetric, there is, of course, no benefit of 1) or 2).

  Barry


> 
> Thanks,
> Ivano
> 
> 2017-04-07 17:19 GMT+02:00 Barry Smith :
> 
> > On Apr 7, 2017, at 6:40 AM, Florian Lindner  wrote:
> >
> > Hello,
> >
> > two questions about symmetric (MATSBAIJ) matrices.
> >
> > + Entries set with MatSetValue below the main diagonal are ignored. Is that 
> > by design?
> 
>Yes
> 
> > I rather expected setting A_ij to
> > have the same effect as setting A_ji.
> 
>You need to check the relationship between i and j and swap them if needed 
> before the call.
> 
> >
> > + Does MatSetOption with MAT_SYMMETRIC and MAT_SYMMETRY_ETERNAL give any gain on 
> > MATSBAIJ matrices?
> 
>   No
> 
> >
> > Thanks,
> > Florian
> >
> > Test program:
> >
> >
> > #include "petscmat.h"
> > #include "petscviewer.h"
> >
> > int main(int argc, char **argv)
> > {
> >  PetscInitialize(&argc, &argv, "", NULL);
> >  PetscErrorCode ierr = 0;
> >
> >  Mat A;
> >  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
> >  MatSetType(A, MATSBAIJ); CHKERRQ(ierr);
> >  ierr = MatSetSizes(A, 4, 4, PETSC_DECIDE, PETSC_DECIDE); CHKERRQ(ierr);
> >  ierr = MatSetUp(A); CHKERRQ(ierr);
> >  ierr = MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE); CHKERRQ(ierr);
> >  ierr = MatSetOption(A, MAT_SYMMETRY_ETERNAL, PETSC_TRUE); CHKERRQ(ierr);
> >
> >  // Stored
> >  ierr = MatSetValue(A, 1, 2, 21, INSERT_VALUES); CHKERRQ(ierr);
> >  ierr = MatSetValue(A, 1, 1, 11, INSERT_VALUES); CHKERRQ(ierr);
> >
> >  // Ignored
> >  ierr = MatSetValue(A, 2, 1, 22, INSERT_VALUES); CHKERRQ(ierr);
> >  ierr = MatSetValue(A, 3, 2, 32, INSERT_VALUES); CHKERRQ(ierr);
> >  ierr = MatSetValue(A, 3, 1, 31, INSERT_VALUES); CHKERRQ(ierr);
> >
> >  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> >  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> >
> >  PetscViewer viewer;
> >  ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer); CHKERRQ(ierr);
> >  ierr = PetscViewerSetType(viewer, PETSCVIEWERASCII); CHKERRQ(ierr);
> >  ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_DENSE); 
> > CHKERRQ(ierr);
> >  ierr = MatView(A, viewer); CHKERRQ(ierr);
> >  ierr = PetscViewerPopFormat(viewer); CHKERRQ(ierr);
> >  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
> >
> >  PetscFinalize();
> >  return 0;
> > }
> 
> 



Re: [petsc-users] Symmetric matrix: Setting entries below diagonal

2017-04-07 Thread Barry Smith

   If you want to set all values in the matrix and have the SBAIJ matrix ignore 
those below the diagonal you can
use

   MatSetOption(mat,MAT_IGNORE_LOWER_TRIANGULAR,PETSC_TRUE);

or the options database option -mat_ignore_lower_triangular.

This is useful when you have a symmetric matrix but you want to switch between 
using AIJ and SBAIJ format without changing anything in the code.

  Barry

> On Apr 7, 2017, at 10:19 AM, Barry Smith  wrote:
> 
> 
>> On Apr 7, 2017, at 6:40 AM, Florian Lindner  wrote:
>> 
>> Hello,
>> 
>> two questions about symmetric (MATSBAIJ) matrices.
>> 
>> + Entries set with MatSetValue below the main diagonal are ignored. Is that 
>> by design?
> 
>   Yes
> 
>> I rather expected setting A_ij to
>> have the same effect as setting A_ji.
> 
>   You need to check the relationship between i and j and swap them if needed 
> before the call.
> 
>> 
>> + Does MatSetOption with MAT_SYMMETRIC and MAT_SYMMETRY_ETERNAL give any gain on 
>> MATSBAIJ matrices?
> 
>  No
> 
>> 
>> Thanks,
>> Florian
>> 
>> Test program:
>> 
>> 
>> #include "petscmat.h"
>> #include "petscviewer.h"
>> 
>> int main(int argc, char **argv)
>> {
>> PetscInitialize(&argc, &argv, "", NULL);
>> PetscErrorCode ierr = 0;
>> 
>> Mat A;
>> ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
>> MatSetType(A, MATSBAIJ); CHKERRQ(ierr);
>> ierr = MatSetSizes(A, 4, 4, PETSC_DECIDE, PETSC_DECIDE); CHKERRQ(ierr);
>> ierr = MatSetUp(A); CHKERRQ(ierr);
>> ierr = MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE); CHKERRQ(ierr);
>> ierr = MatSetOption(A, MAT_SYMMETRY_ETERNAL, PETSC_TRUE); CHKERRQ(ierr);
>> 
>> // Stored
>> ierr = MatSetValue(A, 1, 2, 21, INSERT_VALUES); CHKERRQ(ierr);
>> ierr = MatSetValue(A, 1, 1, 11, INSERT_VALUES); CHKERRQ(ierr);
>> 
>> // Ignored
>> ierr = MatSetValue(A, 2, 1, 22, INSERT_VALUES); CHKERRQ(ierr);
>> ierr = MatSetValue(A, 3, 2, 32, INSERT_VALUES); CHKERRQ(ierr);
>> ierr = MatSetValue(A, 3, 1, 31, INSERT_VALUES); CHKERRQ(ierr);
>> 
>> ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>> ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>> 
>> PetscViewer viewer;
>> ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer); CHKERRQ(ierr);
>> ierr = PetscViewerSetType(viewer, PETSCVIEWERASCII); CHKERRQ(ierr);
>> ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_DENSE); 
>> CHKERRQ(ierr);
>> ierr = MatView(A, viewer); CHKERRQ(ierr);
>> ierr = PetscViewerPopFormat(viewer); CHKERRQ(ierr);
>> ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
>> 
>> PetscFinalize();
>> return 0;
>> }
> 



Re: [petsc-users] Symmetric matrix: Setting entries below diagonal

2017-04-07 Thread Barletta, Ivano
So, as far as I understand, the only benefit of PETSc with symmetric
matrices is when matrix values are set, reducing the overhead of MatSetValue
calls?

Thanks,
Ivano

2017-04-07 17:19 GMT+02:00 Barry Smith :

>
> > On Apr 7, 2017, at 6:40 AM, Florian Lindner  wrote:
> >
> > Hello,
> >
> > two questions about symmetric (MATSBAIJ) matrices.
> >
> > + Entries set with MatSetValue below the main diagonal are ignored. Is
> that by design?
>
>Yes
>
> > I rather expected setting A_ij to
> > have the same effect as setting A_ji.
>
>You need to check the relationship between i and j and swap them if
> needed before the call.
>
> >
> > + Does MatSetOption with MAT_SYMMETRIC and MAT_SYMMETRY_ETERNAL give any gain
> on MATSBAIJ matrices?
>
>   No
>
> >
> > Thanks,
> > Florian
> >
> > Test program:
> >
> >
> > #include "petscmat.h"
> > #include "petscviewer.h"
> >
> > int main(int argc, char **argv)
> > {
> >  PetscInitialize(&argc, &argv, "", NULL);
> >  PetscErrorCode ierr = 0;
> >
> >  Mat A;
> >  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
> >  MatSetType(A, MATSBAIJ); CHKERRQ(ierr);
> >  ierr = MatSetSizes(A, 4, 4, PETSC_DECIDE, PETSC_DECIDE); CHKERRQ(ierr);
> >  ierr = MatSetUp(A); CHKERRQ(ierr);
> >  ierr = MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE); CHKERRQ(ierr);
> >  ierr = MatSetOption(A, MAT_SYMMETRY_ETERNAL, PETSC_TRUE); CHKERRQ(ierr);
> >
> >  // Stored
> >  ierr = MatSetValue(A, 1, 2, 21, INSERT_VALUES); CHKERRQ(ierr);
> >  ierr = MatSetValue(A, 1, 1, 11, INSERT_VALUES); CHKERRQ(ierr);
> >
> >  // Ignored
> >  ierr = MatSetValue(A, 2, 1, 22, INSERT_VALUES); CHKERRQ(ierr);
> >  ierr = MatSetValue(A, 3, 2, 32, INSERT_VALUES); CHKERRQ(ierr);
> >  ierr = MatSetValue(A, 3, 1, 31, INSERT_VALUES); CHKERRQ(ierr);
> >
> >  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> >  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> >
> >  PetscViewer viewer;
> >  ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer); CHKERRQ(ierr);
> >  ierr = PetscViewerSetType(viewer, PETSCVIEWERASCII); CHKERRQ(ierr);
> >  ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_DENSE);
> CHKERRQ(ierr);
> >  ierr = MatView(A, viewer); CHKERRQ(ierr);
> >  ierr = PetscViewerPopFormat(viewer); CHKERRQ(ierr);
> >  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
> >
> >  PetscFinalize();
> >  return 0;
> > }
>
>


Re: [petsc-users] Symmetric matrix: Setting entries below diagonal

2017-04-07 Thread Barry Smith

> On Apr 7, 2017, at 6:40 AM, Florian Lindner  wrote:
> 
> Hello,
> 
> two questions about symmetric (MATSBAIJ) matrices.
> 
> + Entries set with MatSetValue below the main diagonal are ignored. Is that 
> by design?

   Yes

> I rather expected setting A_ij to
> have the same effect as setting A_ji.

   You need to check the relationship between i and j and swap them if needed 
before the call.

> 
> + Does MatSetOption with MAT_SYMMETRIC and MAT_SYMMETRY_ETERNAL give any gain on 
> MATSBAIJ matrices?

  No

> 
> Thanks,
> Florian
> 
> Test program:
> 
> 
> #include "petscmat.h"
> #include "petscviewer.h"
> 
> int main(int argc, char **argv)
> {
>  PetscInitialize(&argc, &argv, "", NULL);
>  PetscErrorCode ierr = 0;
> 
>  Mat A;
>  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
>  MatSetType(A, MATSBAIJ); CHKERRQ(ierr);
>  ierr = MatSetSizes(A, 4, 4, PETSC_DECIDE, PETSC_DECIDE); CHKERRQ(ierr);
>  ierr = MatSetUp(A); CHKERRQ(ierr);
>  ierr = MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE); CHKERRQ(ierr);
>  ierr = MatSetOption(A, MAT_SYMMETRY_ETERNAL, PETSC_TRUE); CHKERRQ(ierr);
> 
>  // Stored
>  ierr = MatSetValue(A, 1, 2, 21, INSERT_VALUES); CHKERRQ(ierr);
>  ierr = MatSetValue(A, 1, 1, 11, INSERT_VALUES); CHKERRQ(ierr);
> 
>  // Ignored
>  ierr = MatSetValue(A, 2, 1, 22, INSERT_VALUES); CHKERRQ(ierr);
>  ierr = MatSetValue(A, 3, 2, 32, INSERT_VALUES); CHKERRQ(ierr);
>  ierr = MatSetValue(A, 3, 1, 31, INSERT_VALUES); CHKERRQ(ierr);
> 
>  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> 
>  PetscViewer viewer;
>  ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer); CHKERRQ(ierr);
>  ierr = PetscViewerSetType(viewer, PETSCVIEWERASCII); CHKERRQ(ierr);
>  ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_DENSE); 
> CHKERRQ(ierr);
>  ierr = MatView(A, viewer); CHKERRQ(ierr);
>  ierr = PetscViewerPopFormat(viewer); CHKERRQ(ierr);
>  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
> 
>  PetscFinalize();
>  return 0;
> }



Re: [petsc-users] EPSViewer in SLEPc

2017-04-07 Thread Kong, Fande
Thanks, Jose

Fande,

On Fri, Apr 7, 2017 at 4:41 AM, Jose E. Roman  wrote:

> I have pushed a commit that should avoid this problem.
> Jose
>
> > On 6 Apr 2017, at 22:27, Kong, Fande  wrote:
> >
> > Hi All,
> >
> > The EPSViewer in SLEPc looks weird. I do not understand the viewer
> logic. For example there is a piece of code in SLEPc (at line 225 of
> epsview.c):
> >
> > if (!ispower) {
> >   if (!eps->ds) { ierr = EPSGetDS(eps,&eps->ds);CHKERRQ(ierr); }
> >   ierr = DSView(eps->ds,viewer);CHKERRQ(ierr);
> > }
> >
> >
> > If eps->ds is NULL, why are we going to create a new one? I just want to
> view this object. If it is NULL, you should just show me that the object
> is empty; you could print out: ds: null.
> >
> > Suppose a user develops a new EPS solver and registers it with SLEPc,
> but does not want to use DS; then DSView will show
> error messages such as:
> >
> > [0]PETSC ERROR: --- Error Message ---
> > [0]PETSC ERROR: Object is in wrong state
> > [0]PETSC ERROR: Requested matrix was not created in this DS
> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> > [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown
> > [0]PETSC ERROR: ../../../moose_test-opt on a arch-darwin-c-debug named FN604208 by kongf Thu Apr  6 14:22:14 2017
> > [0]PETSC ERROR: #1 DSViewMat() line 149 in /slepc/src/sys/classes/ds/interface/dspriv.c
> > [0]PETSC ERROR: #2 DSView_NHEP() line 47 in /slepc/src/sys/classes/ds/impls/nhep/dsnhep.c
> > [0]PETSC ERROR: #3 DSView() line 772 in /slepc/src/sys/classes/ds/interface/dsbasic.c
> > [0]PETSC ERROR: #4 EPSView() line 227 in /slepc/src/eps/interface/epsview.c
> > [0]PETSC ERROR: #5 PetscObjectView() line 106 in /petsc/src/sys/objects/destroy.c
> > [0]PETSC ERROR: #6 PetscObjectViewFromOptions() line 2808 in /petsc/src/sys/objects/options.c
> > [0]PETSC ERROR: #7 EPSSolve() line 159 in /slepc/src/eps/interface/epssolve.c
> >
> >
> >
> > Fande,
>
>


Re: [petsc-users] Using MatShell without MatMult

2017-04-07 Thread Dave May
You should also not call PetscInitialize() from within your user MatMult
function.




On Fri, 7 Apr 2017 at 13:24, Matthew Knepley  wrote:

> On Fri, Apr 7, 2017 at 5:11 AM, Francesco Migliorini <
> francescomigliorin...@gmail.com> wrote:
>
> Hello,
>
> I need to solve a linear system using GMRES without explicitly creating
> the matrix, because it is very large. So I am trying to use the MatShell strategy,
> but I am stuck. The problem is that it seems to me that inside the
> user-defined MyMatMult it is required to use MatMult, which would
> negate all the gain from using this strategy. Instead, I would need
> to access the entries of the input vector directly, multiply them by
> some parameters imported into MyMatMult with *common*, and finally compose
> the output vector without creating any matrix. First of all, is it
> possible?
>
>
> Yes.
>
>
> Secondly, if so, where is my mistake? Here's an example of my code with a
> very simple 10x10 system with the identity matrix:
>
> [...]
> call PetscInitialize(PETSC_NULL_CHARACTER,perr)
> ind(1) = 10
> call VecCreate(PETSC_COMM_WORLD,feP,perr)
> call VecSetSizes(feP,PETSC_DECIDE,ind,perr)
> call VecSetFromOptions(feP,perr)
> call VecDuplicate(feP,u1P,perr)
> do jt = 1,10
>  ind(1) = jt-1
>  fval(1) = jt
>   call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
> enddo
> call VecAssemblyBegin(feP,perr)
> call VecAssemblyEnd(feP,perr)
> ind(1) = 10
> call MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, ind,
> ind, PETSC_NULL_INTEGER, TheShellMatrix, perr)
> call MatShellSetOperation(TheShellMatrix, MATOP_MULT, MyMatMult, perr)
>
>
> Here I would probably use
>
>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatShellSetContext.html
>
> instead of a common block, but that works too.
>
>
> call KSPCreate(PETSC_COMM_WORLD, ksp, perr)
> call KSPSetType(ksp,KSPGMRES,perr)
> call KSPSetOperators(ksp,TheShellMatrix,TheShellMatrix,perr)
> call KSPSolve(ksp,feP,u1P,perr)
> call PetscFinalize(PETSC_NULL_CHARACTER,perr)
> [...]
>
> subroutine MyMatMult(TheShellMatrix,T,AT,ierr)
> [...]
> Vec T, AT
> Mat TheShellMatrix
> PetscReal   fval(1), u0(1)
> [...]
> call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
> ind(1) = 10
> call VecCreate(PETSC_COMM_WORLD,AT,ierr)
> call VecSetSizes(AT,PETSC_DECIDE,ind,ierr)
> call VecSetFromOptions(AT,ierr)
>
>
> It's not your job to create AT. We are passing it in, so just use it.
>
>
> do i =0,9
> ind(1) = i
> call VecGetValues(T,1,ind,u0(1),ierr)
> fval(1) = u0(1)
> call VecSetValues(AT,1,ind,fval(1),INSERT_VALUES,ierr)
>
>
> You can do it this way, but it's easier to use
>
>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecGetArray.html
>
> outside the loop for both vectors.
>
>Matt
>
>
> enddo
> call VecAssemblyBegin(AT,ierr)
> call VecAssemblyEnd(AT,ierr)
> return
> end subroutine MyMatMult
>
> The output of this code is something completely invented but in some way
> related to the actual solution:
> 5.0964719143762542E-002
> 0.10192943828752508
> 0.15289415743128765
> 0.20385887657505017
> 0.25482359571881275
> 0.30578831486257529
> 0.35675303400633784
> 0.40771775315010034
> 0.45868247229386289
> 0.50964719143762549
>
> Instead, if I use MatMult in MyMatMult I get the right solution. Here's
> the code
>
> subroutine MyMatMult(TheShellMatrix,T,AT,ierr)
> [...]
> Vec T, AT
> Mat TheShellMatrix, IDEN
> PetscReal   fval(1)
> [...]
> call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
> ind(1) = 10
> call MatCreate(PETSC_COMM_WORLD,IDEN,ierr)
> call MatSetSizes(IDEN,PETSC_DECIDE,PETSC_DECIDE,ind,ind,ierr)
> call MatSetUp(IDEN,ierr)
> do i =0,9
> ind(1) = i
> fval(1) = 1
> call MatSetValues(IDEN,1,ind,1,ind,fval(1),INSERT_VALUES,ierr)
> enddo
> call MatAssemblyBegin(IDEN,MAT_FINAL_ASSEMBLY,ierr)
> call MatAssemblyEnd(IDEN,MAT_FINAL_ASSEMBLY,ierr)
> call MatMult(IDEN,T,AT,ierr)
> return
> end subroutine MyMatMult
>
> Thanks in advance for any answer!
> Francesco Migliorini
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


[petsc-users] Symmetric matrix: Setting entries below diagonal

2017-04-07 Thread Florian Lindner
Hello,

two questions about symmetric (MATSBAIJ) matrices.

+ Entries set with MatSetValue below the main diagonal are ignored. Is that by 
design? I rather expected setting A_ij to
have the same effect as setting A_ji.

+ Does MatSetOption with MAT_SYMMETRIC and MAT_SYMMETRY_ETERNAL give any gain on 
MATSBAIJ matrices?

Thanks,
Florian

Test program:


#include "petscmat.h"
#include "petscviewer.h"

int main(int argc, char **argv)
{
  PetscInitialize(&argc, &argv, "", NULL);
  PetscErrorCode ierr = 0;

  Mat A;
  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
  MatSetType(A, MATSBAIJ); CHKERRQ(ierr);
  ierr = MatSetSizes(A, 4, 4, PETSC_DECIDE, PETSC_DECIDE); CHKERRQ(ierr);
  ierr = MatSetUp(A); CHKERRQ(ierr);
  ierr = MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE); CHKERRQ(ierr);
  ierr = MatSetOption(A, MAT_SYMMETRY_ETERNAL, PETSC_TRUE); CHKERRQ(ierr);

  // Stored
  ierr = MatSetValue(A, 1, 2, 21, INSERT_VALUES); CHKERRQ(ierr);
  ierr = MatSetValue(A, 1, 1, 11, INSERT_VALUES); CHKERRQ(ierr);

  // Ignored
  ierr = MatSetValue(A, 2, 1, 22, INSERT_VALUES); CHKERRQ(ierr);
  ierr = MatSetValue(A, 3, 2, 32, INSERT_VALUES); CHKERRQ(ierr);
  ierr = MatSetValue(A, 3, 1, 31, INSERT_VALUES); CHKERRQ(ierr);

  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

  PetscViewer viewer;
  ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer); CHKERRQ(ierr);
  ierr = PetscViewerSetType(viewer, PETSCVIEWERASCII); CHKERRQ(ierr);
  ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_DENSE); CHKERRQ(ierr);
  ierr = MatView(A, viewer); CHKERRQ(ierr);
  ierr = PetscViewerPopFormat(viewer); CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);

  PetscFinalize();
  return 0;
}


Re: [petsc-users] Using same DMPlex for solving two different problems

2017-04-07 Thread Matthew Knepley
On Fri, Apr 7, 2017 at 6:32 AM, Abhyankar, Shrirang G. 
wrote:

>
> On Apr 7, 2017, at 6:25 AM, Matthew Knepley  wrote:
>
> On Thu, Apr 6, 2017 at 1:04 PM, Abhyankar, Shrirang G. 
> wrote:
>
>> I am solving a time-dependent problem using DMNetwork (uses DMPlex
>> internally) to manage the network. To find the initial conditions, I need
>> to solve a nonlinear problem on the same network but with a different number
>> of dofs on the nodes and edges.
>>
>> Question: Can I reuse the same DMNetwork (DMPlex) for solving the two
>> problems? The way I am currently trying to approach this is by creating
>> two PetscSections to be used with the Plex. The first one is used for the
>> initial conditions and the second one for the time-stepping. Here's the
>> code I have:
>>
>
> This is the right way to do it, but it's easier if you call DMClone() first
> to get a new
> DM with the same Plex, and then change its default Section and solve your
> problem
> with it.
>
>
> Thanks Matt. Doesn't DMClone() make a shallow copy for Plex? So any
> section set for the cloned Plex will also be set for the original one?
>

The Plex is just a reference, but it's the implementation. The DM itself is
copied, so the Section is independent. That is the point.

  Thanks,

Matt
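[Editor's note: the sharing semantics Matt describes can be sketched in plain Python. These classes are hypothetical stand-ins, not the PETSc API: the clone shares the Plex (topology) by reference but keeps its own, independent Section.]

```python
# Hypothetical sketch (not the PETSc API): DMClone() shares the mesh
# topology by reference while each clone keeps an independent Section,
# which is why setting a Section on the clone leaves the original alone.

class Plex:
    """Stands in for the shared mesh topology."""
    def __init__(self, points):
        self.points = points

class DM:
    def __init__(self, plex, section=None):
        self.plex = plex          # shared by reference
        self.section = section    # per-DM, independent

    def clone(self):
        # Shallow-share the Plex, but do NOT share the Section.
        return DM(self.plex, section=None)

mesh = Plex(points=range(10))
dm_ts = DM(mesh, section={"dofs_per_vertex": 3})   # time-stepping layout
dm_ic = dm_ts.clone()
dm_ic.section = {"dofs_per_vertex": 2}             # initial-condition layout

assert dm_ic.plex is dm_ts.plex                    # same topology
assert dm_ic.section != dm_ts.section              # independent layouts
```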

>   Matt
>
>
>>
>>   for(i=vStart; i < vEnd; i++) {
>> /* Two variables at each vertex for the initial condition problem */
>> ierr = PetscSectionSetDof(dyn->initpflowpsection,i,2);CHKERRQ(ierr);
>>   }
>>   ierr = PetscSectionSetUp(dyn->initpflowpsection);CHKERRQ(ierr);
>>
>>   /* Get the plex dm */
>>   ierr = DMNetworkGetPlex(networkdm,&plexdm);CHKERRQ(ierr);
>>
>>   /* Get default sections associated with this plex set for time-stepping
>> */
>>   ierr = DMGetDefaultSection(plexdm,&dyn->defaultsection);CHKERRQ(ierr);
>>   ierr = DMGetDefaultGlobalSection(plexdm,&dyn->defaultglobalsection);CHKERRQ(ierr);
>>
>> /* Increase the reference count so that the section does not get destroyed
>> when a new one is set with DMSetDefaultSection */
>> ierr = PetscObjectReference((PetscObject)dyn->defaultsection);CHKERRQ(ierr);
>> ierr = PetscObjectReference((PetscObject)dyn->defaultglobalsection);CHKERRQ(ierr);
>>
>>
>>   /* Set the new section created for initial conditions */
>>   ierr = DMSetDefaultSection(plexdm,dyn->initpflowpsection);CHKERRQ(ierr);
>>   ierr = DMGetDefaultGlobalSection(plexdm,&dyn->initpflowpglobsection);CHKERRQ(ierr);
>>
>>
>>
>> Would this work or should I rather use DMPlexCreateSection to create the
>> PetscSection used for initial conditions (dyn->initpflowpsection)? Any
>> other problems that I should be aware of? Has anyone else attempted using
>> the same plex for solving two different problems?
>>
>> Thanks,
>> Shri
>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Using same DMPlex for solving two different problems

2017-04-07 Thread Abhyankar, Shrirang G.

On Apr 7, 2017, at 6:25 AM, Matthew Knepley 
> wrote:

On Thu, Apr 6, 2017 at 1:04 PM, Abhyankar, Shrirang G. 
> wrote:
I am solving a time-dependent problem using DMNetwork (uses DMPlex
internally) to manage the network. To find the initial conditions, I need
to solve a nonlinear problem on the same network but with different number
of dofs on the nodes and edges.

Question: Can I reuse the same DMNetwork (DMPlex) for solving the two
problems. The way I am trying to approach this currently is by creating
two PetscSections to be used with Plex. The first one is used for the
initial conditions and the second one is for the time-stepping. Here's the
code I have

This is the right way to do it, but it's easier if you call DMClone() first to
get a new DM with the same Plex, and then change its default Section and solve
your problem with it.

Thanks Matt. Doesn't DMClone() make a shallow copy for Plex? So any section set 
for the cloned Plex will also be set for the original one?


  Matt


  for(i=vStart; i < vEnd; i++) {
/* Two variables at each vertex for the initial condition problem */
ierr = PetscSectionSetDof(dyn->initpflowpsection,i,2);CHKERRQ(ierr);
  }
  ierr = PetscSectionSetUp(dyn->initpflowpsection);CHKERRQ(ierr);

  /* Get the plex dm */
  ierr = DMNetworkGetPlex(networkdm,&plexdm);CHKERRQ(ierr);

  /* Get default sections associated with this plex set for time-stepping
*/
  ierr = DMGetDefaultSection(plexdm,&dyn->defaultsection);CHKERRQ(ierr);
  ierr = DMGetDefaultGlobalSection(plexdm,&dyn->defaultglobalsection);CHKERRQ(ierr);

/* Increase the reference count so that the section does not get destroyed
when a new one is set with DMSetDefaultSection */
ierr = PetscObjectReference((PetscObject)dyn->defaultsection);CHKERRQ(ierr);
ierr = PetscObjectReference((PetscObject)dyn->defaultglobalsection);CHKERRQ(ierr);


  /* Set the new section created for initial conditions */
  ierr = DMSetDefaultSection(plexdm,dyn->initpflowpsection);CHKERRQ(ierr);
  ierr = DMGetDefaultGlobalSection(plexdm,&dyn->initpflowpglobsection);CHKERRQ(ierr);



Would this work or should I rather use DMPlexCreateSection to create the
PetscSection used for initial conditions (dyn->initpflowpsection)? Any
other problems that I should be aware of? Has anyone else attempted using
the same plex for solving two different problems?

Thanks,
Shri





--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener


Re: [petsc-users] Using same DMPlex for solving two different problems

2017-04-07 Thread Matthew Knepley
On Thu, Apr 6, 2017 at 1:04 PM, Abhyankar, Shrirang G. 
wrote:

> I am solving a time-dependent problem using DMNetwork (uses DMPlex
> internally) to manage the network. To find the initial conditions, I need
> to solve a nonlinear problem on the same network but with different number
> of dofs on the nodes and edges.
>
> Question: Can I reuse the same DMNetwork (DMPlex) for solving the two
> problems. The way I am trying to approach this currently is by creating
> two PetscSections to be used with Plex. The first one is used for the
> initial conditions and the second one is for the time-stepping. Here's the
> code I have
>

This is the right way to do it, but it's easier if you call DMClone() first
to get a new DM with the same Plex, and then change its default Section and
solve your problem with it.

  Matt


>
>   for(i=vStart; i < vEnd; i++) {
> /* Two variables at each vertex for the initial condition problem */
> ierr = PetscSectionSetDof(dyn->initpflowpsection,i,2);CHKERRQ(ierr);
>   }
>   ierr = PetscSectionSetUp(dyn->initpflowpsection);CHKERRQ(ierr);
>
>   /* Get the plex dm */
>   ierr = DMNetworkGetPlex(networkdm,&plexdm);CHKERRQ(ierr);
>
>   /* Get default sections associated with this plex set for time-stepping
> */
>   ierr = DMGetDefaultSection(plexdm,&dyn->defaultsection);CHKERRQ(ierr);
>   ierr = DMGetDefaultGlobalSection(plexdm,&dyn->defaultglobalsection);CHKERRQ(ierr);
>
> /* Increase the reference count so that the section does not get destroyed
> when a new one is set with DMSetDefaultSection */
> ierr = PetscObjectReference((PetscObject)dyn->defaultsection);CHKERRQ(ierr);
> ierr = PetscObjectReference((PetscObject)dyn->defaultglobalsection);CHKERRQ(ierr);
>
>
>   /* Set the new section created for initial conditions */
>   ierr = DMSetDefaultSection(plexdm,dyn->initpflowpsection);CHKERRQ(ierr);
>   ierr = DMGetDefaultGlobalSection(plexdm,&dyn->initpflowpglobsection);CHKERRQ(ierr);
>
>
>
> Would this work or should I rather use DMPlexCreateSection to create the
> PetscSection used for initial conditions (dyn->initpflowpsection)? Any
> other problems that I should be aware of? Has anyone else attempted using
> the same plex for solving two different problems?
>
> Thanks,
> Shri
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Using MatShell without MatMult

2017-04-07 Thread Matthew Knepley
On Fri, Apr 7, 2017 at 5:11 AM, Francesco Migliorini <
francescomigliorin...@gmail.com> wrote:

> Hello,
>
> I need to solve a linear system using GMRES without creating the matrix
> explicitly, because it is very large. So I am trying to use the MatShell
> strategy, but I am stuck. The problem is that it seems to me that inside
> the user-defined MyMatMult it is required to use MatMult, and this would
> honestly cancel all the gain from using this strategy. Indeed, I would
> need to access the entries of the input vector directly, multiply them by
> some parameters imported into MyMatMult with *common*, and finally compose
> the output vector without creating any matrix. First of all, is it
> possible?
>

Yes.


> Secondly, if so, where is my mistake? Here's an example of my code with a
> very simple 10x10 system with the identity matrix:
>
> [...]
> call PetscInitialize(PETSC_NULL_CHARACTER,perr)
> ind(1) = 10
> call VecCreate(PETSC_COMM_WORLD,feP,perr)
> call VecSetSizes(feP,PETSC_DECIDE,ind,perr)
> call VecSetFromOptions(feP,perr)
> call VecDuplicate(feP,u1P,perr)
> do jt = 1,10
>  ind(1) = jt-1
>  fval(1) = jt
>   call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
> enddo
> call VecAssemblyBegin(feP,perr)
> call VecAssemblyEnd(feP,perr)
> ind(1) = 10
> call MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, ind,
> ind, PETSC_NULL_INTEGER, TheShellMatrix, perr)
> call MatShellSetOperation(TheShellMatrix, MATOP_MULT, MyMatMult, perr)
>

Here I would probably use


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatShellSetContext.html

instead of a common block, but that works too.
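[Editor's note: the idea behind attaching a context to a shell matrix, as opposed to a Fortran common block, can be sketched in plain Python. This is a hypothetical illustration, not the PETSc calls themselves: the parameters travel with the operator object.]

```python
# Hypothetical sketch (plain Python, not the PETSc API): the point of
# MatShellSetContext is to attach problem parameters to the operator
# itself instead of passing them through global/common storage.

class ShellOperator:
    def __init__(self, n, context):
        self.n = n
        self.context = context      # parameters travel with the operator

    def apply(self, x):
        # Matrix-free action y = a * x, with 'a' read from the context.
        a = self.context["a"]
        return [a * xi for xi in x]

op = ShellOperator(3, context={"a": 2.0})
print(op.apply([1.0, 2.0, 3.0]))    # -> [2.0, 4.0, 6.0]
```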


> call KSPCreate(PETSC_COMM_WORLD, ksp, perr)
> call KSPSetType(ksp,KSPGMRES,perr)
> call KSPSetOperators(ksp,TheShellMatrix,TheShellMatrix,perr)
> call KSPSolve(ksp,feP,u1P,perr)
> call PetscFinalize(PETSC_NULL_CHARACTER,perr)
> [...]
>
> subroutine MyMatMult(TheShellMatrix,T,AT,ierr)
> [...]
> Vec T, AT
> Mat TheShellMatrix
> PetscReal   fval(1), u0(1)
> [...]
> call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
> ind(1) = 10
> call VecCreate(PETSC_COMM_WORLD,AT,ierr)
> call VecSetSizes(AT,PETSC_DECIDE,ind,ierr)
> call VecSetFromOptions(AT,ierr)
>

It's not your job to create AT. We are passing it in, so just use it.


> do i =0,9
> ind(1) = i
> call VecGetValues(T,1,ind,u0(1),ierr)
> fval(1) = u0(1)
> call VecSetValues(AT,1,ind,fval(1),INSERT_VALUES,ierr)
>

You can do it this way, but it's easier to use


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecGetArray.html

outside the loop for both vectors.
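[Editor's note: Matt's two corrections, fill the output vector you are handed and work on whole arrays rather than element-by-element get/set, can be sketched in plain Python. The function and parameter names here are hypothetical, not the PETSc API.]

```python
# Hypothetical sketch (plain Python): a MyMatMult-style callback should
# write into the output vector it is handed rather than allocate a new
# one, and a single pass over the local arrays replaces per-element
# VecGetValues/VecSetValues calls.

def my_mat_mult(params, x, y):
    """Matrix-free y = diag(params) @ x, writing into the caller's y."""
    assert len(x) == len(y) == len(params)
    for i in range(len(x)):         # one pass over the local arrays
        y[i] = params[i] * x[i]     # no new vector, no assembled matrix

x = [1.0, 2.0, 3.0]
y = [0.0, 0.0, 0.0]                 # provided by the solver, reused here
my_mat_mult([10.0, 10.0, 10.0], x, y)
print(y)                            # -> [10.0, 20.0, 30.0]
```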

   Matt


> enddo
> call VecAssemblyBegin(AT,ierr)
> call VecAssemblyEnd(AT,ierr)
> return
> end subroutine MyMatMult
>
> The output of this code is something completely invented but in some way
> related to the actual solution:
> 5.0964719143762542E-002
> 0.10192943828752508
> 0.15289415743128765
> 0.20385887657505017
> 0.25482359571881275
> 0.30578831486257529
> 0.35675303400633784
> 0.40771775315010034
> 0.45868247229386289
> 0.50964719143762549
>
> Instead, if I use MatMult in MyMatMult I get the right solution. Here's
> the code
>
> subroutine MyMatMult(TheShellMatrix,T,AT,ierr)
> [...]
> Vec T, AT
> Mat TheShellMatrix, IDEN
> PetscReal   fval(1)
> [...]
> call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
> ind(1) = 10
> call MatCreate(PETSC_COMM_WORLD,IDEN,ierr)
> call MatSetSizes(IDEN,PETSC_DECIDE,PETSC_DECIDE,ind,ind,ierr)
> call MatSetUp(IDEN,ierr)
> do i =0,9
> ind(1) = i
> fval(1) = 1
> call MatSetValues(IDEN,1,ind,1,ind,fval(1),INSERT_VALUES,ierr)
> enddo
> call MatAssemblyBegin(IDEN,MAT_FINAL_ASSEMBLY,ierr)
> call MatAssemblyEnd(IDEN,MAT_FINAL_ASSEMBLY,ierr)
> call MatMult(IDEN,T,AT,ierr)
> return
> end subroutine MyMatMult
>
> Thanks in advance for any answer!
> Francesco Migliorini
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] EPSViewer in SLEPc

2017-04-07 Thread Jose E. Roman
I have pushed a commit that should avoid this problem.
Jose

> On Apr 6, 2017, at 22:27, Kong, Fande wrote:
> 
> Hi All,
> 
> The EPSViewer in SLEPc looks weird. I do not understand the viewer logic. For 
> example there is a piece of code in SLEPc (at line 225 of epsview.c):
> 
> if (!ispower) {
>   if (!eps->ds) { ierr = EPSGetDS(eps,&eps->ds);CHKERRQ(ierr); }
>   ierr = DSView(eps->ds,viewer);CHKERRQ(ierr);
> }
> 
> 
> If eps->ds is NULL, why do we create a new one? I just want to view this
> object. If it is NULL, you should just show me that the object is empty,
> e.g. print out: ds: null.
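[Editor's note: the null-guarded viewing behavior being requested can be sketched in a few lines of Python. This is a hypothetical illustration, not the SLEPc code.]

```python
# Hypothetical sketch: viewing an optional sub-object should report that
# it is empty instead of creating one (or erroring) just to print it.

def view_ds(ds):
    if ds is None:
        return "ds: null"          # report emptiness, don't allocate
    return f"ds: {ds}"

assert view_ds(None) == "ds: null"
print(view_ds("DSNHEP"))           # -> ds: DSNHEP
```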
> 
> Suppose a user develops a new EPS solver and registers it with SLEPc, but
> does not want to use DS; then DSView will show error messages such as:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Object is in wrong state
> [0]PETSC ERROR: Requested matrix was not created in this DS
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown 
> [0]PETSC ERROR: ../../../moose_test-opt on a arch-darwin-c-debug named 
> FN604208 by kongf Thu Apr  6 14:22:14 2017
> [0]PETSC ERROR: #1 DSViewMat() line 149 in /slepc/src/sys/classes/ds/interface/dspriv.c
> [0]PETSC ERROR: #2 DSView_NHEP() line 47 in /slepc/src/sys/classes/ds/impls/nhep/dsnhep.c
> [0]PETSC ERROR: #3 DSView() line 772 in /slepc/src/sys/classes/ds/interface/dsbasic.c
> [0]PETSC ERROR: #4 EPSView() line 227 in /slepc/src/eps/interface/epsview.c
> [0]PETSC ERROR: #5 PetscObjectView() line 106 in /petsc/src/sys/objects/destroy.c
> [0]PETSC ERROR: #6 PetscObjectViewFromOptions() line 2808 in /petsc/src/sys/objects/options.c
> [0]PETSC ERROR: #7 EPSSolve() line 159 in /slepc/src/eps/interface/epssolve.c
> 
> 
> 
> Fande,



[petsc-users] Using MatShell without MatMult

2017-04-07 Thread Francesco Migliorini
Hello,

I need to solve a linear system using GMRES without creating the matrix
explicitly, because it is very large. So I am trying to use the MatShell
strategy, but I am stuck. The problem is that it seems to me that inside the
user-defined MyMatMult it is required to use MatMult, and this would
honestly cancel all the gain from using this strategy. Indeed, I would need
to access the entries of the input vector directly, multiply them by some
parameters imported into MyMatMult with *common*, and finally compose the
output vector without creating any matrix. First of all, is it possible?
Secondly, if so, where is my mistake? Here's an example of my code with a
very simple 10x10 system with the identity matrix:

[...]
call PetscInitialize(PETSC_NULL_CHARACTER,perr)
ind(1) = 10
call VecCreate(PETSC_COMM_WORLD,feP,perr)
call VecSetSizes(feP,PETSC_DECIDE,ind,perr)
call VecSetFromOptions(feP,perr)
call VecDuplicate(feP,u1P,perr)
do jt = 1,10
 ind(1) = jt-1
 fval(1) = jt
  call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
enddo
call VecAssemblyBegin(feP,perr)
call VecAssemblyEnd(feP,perr)
ind(1) = 10
call MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, ind, ind,
PETSC_NULL_INTEGER, TheShellMatrix, perr)
call MatShellSetOperation(TheShellMatrix, MATOP_MULT, MyMatMult, perr)
call KSPCreate(PETSC_COMM_WORLD, ksp, perr)
call KSPSetType(ksp,KSPGMRES,perr)
call KSPSetOperators(ksp,TheShellMatrix,TheShellMatrix,perr)
call KSPSolve(ksp,feP,u1P,perr)
call PetscFinalize(PETSC_NULL_CHARACTER,perr)
[...]

subroutine MyMatMult(TheShellMatrix,T,AT,ierr)
[...]
Vec T, AT
Mat TheShellMatrix
PetscReal   fval(1), u0(1)
[...]
call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
ind(1) = 10
call VecCreate(PETSC_COMM_WORLD,AT,ierr)
call VecSetSizes(AT,PETSC_DECIDE,ind,ierr)
call VecSetFromOptions(AT,ierr)
do i =0,9
ind(1) = i
call VecGetValues(T,1,ind,u0(1),ierr)
fval(1) = u0(1)
call VecSetValues(AT,1,ind,fval(1),INSERT_VALUES,ierr)
enddo
call VecAssemblyBegin(AT,ierr)
call VecAssemblyEnd(AT,ierr)
return
end subroutine MyMatMult

The output of this code is something completely invented but in some way
related to the actual solution:
5.0964719143762542E-002
0.10192943828752508
0.15289415743128765
0.20385887657505017
0.25482359571881275
0.30578831486257529
0.35675303400633784
0.40771775315010034
0.45868247229386289
0.50964719143762549

Instead, if I use MatMult in MyMatMult I get the right solution. Here's the
code

subroutine MyMatMult(TheShellMatrix,T,AT,ierr)
[...]
Vec T, AT
Mat TheShellMatrix, IDEN
PetscReal   fval(1)
[...]
call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
ind(1) = 10
call MatCreate(PETSC_COMM_WORLD,IDEN,ierr)
call MatSetSizes(IDEN,PETSC_DECIDE,PETSC_DECIDE,ind,ind,ierr)
call MatSetUp(IDEN,ierr)
do i =0,9
ind(1) = i
fval(1) = 1
call MatSetValues(IDEN,1,ind,1,ind,fval(1),INSERT_VALUES,ierr)
enddo
call MatAssemblyBegin(IDEN,MAT_FINAL_ASSEMBLY,ierr)
call MatAssemblyEnd(IDEN,MAT_FINAL_ASSEMBLY,ierr)
call MatMult(IDEN,T,AT,ierr)
return
end subroutine MyMatMult

Thanks in advance for any answer!
Francesco Migliorini