Re: [petsc-users] random SLEPc segfault using openmpi-3.0.1

2018-10-19 Thread Smith, Barry F.



> On Oct 19, 2018, at 2:08 PM, Moritz Cygorek  wrote:
> 
> Hi,
> 
> I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered 
> random segmentation faults. 

https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
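
   For reference, a typical way to run the failing case under valgrind (a sketch only; adjust the MPI launcher, process count, and options to your installation) is

      mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./ex4 -file amatrix.bin -eps_nev 18

   and then to inspect the per-process valgrind.log.* files for invalid reads or writes reported before the SEGV.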


> 
> 
> I'm actually using the SLEPc example 4 without modifications to rule out 
> errors due to coding.
> Concretely, I use the command line 
> 
> ompirun -n 28 ex4 \
> -file amatrix.bin -eps_tol 1e-6 -eps_target 0 -eps_nev 18 \
> -eps_harmonic -eps_ncv 40 -eps_max_it 10 \
> -eps_monitor -eps_view  -eps_view_values -eps_view_vectors 2>&1 |tee -a 
> $LOGFILE
> 
> 
> 
> The program runs for some time (about half a day) and then stops with the 
> error message
> 
> 
> [13]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
> 
> There is definitely enough memory, because I'm using less than 4% of the 
> available 128GB.
> 
> 
> 
> Since everything worked fine on a slower computer with a different setup and 
> from previous mailing list comments, I have the feeling that this might be 
> due to some issues with MPI.
> 
> Unfortunately, I have to share the computer with other people and can not 
> uninstall the current MPI implementation and I've also heard that there are 
> issues if you install more than one MPI implementation. 
> 
> For your information: I've configured PETSc with
> 
> ./configure  
> --with-mpi-dir=/home/applications/builds/intel_2018/openmpi-3.0.1/ 
> --with-scalar-type=complex --download-mumps --download-scalapack 
> --with-blas-lapack-dir=/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl
> 
> 
> 
> 
> I wanted to ask a few things:
> 
> - Is there a known issue with openmpi causing random segmentation faults?
> 
> - I've also tried to install everything needed by configuring PETSc with
> 
> ./configure \
> --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-scalar-type=complex \
> --download-mumps --download-scalapack --download-mpich --download-fblaslapack
> 
> Here, the problem is that performing the checks after "make" stops after the 
> check with 1 MPI process, i.e., the check using 2 MPI processes just never finishes. 
> Is that a known issue of conflict between the downloaded mpich and the 
> installed openmpi?
> Do you know a way to install mpich without conflicts with openmpi without 
> actually removing openmpi?
> 
> 
> - Some time ago I posted a question in the mailing list about how to compile 
> SLEPc/PETSc with OpenMP only instead of MPI. After some time, I was able to 
> get MPI to work on a different computer, 
> but I was never really able to use OpenMP with slepc, but it would be very 
> useful in the present situation. The programs compile but they never take 
> more than 100% CPU load as displayed by top.
> The answers to my question contained the recommendations that I should 
> configure with --download-openblas and have the OMP_NUM_THREADS variable set 
> when executing the program. I did it, but it didn't help either.
> So, my question: has someone ever managed to find a configure line that 
> disables MPI but enables the usage of OpenMP so that the slepc ex4 program 
> uses significantly more than 100% CPU usage when executing the standard 
> Krylov-Schur method?
>  
> 
> 
> 
> Regards,
> Moritz



Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Smith, Barry F.



> On Oct 19, 2018, at 9:37 AM, Klaij, Christiaan  wrote:
> 
> As far as I (mis)understand fortran, this is a data protection
> thing: all arguments are passed in from above but the subroutine
> is only allowed to change rr and ierr, not aa and xx (if you try,
> you get a compiler warning).

  The routine is not allowed to change rr (it is only allowed to change the 
values "inside" rr); that is why it needs to be intent in or inout. Otherwise 
the compiler can optimize and not pass down the value of the rr pointer to the 
subroutine, since by declaring it as out the compiler thinks your subroutine 
is going to set its value. 

Barry


> That's why I find it very odd to
> give an intent(in) to rr. But I've tried your suggestion anyway:
> both intent(in) and intent(inout) for rr do work! Can't say I
> understand though.
> 
> Below's a small example of what I was expecting. Change rr to
> intent(in) and the compiler complains.
> 
> Chris
> 
> $ cat intent.f90
> program intent
> 
>  implicit none
> 
>  real, allocatable :: aa(:), xx(:), rr(:)
>  integer :: ierr
> 
>  allocate(aa(10),xx(10),rr(10))
> 
>  aa = 1.0
>  xx = 2.0
> 
>  call matmult(aa,xx,rr,ierr)
> 
>  print *, rr(1)
>  print *, ierr
> 
>  deallocate(aa,xx,rr)
> 
>  contains
> 
>subroutine matmult(aa,xx,rr,ierr)
>  real, intent(in) :: aa(:), xx(:)
>  real, intent(out):: rr(:)
>  integer, intent(out) :: ierr
>  rr=aa*xx
>  ierr=0
>end subroutine matmult
> 
> end program intent
> $ ./a.out
>   2.00
>   0
> 
> 
> 
> 
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
> 
> MARIN news: 
> http://www.marin.nl/web/News/News-items/Seminar-Scheepsbrandstof-en-de-mondiale-zwavelnorm-2020.htm
> 
> 
> From: Smith, Barry F. 
> Sent: Friday, October 19, 2018 2:32 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] fortran INTENT with petsc object gives segfault 
> after upgrade from 3.8.4 to 3.10.2
> 
>   Hmm, the intent of all three first arguments should be in since they are 
> passed in from the routine above. Does it work if you replace
> 
>> Vec, INTENT(out) :: rr_system
> 
> with
> 
>> Vec, INTENT(in) :: rr_system
> 
> ?
> 
>Barry
> 
> 
>> On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan  wrote:
>> 
>> I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
>> surprised to find a number of segmentation faults in my test
>> cases. These turned out to be related to user-defined MatMult and
>> PCApply for shell matrices. For example:
>> 
>> SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
>> Mat, INTENT(in) :: aa_system
>> Vec, INTENT(in) :: xx_system
>> Vec, INTENT(out) :: rr_system
>> PetscErrorCode, INTENT(out) :: ierr
>> ...
>> END
>> 
>> where aa_system is the shell matrix. This code works fine with
>> 3.8.4 and older, but fails with 3.10.2 due to invalid
>> pointers (gdb backtrace shows failure of VecSetValues due to
>> invalid first argument). After replacing by:
>> 
>> SUBROUTINE mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
>> Mat :: aa_system
>> Vec :: xx_system
>> Vec :: rr_system
>> PetscErrorCode :: ierr
>> ...
>> END
>> 
>> everything's fine again. So clearly something has changed since
>> 3.8.4 that now prevents the use of INTENT in Fortran (at least
>> using intel 17.0.1 compilers). Any reason for this?
>> 
>> Chris
>> 
>> 
>> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
>> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>> 
>> MARIN news: 
>> http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm
>> 
> 



Re: [petsc-users] random SLEPc segfault using openmpi-3.0.1

2018-10-19 Thread Matthew Knepley
On Fri, Oct 19, 2018 at 3:09 PM Moritz Cygorek  wrote:

> Hi,
>
> I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered
> random segmentation faults.
>
> I'm actually using the SLEPc example 4 without modifications to rule out
> errors due to coding.
>
> Concretely, I use the command line
>
>
> ompirun -n 28 ex4 \
> -file amatrix.bin -eps_tol 1e-6 -eps_target 0 -eps_nev 18 \
> -eps_harmonic -eps_ncv 40 -eps_max_it 10 \
> -eps_monitor -eps_view  -eps_view_values -eps_view_vectors 2>&1 |tee -a
> $LOGFILE
>
>
>
> The program runs for some time (about half a day) and then stops with the
> error message
>
>
>
> [13]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
>
> There is definitely enough memory, because I'm using less than 4% of the
> available 128GB.
>
>
>
>
> Since everything worked fine on a slower computer with a different setup
> and from previous mailing list comments, I have the feeling that this might
> be due to some issues with MPI.
>
>
> Unfortunately, I have to share the computer with other people and can not
> uninstall the current MPI implementation and I've also heard that there are
> issues if you install more than one MPI implementation.
>
>
> For your information: I've configured PETSc with
>
>
> ./configure
> --with-mpi-dir=/home/applications/builds/intel_2018/openmpi-3.0.1/
> --with-scalar-type=complex --download-mumps --download-scalapack
> --with-blas-lapack-dir=/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl
>
> I wanted to ask a few things:
>
> - Is there a known issue with openmpi causing random segmentation faults?
>
OpenMPI certainly has had bugs, but this is not a constrained enough
question to pin the fault on any one of those.

> - I've also tried to install everything needed by configuring PETSc with
> ./configure \
> --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-scalar-type=complex
> \
> --download-mumps --download-scalapack --download-mpich
> --download-fblaslapack
>
> Here, the problem is that performing the checks after "make" stops after
> the check with 1 MPI process, i.e., the check using 2 MPI processes just never
> finishes.
> Is that a known issue of conflict between the downloaded mpich and the
> installed openmpi?
>

No, it likely has to do with the network configuration; that is, mpiexec is
waiting on gethostbyname() for your machine, which is failing.
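
A quick way to check this (a sketch; the exact command depends on your system) is to run

  getent hosts $(hostname)

on that machine; if it prints nothing, adding the hostname to /etc/hosts usually cures the hang.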


> Do you know a way to install mpich without conflicts with openmpi without
> actually removing openmpi?
>

The above can work as long as OpenMPI is not in default compiler paths like
/usr/lib.
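
One way to make sure the freshly built MPICH is the one actually used (a sketch, assuming the usual --download-mpich install location) is to launch with its own mpiexec:

  $PETSC_DIR/$PETSC_ARCH/bin/mpiexec -n 2 ./ex4 -file amatrix.bin -eps_nev 18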


> - Some time ago I posted a question in the mailing list about how to
> compile SLEPc/PETSc with OpenMP only instead of MPI. After some time, I was
> able to get MPI to work on a different computer,
> but I was never really able to use OpenMP with slepc, but it would be very
> useful in the present situation. The
>

Why do you think so?


> programs compile but they never take more than 100% CPU load as displayed
> by top.
>

That is perfectly understandable since the memory bandwidth can be maxed
out with fewer cores than are present. OpenMP will not help this.
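
You can measure the saturation point on your machine with the STREAMS benchmark shipped with PETSc (a sketch; run from the PETSc source directory):

  cd $PETSC_DIR && make streams NPMAX=28

The reported bandwidth typically stops scaling well before all 28 cores are in use.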


> The answers to my question contained the recommendations that I should
> configure with --download-openblas and have the OMP_NUM_THREADS variable
> set when executing the program. I did it, but it didn't help either.
>

Yep.


> So, my question: has someone ever managed to find a configure line that
> disables MPI but enables the usage of OpenMP so that the slepc ex4 program
> uses significantly more than 100% CPU usage when executing the standard
> Krylov-Schur method?
>

As I said, this is likely to be impossible for architecture reasons:
https://www.mcs.anl.gov/petsc/documentation/faq.html#computers

  Thanks,

 Matt

> Regards,
>
> Moritz
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] random SLEPc segfault using openmpi-3.0.1

2018-10-19 Thread Moritz Cygorek
Hi,


I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered random 
segmentation faults.



I'm actually using the SLEPc example 4 without modifications to rule out 
errors due to coding.

Concretely, I use the command line


ompirun -n 28 ex4 \
-file amatrix.bin -eps_tol 1e-6 -eps_target 0 -eps_nev 18 \
-eps_harmonic -eps_ncv 40 -eps_max_it 10 \
-eps_monitor -eps_view  -eps_view_values -eps_view_vectors 2>&1 |tee -a $LOGFILE




The program runs for some time (about half a day) and then stops with the error 
message



[13]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably 
memory access out of range

There is definitely enough memory, because I'm using less than 4% of the 
available 128GB.




Since everything worked fine on a slower computer with a different setup and 
from previous mailing list comments, I have the feeling that this might be due 
to some issues with MPI.


Unfortunately, I have to share the computer with other people and can not 
uninstall the current MPI implementation and I've also heard that there are 
issues if you install more than one MPI implementation.


For your information: I've configured PETSc with


./configure  --with-mpi-dir=/home/applications/builds/intel_2018/openmpi-3.0.1/ 
--with-scalar-type=complex --download-mumps --download-scalapack 
--with-blas-lapack-dir=/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl





I wanted to ask a few things:


- Is there a known issue with openmpi causing random segmentation faults?


- I've also tried to install everything needed by configuring PETSc with

./configure \
--with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-scalar-type=complex \
--download-mumps --download-scalapack --download-mpich --download-fblaslapack

Here, the problem is that performing the checks after "make" stops after the 
check with 1 MPI process, i.e., the check using 2 MPI processes just never finishes.
Is that a known issue of conflict between the downloaded mpich and the 
installed openmpi?
Do you know a way to install mpich without conflicts with openmpi without 
actually removing openmpi?


- Some time ago I posted a question in the mailing list about how to compile 
SLEPc/PETSc with OpenMP only instead of MPI. After some time, I was able to get 
MPI to work on a different computer,
but I was never really able to use OpenMP with slepc, but it would be very 
useful in the present situation. The programs compile but they never take more 
than 100% CPU load as displayed by top.
The answers to my question contained the recommendations that I should 
configure with --download-openblas and have the OMP_NUM_THREADS variable set 
when executing the program. I did it, but it didn't help either.
So, my question: has someone ever managed to find a configure line that 
disables MPI but enables the usage of OpenMP so that the slepc ex4 program uses 
significantly more than 100% CPU usage when executing the standard Krylov-Schur 
method?





Regards,

Moritz



Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Smith, Barry F.


> On Oct 19, 2018, at 7:56 AM, Zhang, Junchao  wrote:
> 
> 
> On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer  
> wrote:
> With more than 1 MPI process you mean I should use spectrum slicing to divide 
> the full problem into smaller subproblems? 
> The --with-64-bit-indices option is not a possibility for me since I configured 
> PETSc with MUMPS, which does not allow using the 64-bit version (at least 
> this was the error message when I tried to configure PETSc).
>  
> MUMPS 5.1.2 manual chapter 2.4.2 says it supports "Selective 64-bit integer 
> feature" and "full 64-bit integer version" as well. 

They used to achieve this by compiling with special Fortran flags to promote 
integers to 64 bit; this is too fragile for our taste, so we never hooked PETSc 
up with it. If they have a dependable way of using 64-bit integers, we should add 
that to our mumps.py and test it.

   Barry

> 
> On Wed, Oct 17, 2018 at 18:24, Jose E. Roman wrote:
> To use BVVECS just add the command-line option -bv_type vecs
> This causes a separate Vec to be used for each column, instead of a single long 
> Vec of size n*m. But it is considerably slower than the default.
> 
> Anyway, for such large problems you should consider using more than 1 MPI 
> process. In that case the error may disappear because the local size is 
> smaller than 768000.
> 
> Jose
> 
> 
> > On Oct 17, 2018, at 17:58, Matthew Knepley wrote:
> > 
> > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer  
> > wrote:
> > Hi all,
> > i am using slepc4py and petsc4py to solve for the smallest real eigenvalues 
> > and eigenvectors. For my test cases with a matrix A of the size 30k x 30k 
> > solving for the smallest solutions works quite well, but when i increase the 
> > dimension of my system to around A = 768000 x 768000 or 3 million x 3 
> > million and ask for the smallest real 3000 (the number is increasing with 
> > increasing system size) eigenvalues and eigenvectors i get the output (for 
> > the 768000): 
> >  The product 4001 times 768000 overflows the size of PetscInt; consider 
> > reducing the number of columns, or use BVVECS instead
> > i understand that the requested number of eigenvectors and eigenvalues is 
> > causing an overflow but i do not understand the solution of the problem 
> > which is stated in the error message. Can someone tell me what exactly 
> > BVVECS is and how i can use it? Or is there any other solution to my 
> > problem ?
> > 
> > You can also reconfigure with 64-bit integers: --with-64-bit-indices
> > 
> >   Thanks,
> > 
> > Matt
> >  
> > Thank you very much in advance,
> > Jan 
> > 
> > 
> > 
> > -- 
> > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their 
> > experiments lead.
> > -- Norbert Wiener
> > 
> > https://www.cse.buffalo.edu/~knepley/
> 



Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Matthew Knepley
On Fri, Oct 19, 2018 at 10:38 AM Klaij, Christiaan  wrote:

> As far as I (mis)understand fortran, this is a data protection
> thing: all arguments are passed in from above but the subroutine
> is only allowed to change rr and ierr, not aa and xx (if you try,
> you get a compiler warning). That's why I find it very odd to
> give an intent(in) to rr.


rr is not changed, and it cannot be. You pass in a pointer to the object
and we
fill up the object with values. We cannot change that pointer.

   Matt


> But I've tried your suggestion anyway:
> both intent(in) and intent(inout) for rr do work! Can't say I
> understand though.
>
> Below's a small example of what I was expecting. Change rr to
> intent(in) and the compiler complains.
>
> Chris
>
> $ cat intent.f90
> program intent
>
>   implicit none
>
>   real, allocatable :: aa(:), xx(:), rr(:)
>   integer :: ierr
>
>   allocate(aa(10),xx(10),rr(10))
>
>   aa = 1.0
>   xx = 2.0
>
>   call matmult(aa,xx,rr,ierr)
>
>   print *, rr(1)
>   print *, ierr
>
>   deallocate(aa,xx,rr)
>
>   contains
>
> subroutine matmult(aa,xx,rr,ierr)
>   real, intent(in) :: aa(:), xx(:)
>   real, intent(out):: rr(:)
>   integer, intent(out) :: ierr
>   rr=aa*xx
>   ierr=0
> end subroutine matmult
>
> end program intent
> $ ./a.out
>2.00
>0
>
>
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
> MARIN news:
> http://www.marin.nl/web/News/News-items/Seminar-Scheepsbrandstof-en-de-mondiale-zwavelnorm-2020.htm
>
> 
> From: Smith, Barry F. 
> Sent: Friday, October 19, 2018 2:32 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] fortran INTENT with petsc object gives segfault
> after upgrade from 3.8.4 to 3.10.2
>
>Hmm, the intent of all three first arguments should be in since they
> are passed in from the routine above. Does it work if you replace
>
> >  Vec, INTENT(out) :: rr_system
>
> with
>
> >  Vec, INTENT(in) :: rr_system
>
> ?
>
> Barry
>
>
> > On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan  wrote:
> >
> > I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
> > surprised to find a number of segmentation faults in my test
> > cases. These turned out to be related to user-defined MatMult and
> > PCApply for shell matrices. For example:
> >
> > SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
> >  Mat, INTENT(in) :: aa_system
> >  Vec, INTENT(in) :: xx_system
> >  Vec, INTENT(out) :: rr_system
> >  PetscErrorCode, INTENT(out) :: ierr
> >  ...
> > END
> >
> > where aa_system is the shell matrix. This code works fine with
> > 3.8.4 and older, but fails with 3.10.2 due to invalid
> > pointers (gdb backtrace shows failure of VecSetValues due to
> > invalid first argument). After replacing by:
> >
> > SUBROUTINE
> mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
> >  Mat :: aa_system
> >  Vec :: xx_system
> >  Vec :: rr_system
> >  PetscErrorCode :: ierr
> >  ...
> > END
> >
> > everything's fine again. So clearly something has changed since
> > 3.8.4 that now prevents the use of INTENT in Fortran (at least
> > using intel 17.0.1 compilers). Any reason for this?
> >
> > Chris
> >
> >
> > dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> > MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl |
> http://www.marin.nl
> >
> > MARIN news:
> http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm
> >
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Klaij, Christiaan
As far as I (mis)understand fortran, this is a data protection
thing: all arguments are passed in from above but the subroutine
is only allowed to change rr and ierr, not aa and xx (if you try,
you get a compiler warning). That's why I find it very odd to
give an intent(in) to rr. But I've tried your suggestion anyway:
both intent(in) and intent(inout) for rr do work! Can't say I
understand though.

Below's a small example of what I was expecting. Change rr to
intent(in) and the compiler complains.

Chris

$ cat intent.f90
program intent

  implicit none

  real, allocatable :: aa(:), xx(:), rr(:)
  integer :: ierr

  allocate(aa(10),xx(10),rr(10))

  aa = 1.0
  xx = 2.0

  call matmult(aa,xx,rr,ierr)

  print *, rr(1)
  print *, ierr

  deallocate(aa,xx,rr)

  contains

subroutine matmult(aa,xx,rr,ierr)
  real, intent(in) :: aa(:), xx(:)
  real, intent(out):: rr(:)
  integer, intent(out) :: ierr
  rr=aa*xx
  ierr=0
end subroutine matmult

end program intent
$ ./a.out
   2.00
   0




dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Seminar-Scheepsbrandstof-en-de-mondiale-zwavelnorm-2020.htm


From: Smith, Barry F. 
Sent: Friday, October 19, 2018 2:32 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] fortran INTENT with petsc object gives segfault 
after upgrade from 3.8.4 to 3.10.2

   Hmm, the intent of all three first arguments should be in since they are 
passed in from the routine above. Does it work if you replace

>  Vec, INTENT(out) :: rr_system

with

>  Vec, INTENT(in) :: rr_system

?

Barry


> On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan  wrote:
>
> I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
> surprised to find a number of segmentation faults in my test
> cases. These turned out to be related to user-defined MatMult and
> PCApply for shell matrices. For example:
>
> SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
>  Mat, INTENT(in) :: aa_system
>  Vec, INTENT(in) :: xx_system
>  Vec, INTENT(out) :: rr_system
>  PetscErrorCode, INTENT(out) :: ierr
>  ...
> END
>
> where aa_system is the shell matrix. This code works fine with
> 3.8.4 and older, but fails with 3.10.2 due to invalid
> pointers (gdb backtrace shows failure of VecSetValues due to
> invalid first argument). After replacing by:
>
> SUBROUTINE mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
>  Mat :: aa_system
>  Vec :: xx_system
>  Vec :: rr_system
>  PetscErrorCode :: ierr
>  ...
> END
>
> everything's fine again. So clearly something has changed since
> 3.8.4 that now prevents the use of INTENT in Fortran (at least
> using intel 17.0.1 compilers). Any reason for this?
>
> Chris
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
> MARIN news: 
> http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm
>



Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Zhang, Junchao

On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer <griesser@googlemail.com> wrote:
With more than 1 MPI process you mean I should use spectrum slicing to divide 
the full problem into smaller subproblems?
The --with-64-bit-indices option is not a possibility for me since I configured PETSc 
with MUMPS, which does not allow using the 64-bit version (at least this was 
the error message when I tried to configure PETSc).

MUMPS 5.1.2 manual chapter 2.4.2 says it supports "Selective 64-bit integer 
feature" and "full 64-bit integer version" as well.

On Wed, Oct 17, 2018 at 18:24, Jose E. Roman <jro...@dsic.upv.es> wrote:
To use BVVECS just add the command-line option -bv_type vecs
This causes a separate Vec to be used for each column, instead of a single long Vec 
of size n*m. But it is considerably slower than the default.

Anyway, for such large problems you should consider using more than 1 MPI 
process. In that case the error may disappear because the local size is smaller 
than 768000.

Jose


> On Oct 17, 2018, at 17:58, Matthew Knepley <knep...@gmail.com> wrote:
>
> On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer <griesser@googlemail.com> wrote:
> Hi all,
> i am using slepc4py and petsc4py to solve for the smallest real eigenvalues 
> and eigenvectors. For my test cases with a matrix A of the size 30k x 30k 
> solving for the smallest solutions works quite well, but when i increase the 
> dimension of my system to around A = 768000 x 768000 or 3 million x 3 million 
> and ask for the smallest real 3000 (the number is increasing with increasing 
> system size) eigenvalues and eigenvectors i get the output (for the 768000):
>  The product 4001 times 768000 overflows the size of PetscInt; consider 
> reducing the number of columns, or use BVVECS instead
> i understand that the requested number of eigenvectors and eigenvalues is 
> causing an overflow but i do not understand the solution of the problem which 
> is stated in the error message. Can someone tell me what exactly BVVECS is 
> and how i can use it? Or is there any other solution to my problem ?
>
> You can also reconfigure with 64-bit integers: --with-64-bit-indices
>
>   Thanks,
>
> Matt
>
> Thank you very much in advance,
> Jan
>
>
>
> --
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Matthew Knepley
On Fri, Oct 19, 2018 at 5:01 AM Jan Grießer 
wrote:

> With more than 1 MPI process you mean I should use spectrum slicing to
> divide the full problem into smaller subproblems?
> The --with-64-bit-indices option is not a possibility for me since I configured
> PETSc with MUMPS, which does not allow using the 64-bit version (at least
> this was the error message when I tried to configure PETSc).
>

I believe you can replace MUMPS with SuperLU_dist for 64-bit ints.
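
A configure line along those lines might look like this (a sketch, untested here; keep whatever other options you need):

  ./configure --with-scalar-type=complex --with-64-bit-indices --download-superlu_dist --download-fblaslapack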

   Matt


> On Wed, Oct 17, 2018 at 18:24, Jose E. Roman <jro...@dsic.upv.es> wrote:
>
>> To use BVVECS just add the command-line option -bv_type vecs
>> This causes a separate Vec to be used for each column, instead of a single
>> long Vec of size n*m. But it is considerably slower than the default.
>>
>> Anyway, for such large problems you should consider using more than 1 MPI
>> process. In that case the error may disappear because the local size is
>> smaller than 768000.
>>
>> Jose
>>
>>
>> > On Oct 17, 2018, at 17:58, Matthew Knepley wrote:
>> >
>> > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer <
>> griesser@googlemail.com> wrote:
>> > Hi all,
>> > i am using slepc4py and petsc4py to solve for the smallest real
>> eigenvalues and eigenvectors. For my test cases with a matrix A of the size
>> 30k x 30k solving for the smallest solutions works quite well, but when i
>> increase the dimension of my system to around A = 768000 x 768000 or 3
>> million x 3 million and ask for the smallest real 3000 (the number is
>> increasing with increasing system size) eigenvalues and eigenvectors i get
>> the output (for the 768000):
>> >  The product 4001 times 768000 overflows the size of PetscInt; consider
>> reducing the number of columns, or use BVVECS instead
>> > i understand that the requested number of eigenvectors and eigenvalues
>> is causing an overflow but i do not understand the solution of the problem
>> which is stated in the error message. Can someone tell me what exactly
>> BVVECS is and how i can use it? Or is there any other solution to my
>> problem ?
>> >
>> > You can also reconfigure with 64-bit integers: --with-64-bit-indices
>> >
>> >   Thanks,
>> >
>> > Matt
>> >
>> > Thank you very much in advance,
>> > Jan
>> >
>> >
>> >
>> > --
>> > What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> > -- Norbert Wiener
>> >
>> > https://www.cse.buffalo.edu/~knepley/
>>
>>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Smith, Barry F.


   Hmm, the intent of all three first arguments should be in since they are 
passed in from the routine above. Does it work if you replace

>  Vec, INTENT(out) :: rr_system

with

>  Vec, INTENT(in) :: rr_system

?

Barry


> On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan  wrote:
> 
> I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
> surprised to find a number of segmentation faults in my test
> cases. These turned out to be related to user-defined MatMult and
> PCApply for shell matrices. For example:
> 
> SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
>  Mat, INTENT(in) :: aa_system
>  Vec, INTENT(in) :: xx_system
>  Vec, INTENT(out) :: rr_system
>  PetscErrorCode, INTENT(out) :: ierr
>  ...
> END
> 
> where aa_system is the shell matrix. This code works fine with
> 3.8.4 and older, but fails with 3.10.2 due to invalid
> pointers (gdb backtrace shows failure of VecSetValues due to
> invalid first argument). After replacing by:
> 
> SUBROUTINE mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
>  Mat :: aa_system
>  Vec :: xx_system
>  Vec :: rr_system
>  PetscErrorCode :: ierr
>  ...
> END
> 
> everything's fine again. So clearly something has changed since
> 3.8.4 that now prevents the use of INTENT in Fortran (at least
> using intel 17.0.1 compilers). Any reason for this?
> 
> Chris
> 
> 
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
> 
> MARIN news: 
> http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm
> 



Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jose E. Roman
Sorry, running in parallel does not change things. I was wrong: the 
limitation is on the global size, not the local size. So what you have to 
do is use -bv_type vecs, or alternatively -bv_type mat.
Let me know how this works.
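
For example, a run combining both of the above suggestions could look like this (a sketch; the script name and option values are just the ones mentioned in this thread):

  mpiexec -n 20 python ex1.py -bv_type vecs -eps_nev 3000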

Jose


> On Oct 19, 2018, at 13:12, Jan Grießer wrote:
> 
> This I already did with mpiexec -n 20 ... and there the error occurred. I was 
> also a little bit surprised that this error occurred. Our computation nodes 
> have 20 cores with 6GB RAM. 
> Is PETSc/SLEPc storing the dense eigenvector array on one core? 
> 
> 
> On Fri, Oct 19, 2018 at 12:52, Jan Grießer wrote:
> This I already did with mpiexec -n 20 ... and there the error occurred. I was 
> also a little bit surprised that this error occurred. Our computation nodes 
> have 20 cores with 6GB RAM. 
> Is PETSc/SLEPc storing the dense eigenvector array on one core? 
> 
> On Fri, Oct 19, 2018 at 11:08, Jose E. Roman wrote:
> No, I mean to run in parallel:
> 
> $ mpiexec -n 8 python ex1.py 
> 
> Jose
> 
> 
> > On Oct 19, 2018, at 11:01, Jan Grießer wrote:
> > 
> > With more than 1 MPI process you mean I should use spectrum slicing to 
> > divide the full problem into smaller subproblems? 
> > The --with-64-bit-indices option is not a possibility for me since I configured 
> > PETSc with MUMPS, which does not allow using the 64-bit version (at least 
> > this was the error message when I tried to configure PETSc).
> > 
> > On Wed, Oct 17, 2018 at 18:24, Jose E. Roman wrote:
> > To use BVVECS just add the command-line option -bv_type vecs
> > This causes a separate Vec to be used for each column, instead of a single long 
> > Vec of size n*m. But it is considerably slower than the default.
> > 
> > Anyway, for such large problems you should consider using more than 1 MPI 
> > process. In that case the error may disappear because the local size is 
> > smaller than 768000.
> > 
> > Jose
> > 
> > 
> > > On Oct 17, 2018, at 17:58, Matthew Knepley wrote:
> > > 
> > > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer 
> > >  wrote:
> > > Hi all,
> > > i am using slepc4py and petsc4py to solve for the smallest real 
> > > eigenvalues and eigenvectors. For my test cases with a matrix A of the 
> > > size 30k x 30k solving for the smallest solutions works quite well, but 
> > > when i increase the dimension of my system to around A = 768000 x 768000 
> > > or 3 million x 3 million and ask for the smallest real 3000 (the number 
> > > is increasing with increasing system size) eigenvalues and eigenvectors i 
> > > get the output (for the 768000): 
> > >  The product 4001 times 768000 overflows the size of PetscInt; consider 
> > > reducing the number of columns, or use BVVECS instead
> > > i understand that the requested number of eigenvectors and eigenvalues is 
> > > causing an overflow but i do not understand the solution of the problem 
> > > which is stated in the error message. Can someone tell me what exactly 
> > > BVVECS is and how i can use it? Or is there any other solution to my 
> > > problem ?
> > > 
> > > You can also reconfigure with 64-bit integers: --with-64-bit-indices
> > > 
> > >   Thanks,
> > > 
> > > Matt
> > >  
> > > Thank you very much in advance,
> > > Jan 
> > > 
> > > 
> > > 
> > > -- 
> > > What most experimenters take for granted before they begin their 
> > > experiments is infinitely more interesting than any results to which 
> > > their experiments lead.
> > > -- Norbert Wiener
> > > 
> > > https://www.cse.buffalo.edu/~knepley/
> > 
> 



Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jan Grießer
This I already did with mpiexec -n 20 ... and there the error occurred. I
was also a little bit surprised that this error occurred. Our computation
nodes have 20 cores with 6GB RAM.
Is PETSc/SLEPc storing the dense eigenvector array on one core?


On Fri, Oct 19, 2018 at 12:52, Jan Grießer <griesser@googlemail.com> wrote:

> This I already did with mpiexec -n 20 ... and there the error occurred. I
> was also a little bit surprised that this error occurred. Our computation
> nodes have 20 cores with 6GB RAM.
> Is PETSc/SLEPc storing the dense eigenvector array on one core?
>
> On Fri, Oct 19, 2018 at 11:08, Jose E. Roman <jro...@dsic.upv.es> wrote:
>
>> No, I mean to run in parallel:
>>
>> $ mpiexec -n 8 python ex1.py
>>
>> Jose
>>
>>
>> > On Oct 19, 2018, at 11:01, Jan Grießer wrote:
>> >
>> > With more than 1 MPI process you mean I should use spectrum slicing to
>> > divide the full problem into smaller subproblems?
>> > The --with-64-bit-indices option is not a possibility for me since I
>> > configured PETSc with MUMPS, which does not allow using the 64-bit version
>> > (at least this was the error message when I tried to configure PETSc).
>> >
>> > On Wed, Oct 17, 2018 at 18:24, Jose E. Roman <jro...@dsic.upv.es> wrote:
>> > To use BVVECS just add the command-line option -bv_type vecs
>> > This causes a separate Vec to be used for each column, instead of a single
>> > long Vec of size n*m. But it is considerably slower than the default.
>> >
>> > Anyway, for such large problems you should consider using more than 1
>> MPI process. In that case the error may disappear because the local size is
>> smaller than 768000.
>> >
>> > Jose
>> >
>> >
>> > > On Oct 17, 2018, at 17:58, Matthew Knepley wrote:
>> > >
>> > > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer <
>> griesser@googlemail.com> wrote:
>> > > Hi all,
>> > > i am using slepc4py and petsc4py to solve for the smallest real
>> eigenvalues and eigenvectors. For my test cases with a matrix A of the size
>> 30k x 30k solving for the smallest solutions works quite well, but when i
>> increase the dimension of my system to around A = 768000 x 768000 or 3
>> million x 3 million and ask for the smallest real 3000 (the number is
>> increasing with increasing system size) eigenvalues and eigenvectors i get
>> the output (for the 768000):
>> > >  The product 4001 times 768000 overflows the size of PetscInt;
>> consider reducing the number of columns, or use BVVECS instead
>> > > i understand that the requested number of eigenvectors and
>> eigenvalues is causing an overflow but i do not understand the solution of
>> the problem which is stated in the error message. Can someone tell me what
>> exactly BVVECS is and how i can use it? Or is there any other solution to
>> my problem ?
>> > >
>> > > You can also reconfigure with 64-bit integers: --with-64-bit-indices
>> > >
>> > >   Thanks,
>> > >
>> > > Matt
>> > >
>> > > Thank you very much in advance,
>> > > Jan
>> > >
>> > >
>> > >
>> > > --
>> > > What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> > > -- Norbert Wiener
>> > >
>> > > https://www.cse.buffalo.edu/~knepley/
>> >
>>
>>


Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jose E. Roman
No, I mean to run in parallel:

$ mpiexec -n 8 python ex1.py 

Jose


> On Oct 19, 2018, at 11:01, Jan Grießer wrote:
> 
> With more than 1 MPI process you mean I should use spectrum slicing to divide 
> the full problem into smaller subproblems? 
> The --with-64-bit-indices option is not a possibility for me since I configured 
> PETSc with MUMPS, which does not allow using the 64-bit version (at least 
> this was the error message when I tried to configure PETSc).
> 
> On Wed, Oct 17, 2018 at 18:24, Jose E. Roman wrote:
> To use BVVECS just add the command-line option -bv_type vecs
> This causes a separate Vec to be used for each column, instead of a single long 
> Vec of size n*m. But it is considerably slower than the default.
> 
> Anyway, for such large problems you should consider using more than 1 MPI 
> process. In that case the error may disappear because the local size is 
> smaller than 768000.
> 
> Jose
> 
> 
> > On Oct 17, 2018, at 17:58, Matthew Knepley wrote:
> > 
> > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer  
> > wrote:
> > Hi all,
> > i am using slepc4py and petsc4py to solve for the smallest real eigenvalues 
> > and eigenvectors. For my test cases with a matrix A of the size 30k x 30k 
> > solving for the smallest solutions works quite well, but when i increase the 
> > dimension of my system to around A = 768000 x 768000 or 3 million x 3 
> > million and ask for the smallest real 3000 (the number is increasing with 
> > increasing system size) eigenvalues and eigenvectors i get the output (for 
> > the 768000): 
> >  The product 4001 times 768000 overflows the size of PetscInt; consider 
> > reducing the number of columns, or use BVVECS instead
> > i understand that the requested number of eigenvectors and eigenvalues is 
> > causing an overflow but i do not understand the solution of the problem 
> > which is stated in the error message. Can someone tell me what exactly 
> > BVVECS is and how i can use it? Or is there any other solution to my 
> > problem ?
> > 
> > You can also reconfigure with 64-bit integers: --with-64-bit-indices
> > 
> >   Thanks,
> > 
> > Matt
> >  
> > Thank you very much in advance,
> > Jan 
> > 
> > 
> > 
> > -- 
> > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their 
> > experiments lead.
> > -- Norbert Wiener
> > 
> > https://www.cse.buffalo.edu/~knepley/
> 



Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jan Grießer
With more than 1 MPI process you mean I should use spectrum slicing to
divide the full problem into smaller subproblems?
The --with-64-bit-indices option is not a possibility for me since I configured
PETSc with MUMPS, which does not allow using the 64-bit version (at least
this was the error message when I tried to configure PETSc).

On Wed, Oct 17, 2018 at 18:24, Jose E. Roman wrote:

> To use BVVECS just add the command-line option -bv_type vecs
> This causes a separate Vec to be used for each column, instead of a single
> long Vec of size n*m. But it is considerably slower than the default.
>
> Anyway, for such large problems you should consider using more than 1 MPI
> process. In that case the error may disappear because the local size is
> smaller than 768000.
>
> Jose
>
>
> > El 17 oct 2018, a las 17:58, Matthew Knepley 
> escribió:
> >
> > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer <
> griesser@googlemail.com> wrote:
> > Hi all,
> > i am using slepc4py and petsc4py to solve for the smallest real
> eigenvalues and eigenvectors. For my test cases with a matrix A of the size
> 30k x 30k solving for the smallest solutions works quite well, but when i
> increase the dimension of my system to around A = 768000 x 768000 or 3
> million x 3 million and ask for the smallest real 3000 (the number is
> increasing with increasing system size) eigenvalues and eigenvectors i get
> the output (for the 768000):
> >  The product 4001 times 768000 overflows the size of PetscInt; consider
> reducing the number of columns, or use BVVECS instead
> > i understand that the requested number of eigenvectors and eigenvalues
> is causing an overflow but i do not understand the solution of the problem
> which is stated in the error message. Can someone tell me what exactly
> BVVECS is and how i can use it? Or is there any other solution to my
> problem ?
> >
> > You can also reconfigure with 64-bit integers: --with-64-bit-indices
> >
> >   Thanks,
> >
> > Matt
> >
> > Thank you very much in advance,
> > Jan
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>
>


[petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Klaij, Christiaan
I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
surprised to find a number of segmentation faults in my test
cases. These turned out to be related to user-defined MatMult and
PCApply for shell matrices. For example:

SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
  Mat, INTENT(in) :: aa_system
  Vec, INTENT(in) :: xx_system
  Vec, INTENT(out) :: rr_system
  PetscErrorCode, INTENT(out) :: ierr
  ...
END

where aa_system is the shell matrix. This code works fine with
3.8.4 and older, but fails with 3.10.2 due to invalid
pointers (gdb backtrace shows failure of VecSetValues due to
invalid first argument). After replacing by:

SUBROUTINE mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
  Mat :: aa_system
  Vec :: xx_system
  Vec :: rr_system
  PetscErrorCode :: ierr
  ...
END

everything's fine again. So clearly something has changed since
3.8.4 that now prevents the use of INTENT in Fortran (at least
using intel 17.0.1 compilers). Any reason for this?

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm



Re: [petsc-users] KSP and matrix-free matrix (shell)

2018-10-19 Thread Florian Lindner
Thanks for your help! Using -pc_type none makes it work so far.
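
For reference, the complete run then looks something like this (a sketch; the executable name and solver choice are placeholders):

  ./a.out -ksp_type gmres -pc_type none -ksp_monitor -ksp_converged_reason

i.e. the shell matrix is used unpreconditioned, since the default ILU preconditioner needs matrix entries that a shell matrix does not expose.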

>  Where the hell is the variable "matrix"? Is it a global variable?? If yes - 
> don't do that.

Yes, it's a global matrix and, as I said, I just use it to simulate a meaningful 
matrix-vector product in my 30-line test program.

Best Thanks,
Florian

On 18.10.18 at 17:56, Florian Lindner wrote:
> Hello,
> 
> I try to use the KSP solver package together with a shell matrix:
> 
> 
>   MyContext mycontext; // an empty struct, not sure if it's needed?
>   Mat s;
>   ierr = MatCreateShell(PETSC_COMM_WORLD, size, size, PETSC_DECIDE, 
> PETSC_DECIDE, &mycontext, &s);
>   ierr = MatShellSetOperation(s, MATOP_MULT, (void(*)(void))usermult); 
> CHKERRQ(ierr);
> 
> To simulate a meaningfull usermult, I use MatMult on an actual existing 
> matrix of same dimensions:
> 
> extern PetscErrorCode usermult(Mat m ,Vec x, Vec y)
> {
>   PetscErrorCode ierr = 0;
>   ierr = MatMult(matrix, x, y);
>   printf("Call\n");
>   return ierr;
> }
> 
> Btw, what is the significance of the Mat m argument here?
> 
> matrix is created like:
> 
>   ierr = MatCreate(PETSC_COMM_WORLD, &matrix); CHKERRQ(ierr);
>   ierr = MatSetSizes(matrix, size, size, PETSC_DECIDE, PETSC_DECIDE); 
> CHKERRQ(ierr);
>   ierr = MatSetFromOptions(matrix); CHKERRQ(ierr);
>   ierr = MatSetUp(matrix); CHKERRQ(ierr);
> 
> 
>   MatMult(s, b, x);
> 
> works. The usermult function is called.
> 
> But trying to use a KSP gives an error:
> 
>   KSP solver;
>   KSPCreate(PETSC_COMM_WORLD, &solver);
>   KSPSetFromOptions(solver);
>   KSPSetOperators(solver, s, s);
> 
> 
> error:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: See 
> http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for 
> possible LU and Cholesky solvers
> [0]PETSC ERROR: Could not locate a solver package. Perhaps you must 
> ./configure with --download-<package>
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.9.3, unknown 
> [0]PETSC ERROR: ./a.out on a arch-linux2-c-opt named asaru by lindnefn Thu 
> Oct 18 17:39:52 2018
> [0]PETSC ERROR: Configure options --with-debugging=0 COPTFLAGS="-O3 
> -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" 
> FOPTFLAGS="-O3 -march=native -mtune=native" --download-petsc4py 
> --download-mpi4py --with-mpi-dir=/opt/mpich
> [0]PETSC ERROR: #1 MatGetFactor() line 4328 in 
> /home/lindnefn/software/petsc/src/mat/interface/matrix.c
> [0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in 
> /home/lindnefn/software/petsc/src/ksp/pc/impls/factor/ilu/ilu.c
> [0]PETSC ERROR: #3 PCSetUp() line 923 in 
> /home/lindnefn/software/petsc/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: #4 KSPSetUp() line 381 in 
> /home/lindnefn/software/petsc/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: #5 KSPSolve() line 612 in 
> /home/lindnefn/software/petsc/src/ksp/ksp/interface/itfunc.c
> 
> Do I need to set additional operations with MatShellSetOperation? Like 
> MATOP_ILUFACTOR? How can I know what operations to implement?
> 
> Best Thanks,
> Florian
> 
>