Re: [petsc-users] random SLEPc segfault using openmpi-3.0.1

2018-10-19 Thread Smith, Barry F.
> On Oct 19, 2018, at 2:08 PM, Moritz Cygorek wrote: Hi, I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered random segmentation faults. https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > I'm actually using the SLEPc example 4 without
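Barry's link is PETSc's standard advice for intermittent segfaults: run the job under valgrind and inspect the per-rank logs. A minimal sketch of such a run, with the flags taken from the FAQ's general recommendation and the executable and options carried over from this thread only as placeholders:

    # Sketch: memcheck on a small MPI run, one log file per rank (%p = pid).
    # The FAQ also suggests disabling PETSc's own malloc wrapper so that
    # valgrind sees the raw allocations.
    mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 \
        --log-file=valgrind.log.%p ./ex4 -file amatrix.bin -eps_tol 1e-6

Open MPI itself is known to produce some valgrind noise (suppression files exist for it), so not every report in the log is necessarily a bug in the user code.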

Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Smith, Barry F.
> On Oct 19, 2018, at 9:37 AM, Klaij, Christiaan wrote: As far as I (mis)understand Fortran, this is a data protection thing: all arguments are passed in from above but the subroutine is only allowed to change rr and ierr, not aa and xx (if you try, you get a compiler warning).

Re: [petsc-users] random SLEPc segfault using openmpi-3.0.1

2018-10-19 Thread Matthew Knepley
On Fri, Oct 19, 2018 at 3:09 PM Moritz Cygorek wrote: > Hi, I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered random segmentation faults. I'm actually using the SLEPc example 4 without modifications to rule out errors due to coding. Concretely, I use the

[petsc-users] random SLEPc segfault using openmpi-3.0.1

2018-10-19 Thread Moritz Cygorek
Hi, I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered random segmentation faults. I'm actually using the SLEPc example 4 without modifications to rule out errors due to coding. Concretely, I use the command line ompirun -n 28 ex4 \ -file amatrix.bin -eps_tol 1e-6
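The option list is truncated in this archive snippet. Purely for orientation, an illustrative run of the unmodified SLEPc ex4 could look like the following; everything past -eps_tol is hypothetical and not reconstructed from the original post, and the launcher is written generically as mpiexec:

    # Sketch: load a binary matrix and compute a few eigenpairs with the
    # stock SLEPc ex4; -eps_nev, -eps_monitor and -eps_view are illustrative.
    mpiexec -n 28 ./ex4 -file amatrix.bin -eps_tol 1e-6 \
        -eps_nev 4 -eps_monitor -eps_view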

Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Smith, Barry F.
> On Oct 19, 2018, at 7:56 AM, Zhang, Junchao wrote: > On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer wrote: > With more than 1 MPI process you mean I should use spectrum slicing to divide the full problem into smaller subproblems? The --with-64-bit-indices is not a possibility for

Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Matthew Knepley
On Fri, Oct 19, 2018 at 10:38 AM Klaij, Christiaan wrote: > As far as I (mis)understand Fortran, this is a data protection thing: all arguments are passed in from above but the subroutine is only allowed to change rr and ierr, not aa and xx (if you try, you get a compiler warning). That's

Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Klaij, Christiaan
As far as I (mis)understand Fortran, this is a data protection thing: all arguments are passed in from above but the subroutine is only allowed to change rr and ierr, not aa and xx (if you try, you get a compiler warning). That's why I find it very odd to give an intent(in) to rr. But I've tried

Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Zhang, Junchao
On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer <griesser@googlemail.com> wrote: With more than 1 MPI process you mean I should use spectrum slicing to divide the full problem into smaller subproblems? The --with-64-bit-indices is not a possibility for me since I configured PETSc with

Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Matthew Knepley
On Fri, Oct 19, 2018 at 5:01 AM Jan Grießer wrote: > With more than 1 MPI process you mean I should use spectrum slicing to divide the full problem into smaller subproblems? The --with-64-bit-indices is not a possibility for me since I configured PETSc with MUMPS, which does not allow using

Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Smith, Barry F.
Hmm, the intent of all three first arguments should be "in" since they are passed in from the routine above. Does it work if you replace Vec, INTENT(out) :: rr_system with Vec, INTENT(in) :: rr_system ? Barry > On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan wrote: > I've
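For readers skimming the archive, here is a minimal sketch of the callback discussed in this thread with Barry's suggestion applied, i.e. INTENT(in) on the Mat and both Vec handles. The module/include usage and the body are assumptions in the style of PETSc 3.10 Fortran, not the poster's actual code:

    ! Sketch only: the Vec handle rr_system is passed in unchanged, so it can
    ! be INTENT(in) even though the routine writes into the vector's data.
      SUBROUTINE systemMatMult(aa_system, xx_system, rr_system, ierr)
    #include <petsc/finclude/petscmat.h>
        use petscmat
        implicit none
        Mat, INTENT(in)             :: aa_system
        Vec, INTENT(in)             :: xx_system
        Vec, INTENT(in)             :: rr_system
        PetscErrorCode, INTENT(out) :: ierr
        ! ... fill rr_system with the user's matrix-vector product here,
        ! e.g. via VecGetArrayF90/VecRestoreArrayF90 on xx_system and rr_system ...
        ierr = 0
      END SUBROUTINE systemMatMult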

Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jose E. Roman
Sorry, running in parallel does not change things. I was wrong, the limitation is on the global size and not the local size. So what you have to do is use -bv_type vecs or -bv_type mat. Let me know how this works. Jose > On 19 Oct 2018, at 13:12, Jan Grießer wrote:
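As a concrete illustration of Jose's suggestion: the default BV keeps the whole basis in one contiguous object whose global length can overflow PetscInt for problems of this size (my gloss of the limitation mentioned above), and either alternative type splits that storage. The script name and process count are simply carried over from earlier messages in the thread:

    # Sketch: select a BV implementation that avoids the single contiguous
    # allocation, per Jose's advice, and run in parallel as suggested earlier.
    mpiexec -n 20 python ex1.py -bv_type vecs
    mpiexec -n 20 python ex1.py -bv_type mat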

Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jan Grießer
This I already did with mpiexec -n 20 ... and there the error occurred. I was also a little bit surprised that this error occurred. Our computation nodes have 20 cores with 6GB RAM. Is PETSc/SLEPc storing the dense eigenvectors on one core? On Fri, Oct 19, 2018 at 12:52, Jan

Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jose E. Roman
No, I mean to run in parallel: $ mpiexec -n 8 python ex1.py Jose > On 19 Oct 2018, at 11:01, Jan Grießer wrote: > With more than 1 MPI process you mean I should use spectrum slicing to divide the full problem into smaller subproblems? The --with-64-bit-indices is not a

Re: [petsc-users] PetscInt overflow

2018-10-19 Thread Jan Grießer
With more than 1 MPI process you mean I should use spectrum slicing to divide the full problem into smaller subproblems? The --with-64-bit-indices is not a possibility for me since I configured PETSc with MUMPS, which does not allow using the 64-bit version (at least this was the error message when

[petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Klaij, Christiaan
I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was surprised to find a number of segmentation faults in my test cases. These turned out to be related to user-defined MatMult and PCApply for shell matrices. For example: SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)

Re: [petsc-users] KSP and matrix-free matrix (shell)

2018-10-19 Thread Florian Lindner
Thanks for your help! Using -pc_type none makes it work so far. > Where the hell is the variable "matrix"? Is it a global variable?? If yes - don't do that. Yes, it's a global matrix and, as I said, I just use it to simulate a meaningful matrix-vector product in my 30-line test program.
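For reference, the shape of the run this thread converges on: a matrix-free operator handed to KSP with no preconditioner. Only -pc_type none comes from the message above; the program name and the monitoring flags are hypothetical additions:

    # Sketch: solve with an unpreconditioned Krylov method and watch the
    # residuals; useful when the operator is only available as a shell MatMult.
    ./shell_ksp_test -ksp_type gmres -pc_type none \
        -ksp_monitor_true_residual -ksp_converged_reason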