> On Oct 19, 2018, at 2:08 PM, Moritz Cygorek wrote:
>
> Hi,
>
> I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered
> random segmentation faults.
https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>
>
> I'm actually using the SLEPc example 4 without modifications…
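For reference, the FAQ entry above amounts to running the code under valgrind. For an MPI run a typical invocation looks roughly like the following; the executable and options are just the ones from this thread, and the valgrind flags can be adjusted:

mpiexec -n 28 valgrind -q --tool=memcheck --num-callers=20 --log-file=valgrind.log.%p ./ex4 -file amatrix.bin -eps_tol 1e-6

Each rank then writes its own valgrind.log.<pid>, which is usually the quickest way to pin down a random segmentation fault.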
> On Oct 19, 2018, at 9:37 AM, Klaij, Christiaan wrote:
>
> As far as I (mis)understand fortran, this is a data protection
> thing: all arguments are passed in from above but the subroutine
> is only allowed to change rr and ierr, not aa and xx (if you try,
> you get a compiler warning).
On Fri, Oct 19, 2018 at 3:09 PM Moritz Cygorek wrote:
> Hi,
>
> I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered
> random segmentation faults.
>
> I'm actually using the SLEPc example 4 without modifications to rule out
> errors due to coding.
>
> Concretely, I use the command line…
Hi,
I'm using SLEPc to diagonalize a huge sparse matrix and I've encountered random
segmentation faults.
I'm actually using the SLEPc example 4 without modifications to rule out
errors due to coding.
Concretely, I use the command line
mpirun -n 28 ex4 \
-file amatrix.bin -eps_tol 1e-6
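For context, ex4 just loads the matrix named by -file and solves the eigenproblem with whatever options are given on the command line. The actual example is a C program shipped with SLEPc; the following is only a rough Fortran sketch of the same steps, with the file name hard-coded instead of read from -file:

      program ex4_sketch
#include <slepc/finclude/slepceps.h>
      use slepceps
      implicit none
      Mat            :: A        ! sparse matrix read from disk
      EPS            :: eps      ! eigensolver context
      PetscViewer    :: viewer
      PetscErrorCode :: ierr

      call SlepcInitialize(PETSC_NULL_CHARACTER,ierr)
      ! Load the binary matrix (the real ex4 takes the name from -file)
      call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'amatrix.bin', &
     &     FILE_MODE_READ,viewer,ierr)
      call MatCreate(PETSC_COMM_WORLD,A,ierr)
      call MatSetFromOptions(A,ierr)
      call MatLoad(A,viewer,ierr)
      call PetscViewerDestroy(viewer,ierr)
      ! Solve; -eps_tol, -eps_nev, -bv_type etc. are picked up here
      call EPSCreate(PETSC_COMM_WORLD,eps,ierr)
      call EPSSetOperators(eps,A,PETSC_NULL_MAT,ierr)
      call EPSSetFromOptions(eps,ierr)
      call EPSSolve(eps,ierr)
      call EPSDestroy(eps,ierr)
      call MatDestroy(A,ierr)
      call SlepcFinalize(ierr)
      end program ex4_sketch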
> On Oct 19, 2018, at 7:56 AM, Zhang, Junchao wrote:
>
>
> On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer wrote:
> With more than 1 MPI process you mean I should use spectrum slicing and divide
> the full problem into smaller subproblems?
> The --with-64-bit-indices is not a possibility for me
On Fri, Oct 19, 2018 at 10:38 AM Klaij, Christiaan wrote:
> As far as I (mis)understand fortran, this is a data protection
> thing: all arguments are passed in from above but the subroutine
> is only allowed to change rr and ierr, not aa and xx (if you try,
> you get a compiler warning). That's why…
As far as I (mis)understand fortran, this is a data protection
thing: all arguments are passed in from above but the subroutine
is only allowed to change rr and ierr, not aa and xx (if you try,
you get a compiler warning). That's why I find it very odd to
give an intent(in) to rr. But I've tried your…
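As a side note on the language rule being described here: an intent(in) dummy argument may be read but not redefined inside the routine, and compilers reject (or at least warn about) an assignment to it. A tiny illustration, unrelated to PETSc:

      subroutine scale_by_two(a, r)
        implicit none
        real, intent(in)  :: a   ! read-only inside this routine
        real, intent(out) :: r   ! the only argument the routine may define
        r = 2.0*a
        ! a = 0.0   ! would be rejected: assignment to an intent(in) argument
      end subroutine scale_by_two

For opaque PETSc handles such as Vec the situation is slightly different: the handle itself is not redefined even when the vector's entries are changed, which is why intent(in) can still be appropriate for a result vector (see Barry's reply elsewhere in this thread).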
On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer <griesser@googlemail.com> wrote:
With more than 1 MPI process you mean I should use spectrum slicing and divide
the full problem into smaller subproblems?
The --with-64-bit-indices option is not a possibility for me since I configured
PETSc with MUMPS…
On Fri, Oct 19, 2018 at 5:01 AM Jan Grießer wrote:
> With more than 1 MPI process you mean I should use spectrum slicing and
> divide the full problem into smaller subproblems?
> The --with-64-bit-indices option is not a possibility for me since I configured
> PETSc with MUMPS, which does not allow using…
Hmm, the intent of the first three arguments should be in, since they are
passed in from the routine above. Does it work if you replace
> Vec, INTENT(out) :: rr_system
with
> Vec, INTENT(in) :: rr_system
?
Barry
> On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan wrote:
>
> I've recently…
Sorry, running in parallel does not change things. I was wrong: the
limitation is on the global size, not the local size. So what you have to
do is use -bv_type vecs or -bv_type mat.
Let me know how this works.
Jose
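In practice that just means adding the option to the run, e.g. reusing the parallel invocation and process count already mentioned in this thread (assuming the script calls setFromOptions, as the slepc4py demo does):

$ mpiexec -n 20 python ex1.py -bv_type vecs

-bv_type mat is the other alternative Jose mentions; both work around the global-size limitation described above.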
> On Oct 19, 2018, at 1:12 PM, Jan Grießer wrote:
>
> This…
This I already did with mpiexec -n 20 ... and the error occurred there. I
was also a little bit surprised that this error occurred. Our compute
nodes have 20 cores and 6 GB of RAM.
Is PETSc/SLEPc storing the dense eigenvector array on one core?
On Fri, Oct 19, 2018 at 12:52 PM Jan Grie…
No, I mean to run in parallel:
$ mpiexec -n 8 python ex1.py
Jose
> On Oct 19, 2018, at 11:01 AM, Jan Grießer wrote:
>
> With more than 1 MPI process you mean I should use spectrum slicing and divide
> the full problem into smaller subproblems?
> The --with-64-bit-indices option is not a possibility…
With more than 1 MPI process you mean I should use spectrum slicing and
divide the full problem into smaller subproblems?
The --with-64-bit-indices option is not a possibility for me since I configured
PETSc with MUMPS, which does not allow using the 64-bit version (at least
this was the error message when…
I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
surprised to find a number of segmentation faults in my test
cases. These turned out to be related to user-defined MatMult and
PCApply for shell matrices. For example:
SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
Mat
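The argument list above is cut off by the preview. Below is only a minimal sketch of what such a shell-matrix MatMult can look like, with the three PETSc handles declared intent(in) as suggested elsewhere in this thread: the handles are opaque references passed in by PETSc, and the result vector's entries are written through its handle without redefining the handle itself. The body is a placeholder, not the poster's actual code:

      SUBROUTINE systemMatMult(aa_system, xx_system, rr_system, ierr)
#include <petsc/finclude/petscmat.h>
      use petscmat
      implicit none
      Mat, INTENT(in) :: aa_system   ! the shell matrix itself
      Vec, INTENT(in) :: xx_system   ! input vector
      Vec, INTENT(in) :: rr_system   ! result vector: entries are set below,
                                     ! but the handle is not redefined
      PetscErrorCode  :: ierr

      ! Placeholder: a real implementation applies the operator, e.g. using
      ! data attached to the shell matrix. Here we simply copy the input.
      call VecCopy(xx_system, rr_system, ierr)
      END SUBROUTINE systemMatMult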
Thanks for your help! Using -pc_type none makes it work so far.
> Where the hell is the variable "matrix"? Is it a global variable?? If yes -
> don't do that.
Yes, it's a global matrix and, as I said, I just use it to simulate a meaningful
matrix-vector product in my 30-line test program.
Best…