Re: [petsc-users] Communication during MatAssemblyEnd

2019-07-03 Thread Ale Foggia via petsc-users
you are running with 64-core KNL nodes, there are 4*64 = 256 logical processors available, because each core supports four hardware threads. Asking for logical processors 0-63 might not actually be using all of the cores, as, depending on the BIOS numbering (which can be arbitrary), s
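
A minimal sketch (not from the original thread; Linux-specific, sched_getcpu() assumed available) of how one might check which logical processor each MPI rank actually lands on:

  /* Print the logical CPU id each MPI rank is currently running on,
   * to check process placement on a 64-core / 256-hardware-thread KNL node. */
  #define _GNU_SOURCE
  #include <sched.h>   /* sched_getcpu() */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
    int  rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);
    printf("rank %d on %s runs on logical CPU %d\n", rank, host, sched_getcpu());
    MPI_Finalize();
    return 0;
  }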

Re: [petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos

2018-10-25 Thread Ale Foggia
:48, Jose E. Roman () wrote: > Everything seems correct. I don't know, maybe your problem is very sensitive? Is the eigenvalue tiny? I would still try with Krylov-Schur. > Jose > > On 24 Oct 2018, at 14:59, Ale Foggia wrote: > > Th

Re: [petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos

2018-10-25 Thread Ale Foggia
On Tue., 23 Oct. 2018 at 13:53, Matthew Knepley () wrote: > On Tue, Oct 23, 2018 at 6:24 AM Ale Foggia wrote: >> Hello, I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest real eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (E

Re: [petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos

2018-10-24 Thread Ale Foggia
ault options is too sensitive (by default it does not reorthogonalize). Suggest using Krylov-Schur or Lanczos with full reorthogonalization (EPSLanczosSetReorthog). Also, send the output of -eps_view to see if there is anything abnormal. > Jose > > On 24 Oct
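
As a sketch of that suggestion (assuming an already created EPS object `eps`; surrounding setup and error handling abbreviated), full reorthogonalization can be selected like this:

  /* Switch the Lanczos eigensolver to full reorthogonalization. */
  ierr = EPSSetType(eps, EPSLANCZOS);CHKERRQ(ierr);
  ierr = EPSLanczosSetReorthog(eps, EPS_LANCZOS_REORTHOG_FULL);CHKERRQ(ierr);
  /* Equivalent run-time options:
   *   -eps_type lanczos -eps_lanczos_reorthog full -eps_view   */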

Re: [petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos

2018-10-24 Thread Ale Foggia
23 Oct 2018, at 15:46, Ale Foggia wrote: > On Tue., 23 Oct. 2018 at 15:33, Jose E. Roman () wrote: >> On 23 Oct 2018, at 15:17, Ale Foggia wrote: >>> Hello Jose, thanks for your answ

Re: [petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos

2018-10-23 Thread Ale Foggia
On Tue., 23 Oct. 2018 at 15:33, Jose E. Roman () wrote: >> On 23 Oct 2018, at 15:17, Ale Foggia wrote: >> Hello Jose, thanks for your answer. >> On Tue., 23 Oct. 2018 at 12:59, Jose E. Roman () wrote

Re: [petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos

2018-10-23 Thread Ale Foggia
. Is there another, easier (or with fewer computations) way to get this? > Jose > > On 23 Oct 2018, at 12:13, Ale Foggia wrote: >> Hello, I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest real eigenvalue (EPS_SMALLEST_

[petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos

2018-10-23 Thread Ale Foggia
Hello, I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest real eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). Those are the only options I set for the solver. My aim is to be able to predict/estimate the time-to-solution. To do so, I was doing a scaling of the
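
For context, a minimal sketch of the setup described above (the matrix A is assumed to be created and assembled elsewhere; the function name is illustrative):

  #include <slepceps.h>

  /* Compute the smallest real eigenvalue of a Hermitian matrix A with
   * the Lanczos solver, as described above. */
  PetscErrorCode solve_smallest_real(Mat A)
  {
    EPS            eps;
    PetscErrorCode ierr;

    ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
    ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);
    ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);
    ierr = EPSSetType(eps, EPSLANCZOS);CHKERRQ(ierr);
    ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr);
    ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);  /* allow -eps_* command-line overrides */
    ierr = EPSSolve(eps);CHKERRQ(ierr);
    ierr = EPSDestroy(&eps);CHKERRQ(ierr);
    return 0;
  }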

Re: [petsc-users] PETSc/SLEPc: Memory consumption, particularly during solver initialization/solve

2018-10-10 Thread Ale Foggia
of the memory I need for a particular size of the problem. Thank you very much for all your answers and suggestions. On Fri., 5 Oct. 2018 at 9:38, Jose E. Roman () wrote: > On 4 Oct 2018, at 19:54, Ale Foggia wrote: > > Jose: - By eac
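
One simple way to measure this (a sketch, not the code from the thread; the helper name is made up) is to query the resident set size around the stages of interest:

  #include <petscsys.h>

  /* Print the resident set size of the first process at a given stage. */
  static PetscErrorCode report_memory(const char *stage)
  {
    PetscLogDouble rss;
    PetscErrorCode ierr;

    ierr = PetscMemoryGetCurrentUsage(&rss);CHKERRQ(ierr);  /* bytes */
    ierr = PetscPrintf(PETSC_COMM_WORLD, "[%s] RSS: %.1f MB\n", stage, rss/1048576.0);CHKERRQ(ierr);
    return 0;
  }

Calling it before and after EPSSetUp() and EPSSolve() separates the memory used by solver setup from the memory used by the solve itself.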

Re: [petsc-users] PETSc/SLEPc: Memory consumption, particularly during solver initialization/solve

2018-10-04 Thread Ale Foggia
> - I would suggest using the default solver Krylov-Schur - it will do Lanczos with implicit restart, which will give faster convergence than the EPSLANCZOS solver. > Jose > > On 4 Oct 2018, at 12:49, Matthew Knepley wrote: > > On Thu, Oct 4, 201
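
As a sketch (again assuming an existing EPS object `eps`): Krylov-Schur is SLEPc's default EPS type, so simply not calling EPSSetType() selects it, or it can be requested explicitly:

  ierr = EPSSetType(eps, EPSKRYLOVSCHUR);CHKERRQ(ierr);
  /* or at run time: -eps_type krylovschur */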

[petsc-users] PETSc/SLEPc: Memory consumption, particularly during solver initialization/solve

2018-10-04 Thread Ale Foggia
Hello all, I'm using SLEPc 3.9.2 (and PETSc 3.9.3) to get the EPS_SMALLEST_REAL of a matrix with the following characteristics: * type: real, Hermitian, sparse * linear size: 2333606220 * distributed in 2048 processes (64 nodes, 32 procs per node) My code first preallocates the necessary memory
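
For illustration, a sketch of that kind of preallocation for a distributed sparse (MPIAIJ) matrix; the per-row nonzero estimates are placeholders, and a global dimension of ~2.3e9 needs PETSc configured with --with-64-bit-indices so that PetscInt is 64-bit:

  #include <petscmat.h>

  /* Create an N x N distributed sparse matrix and preallocate it.
   * d_nz / o_nz: estimated nonzeros per row in the diagonal / off-diagonal blocks. */
  PetscErrorCode create_matrix(PetscInt N, PetscInt d_nz, PetscInt o_nz, Mat *A)
  {
    PetscErrorCode ierr;

    ierr = MatCreate(PETSC_COMM_WORLD, A);CHKERRQ(ierr);
    ierr = MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
    ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
    ierr = MatMPIAIJSetPreallocation(*A, d_nz, NULL, o_nz, NULL);CHKERRQ(ierr);
    return 0;
  }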