> On 19 Nov 2015, at 11:19, Jose E. Roman <[email protected]> wrote:
>
>> On 19 Nov 2015, at 10:49, Denis Davydov <[email protected]> wrote:
>>
>> Dear all,
>>
>> I was trying to get some scaling results for the GD eigensolver as applied
>> to density functional theory. Interestingly enough, the number of
>> self-consistent iterations (solution of the coupled eigenvalue problem and
>> Poisson equations) depends on the number of MPI cores used. For my case the
>> number of iterations ranges from 19 to 24 for 2 to 160 MPI cores. That makes
>> the whole scaling check useless, as the eigenproblem is solved a different
>> number of times.
>>
>> That is **not** the case when I use the Krylov-Schur eigensolver with zero
>> shift, which makes me believe that I am missing some settings on GD to make
>> it fully deterministic. The only non-deterministic part I am currently aware
>> of is the initial subspace for the first SC iteration, but that is the case
>> for both KS and GD. For subsequent iterations I provide previously obtained
>> eigenvectors as the initial subspace.
>>
>> Certainly there will be some round-off error due to the different
>> partitioning of DoFs for different numbers of MPI cores, but I don't expect
>> it to have such a strong influence, especially given that I don't see this
>> problem with KS.
>>
>> Below is the output of -eps_view for GD with -eps_type gd -eps_harmonic
>> -st_pc_type bjacobi -eps_gd_krylov_start -eps_target -10.0
>> I would appreciate any suggestions on how to address the issue.
>
> The block Jacobi preconditioner differs when you change the number of
> processes. This will probably make GD iterate more when you use more
> processes.
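For reference, here is a rough sketch of how the configuration above could be set up through SLEPc's C API instead of command-line options. The small diagonal test matrix is only a placeholder so the example runs on its own; the actual DFT Hamiltonian, overlap matrix and SCF loop from the original code are not shown.

    #include <slepceps.h>

    int main(int argc, char **argv)
    {
      Mat            A;
      EPS            eps;
      ST             st;
      KSP            ksp;
      PC             pc;
      PetscInt       n = 100, i, Istart, Iend;
      PetscErrorCode ierr;

      ierr = SlepcInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

      /* Stand-in operator: a diagonal matrix with eigenvalues -49.5, ..., 49.5,
         so that the target -10.0 used in the original run lies inside the
         spectrum. The real application assembles the DFT operators here. */
      ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
      ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRQ(ierr);
      ierr = MatSetFromOptions(A); CHKERRQ(ierr);
      ierr = MatSetUp(A); CHKERRQ(ierr);
      ierr = MatGetOwnershipRange(A, &Istart, &Iend); CHKERRQ(ierr);
      for (i = Istart; i < Iend; i++) {
        ierr = MatSetValue(A, i, i, (PetscReal)(i - n/2) + 0.5, INSERT_VALUES); CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

      ierr = EPSCreate(PETSC_COMM_WORLD, &eps); CHKERRQ(ierr);
      ierr = EPSSetOperators(eps, A, NULL); CHKERRQ(ierr);
      ierr = EPSSetProblemType(eps, EPS_HEP); CHKERRQ(ierr);

      /* Equivalent of: -eps_type gd -eps_harmonic -eps_gd_krylov_start -eps_target -10.0 */
      ierr = EPSSetType(eps, EPSGD); CHKERRQ(ierr);
      ierr = EPSSetExtraction(eps, EPS_HARMONIC); CHKERRQ(ierr);
      ierr = EPSGDSetKrylovStart(eps, PETSC_TRUE); CHKERRQ(ierr);
      ierr = EPSSetTarget(eps, -10.0); CHKERRQ(ierr);
      ierr = EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE); CHKERRQ(ierr);

      /* Equivalent of: -st_pc_type bjacobi (the preconditioner GD applies) */
      ierr = EPSGetST(eps, &st); CHKERRQ(ierr);
      ierr = STGetKSP(st, &ksp); CHKERRQ(ierr);
      ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
      ierr = PCSetType(pc, PCBJACOBI); CHKERRQ(ierr);

      /* On later SCF iterations, previously converged eigenvectors could be
         passed via EPSSetInitialSpace(eps, nconv, evecs) before EPSSolve(). */

      ierr = EPSSetFromOptions(eps); CHKERRQ(ierr);
      ierr = EPSSolve(eps); CHKERRQ(ierr);

      ierr = EPSDestroy(&eps); CHKERRQ(ierr);
      ierr = MatDestroy(&A); CHKERRQ(ierr);
      ierr = SlepcFinalize();
      return 0;
    }

With PCBJACOBI the preconditioner is built from the process-local diagonal blocks, so it genuinely changes with the number of MPI processes, which is consistent with Jose's remark above.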
I figured out what else was causing different solutions for different numbers of MPI cores: -eps_harmonic. As soon as I remove it from GD and JD, I get the same number of eigenproblems solved until convergence for all MPI core counts (1, 2, 4, 10, 20) and for all methods (KS/GD/JD).

Regards,
Denis.
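P.S. In terms of the sketch above, the change amounts to using the default (Ritz) extraction instead of harmonic extraction, i.e. dropping -eps_harmonic from the command line. Programmatically (a sketch, not something tested with the actual DFT code):

    /* Use the default Ritz extraction: either omit the EPSSetExtraction() call
       entirely or set it explicitly; this corresponds to running without
       -eps_harmonic. */
    ierr = EPSSetExtraction(eps, EPS_RITZ); CHKERRQ(ierr);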
