Just a quick amendment to my previous statement: the problem arises both in sequential and in parallel. The MWE as provided covers the parallel case, but imposing m_local=m makes it go sequential. I've also sketched a quick sanity check on the matrix-free operator at the end of this message.
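To spell out the sequential case (a rough sketch of the change only, with the assumption that n_local is forced to n as well and that the script is launched on a single process; the file name is just a placeholder):

# sequential variant: the single rank owns all rows and columns
m_local=m
n_local=n
# then run the otherwise unchanged MWE on one process, e.g.
#   mpirun -n 1 python3 MWE.py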
Cheers,

Quentin CHEVALIER – IA parcours recherche

LadHyX - Ecole polytechnique
__________

On Thu, 5 May 2022 at 10:34, Quentin Chevalier <[email protected]> wrote:
>
> Hello all, and thanks for your great work in bringing this very helpful
> package to the community!
>
> That said, I wouldn't need this mailing list if everything were running
> smoothly. I have a rather involved eigenvalue problem that I've been
> working on, and it keeps throwing a mysterious error:
>
>> petsc4py.PETSc.Error: error code 77
>>
>> [1] EPSSolve() at /usr/local/slepc/src/eps/interface/epssolve.c:149
>> [1] EPSSolve_KrylovSchur_Default() at
>> /usr/local/slepc/src/eps/impls/krylov/krylovschur/krylovschur.c:289
>> [1] EPSGetStartVector() at /usr/local/slepc/src/eps/interface/epssolve.c:824
>> [1] Petsc has generated inconsistent data
>> [1] Initial vector is zero or belongs to the deflation space
>
> This problem occurs in parallel with two processes, using the petsc4py
> library inside the dolfinx/dolfinx docker container. I have PETSc version
> 3.16.0 in complex mode, Python 3, and I'm running all of that on an
> OpenSUSE Leap 15.2 machine (though I think the docker container runs
> Ubuntu).
>
> I wrote a minimal working example below, but I'm afraid the process for
> building the matrices is involved, so I decided to share the matrices
> directly instead: https://seminaris.polytechnique.fr/share/s/ryJ6L2nR4ketDwP
>
> They are in binary format, but inside the container I hope someone can
> reproduce my issue. A word on the structure and intent behind these
> matrices:
>
> QE is a rectangular diagonal real matrix; think of it as some sort of
> preconditioner.
> L is the least dense of the three and the only complex one; to avoid
> inverting it explicitly, I use two KSPs to perform its solves on the fly.
> Mf is a square diagonal real matrix; it is the right-hand side of the
> generalised Hermitian eigenvalue problem (I'm solving
> QE^H*L^-H*L^-1*QE*x=lambda*Mf*x).
>
> The full MWE is below:
>
> from petsc4py import PETSc as pet
> from slepc4py import SLEPc as slp
> from mpi4py.MPI import COMM_WORLD
>
> # Local and global sizes
> m_local=COMM_WORLD.rank*(490363-489780)+489780
> n_local=COMM_WORLD.rank*(452259-451743)+451743
> m=980143
> n=904002
>
> QE=pet.Mat().createAIJ([[m_local,m],[n_local,n]])
> L=pet.Mat().createAIJ([[m_local,m],[m_local,m]])
> Mf=pet.Mat().createAIJ([[n_local,n],[n_local,n]])
>
> # Load the matrices shared at the link above
> viewerQE = pet.Viewer().createMPIIO("QE.dat", 'r', COMM_WORLD)
> QE.load(viewerQE)
> viewerL = pet.Viewer().createMPIIO("L.dat", 'r', COMM_WORLD)
> L.load(viewerL)
> viewerMf = pet.Viewer().createMPIIO("Mf.dat", 'r', COMM_WORLD)
> Mf.load(viewerMf)
>
> QE.assemble()
> L.assemble()
>
> KSPs = []
> # Useful solvers (here to put options for computing a smart R)
> for Mat in [L,L.hermitianTranspose()]:
>     KSP = pet.KSP().create()
>     KSP.setOperators(Mat)
>     KSP.setFromOptions()
>     KSPs.append(KSP)
>
> class LHS_class:
>     def mult(self,A,x,y):
>         w=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
>         z=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
>         QE.mult(x,w)
>         KSPs[0].solve(w,z)
>         KSPs[1].solve(z,w)
>         QE.multTranspose(w,y)
>
> # Matrix-free operator
> LHS=pet.Mat()
> LHS.create(comm=COMM_WORLD)
> LHS.setSizes([[n_local,n],[n_local,n]])
> LHS.setType(pet.Mat.Type.PYTHON)
> LHS.setPythonContext(LHS_class())
> LHS.setUp()
>
> # Eigensolver
> EPS = slp.EPS(); EPS.create()
> EPS.setOperators(LHS,Mf) # Solve QE^H*L^-H*L^-1*QE*x=lambda*Mf*x
> EPS.setProblemType(slp.EPS.ProblemType.GHEP) # A is Hermitian (by construction), B is positive semi-definite
> EPS.setWhichEigenpairs(EPS.Which.LARGEST_MAGNITUDE) # Find the largest eigenvalues
> EPS.setFromOptions()
> EPS.solve()
>
> Quentin CHEVALIER – IA parcours recherche
>
> LadHyX - Ecole polytechnique
>
> __________
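PS: in case it helps with the diagnosis, a quick check I would add right after LHS.setUp() in the MWE above (a sketch only, reusing the variable names of the script) is to apply the matrix-free operator to a random vector and confirm that the result is non-zero, since the error complains about a zero initial vector:

# sanity check on the shell operator: a random input should give a non-zero output
x=pet.Vec().createMPI([n_local,n],comm=COMM_WORLD)
y=x.duplicate()
x.setRandom()
LHS.mult(x,y)
print(COMM_WORLD.rank, x.norm(), y.norm())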
