Please respond to the list.

Why would you need different matrices for sequential and parallel runs? PETSc 
takes care of loading matrices from binary files in parallel codes. If I use 
the 'seq' matrices for both sequential and parallel runs, it works: I get the 
eigenvalue 128662.745858.
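
For instance, a minimal sketch (assuming the matrices were written with a 
binary viewer, as in your MWE): create the matrix without fixing its local 
sizes and let PETSc decide the distribution at load time, on any number of 
processes:

from petsc4py import PETSc as pet
from mpi4py.MPI import COMM_WORLD

viewer = pet.Viewer().createBinary("QE.dat", 'r', comm=COMM_WORLD)
QE = pet.Mat().create(comm=COMM_WORLD)  # local sizes left to PETSC_DECIDE
QE.load(viewer)  # the same file loads in serial or parallel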

Take into account that in parallel you need to use a parallel linear solver, 
e.g., configure PETSc with MUMPS. See FAQ #10: 
https://slepc.upv.es/documentation/faq.htm#faq10
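
For example, a sketch of the options one could set (assuming PETSc was 
configured with MUMPS, e.g. via --download-mumps --download-scalapack):

from petsc4py import PETSc as pet

opts = pet.Options()
opts["ksp_type"] = "preonly"                 # only apply the preconditioner
opts["pc_type"] = "lu"                       # direct LU factorization
opts["pc_factor_mat_solver_type"] = "mumps"  # parallel direct solver
# The KSP.setFromOptions() calls in the MWE will then pick these up.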

Jose

> On 5 May 2022, at 14:57, Quentin Chevalier 
> <[email protected]> wrote:
> 
> Thank you for your answer, Jose.
> 
> It would appear my problem is more complicated than it first seemed. After 
> slightly modifying the original MWE so that it runs in either serial or 
> parallel, computing the matrices either sequentially or in parallel 
> (apologies, it's a large file), and including a slightly modified version of 
> your test case, I obtain two different results in serial and parallel 
> (commands: python3 MWE.py and mpirun -n 2 python3 MWE.py).
> 
> Serial gives me a finite norm (around 28), while parallel gives me two norms 
> of 0 and an error code 77.
> 
> This is still a problem, as I would really like my code to run in parallel. 
> I saved my matrices using:
> viewerQE = pet.Viewer().createMPIIO("QE.dat", 'w', COMM_WORLD)
> QE.view(viewerQE)
> viewerL = pet.Viewer().createMPIIO("L.dat", 'w', COMM_WORLD)
> L.view(viewerL)
> viewerMq = pet.Viewer().createMPIIO("Mq.dat", 'w', COMM_WORLD)
> Mq.view(viewerMq)
> viewerMf = pet.Viewer().createMPIIO("Mf.dat", 'w', COMM_WORLD)
> Mf.view(viewerMf)
> 
> New MWE (also attached for convenience):
> from petsc4py import PETSc as pet
> from slepc4py import SLEPc as slp
> from mpi4py.MPI import COMM_WORLD
> 
> dir="./mats/"
> 
> # Global sizes
> m=980143
> n=904002
> if COMM_WORLD.size>1:
>     m_local=COMM_WORLD.rank*(490363-489780)+489780
>     n_local=COMM_WORLD.rank*(452259-451743)+451743
>     dir+="par/"
> else:
>     m_local=m
>     n_local=n
>     dir+="seq/"
> 
> QE=pet.Mat().createAIJ([[m_local,m],[n_local,n]])
> L=pet.Mat().createAIJ([[m_local,m],[m_local,m]])
> Mf=pet.Mat().createAIJ([[n_local,n],[n_local,n]])
> 
> viewerQE = pet.Viewer().createMPIIO(dir+"QE.dat", 'r', COMM_WORLD)
> QE.load(viewerQE)
> viewerL = pet.Viewer().createMPIIO(dir+"L.dat", 'r', COMM_WORLD)
> L.load(viewerL)
> viewerMf = pet.Viewer().createMPIIO(dir+"Mf.dat", 'r', COMM_WORLD)
> Mf.load(viewerMf)
> 
> QE.assemble()
> L.assemble()
> Mf.assemble()
> 
> KSPs = []
> # Useful solvers (here to put options for computing a smart R)
> for Mat in [L,L.hermitianTranspose()]:
>     KSP = pet.KSP().create()
>     KSP.setOperators(Mat)
>     KSP.setFromOptions()
>     KSPs.append(KSP)
>     
> class LHS_class:            
>     def mult(self,A,x,y):
>         w=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
>         z=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
>         QE.mult(x,w)
>         KSPs[0].solve(w,z)
>         KSPs[1].solve(z,w)
>         QE.multTranspose(w,y)
> 
> # Matrix free operator
> LHS=pet.Mat()
> LHS.create(comm=COMM_WORLD)
> LHS.setSizes([[n_local,n],[n_local,n]])
> LHS.setType(pet.Mat.Type.PYTHON)
> LHS.setPythonContext(LHS_class())
> LHS.setUp()
> 
> x, y = LHS.createVecs()
> x.set(1)
> LHS.mult(x,y)
> print(y.norm())
> 
> # Eigensolver
> EPS = slp.EPS(); EPS.create()
> EPS.setOperators(LHS,Mf) # Solve QE^T*L^-H*L^-1*QE*f=sigma^2*Mf*f (cheaper than a proper SVD)
> EPS.setProblemType(slp.EPS.ProblemType.GHEP) # Specify that A is Hermitian (by construction), but B is semi-positive
> EPS.setWhichEigenpairs(EPS.Which.LARGEST_MAGNITUDE) # Find largest eigenvalues
> EPS.setFromOptions()
> EPS.solve()
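> 
> One could then check the result by printing the converged eigenvalues (a 
> sketch, not part of the MWE as attached):
> 
> nconv = EPS.getConverged()
> for i in range(nconv):  # loop over converged eigenpairs
>     print(EPS.getEigenvalue(i))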
> 
> 
> Quentin CHEVALIER – IA parcours recherche
> LadHyX - Ecole polytechnique
> __________
> 
> 
> On Thu, 5 May 2022 at 12:05, Jose E. Roman <[email protected]> wrote:
> Your operator is not well formed. If you do this:
> 
> x, y = LHS.createVecs()
> x.set(1)
> LHS.mult(x,y)
> y.view()
> 
> you will see that the output is all zeros. That is why SLEPc complains that 
> "Initial vector is zero or belongs to the deflation space".
> 
> Jose
> 
> 
> > On 5 May 2022, at 10:46, Quentin Chevalier 
> > <[email protected]> wrote:
> > 
> > Just a quick amendment to the previous statement; the problem arises in
> > both sequential and parallel. The MWE as-is is provided for the parallel
> > case, but imposing m_local=m makes it run sequentially.
> > 
> > Cheers,
> > 
> > Quentin CHEVALIER – IA parcours recherche
> > LadHyX - Ecole polytechnique
> > __________
> > 
> > 
> > 
> > On Thu, 5 May 2022 at 10:34, Quentin Chevalier
> > <[email protected]> wrote:
> >> 
> >> Hello all, and thanks for your great work in bringing this very helpful 
> >> package to the community!
> >> 
> >> That said, I wouldn't need this mailing list if everything were running 
> >> smoothly. I have a rather involved eigenvalue problem I've been working 
> >> on that's been throwing a mysterious error:
> >>> 
> >>> petsc4py.PETSc.Error: error code 77
> >>> 
> >>> [1] EPSSolve() at /usr/local/slepc/src/eps/interface/epssolve.c:149
> >>> [1] EPSSolve_KrylovSchur_Default() at /usr/local/slepc/src/eps/impls/krylov/krylovschur/krylovschur.c:289
> >>> [1] EPSGetStartVector() at /usr/local/slepc/src/eps/interface/epssolve.c:824
> >>> [1] Petsc has generated inconsistent data
> >>> [1] Initial vector is zero or belongs to the deflation space
> >> 
> >> 
> >> This problem occurs in parallel with two processors, using the petsc4py 
> >> library inside the dolfinx/dolfinx Docker container. I have PETSc version 
> >> 3.16.0 in complex mode and Python 3, and I'm running all of that on an 
> >> OpenSUSE Leap 15.2 machine (though I think the Docker container runs 
> >> Ubuntu).
> >> 
> >> I wrote a minimal working example below, but I'm afraid the process for 
> >> building the matrices is involved, so I decided to share the matrices 
> >> directly instead: 
> >> https://seminaris.polytechnique.fr/share/s/ryJ6L2nR4ketDwP
> >> 
> >> They are in binary format, but inside the container I hope someone could 
> >> reproduce my issue. A word on the structure and intent behind these 
> >> matrices:
> >> 
> >> - QE is a diagonal rectangular real matrix. Think of it as some sort of 
> >> preconditioner.
> >> - L is the least dense of them all and the only one that is complex; in 
> >> order to avoid inverting it, I'm using two KSPs to solve problems on the 
> >> fly.
> >> - Mf is a diagonal square real matrix; it's on the right-hand side of the 
> >> generalised Hermitian eigenvalue problem (I'm solving 
> >> QE^H*L^-H*L^-1*QE*x=lambda*Mf*x).
> >> 
> >> Full MWE is below:
> >> 
> >> from petsc4py import PETSc as pet
> >> from slepc4py import SLEPc as slp
> >> from mpi4py.MPI import COMM_WORLD
> >> 
> >> # Global sizes
> >> m_local=COMM_WORLD.rank*(490363-489780)+489780
> >> n_local=COMM_WORLD.rank*(452259-451743)+451743
> >> m=980143
> >> n=904002
> >> 
> >> QE=pet.Mat().createAIJ([[m_local,m],[n_local,n]])
> >> L=pet.Mat().createAIJ([[m_local,m],[m_local,m]])
> >> Mf=pet.Mat().createAIJ([[n_local,n],[n_local,n]])
> >> 
> >> viewerQE = pet.Viewer().createMPIIO("QE.dat", 'r', COMM_WORLD)
> >> QE.load(viewerQE)
> >> viewerL = pet.Viewer().createMPIIO("L.dat", 'r', COMM_WORLD)
> >> L.load(viewerL)
> >> viewerMf = pet.Viewer().createMPIIO("Mf.dat", 'r', COMM_WORLD)
> >> Mf.load(viewerMf)
> >> 
> >> QE.assemble()
> >> L.assemble()
> >> 
> >> KSPs = []
> >> # Useful solvers (here to put options for computing a smart R)
> >> for Mat in [L,L.hermitianTranspose()]:
> >>     KSP = pet.KSP().create()
> >>     KSP.setOperators(Mat)
> >>     KSP.setFromOptions()
> >>     KSPs.append(KSP)
> >> 
> >> class LHS_class:
> >>     def mult(self,A,x,y):
> >>         w=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
> >>         z=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
> >>         QE.mult(x,w)
> >>         KSPs[0].solve(w,z)
> >>         KSPs[1].solve(z,w)
> >>         QE.multTranspose(w,y)
> >> 
> >> # Matrix free operator
> >> LHS=pet.Mat()
> >> LHS.create(comm=COMM_WORLD)
> >> LHS.setSizes([[n_local,n],[n_local,n]])
> >> LHS.setType(pet.Mat.Type.PYTHON)
> >> LHS.setPythonContext(LHS_class())
> >> LHS.setUp()
> >> 
> >> # Eigensolver
> >> EPS = slp.EPS(); EPS.create()
> >> EPS.setOperators(LHS,Mf) # Solve QE^T*L^-H*L^-1*QE*x=lambda*Mf*x
> >> EPS.setProblemType(slp.EPS.ProblemType.GHEP) # Specify that A is Hermitian (by construction), and B is semi-positive
> >> EPS.setWhichEigenpairs(EPS.Which.LARGEST_MAGNITUDE) # Find largest eigenvalues
> >> EPS.setFromOptions()
> >> EPS.solve()
> >> 
> >> Quentin CHEVALIER – IA parcours recherche
> >> 
> >> LadHyX - Ecole polytechnique
> >> 
> >> __________
> 
