Lucas,

I have tried your matrices. Below are some results with complex scalars and 
MUMPS using 2 MPI processes.

Using shift-and-invert with -eps_target -0.95 I get three eigenvalues (two of 
them equal to -1), with MUMPS taking 65 seconds out of 67. Convergence is very 
fast; there is little to improve here, since almost all of the time is spent 
in MUMPS.
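
For reference, an option set along these lines should give this kind of run 
(the matrix file names here are just placeholders, -eps_nev is whatever you 
actually need, and the solver-type option is only required if MUMPS is not 
already your default parallel factorization):

$ mpiexec -n 2 ./ex7 -f1 A.mat -f2 B.mat -eps_nev 3 -eps_target -0.95 
-st_type sinvert -st_pc_factor_mat_solver_type mumps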

If I add a region filter with -rg_type interval -rg_interval_endpoints 
-.99,1,-.1,.1, the times are essentially the same, but the unwanted 
eigenvalues at -1 are filtered out. This is the best option.
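
In terms of options, this just means appending the region settings to the 
sketch above (same placeholder file names):

$ mpiexec -n 2 ./ex7 -f1 A.mat -f2 B.mat -eps_nev 3 -eps_target -0.95 
-st_type sinvert -st_pc_factor_mat_solver_type mumps -rg_type interval 
-rg_interval_endpoints -.99,1,-.1,.1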

If you need to compute many eigenvalues, then you should consider specifying 
an interval (spectrum slicing); see section 3.4.5 of the manual. But this 
cannot be used with MUMPS in complex scalars (see the note in the manual). 
Since your matrices are real and symmetric, I tried it with real scalars, 
using -eps_interval -.99,1, and in that case I get 33 eigenvalues and MUMPS 
takes 33 seconds out of the overall 68 seconds (three numerical factorizations 
are done in this execution).

$ mpiexec -n 2 ./ex7 -f1 Areal.mat -f2 Breal.mat -eps_gen_hermitian -st_type 
sinvert -st_pc_type cholesky -eps_interval -.99,1 -st_mat_mumps_icntl_13 1 
-st_mat_mumps_icntl_24 1 

Generalized eigenproblem stored in file.

 Reading REAL matrices from binary files...
 Number of iterations of the method: 3
 Number of linear iterations of the method: 0
 Solution method: krylovschur

 Number of requested eigenvalues: 33
 Stopping condition: tol=1e-10, maxit=100
 Linear eigensolve converged (33 eigenpairs) due to CONVERGED_TOL; iterations 3
 ---------------------- --------------------
            k             ||Ax-kBx||/||kx||
 ---------------------- --------------------
       -0.698786            4.61016e-14
       -0.598058            5.34239e-14
       -0.598051            5.53609e-14
       -0.380951            7.83403e-14
       -0.280707            2.91772e-13
       -0.280671            3.86414e-13
       -0.273832            2.18507e-13
       -0.273792            2.25672e-13
       -0.064625            2.71132e-12
       -0.064558            2.74757e-12
       -0.034888            4.02325e-12
        0.138192            1.56285e-12
        0.138298            3.58149e-12
        0.197123            1.77274e-12
        0.197391            1.93185e-12
        0.268338            1.09276e-12
        0.268416            8.24014e-13
        0.363498            9.21471e-13
        0.420608            7.18076e-13
        0.420669            5.13068e-13
        0.523661            1.28491e-12
        0.621233            1.07663e-12
        0.621648            5.91783e-13
        0.662408            4.36285e-13
        0.662578            5.11942e-13
        0.708328            3.94862e-13
        0.708488            3.56613e-13
        0.709269            2.73414e-13
        0.733286            5.73269e-13
        0.733524            4.52308e-13
        0.814093             2.5299e-13
        0.870087            2.02513e-13
        0.870229            3.19166e-13
 ---------------------- --------------------



> On 31 May 2022, at 22:28, Jose E. Roman <[email protected]> wrote:
> 
> Probably MUMPS is taking most of the time...
> 
> If the matrices are not too large, send them to my personal email and I will 
> have a look.
> 
> Jose
> 
> 
>> On 31 May 2022, at 22:13, Lucas Banting <[email protected]> wrote:
>> 
>> Thanks for sharing the article.
>> For my application, I think using an interval region to exclude the unneeded 
>> eigenvalues will still be faster than forming a larger constrained system. 
>> Specifying an interval appears to run in a similar amount of time.
>> 
>> Lucas
>> From: Jose E. Roman <[email protected]>
>> Sent: Tuesday, May 31, 2022 2:08 PM
>> To: Lucas Banting <[email protected]>
>> Cc: PETSc <[email protected]>
>> Subject: Re: [petsc-users] Accelerating eigenvalue computation / removing 
>> portion of spectrum
>> 
>> Please respond to the list also.
>> 
>> The problem with EPSSetDeflationSpace() is that it internally orthogonalizes 
>> the vectors that you pass in, so it is not viable for thousands of vectors.
>> 
>> You can try implementing any of the alternative schemes described in 
>> https://doi.org/10.1002/nla.307
>> 
>> Another thing you can try is to use a region for filtering, as explained in 
>> section 2.6.4 of the users manual. Use a region that excludes -1.0 and you 
>> will have a better chance of getting the wanted eigenvalues faster. But 
>> convergence may still be slow.
>> 
>> Jose
>> 
>> 
>>> On 31 May 2022, at 20:52, Lucas Banting <[email protected]> wrote:
>>> 
>>> Thanks for the response, Jose,
>>> 
>>> There is actually an analytical solution for these modes; however, there 
>>> are thousands of them and they are all sparse.
>>> I assume it is non-trivial for EPSSetDeflationSpace() to take something 
>>> like a MATAIJ as input?
>>> 
>>> Lucas
>>> From: Jose E. Roman <[email protected]>
>>> Sent: Tuesday, May 31, 2022 1:11 PM
>>> To: Lucas Banting <[email protected]>
>>> Cc: [email protected] <[email protected]>
>>> Subject: Re: [petsc-users] Accelerating eigenvalue computation / removing 
>>> portion of spectrum
>>> 
>>> If you know how to cheaply compute a basis of the nullspace of S, then you 
>>> can try passing it to the solver via EPSSetDeflationSpace():
>>> https://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSSetDeflationSpace.html
>>> 
>>> Jose
>>> 
>>> 
>>>> On 31 May 2022, at 19:28, Lucas Banting <[email protected]> wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> I have a generalized non-Hermitian eigenvalue problem arising from the 3D 
>>>> Helmholtz equation.
>>>> The form of the Helmholtz equation is:
>>>> 
>>>> (S - k^2 M) v = lambda k^2 M v
>>>> 
>>>> where S is the stiffness/curl-curl matrix and M is the mass matrix 
>>>> associated with the edge elements used to discretize the problem.
>>>> The Helmholtz equation produces eigenvalues of -1.0, whose eigenvectors I 
>>>> believe belong to the null space of the curl-curl operator S.
>>>> 
>>>> For my application, I would like to compute eigenvalues > -1.0 and avoid 
>>>> computing the eigenvalues equal to -1.0.
>>>> I am currently using a shift-and-invert ST with the MUMPS LU direct solver. 
>>>> By increasing the shift away from lambda=-1.0, I get faster computation of 
>>>> the eigenvectors; the lambda=-1.0 eigenvectors appear to slow down the 
>>>> computation by about a factor of two.
>>>> Is there a way to avoid these lambda = -1.0 eigenpairs with a GNHEP problem 
>>>> type?
>>>> 
>>>> Regards,
>>>> Lucas
> 
