No, the eigenvalue is around -15. I've tried KS (Krylov-Schur) and the number of
iterations differs by one when I change the number of MPI processes, which
seems fine to me. So, I'll see whether this method suits my specific goal,
and I'll try to use it. Thanks for the help.
On Tue., Oct 23, 2018 at 13:53, Matthew Knepley ()
wrote:
> On Tue, Oct 23, 2018 at 6:24 AM Ale Foggia wrote:
>
>> Hello,
>>
>> I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest real
>> eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). Those are
>> the only options I set for the solver.
Everything seems correct. I don't know, maybe your problem is very sensitive?
Is the eigenvalue tiny?
I would still try with Krylov-Schur.
Jose
> On Oct 24, 2018, at 14:59, Ale Foggia wrote:
>
> The functions called to set the solver are (in this order): EPSCreate();
>
The functions called to set the solver are (in this order): EPSCreate();
EPSSetOperators(); EPSSetProblemType(EPS_HEP); EPSSetType(EPSLANCZOS);
EPSSetWhichEigenpairs(EPS_SMALLEST_REAL); EPSSetFromOptions();
The output of -eps_view for each run is:
This is very strange. Make sure you call EPSSetFromOptions in the code. Do
the iteration counts also change between two different runs with the same
number of processes?
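Since EPSSetFromOptions() appears in the call sequence, the same solver configuration can also be selected at run time through the options database; a sketch, assuming the standard SLEPc option names (the executable name ./ex1 is a placeholder):

```shell
# Run-time equivalent of the EPSSet*() calls above; picked up by EPSSetFromOptions().
# ./ex1 is a placeholder for the actual application binary.
./ex1 -eps_type lanczos -eps_hermitian -eps_smallest_real -eps_view
```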
Maybe Lanczos with the default options is too sensitive (by default it does
not reorthogonalize). I suggest using Krylov-Schur, or Lanczos with full
reorthogonalization.
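The sensitivity of Lanczos without reorthogonalization can be seen in a toy model. Below is a minimal NumPy sketch (not SLEPc; the diagonal test matrix and step counts are arbitrary illustrative choices) comparing plain Lanczos with a fully reorthogonalized variant, measuring how far the computed basis drifts from orthonormality:

```python
import numpy as np

def lanczos(A, v0, m, reorthogonalize=False):
    """Run m Lanczos steps on symmetric A; return the basis V (n x m)."""
    n = A.shape[0]
    V = np.zeros((n, m))
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    beta = 0.0
    for j in range(m):
        V[:, j] = v
        w = A @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        if reorthogonalize:
            # Full reorthogonalization: project out all previous basis
            # vectors, twice, to suppress rounding-error components.
            for _ in range(2):
                w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        beta = np.linalg.norm(w)
        if beta == 0.0:
            break
        v_prev, v = v, w / beta
    return V

def ortho_loss(V):
    """Largest deviation of V^T V from the identity."""
    G = V.T @ V
    return np.max(np.abs(G - np.eye(G.shape[0])))

rng = np.random.default_rng(0)
n, m = 40, 30
# Widely spread eigenvalues make Ritz values converge fast, which is
# exactly when plain Lanczos loses orthogonality.
A = np.diag(2.0 ** np.arange(n))
v0 = rng.standard_normal(n)

plain = ortho_loss(lanczos(A, v0, m))
full = ortho_loss(lanczos(A, v0, m, reorthogonalize=True))
print(f"orthogonality loss, plain: {plain:.1e}  full reorth: {full:.1e}")
```

The plain variant drifts far from orthogonality once the leading Ritz value converges, while the reorthogonalized one stays near machine precision; this drift is one reason iteration counts can vary so much between runs.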
I've tried the option that you gave me but I still get a different number of
iterations when changing the number of MPI processes: I ran with 960 and
1024 processes and got 435 and 176 iterations, respectively.
On Tue., Oct 23, 2018 at 16:48, Jose E. Roman ()
wrote:
Hello Jose, thanks for your answer.
On Tue., Oct 23, 2018 at 12:59, Jose E. Roman ()
wrote:
> There is an undocumented option:
>
> -bv_reproducible_random
>
> It will force the initial vector of the Krylov subspace to be the same
> irrespective of the number of MPI processes. This should be used for
> scaling analyses such as the one you are trying to do.
There is an undocumented option:
-bv_reproducible_random
It will force the initial vector of the Krylov subspace to be the same
irrespective of the number of MPI processes. This should be used for scaling
analyses such as the one you are trying to do.
An additional comment is that we strongly suggest using the default solver,
Krylov-Schur.
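In practice the option is just appended to the command line of each run in the scaling study; a sketch (mpiexec, the binary name, and the process counts stand in for the actual setup; 960/1024 are the counts mentioned in the thread):

```shell
# Force the same initial vector at any process count, so iteration counts
# are comparable across runs. ./ex1 is a placeholder binary.
mpiexec -n 960  ./ex1 -eps_type lanczos -eps_smallest_real -bv_reproducible_random
mpiexec -n 1024 ./ex1 -eps_type lanczos -eps_smallest_real -bv_reproducible_random
```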
Hello,
I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest real
eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). Those are
the only options I set for the solver. My aim is to be able to
predict/estimate the time-to-solution. To do so, I was doing a scaling
analysis, varying the number of MPI processes.