Ah, OK. When I find the time I will have a look into mapping processes to
cores. I guess it is possible using the Torque scheduler.
Thank you!
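Something like the following might be a starting point (an untested sketch: the resource line assumes Torque/PBS, the binding flag assumes Open MPI 1.8 or later, and the executable name is a placeholder):

    #PBS -l nodes=1:ppn=12
    # pin each of the 12 MPI ranks to its own core
    mpiexec -n 12 --bind-to core ./my_solver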
On Tue, Apr 4, 2017 at 2:00 PM Matthew Knepley wrote:
On Tue, Apr 4, 2017 at 6:58 AM, Toon Weyens wrote:
Dear Matthew,
Thanks for your answer, but this is something I do not really know much
about... The node I used has 12 cores and about 24GB of RAM.
But for these test cases, isn't the distribution of memory over the cores
handled automatically by SLEPc?
Regards
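For the archive: by default PETSc distributes matrices and vectors in contiguous blocks of rows across the MPI processes, so the memory should indeed end up split roughly evenly. One way to check the per-process footprint (assuming a reasonably recent PETSc; the executable name is a placeholder):

    mpiexec -n 12 ./my_solver -memory_view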
On Tue, Apr 4, 2017 at 2:20 AM, Toon Weyens wrote:
Dear Jose and Matthew,
Thank you so much for the effort!
I still don't manage to converge using the interval technique to filter out
the positive eigenvalues, but shift-invert combined with a target
eigenvalue works true miracles: I get extremely fast convergence.
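For anyone searching the archive later, a hedged sketch of the kind of option set this corresponds to (the executable name and target value are placeholders; around PETSc 3.7 the MUMPS selector was spelled -st_pc_factor_mat_solver_package, renamed to -st_pc_factor_mat_solver_type in later versions):

    mpiexec -n 8 ./my_solver -eps_nev 1 -eps_target -1.0 \
        -st_type sinvert -st_ksp_type preonly -st_pc_type lu \
        -st_pc_factor_mat_solver_package mumps

The interval technique mentioned above corresponds to -eps_interval a,b instead, which requires a factorization that can report matrix inertia; that stricter requirement could be related to the convergence trouble.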
Sorry, I forgot to add the download link for the matrix files:
https://transfer.pcloud.com/download.html?code=5ZViHIZI96yPIODHYSZ7y1HZMloBfcyhAHunjQVMpWUJIykLt76k
Thanks
On Sat, Apr 1, 2017 at 12:01 AM Toon Weyens wrote:
Dear Jose,
I have saved the matrices in Matlab format and am sending them to you using
pCloud. If you want another format, please tell me. Please also note that
they are about 1.4GB each.
I also attach a typical output of eps_view and log_view in output.txt, for
8 processes.
Thanks so much for
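As a note for the archive: PETSc can also dump assembled matrices at run time through the viewer options, which is one way to produce such Matlab-readable files; a sketch, with placeholder file names and the caveat that the exact viewer syntax differs between PETSc versions:

    ./my_solver -mat_view ascii:A.m:ascii_matlab

For matrices of this size, the PETSc binary format (-mat_view binary:A.bin) is much more compact than ASCII output.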
In order to answer about GD I would need to know all the settings you are
using. Also if you could send me the matrix I could do some tests.
GD and JD are preconditioned eigensolvers, which need a reasonably good
preconditioner. But MUMPS is a direct solver, not a preconditioner, and that is
not how these methods are meant to be used.
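To make the distinction concrete, a sketch of the two intended usages (option values are illustrative only; the MUMPS selector spelling is version-dependent, as noted above):

    # GD/JD: cheap approximate preconditioner applied at each iteration
    ./my_solver -eps_type gd -st_pc_type bjacobi

    # Krylov-Schur: exact shift-and-invert, where a direct solver belongs
    ./my_solver -eps_type krylovschur -st_type sinvert \
        -st_pc_type lu -st_pc_factor_mat_solver_package mumps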
Dear both,
I have recompiled SLEPc and PETSc without debugging, as well as with the
recommended --with-fortran-kernels=1. In the attachment I show the scaling
for a typical "large" simulation with about 120 000 unknowns, using
Krylov-Schur.
There are two sets of data points there, as I do two EPS
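A configure invocation matching this description could look roughly as follows (a sketch: only --with-debugging=0 and --with-fortran-kernels=1 are taken from the text above; the --download flags are illustrative additions):

    ./configure --with-debugging=0 --with-fortran-kernels=1 \
        --download-mumps --download-scalapack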
On 30 Mar 2017, at 9:27, Toon Weyens wrote:
Hi, thanks for the answer.
I use MUMPS as a PC. The
options -ksp_converged_reason, -ksp_monitor_true_residual and -ksp_view
are not used.
The difference between the log_view outputs of running a simple solution
with 1, 2, 3 or 4 MPI procs is attached (debug version).
I can see that with 2
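In case it helps anyone reading along, the monitoring options named above can simply be appended at run time to expose what the inner KSP is doing; a sketch (the executable name is a placeholder):

    mpiexec -n 2 ./my_solver -ksp_converged_reason \
        -ksp_monitor_true_residual -ksp_view -log_view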
On Wed, Mar 29, 2017 at 6:58 AM, Toon Weyens wrote:
Dear Jose,
Thanks for the answer. I am looking for the smallest real, indeed.
I have, just now, accidentally figured out that I can get correct
convergence by increasing NCV to higher values, so that's covered! I
thought I had checked this before, but apparently not. It's converging well
now.
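For the archive, the subspace size is controlled like this (a sketch with illustrative values; -eps_nev and -eps_ncv are the standard SLEPc options, and EPSSetDimensions is the equivalent API call):

    # request 1 eigenpair but enlarge the working subspace to 64 vectors
    ./my_solver -eps_nev 1 -eps_ncv 64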
On 29 Mar 2017, at 9:08, Toon Weyens wrote:
I started looking for alternatives to the standard Krylov-Schur method to
solve the generalized eigenvalue problem Ax = kBx in my code. These matrices
have a block-band structure (typically 5, 7 or 9 blocks
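A minimal option set for this kind of computation (the smallest real eigenvalue of a generalized problem), as a hedged sketch; the problem-type flag assumes A and B are Hermitian, which is not stated above, and the executable name is a placeholder:

    ./my_solver -eps_gen_hermitian -eps_smallest_real -eps_nev 1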