I guess conditioning is getting worse. I would try using MUMPS for the sub_pc
LU.
Jose
> On 10 Dec 2019, at 18:36, Perceval Desforges wrote: [...]
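A hedged sketch of how this suggestion might look on the command line,
assuming the block Jacobi / LU sub-solver setup described in the message
below (the exact -st_sub_* prefixes depend on the PC actually chosen):

  -st_ksp_type gmres -st_pc_type bjacobi -st_sub_pc_type lu \
    -st_sub_pc_factor_mat_solver_type mumps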
Hello again,
I have tried following your advice to use preconditioned iterative
solvers for my 3D systems, and have been encountering some difficulties.
I have been following the recommendations of section 3.4.1 of the SLEPc
users manual, setting the following options: -st_ksp_type gmres [...]
Hello,
This is the output of -log_view. I selected what I thought were the
important parts. I don't know if this is the best format to send the
logs. If a text file is better let me know. Thanks again,
-- PETSc Performance Summary: [...]
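As a side note, PETSc can also write the full log straight to a text
file; a hedged example of the -log_view viewer syntax (the file name is
illustrative):

  -log_view :petsc_log.txt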
On Mon, Nov 25, 2019 at 11:45 AM Perceval Desforges <
perceval.desfor...@polytechnique.edu> wrote: [...]
In 3D problems it is recommended to use preconditioned iterative solvers.
Unfortunately the spectrum slicing technique requires the full factorization
(because it uses matrix inertia).
> On 25 Nov 2019, at 18:44, Perceval Desforges wrote: [...]
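For context on the inertia remark: by Sylvester's law of inertia, the
number of negative pivots in a factorization of A - sigma*I (or
A - sigma*B for a generalized problem) equals the number of eigenvalues
below sigma, which is why spectrum slicing needs a full factorization at
each shift. A minimal sketch of that counting step for a standard
symmetric problem, assuming a MUMPS Cholesky factorization
(CountBelowShift is an illustrative name, not SLEPc API; SLEPc performs
the equivalent internally):

  #include <petscmat.h>

  /* Count eigenvalues of a symmetric matrix A below sigma via inertia. */
  PetscErrorCode CountBelowShift(Mat A, PetscReal sigma, PetscInt *nbelow)
  {
    Mat           As, F;
    MatFactorInfo info;
    PetscInt      nneg, nzero, npos;

    PetscFunctionBeginUser;
    PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &As));
    PetscCall(MatShift(As, -sigma));                   /* As = A - sigma*I */
    PetscCall(MatGetFactor(As, MATSOLVERMUMPS, MAT_FACTOR_CHOLESKY, &F));
    PetscCall(MatFactorInfoInitialize(&info));
    PetscCall(MatCholeskyFactorSymbolic(F, As, NULL, &info));
    PetscCall(MatCholeskyFactorNumeric(F, As, &info));
    PetscCall(MatGetInertia(F, &nneg, &nzero, &npos)); /* Sylvester's law */
    *nbelow = nneg;                                    /* eigenvalues < sigma */
    PetscCall(MatDestroy(&F));
    PetscCall(MatDestroy(&As));
    PetscFunctionReturn(PETSC_SUCCESS);
  }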
I am basically trying to solve a finite element problem, which is why in
3D I have 7 non-zero diagonals that are quite far apart from one
another. In 2D I only have 5 non-zero diagonals that are less far apart.
So is it normal that the setup time is around 400 times greater in the
3D case? Is [...]
Probably it is not a preallocation issue, as it shows "total number of
mallocs used during MatSetValues calls =0".
Adding new diagonals may increase fill-in a lot, if the new diagonals are
displaced with respect to the other ones.
The partitions option is intended for running on several nodes. If [...]
On Mon, Nov 25, 2019 at 11:20 AM Perceval Desforges <
perceval.desfor...@polytechnique.edu> wrote: [...]
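For reference, the partitions option mentioned above is presumably the
spectrum-slicing communicator split (EPSKrylovSchurSetPartitions); a
hedged usage sketch, with the executable name, process count, and
interval made up:

  mpiexec -n 20 ./solver -eps_interval -1.0,1.0 -eps_krylovschur_partitions 4

This splits the 20 processes into 4 groups of 5, each factoring and
slicing in a different subinterval.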
Hi,
So I'm loading two matrices from files, both 100 by 100. I ran
the program with -mat_view ::ascii_info and I got:

Mat Object: 1 MPI processes
  type: seqaij
  rows=100, cols=100
  total: nonzeros=700, allocated nonzeros=700
  total number of mallocs used during MatSetValues calls =0 [...]
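A hedged sketch for checking the same counters programmatically instead
of via -mat_view ::ascii_info (assumes an assembled Mat A; the helper
name is illustrative):

  #include <petscmat.h>

  static PetscErrorCode PrintMatCounters(Mat A)
  {
    MatInfo info;

    PetscFunctionBeginUser;
    PetscCall(MatGetInfo(A, MAT_LOCAL, &info));
    PetscCall(PetscPrintf(PETSC_COMM_SELF,
              "nonzeros used %g, allocated %g, mallocs %g\n",
              info.nz_used, info.nz_allocated, info.mallocs));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

A nonzero mallocs count here is what would indicate a preallocation
problem, as mentioned earlier in the thread.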
Then I guess it is the factorization that is failing. How many nonzero entries
do you have? Run with
-mat_view ::ascii_info
Jose
> On 22 Nov 2019, at 19:56, Perceval Desforges wrote: [...]
Hi,
Thanks for your answer. I tried looking at the inertias before solving,
but the problem is that the program crashes when I call EPSSetUp with
this error:

slurmstepd: error: Step 2140.0 exceeded virtual memory limit
(313526508 > 107317760), being killed

I get this error even when there [...]
Don't use -mat_mumps_icntl_14 to reduce the memory used by MUMPS.
Most likely the problem is that the interval you gave is too large and contains
too many eigenvalues (SLEPc needs to allocate at least one vector per each
eigenvalue). You can count the eigenvalues in the interval with the [...]
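A hedged sketch of one way to get that count before the solve, using
EPSKrylovSchurGetInertias after EPSSetUp (this is standard SLEPc API;
the surrounding helper is illustrative and assumes spectrum slicing has
been configured with EPSSetInterval):

  #include <slepceps.h>

  static PetscErrorCode ReportIntervalCount(EPS eps)
  {
    PetscInt  n, *inertias;
    PetscReal *shifts;

    PetscFunctionBeginUser;
    PetscCall(EPSSetUp(eps));
    PetscCall(EPSKrylovSchurGetInertias(eps, &n, &shifts, &inertias));
    /* eigenvalues inside [shifts[0], shifts[n-1]] */
    PetscCall(PetscPrintf(PETSC_COMM_WORLD,
              "eigenvalues in interval: %" PetscInt_FMT "\n",
              inertias[n-1] - inertias[0]));
    PetscCall(PetscFree(shifts));
    PetscCall(PetscFree(inertias));
    PetscFunctionReturn(PETSC_SUCCESS);
  }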
Hello all,
I am trying to obtain all the eigenvalues in a certain interval for a
fairly large matrix (100 * 100). I therefore use the spectrum
slicing method detailed in section 3.4.5 of the manual. The calculations
are run on a processor with 20 cores and 96 GB of RAM.
The options I [...]
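For reference, a hedged guess at the kind of option set section 3.4.5
describes for spectrum slicing (the interval endpoints here are made up):

  -eps_interval -5.0,5.0 -st_type sinvert -st_ksp_type preonly \
    -st_pc_type cholesky -st_pc_factor_mat_solver_type mumps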