David,
GAMG picks the number of levels based on how the coarsening process proceeds;
you cannot hardwire it to a particular value. You can run with -info
to get more information on the decisions GAMG is making.
Barry
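As a hedged illustration (not part of the original exchange): once KSPSetUp
has run, the level count GAMG settled on can be read back through the
multigrid interface, since PCGAMG is built on top of PCMG. A minimal C
sketch, assuming a matrix A is already assembled, with error checking
omitted:

#include <petscksp.h>

/* Report how many levels GAMG actually built for A. */
static PetscErrorCode report_gamg_levels(Mat A)
{
  KSP      ksp;
  PC       pc;
  PetscInt nlevels;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCGAMG);
  KSPSetFromOptions(ksp);        /* picks up -info, -pc_gamg_* options */
  KSPSetUp(ksp);                 /* coarsening happens here */
  PCMGGetLevels(pc, &nlevels);   /* valid because GAMG is an MG subtype */
  PetscPrintf(PETSC_COMM_WORLD, "GAMG built %D levels\n", nlevels);
  KSPDestroy(&ksp);
  return 0;
}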
On Fri, Oct 20, 2017 at 7:43 PM, Kong, Fande wrote:
Hi All,
I am trying to solve a generalized eigenvalue problem (using SLEPc) with
"-eps_type krylovschur -st_type sinvert". I got an error message: "Must
select a target sorting criterion if using shift-and-invert".
Not sure how to proceed. I do not quite understand this sentence.
Fande,
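For what it is worth, a hedged sketch of what the error message appears to
be asking for: shift-and-invert computes eigenvalues near a target sigma, so
the sorting criterion must be relative to that target. In the SLEPc C API
that means EPSSetTarget plus EPSSetWhichEigenpairs with EPS_TARGET_MAGNITUDE;
on the command line, -eps_target <sigma> (optionally with
-eps_target_magnitude). The target 0.0 below is only a placeholder:

#include <slepceps.h>

/* Krylov-Schur with shift-and-invert for A x = lambda B x.
   Error checking omitted for brevity. */
static PetscErrorCode solve_gevp(Mat A, Mat B)
{
  EPS eps;
  ST  st;

  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, B);
  EPSSetProblemType(eps, EPS_GNHEP);   /* EPS_GHEP if symmetric definite */
  EPSSetType(eps, EPSKRYLOVSCHUR);
  EPSGetST(eps, &st);
  STSetType(st, STSINVERT);            /* same as -st_type sinvert */
  EPSSetTarget(eps, 0.0);              /* placeholder shift sigma */
  EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);  /* the missing criterion */
  EPSSetFromOptions(eps);
  EPSSolve(eps);
  EPSDestroy(&eps);
  return 0;
}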
On Fri, Oct 20, 2017 at 6:42 PM, Barry Smith wrote:
> On Oct 18, 2017, at 4:14 AM, Jaganathan, Srikrishna wrote:
>
> Hello,
>
> I have been trying to distribute an already existing stiffness matrix in my
> FEM code to a PETSc parallel matrix object, but I am unable to find any
> documentation regarding it.
PS: I didn't realize it at first, but it looks as if the -pc_mg_levels 3
option was not taken into account:
type: gamg
MG: type is MULTIPLICATIVE, levels=1 cycles=v
On 10/20/2017 03:32 PM, David Nolte wrote:
Dear all,
I have some problems using GAMG as a preconditioner for (F)GMRES.
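For reference, a hedged sketch of that setup (standard PETSc C API; the Mat
and Vec arguments are placeholders), equivalent to the runtime options
-ksp_type fgmres -pc_type gamg:

#include <petscksp.h>

/* FGMRES outer solve preconditioned with GAMG; error checking omitted. */
static PetscErrorCode solve_fgmres_gamg(Mat A, Vec b, Vec x)
{
  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPFGMRES);   /* flexible GMRES tolerates a varying PC */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCGAMG);
  KSPSetFromOptions(ksp);       /* allow command-line overrides */
  KSPSolve(ksp, b, x);
  KSPDestroy(&ksp);
  return 0;
}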
Background: I am solving the incompressible, unsteady Navier-Stokes
equations with a coupled mixed FEM approach, using P1/P1 elements for
velocity and pressure on an unstructured tetrahedral mesh with about
2 million DOFs (and
Justin is right that parallelism will be of limited value for such small
systems. This looks like a serial optimization job.
Moreover, in this case, a better numerical method usually trumps any kind
of machine optimization.
Matt
On Fri, Oct 20, 2017 at 2:55 AM, Justin Chang wrote:
600 unknowns is way too small to parallelize; you need at least 10,000
unknowns per MPI process:
https://www.mcs.anl.gov/petsc/documentation/faq.html#slowerparallel
What problem are you solving? Sounds like you either compiled PETSc with
debugging mode on or you just have a really terrible solver.
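As a hedged aside on the debugging-mode possibility: PETSc defines
PETSC_USE_DEBUG when configured with --with-debugging=1 (the default), so a
program can report at startup which kind of build it is running. A minimal
sketch:

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscInitialize(&argc, &argv, NULL, NULL);
#if defined(PETSC_USE_DEBUG)
  PetscPrintf(PETSC_COMM_WORLD,
              "Debugging build of PETSc: expect slow timings\n");
#else
  PetscPrintf(PETSC_COMM_WORLD, "Optimized build of PETSc\n");
#endif
  PetscFinalize();
  return 0;
}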