Thank you both for your answers :)
Matt:
-Yes, sorry, I forgot to tell you that, but I've also called
PetscMemorySetGetMaximumUsage() right after initializing SLEPc. Also, I've
seen a strange behaviour: if I run the same code on my computer and on the
cluster *without* the command line option
Hi, I'm running PFLOTRAN, and in PFLOTRAN we have flow_ and flow_sub_
solvers. I was wondering what the red underlined values mean (the per-block
tolerances?) and how to change them (and whether that would affect
convergence). The values marked in bold blue have been changed from the
default values for the linear solvers.
FLOW
The subdomain KSP (flow_sub_) has type "preonly" so it always does
exactly one iteration. If you were to use an iterative subdomain solver
(e.g., -flow_sub_ksp_type gmres) then those tolerances would be used.
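(As an illustration only, not part of the original reply: with the flow_sub_
prefix above, an iterative subdomain solve with explicit tolerances could be
requested with options such as the following; the values are placeholders.)

  -flow_sub_ksp_type gmres -flow_sub_ksp_rtol 1.e-3 -flow_sub_ksp_max_it 30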
Barry and Jed,
Thank you for your answers. I think I need to learn more about domain
decomposition as I am a bit confused.
Is it true that we are using BiCGstab here to solve the system of
equations, using Additive Schwarz as a domain decomposition
preconditioner, and that the preconditioning matrix
Since
KSP Object: (flow_sub_) 1 MPI processes
type: preonly
this means only a single iteration of the inner solver is performed, so the
numbers in red are not used.
You could do something like -flow_ksp_type fgmres -flow_sub_ksp_type gmres
-flow_sub_ksp_rtol 1.e-2 but it wouldn't
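(For reference only, an illustrative full invocation; the executable name and
process count are placeholders:)

  mpiexec -n 4 pflotran -flow_ksp_type fgmres -flow_sub_ksp_type gmres -flow_sub_ksp_rtol 1.e-2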
Hello all,
I'm using SLEPc 3.9.2 (and PETSc 3.9.3) to compute the EPS_SMALLEST_REAL
eigenvalue of a matrix with the following characteristics:
* type: real, Hermitian, sparse
* linear size: 2333606220
* distributed in 2048 processes (64 nodes, 32 procs per node)
My code first preallocates the necessary memory
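(The following is a minimal sketch, not the poster's code, of the kind of setup
described above: a distributed AIJ matrix and an EPS solve for EPS_SMALLEST_REAL
of a Hermitian problem. It uses the ierr/CHKERRQ style of PETSc 3.9; the size,
preallocation figures, and matrix entries are placeholders.)

#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat            A;
  EPS            eps;
  PetscInt       N = 1000, Istart, Iend, i;  /* N is a placeholder; the real problem has N ~ 2.3e9 */
  PetscErrorCode ierr;

  ierr = SlepcInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = PetscMemorySetGetMaximumUsage();CHKERRQ(ierr);   /* as mentioned in the thread */

  /* Distributed sparse matrix with (placeholder) preallocation */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A, 5, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 5, NULL, 5, NULL);CHKERRQ(ierr);

  /* Placeholder entries: a diagonal matrix, just so the sketch runs */
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    ierr = MatSetValue(A, i, i, (PetscScalar)i, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* Eigensolver: Hermitian problem, smallest real eigenvalue */
  ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr);
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
  ierr = EPSSolve(eps);CHKERRQ(ierr);

  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = SlepcFinalize();
  return ierr;
}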
Regarding the SLEPc part:
- What do you mean by "each step"? Are you calling EPSSolve() several times?
- Yes, the BV object is generally what takes most of the memory. It is
allocated at the beginning of EPSSolve(). Depending on the solver/options,
other memory may be allocated as well.
- You
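(For illustration only, not part of the original reply: the number of basis
vectors, which drives the BV memory, can be limited with EPSSetDimensions();
the values below are placeholders.)

  ierr = EPSSetDimensions(eps, 1, 16, PETSC_DEFAULT);CHKERRQ(ierr);  /* nev, ncv (basis size), mpd */

or equivalently with the command-line option -eps_ncv 16.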
So if you have three processes, you would want the first process to have
all the x gradients, etc.?
You could use ISCreateGeneral:
https://www.mcs.anl.gov/petsc/petsc-current/src/dm/examples/tutorials/ex6.c.html
You need to create an integer array with the global indices that you want
to bring to
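(A minimal sketch of that approach, not from the original reply; global_vec
stands for an existing parallel vector and the index values are placeholders:)

  PetscInt       n = 3;                      /* entries this rank wants to receive */
  PetscInt       idx[3] = {0, 10, 20};       /* placeholder global indices */
  IS             is_from, is_to;
  Vec            local;                      /* sequential destination vector */
  VecScatter     scatter;
  PetscErrorCode ierr;

  ierr = VecCreateSeq(PETSC_COMM_SELF, n, &local);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, n, idx, PETSC_COPY_VALUES, &is_from);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, n, 0, 1, &is_to);CHKERRQ(ierr);
  ierr = VecScatterCreate(global_vec, is_from, local, is_to, &scatter);CHKERRQ(ierr);
  ierr = VecScatterBegin(scatter, global_vec, local, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scatter, global_vec, local, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
  ierr = ISDestroy(&is_from);CHKERRQ(ierr);
  ierr = ISDestroy(&is_to);CHKERRQ(ierr);
  ierr = VecDestroy(&local);CHKERRQ(ierr);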
Hi all,
I am trying to understand how to create a custom scatter from PETSc
ordering in a local vector to natural ordering in a global vector.
I have a 3D DMDA and a local vector containing field data, and am
calculating the x, y, and z gradients into their own local vectors. I
then need to scatter
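(For what it's worth, a sketch under stated assumptions, not the poster's code:
da is the 3D DMDA and xgrad_local one of the gradient local vectors. It moves
the data first to a global, PETSc-ordered vector and then to a natural-ordering
vector.)

  Vec            xgrad_global, xgrad_natural;
  PetscErrorCode ierr;

  ierr = DMCreateGlobalVector(da, &xgrad_global);CHKERRQ(ierr);
  ierr = DMLocalToGlobalBegin(da, xgrad_local, INSERT_VALUES, xgrad_global);CHKERRQ(ierr);
  ierr = DMLocalToGlobalEnd(da, xgrad_local, INSERT_VALUES, xgrad_global);CHKERRQ(ierr);

  ierr = DMDACreateNaturalVector(da, &xgrad_natural);CHKERRQ(ierr);
  ierr = DMDAGlobalToNaturalBegin(da, xgrad_global, INSERT_VALUES, xgrad_natural);CHKERRQ(ierr);
  ierr = DMDAGlobalToNaturalEnd(da, xgrad_global, INSERT_VALUES, xgrad_natural);CHKERRQ(ierr);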