On Mon, Jun 13, 2016 at 11:03 AM, Xujun Zhao wrote:
Hi John,
Thank you for your information. I am using a recent version of libMesh.
I also finally figured out the culprit last weekend: it is the hypre
preconditioner used for one of my fieldsplit variables. BoomerAMG uses
very large amounts of memory during KSPSolve().
-Xujun
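For reference, BoomerAMG's memory appetite can often be tamed from the PETSc options database. A minimal sketch, assuming a fieldsplit setup; the prefix `fieldsplit_0_` is a placeholder for whatever split actually carries the hypre preconditioner:

```shell
# Placeholder split name "fieldsplit_0"; substitute your own.
-fieldsplit_0_pc_type hypre
-fieldsplit_0_pc_hypre_type boomeramg
# A higher strong threshold (~0.5-0.7 is commonly suggested for 3D)
# usually yields sparser coarse grids and lower memory use:
-fieldsplit_0_pc_hypre_boomeramg_strong_threshold 0.7
```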
On Thu, Jun 9, 2016 at 3:29 PM, Xujun Zhao wrote:
Derek,
Excellent analysis! It really helps.
I looked further at the PETSc log summary when running on 1 CPU; the max
memory PETSc allocated is about 22.2 GB (close to what you predicted).
However, the total memory usage is up to 100 GB, which is much more than
expected. I think there should be something …
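One way to see where the extra memory is going is PETSc's own reporting, enabled from the command line. A sketch using standard PETSc options (check the options against your PETSc version's documentation):

```shell
# Per-stage timing plus PETSc's internal allocation counters:
-log_view
# Summary of process memory (resident set) and PETSc malloc usage:
-memory_view
# Dump any memory still allocated by PETSc at PetscFinalize():
-malloc_dump
```

Memory allocated outside PETSc (e.g. inside hypre or the mesh library) will show up in the process total but not in PETSc's own counters, which is one way a 22 GB PETSc figure can coexist with 100 GB of total usage.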
Back of the envelope:
(Assuming you're using HEX27 elements... but the analysis won't be off by
much if you're using HEX20)
~60 first order nodes in each direction: 216,000 first order nodes
~120 second order nodes in each direction: 1,728,000 second order nodes
One first order variable: 216,000 …
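The back-of-the-envelope counts above can be reproduced in a few lines. This sketch uses the same round numbers as the message (the exact per-direction node counts for a 60-element edge would be 61 and 121):

```python
# Approximate node/dof counts for a 60x60x60 HEX27 mesh,
# using the same rounding as the estimate above.
n = 60                             # elements per direction
first_order_nodes = n ** 3         # ~216,000 (exact: 61**3)
second_order_nodes = (2 * n) ** 3  # ~1,728,000 (exact: 121**3)

# Dofs per variable equal the node count of the matching order:
pressure_dofs = first_order_nodes       # one first-order variable
velocity_dofs = 3 * second_order_nodes  # u, v, w at second order

print(first_order_nodes, second_order_nodes, pressure_dofs, velocity_dofs)
```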
On Wed, Jun 8, 2016 at 3:41 PM Xujun Zhao wrote:
Hi Cody,
This sounds like the mesh data keeps a copy on each processor, but the
matrices and vectors are still stored in a distributed fashion. Is that
correct?
I have a 3D Stokes problem on a 60x60x60 mesh, with second-order elements
for the velocity components u, v, w and first-order elements for the
pressure p, about 2.9M dofs in total. This ca…
That's right!
This is the classic space-versus-time tradeoff. In the bigger scheme of
things, using a little more memory is usually fine on a modern system, and
the SerialMesh (now called ReplicatedMesh) is quite a bit faster. I think
the general consensus is: use ReplicatedMesh until you are truly
memory-constrained.
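In code, the choice comes down to which mesh class you instantiate, since both expose the same downstream API. A minimal sketch assuming a standard libMesh setup (class and header names as in recent libMesh releases):

```cpp
#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"   // full copy on every rank, faster
#include "libmesh/distributed_mesh.h"  // partitioned, less memory per rank
#include "libmesh/mesh_generation.h"

int main (int argc, char ** argv)
{
  libMesh::LibMeshInit init (argc, argv);

  // Default advice: replicate until memory becomes the bottleneck.
  libMesh::ReplicatedMesh mesh (init.comm());
  // Memory-constrained alternative, drop-in replacement:
  // libMesh::DistributedMesh mesh (init.comm());

  // The 60x60x60 HEX27 mesh discussed in this thread:
  libMesh::MeshTools::Generation::build_cube (mesh, 60, 60, 60,
                                              0., 1., 0., 1., 0., 1.,
                                              libMesh::HEX27);
  return 0;
}
```

With ReplicatedMesh, every MPI rank holds the entire mesh, so per-rank mesh memory does not shrink as you add processors, while PETSc matrices and vectors remain distributed regardless.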