Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-28 Thread Victor Eijkhout via petsc-dev
On Mar 27, 2019, at 8:30 AM, Matthew Knepley <knep...@gmail.com> wrote: I think Satish now prefers --with-cc=${MPICH_HOME}/mpicc --with-cxx=${MPICH_HOME}/mpicxx --with-fc=${MPICH_HOME}/mpif90. That still requires with-mpi:
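A hedged sketch contrasting the two common ways of expressing this, for context: the wrapper form is the one quoted above, while --with-mpi-dir is the older alternative it presumably replaces (an assumption, since the message is cut off). On some systems the wrappers live under ${MPICH_HOME}/bin rather than ${MPICH_HOME} itself.

  # Form quoted above: point configure at the MPI compiler wrappers directly.
  ./configure --with-cc=${MPICH_HOME}/mpicc \
              --with-cxx=${MPICH_HOME}/mpicxx \
              --with-fc=${MPICH_HOME}/mpif90
  # Alternative: let configure locate the wrappers under an MPI install root.
  ./configure --with-mpi-dir=${MPICH_HOME}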

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-27 Thread Victor Eijkhout via petsc-dev
module load mkl, then use the PETSc options: --with-blas-lapack-dir=${MKLROOT} --with-cc=${MPICH_HOME}/mpicc --with-cxx=${MPICH_HOME}/mpicxx --with-fc=${MPICH_HOME}/mpif90. Sorry about the extraneous crap. I was cutting/pasting from my own much longer script. V. On Mar 27, 2019, at 8:47 AM, Mark Adams
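Assembled into one sequence, a sketch of the configure line implied by the quoted options (assuming a module system that defines MKLROOT and MPICH_HOME, as on the systems discussed in this thread):

  module load mkl                                  # defines MKLROOT
  ./configure --with-blas-lapack-dir=${MKLROOT} \
              --with-cc=${MPICH_HOME}/mpicc \
              --with-cxx=${MPICH_HOME}/mpicxx \
              --with-fc=${MPICH_HOME}/mpif90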

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-27 Thread Zhang, Hong via petsc-dev
Myriam, - PETSc 3.6.4 (reference) - PETSc 3.10.4 without specific options - PETSc 3.10.4 with the three scalability options you mentioned. What are the 'three scalability options' here? What is "MaxMemRSS", the max memory used by a single core? How many cores do you start with? Do you have
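As a hedged aside on the MaxMemRSS question: PETSc can report its own summary of process memory at the end of a run, which is one way to pin down what a plotted memory number actually measures. The metric Myriam plotted is not stated in this excerpt, and ./my_app below is only a placeholder for the real application.

  # -memory_view prints a max/min/total-over-processes summary of resident
  # set size and PETSc-allocated memory at PetscFinalize; -log_view adds
  # per-event timing data.
  mpiexec -n 64 ./my_app -log_view -memory_view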

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-27 Thread Mark Adams via petsc-dev
So are these the instructions that I should give him? This grad student is a quick study but he has no computing background. So we don't care what we use, we just want it to work (easily). Thanks. Do not use "--download-fblaslapack=1". Set it to 0. Same for "--download-mpich=1". Now do: > module

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-27 Thread Victor Eijkhout via petsc-dev
On Mar 27, 2019, at 7:29 AM, Mark Adams <mfad...@lbl.gov> wrote: How should he configure for this? Remove "--download-fblaslapack=1" and add: 1. If using gcc, module load mkl. With either compiler: export BLAS_LAPACK_LOAD=--with-blas-lapack-dir=${MKLROOT} 2. We define MPICH_HOME
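A sketch of how that convention might be used in practice; the variable name BLAS_LAPACK_LOAD is taken from the quote, while the way it is later handed to configure is an assumption here.

  # If using gcc, load MKL first so MKLROOT is defined (the Intel compiler
  # environment typically provides it already).
  module load mkl
  export BLAS_LAPACK_LOAD=--with-blas-lapack-dir=${MKLROOT}
  # Hand the stored option to configure together with the MPICH_HOME wrappers.
  ./configure ${BLAS_LAPACK_LOAD} \
              --with-cc=${MPICH_HOME}/mpicc \
              --with-cxx=${MPICH_HOME}/mpicxx \
              --with-fc=${MPICH_HOME}/mpif90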

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-27 Thread Mark Adams via petsc-dev
On Wed, Mar 27, 2019 at 12:06 AM Victor Eijkhout wrote: > On Mar 26, 2019, at 6:25 PM, Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote: > /home1/04906/bonnheim/olympus-keaveny/Olympus/olympus.petsc-3.9.3.skx-cxx-O on a skx-cxx-O named c478-062.stampede2.tacc.utexas.edu with

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-26 Thread Victor Eijkhout via petsc-dev
On Mar 26, 2019, at 6:25 PM, Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote: /home1/04906/bonnheim/olympus-keaveny/Olympus/olympus.petsc-3.9.3.skx-cxx-O on a skx-cxx-O named c478-062.stampede2.tacc.utexas.edu with 4800 processors,

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-26 Thread Mark Adams via petsc-dev
> The way to reduce the memory is to have the all-at-once algorithm (Mark is an expert on this). But I am not sure how efficiently it could be implemented. I have some data from a 3D elasticity problem with 1.4B equations on:

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-22 Thread Zhang, Hong via petsc-dev
Fande, The images are very interesting and helpful. How did you get these images? PETSc PtAP uses 753MB for PtAPSymbolic and only 116MB for PtAPNumeric, while hypre uses 215MB -- it seems hypre does not implement a symbolic PtAP. When I implemented PtAP, my focus was on the numeric part because it was
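On the question of how such per-phase memory images are produced: one hedged possibility (not necessarily what was used here) is a heap profiler such as Valgrind's massif, which attributes peak heap usage to call trees and so can separate the symbolic and numeric PtAP phases. ./my_app is a placeholder for the real application.

  mpiexec -n 4 valgrind --tool=massif ./my_app -matptap_via hypre
  ms_print massif.out.<pid>    # <pid> is filled in by valgrind per process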

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-21 Thread Mark Adams via petsc-dev
> Could you explain this more by adding some small examples? Since you are considering implementing all-at-once (four nested loops, right?), I'll give you my old code. This code is hardwired for two AMG and for a geometric-AMG, where the blocks of the R (and hence P) matrices are scaled

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-21 Thread Fande Kong via petsc-dev
Hi Mark, Thanks for your email. On Thu, Mar 21, 2019 at 6:39 AM Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote: > I'm probably screwing up some sort of history by jumping into dev, but this is a dev comment ... > (1) -matptap_via hypre: This calls the hypre package to do the PtAP

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-21 Thread Mark Adams via petsc-dev
I'm probably screwing up some sort of history by jumping into dev, but this is a dev comment ... > (1) -matptap_via hypre: This calls the hypre package to do the PtAP through an all-at-once triple product. In our experience, it is the most memory efficient, but could be slow. FYI, I visited
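For reference, a hedged sketch of how this option is selected at run time and compared against the default path; it assumes PETSc was configured with hypre support (e.g. --download-hypre), and ./my_app stands in for the real application.

  # Default PETSc PtAP (separate symbolic and numeric phases):
  mpiexec -n 64 ./my_app -memory_view
  # Route the triple product through hypre's all-at-once implementation:
  mpiexec -n 64 ./my_app -memory_view -matptap_via hypre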