Re: [petsc-users] About recent changes in GAMG

2024-04-24 Thread Jed Brown




Ashish Patel  writes:

> Hi Jed,
> VmRss is on a higher side and seems to match what PetscMallocGetMaximumUsage is reporting. HugetlbPages was 0 for me.
>
> Mark, running without the near nullspace also gives similar results. I have attached the malloc_view and gamg info for the serial and 2-core runs. Some of the standout functions on rank 0 for the parallel run seem to be
> 5.3 GB MatSeqAIJSetPreallocation_SeqAIJ
> 7.7 GB MatStashSortCompress_Private
> 10.1 GB PetscMatStashSpaceGet
> 7.7 GB  PetscSegBufferAlloc_Private
>
> malloc_view also says the following
> [0] Maximum memory PetscMalloc()ed 32387548912 maximum size of entire process 8270635008
> which fits the PetscMallocGetMaximumUsage > PetscMemoryGetMaximumUsage output.

This would occur if there was a large PetscMalloc'd block that did not get used (so only a portion of it is faulted and thus becomes resident).
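
For reference, here is a minimal sketch (not from this thread) of how the two high-water marks being compared above can be queried from a PETSc program. It assumes resident-memory tracking is switched on early via PetscMemorySetGetMaximumUsage(); depending on how PETSc was configured, the PetscMalloc() high-water mark may also require malloc logging (e.g. -malloc_debug) to be enabled.

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscLogDouble malloced, resident;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscMemorySetGetMaximumUsage()); /* start tracking the process high-water mark */

  /* ... the assembly/solve being investigated goes here ... */

  PetscCall(PetscMallocGetMaximumUsage(&malloced)); /* peak bytes obtained through PetscMalloc() */
  PetscCall(PetscMemoryGetMaximumUsage(&resident)); /* peak resident set size of the process */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "peak PetscMalloc %g B, peak resident %g B\n",
                        (double)malloced, (double)resident));
  PetscCall(PetscFinalize());
  return 0;
}

A large gap with the first number above the second, as in the malloc_view output quoted above, is consistent with malloc'd pages that were never faulted in.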

Can you run a heap profiler like heaptrack?

https://github.com/KDE/heaptrack



Re: [petsc-users] Spack build and ptscotch

2024-04-24 Thread Satish Balay via petsc-users
This is the complexity of maintaining dependencies (and dependencies of
dependencies) across different build systems:

- It's not easy to keep the "defaults" in both builds exactly the same.
- And it's not easy to expose all "variants", or to keep the same variants in
  both builds.
- And each pkg has its own issues that prevent some combinations from working
  [or tested combinations vs untested].

This e-mail query raises multiple things:

- understanding "why" the current impl of the [spack, petsc] build tools is
  the way it is
- whether they can be improved
- the build use cases that you need working
- [and subsequently getting your code working]

Addressing them all is not easy - so let's stick with what you need to make
progress.

For one, we recommend using the latest petsc version [i.e. 3.21, not 3.19] -
any fixes we make will target the current release.

> - spack: ptscotch will always be built without parmetis wrappers, can't turn 
> on

diff --git a/var/spack/repos/builtin/packages/petsc/package.py b/var/spack/repos/builtin/packages/petsc/package.py
index b7b1d86b15..ae27ba4c4e 100644
--- a/var/spack/repos/builtin/packages/petsc/package.py
+++ b/var/spack/repos/builtin/packages/petsc/package.py
@@ -268,9 +268,7 @@ def check_fortran_compiler(self):
 depends_on("metis@5:~int64", when="@3.8:+metis~int64")
 depends_on("metis@5:+int64", when="@3.8:+metis+int64")
 
-# PTScotch: Currently disable Parmetis wrapper, this means
-# nested disection won't be available thought PTScotch
-depends_on("scotch+esmumps~metis+mpi", when="+ptscotch")
+depends_on("scotch+esmumps+mpi", when="+ptscotch")
 depends_on("scotch+int64", when="+ptscotch+int64")
 
 depends_on("hdf5@:1.10+mpi", when="@:3.12+hdf5+mpi")

Now you can try:

spack install petsc~metis+ptscotch ^scotch+metis
vs
spack install petsc~metis+ptscotch ^scotch~metis   [~metis is the default for scotch]

Note the following comment in
spack/var/spack/repos/builtin/packages/scotch/package.py:

# Vendored dependency of METIS/ParMETIS conflicts with standard
# installations
conflicts("metis", when="+metis")
conflicts("parmetis", when="+metis")

> - classical: ptscotch will always be built with parmetis wrappers, can't seem 
> to turn off

Looks like Spack uses the CMake build of PTScotch, while PETSc uses the
Makefile interface. The latter likely doesn't support turning off the metis
wrappers [without hacks].

So you might either need to hack the scotch build via petsc - or just install
scotch separately - and use it with petsc.

I see an effort to migrate the scotch build in petsc to CMake:

https://gitlab.com/petsc/petsc/-/merge_requests/7242/
https://gitlab.com/petsc/petsc/-/merge_requests/7495/

Satish

On Wed, 24 Apr 2024, Daniel Stone wrote:

> Hi PETSc community,
> 
> I've been looking at using Spack to build PETSc, in particular I need to
> disable the default metis/parmetis dependencies and use PTScotch instead,
> for our software.
> I've had quite a bit of trouble with this - it seems like something in the
> resulting build of our simulator ends up badly optimised and an mpi
> bottleneck, when I build against
> PETSc built with Spack.
> 
> I've been trying to track this down, and noticed this in the PETSc Spack
> build recipe:
> 
> # PTScotch: Currently disable Parmetis wrapper, this means
> # nested disection won't be available thought PTScotch
> depends_on("scotch+esmumps~metis+mpi", when="+ptscotch")
> depends_on("scotch+int64", when="+ptscotch+int64")
> 
> 
> Sure enough - when I compare the build with Spack and a traditional build
> with ./configure etc, I see that, in the traditional build, Scotch is
> always built with the parmetis wrapper,
> but not in the Spack build. In fact, I'm not sure how to turn off the
> parmetis wrapper option for scotch, in the case of a traditional build
> (i.e. there doesn't seem to be a flag in the
> configure script for it) - which would be a very useful test for me (I can
> of course do similar experiments by doing a classical build of petsc
> against ptscotch built separately without the
> wrappers, etc - will try that).
> 
> Does anyone know why the parmetis wrapper is always disabled in the spack
> build options? Is there something about Spack that would prevent it from
> working? It's notable - but I might
> be missing it - that there's no warning that there's a difference in the
> way ptscotch is built between the spack and classical builds:
> - classical: ptscotch will always be built with parmetis wrappers, can't
> seem to turn off
> - spack: ptscotch will always be built without parmetis wrappers, can't
> turn on
> 
> Any insight at all would be great, I'm new to Spack and am not super
> familiar with the logic that goes into setting up builds for the system.

Re: [petsc-users] CUDA GPU supported KSPs and PCs

2024-04-24 Thread Barry Smith

   It is less a question of what KSPs and PCs support running with CUDA and
more a question of what parts of each KSP and PC run with CUDA (and which
parts don't, causing memory traffic back and forth between the CPU and GPU).

   Generally speaking, all the PETSc Vec operations run on CUDA, thus "all"
the KSPs "support CUDA". For Mat operations it is more complicated: triangular
solves do not run well (or at all) on CUDA, but most of the other operations
do run on CUDA. Since setting up and solving with some PCs involves rather
complicated Mat operations (like PCGAMG and PCFIELDSPLIT), parts may work on
CUDA and parts may not.

   The best way to determine how the GPU is being utilized is to run with
-log_view and look at the columns that report the amount of memory traffic
between the CPU and GPU and the percentage of floating-point work done on the
GPU. Feel free to ask specific questions about the output. In some cases,
given the output, we may be able to add missing CUDA support to decrease the
memory traffic between the CPU and GPU and increase the flops done on the GPU.
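
As an illustration (not something from this thread), a minimal driver along the following lines can be run with, e.g., -mat_type aijcusparse -vec_type cuda -ksp_type cg -pc_type gamg -log_view to inspect the per-event CPU-GPU copy counts and the percentage of flops done on the GPU; the 1D Laplacian is only a placeholder problem.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PetscInt i, n = 100, Istart, Iend;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* 1D Laplacian; the matrix type (and hence GPU execution) is chosen at run time */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A)); /* honors -mat_type aijcusparse */
  PetscCall(MatSetUp(A));
  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (i = Istart; i < Iend; i++) {
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatCreateVecs(A, &x, &b)); /* vectors get a type compatible with the matrix */
  PetscCall(VecSet(b, 1.0));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp)); /* pick the KSP/PC at run time, e.g. -ksp_type cg -pc_type gamg */
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

The -log_view output for such a run then shows, event by event, which parts of the chosen KSP/PC stayed on the GPU and which fell back to the CPU.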

   We cannot produce a table of what is "supported", what is not, or even how
much is supported, since there are so many possible combinations; hence it is
best to run and determine the problematic places.

> On Apr 24, 2024, at 10:22 AM, Giyantha Binu Amaratunga Mukadange 
>  wrote:
> 
> Hi, 
> 
> Is it possible to know which KSPs and PCs currently support running on Nvidia 
> GPUs with CUDA, or a source that has this information?
> The following page doesn't provide details about the supported KSPs and PCs. 
> https://petsc.org/main/overview/gpu_roadmap/
> 
> 
> Thank you very much!
> 
> Best regards, 
> Binu



[petsc-users] CUDA GPU supported KSPs and PCs

2024-04-24 Thread Giyantha Binu Amaratunga Mukadange
Hi,


Is it possible to know which KSPs and PCs currently support running on Nvidia 
GPUs with CUDA, or a source that has this information?

The following page doesn't provide details about the supported KSPs and PCs.

https://petsc.org/main/overview/gpu_roadmap/


Thank you very much!


Best regards,

Binu


[petsc-users] Spack build and ptscotch

2024-04-24 Thread Daniel Stone
Hi PETSc community,

I've been looking at using Spack to build PETSc; in particular, for our
software I need to disable the default metis/parmetis dependencies and use
PTScotch instead.
I've had quite a bit of trouble with this - it seems like something in the
resulting build of our simulator ends up badly optimised and becomes an MPI
bottleneck when I build against PETSc built with Spack.

I've been trying to track this down, and noticed this in the PETSc Spack
build recipe:

# PTScotch: Currently disable Parmetis wrapper, this means
# nested disection won't be available thought PTScotch
depends_on("scotch+esmumps~metis+mpi", when="+ptscotch")
depends_on("scotch+int64", when="+ptscotch+int64")


Sure enough - when I compare the build with Spack and a traditional build
with ./configure etc., I see that, in the traditional build, Scotch is
always built with the parmetis wrapper, but not in the Spack build. In fact,
I'm not sure how to turn off the parmetis wrapper option for Scotch in the
case of a traditional build (i.e. there doesn't seem to be a flag in the
configure script for it) - which would be a very useful test for me (I can
of course do similar experiments by doing a classical build of PETSc against
PTScotch built separately without the wrappers, etc. - will try that).

Does anyone know why the parmetis wrapper is always disabled in the spack
build options? Is there something about Spack that would prevent it from
working? It's notable - but I might
be missing it - that there's no warning that there's a difference in the
way ptscotch is built between the spack and classical builds:
- classical: ptscotch will always be built with parmetis wrappers, can't
seem to turn off
- spack: ptscotch will always be built without parmetis wrappers, can't
turn on

Any insight at all would be great, I'm new to Spack and am not super
familiar with the logic that goes into setting up builds for the system.

Here is the kind of command I give to Spack for PETSc builds, which may
well be less than ideal:

spack install petsc@3.19.1 ~metis +ptscotch ^hdf5 +fortran +hl

Separate tiny note: when building with hdf5, I have to ensure that the
fortran flag is set for it, as above. There's a fortran flag for the petsc
module, default true, and a fortran flag for the hdf5 module, default false.
A naive user (i.e. me) will see the fortran flag for the petsc module and
assume that all dependencies will correspondingly be built with fortran
capability - then see that hdf5.mod is missing when trying to build their
software against petsc. It's the old "did you forget
--with-hdf5-fortran-bindings?" issue, resurrected for a new build system.

Thanks,

Daniel