Re: [petsc-users] Running CG with HYPRE AMG preconditioner in AMD GPUs

2024-03-19 Thread Vanella, Marcos (Fed) via petsc-users
Ok, thanks. I'll try it when the machine comes back online.
Cheers,
M

From: Mark Adams 
Sent: Tuesday, March 19, 2024 5:15 PM
To: Vanella, Marcos (Fed) 
Cc: PETSc users list 
Subject: Re: [petsc-users] Running CG with HYPRE AMG preconditioner in AMD GPUs

You want: -mat_type aijhipsparse
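
For example (a sketch only; the executable name, its input file, and the launcher
flags are placeholders for whatever your application uses), the Frontier run line
would then carry something like:

  srun -n 8 ./my_app input.dat -vec_type hip -mat_type aijhipsparse \
       -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg -log_view -log_view_gpu_time

i.e. plain HIP vectors and hipSPARSE matrices handed to hypre, instead of the Kokkos
matrix type.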

On Tue, Mar 19, 2024 at 5:06 PM Vanella, Marcos (Fed) 
<marcos.vane...@nist.gov> wrote:
Hi Mark, thanks. I'll try your suggestions. So, I would keep -mat_type 
mpiaijkokkos but -vec_type hip as runtime options?
Thanks,
Marcos

From: Mark Adams <mfad...@lbl.gov>
Sent: Tuesday, March 19, 2024 4:57 PM
To: Vanella, Marcos (Fed) <marcos.vane...@nist.gov>
Cc: PETSc users list <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Running CG with HYPRE AMG preconditioner in AMD GPUs

[keep on list]

I have little experience with running hypre on GPUs but others might have more.

1M dofs/node is not a lot, and NVIDIA has larger L1 cache and more mature
compilers, etc., so it is not surprising that NVIDIA is faster.
I suspect the gap would narrow with a larger problem.

Also, why are you using Kokkos? It should not make a difference but you could 
check easily. Just use -vec_type hip with your current code.

You could also test with GAMG, -pc_type gamg
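
As a concrete (hypothetical) sketch of that comparison, the same run could be repeated
with only the preconditioner option changed, e.g.:

  ./my_app <args> -vec_type hip -ksp_type cg -pc_type hypre -ksp_monitor -log_view
  ./my_app <args> -vec_type hip -ksp_type cg -pc_type gamg  -ksp_monitor -log_view

and the KSPSolve and PCApply lines of the two -log_view outputs compared directly.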

Mark


On Tue, Mar 19, 2024 at 4:12 PM Vanella, Marcos (Fed) 
<marcos.vane...@nist.gov> wrote:
Hi Mark, I ran a canonical test we use to time our code. It is a propane fire
on a burner within a box with around 1 million cells.
I split the problem across 4 GPUs on a single node, both on Polaris and Frontier. I
compiled PETSc with the gnu compilers, with HYPRE being downloaded, and the following
configure options:


  * Polaris:
$./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3" FCOPTFLAGS="-O3" 
CUDAOPTFLAGS="-O3" --with-debugging=0 --download-suitesparse --download-hypre 
--with-cuda --with-cc=cc --with-cxx=CC --with-fc=ftn --with-cudac=nvcc 
--with-cuda-arch=80 --download-cmake


  * Frontier:
$./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3" FCOPTFLAGS="-O3" 
HIPOPTFLAGS="-O3" --with-debugging=0 --with-cc=cc --with-cxx=CC --with-fc=ftn 
--with-hip --with-hipc=hipcc --LIBS="-L${MPICH_DIR}/lib -lmpi 
${PE_MPICH_GTL_DIR_amd_gfx90a} ${PE_MPICH_GTL_LIBS_amd_gfx90a}" 
--download-kokkos --download-kokkos-kernels --download-suitesparse 
--download-hypre --download-cmake

Our code was also compiled with the gnu compilers and the -O3 flag. I used the latest
(from this week) PETSc repo update. These are the timings for the test case:


  *   8 meshes + 1 million cells case, 8 MPI processes, 4 GPUs, 2 MPI procs per
GPU, 1 sec run time (~580 time steps, ~1160 Poisson solves):

System     Poisson Solver   GPU Implementation   Poisson Wall time (sec)   Total Wall time (sec)
Polaris    CG + HYPRE PC    CUDA                 80                        287
Frontier   CG + HYPRE PC    Kokkos + HIP         158                       401

It is interesting to see that the Poisson solves take twice as long on
Frontier as on Polaris.
Do you have experience running HYPRE AMG on these machines? Is this
difference between the CUDA implementation and Kokkos-kernels to be expected?

I can run the case on both computers with the log flags you suggest. That might give
more information on where the differences are.

Thank you for your time,
Marcos



From: Mark Adams <mfad...@lbl.gov>
Sent: Tuesday, March 5, 2024 2:41 PM
To: Vanella, Marcos (Fed) <marcos.vane...@nist.gov>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Running CG with HYPRE AMG preconditioner in AMD GPUs

You can run with -log_view_gpu_time to get rid of the nans and get more data.

You can run with -ksp_view to get more info on the solver and send that output.

-options_left is also good to use so we can see what parameters you used.
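
Put together, a diagnostic run might look something like this (a sketch; the executable
and its own arguments are placeholders):

  ./my_app <args> -ksp_type cg -pc_type hypre -log_view -log_view_gpu_time \
           -ksp_view -options_left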

The last 100 in this row:

KSPSolve            1197 0.0 2.0291e+02 0.0 2.55e+11 0.0 3.9e+04 8.0e+04
3.1e+04 12 100 100 100 49  12 100 100 100 98  2503    -nan      0 1.80e-05    0
0.00e+00  100

tells us that all the flops were logged on GPUs.

You do need at least 100K equations per GPU to see speedup, so don't worry 
about small problems.

Mark




On Tue, Mar 5, 2024 at 12:52 PM Vanella, Marcos (Fed) via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Hi all, I compiled the latest PETSc source in Frontier using gcc+kokkos and hip 
options:

./configure 

Re: [petsc-users] Running CG with HYPRE AMG preconditioner in AMD GPUs

2024-03-19 Thread Mark Adams
[keep on list]

I have little experience with running hypre on GPUs but others might have
more.

1M dofs/node is not a lot, and NVIDIA has larger L1 cache and more mature
compilers, etc., so it is not surprising that NVIDIA is faster.
I suspect the gap would narrow with a larger problem.

Also, why are you using Kokkos? It should not make a difference but you
could check easily. Just use -vec_type hip with your current code.

You could also test with GAMG, -pc_type gamg

Mark


On Tue, Mar 19, 2024 at 4:12 PM Vanella, Marcos (Fed) <
marcos.vane...@nist.gov> wrote:

> Hi Mark, I ran a canonical test we use to time our code. It is a propane
> fire on a burner within a box with around 1 million cells.
> I split the problem across 4 GPUs on a single node, both on Polaris and Frontier.
> I compiled PETSc with the gnu compilers, with HYPRE being downloaded, and the
> following configure options:
>
>
>- Polaris:
>$./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3"
>FCOPTFLAGS="-O3" CUDAOPTFLAGS="-O3" --with-debugging=0
>--download-suitesparse --download-hypre --with-cuda --with-cc=cc
>--with-cxx=CC --with-fc=ftn --with-cudac=nvcc --with-cuda-arch=80
>--download-cmake
>
>
>
>- Frontier:
>$./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3"
>FCOPTFLAGS="-O3" HIPOPTFLAGS="-O3" --with-debugging=0 --with-cc=cc
>--with-cxx=CC --with-fc=ftn --with-hip --with-hipc=hipcc
>--LIBS="-L${MPICH_DIR}/lib -lmpi ${PE_MPICH_GTL_DIR_amd_gfx90a}
>${PE_MPICH_GTL_LIBS_amd_gfx90a}" --download-kokkos
>--download-kokkos-kernels --download-suitesparse --download-hypre
>--download-cmake
>
>
> Our code was also compiled with the gnu compilers and the -O3 flag. I used the
> latest (from this week) PETSc repo update. These are the timings for the test case:
>
>
>- 8 meshes + 1 million cells case, 8 MPI processes, 4 GPUs, 2 MPI procs
>per GPU, 1 sec run time (~580 time steps, ~1160 Poisson solves):
>
>
> System     Poisson Solver   GPU Implementation   Poisson Wall time (sec)   Total Wall time (sec)
> Polaris    CG + HYPRE PC    CUDA                 80                        287
> Frontier   CG + HYPRE PC    Kokkos + HIP         158                       401
>
> It is interesting to see that the Poisson solves take twice as long on
> Frontier as on Polaris.
> Do you have experience running HYPRE AMG on these machines? Is this
> difference between the CUDA implementation and Kokkos-kernels to be
> expected?
>
> I can run the case on both computers with the log flags you suggest. That might
> give more information on where the differences are.
>
> Thank you for your time,
> Marcos
>
>
> --
> *From:* Mark Adams 
> *Sent:* Tuesday, March 5, 2024 2:41 PM
> *To:* Vanella, Marcos (Fed) 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* Re: [petsc-users] Running CG with HYPRE AMG preconditioner in
> AMD GPUs
>
> You can run with -log_view_gpu_time to get rid of the nans and get more
> data.
>
> You can run with -ksp_view to get more info on the solver and send that
> output.
>
> -options_left is also good to use so we can see what parameters you used.
>
> The last 100 in this row:
>
> KSPSolve            1197 0.0 2.0291e+02 0.0 2.55e+11 0.0 3.9e+04 8.0e+04
> 3.1e+04 12 100 100 100 49  12 100 100 100 98  2503    -nan      0 1.80e-05
>    0 0.00e+00  100
>
> tells us that all the flops were logged on GPUs.
>
> You do need at least 100K equations per GPU to see speedup, so don't worry
> about small problems.
>
> Mark
>
>
>
>
> On Tue, Mar 5, 2024 at 12:52 PM Vanella, Marcos (Fed) via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hi all, I compiled the latest PETSc source in Frontier using gcc+kokkos
> and hip options:
>
> ./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3"
> FCOPTFLAGS="-O3" HIPOPTFLAGS="-O3" --with-debugging=0 --with-cc=cc
> --with-cxx=CC --with-fc=ftn --with-hip --with-hipc=hipcc
> --LIBS="-L${MPICH_DIR}/lib -lmpi ${PE_MPICH_GTL_DIR_amd_gfx90a}
> ${PE_MPICH_GTL_LIBS_amd_gfx90a}" --download-kokkos
> --download-kokkos-kernels --download-suitesparse --download-hypre
> --download-cmake
>
> and have started testing our code solving a Poisson linear system with CG
> + HYPRE preconditioner. Timings look rather high compared to compilations
> done on other machines that have NVIDIA cards. They are also not changing
> when using more than one GPU for the simple test I am doing.
> Does anyone happen to know if HYPRE has a HIP GPU implementation for
> BoomerAMG, and is it compiled when configuring PETSc?
>
> Thanks!
>
> Marcos
>
>
> PS: This is what I see on the 
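
For reference, a minimal sketch of the CG + hypre BoomerAMG setup discussed in this
thread is below. This is not the application code itself: the matrix/vector assembly
and error handling are omitted, the function name is made up, it assumes a recent
PETSc (PetscCall()/PETSC_SUCCESS), and whether BoomerAMG itself runs on the GPU still
depends on how hypre was configured (e.g. built together with --with-hip).

  #include <petscksp.h>

  /* Sketch: solve A x = b with CG preconditioned by hypre BoomerAMG.
     A and b are assumed to be assembled elsewhere; with -vec_type hip and
     -mat_type aijhipsparse the Krylov work stays on the GPU. */
  static PetscErrorCode SolvePoisson(Mat A, Vec b, Vec x)
  {
    KSP ksp;
    PC  pc;

    PetscFunctionBeginUser;
    PetscCall(KSPCreate(PetscObjectComm((PetscObject)A), &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetType(ksp, KSPCG));
    PetscCall(KSPGetPC(ksp, &pc));
    PetscCall(PCSetType(pc, PCHYPRE));
    PetscCall(PCHYPRESetType(pc, "boomeramg"));
    PetscCall(KSPSetFromOptions(ksp));  /* picks up -ksp_*, -pc_* and hypre options */
    PetscCall(KSPSolve(ksp, b, x));
    PetscCall(KSPDestroy(&ksp));
    PetscFunctionReturn(PETSC_SUCCESS);
  }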

Re: [petsc-users] Using PetscPartitioner on WINDOWS

2024-03-19 Thread Satish Balay via petsc-users
Check 
https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/jobs/6412623047__;!!G_uCfscf7eWS!ZAg_b85bAvm8-TShDMHvxaXIu77pjwlDqU2g9AXQSNNw0gmk3peDktdf8MsGAq3jHLTJHo6WSPGyEe5QrCJ-fN0$
  for a successful build of the latest petsc-3.20 [i.e., the release branch in git] with
metis and parmetis.

Note the usage:

  '--with-cc=cl',
  '--with-cxx=cl',
  '--with-fc=ifort',


Satish
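
Combining those compiler options with the options from the earlier attempt, a
configure line along these lines (a sketch only, not the exact CI line; the
Cygwin/oneAPI paths are taken from the quoted message below and may need adjusting)
should let petsc-3.20 build metis and parmetis with the Microsoft/Intel compilers:

  ./configure --with-debugging=0 --with-cc=cl --with-cxx=cl --with-fc=ifort \
    --with-shared-libraries=0 --download-fblaslapack \
    --with-mpi-include=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/include \
    --with-mpi-lib=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/lib/release/impi.lib \
    --with-mpiexec=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/bin/mpiexec \
    --download-metis --download-parmetis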

On Tue, 19 Mar 2024, Barry Smith wrote:

> 
>   Are you not able to use PETSc 3.20.2 ?
> 
>   On Mar 19, 2024, at 5:27 AM, 程奔  wrote:
> 
> Hi, Barry
> 
> I tried to use PETSc version 3.19.5 on Windows, but it encountered a problem.
> 
> 
>  
> *******************************************************************************
>          UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
> -------------------------------------------------------------------------------
>                       Error configuring METIS with CMake
> *******************************************************************************
> 
> configure.log is attached.
> 
> 
> Looking forward to your reply!
> 
> sincerely,
> 
> Ben.
> 
> 
> 
>   -----Original Message-----
>   From: "Barry Smith" 
>   Sent: 2024-03-18 21:11:14 (Monday)
>   To: 程奔 <202321009...@mail.scut.edu.cn>
>   Cc: petsc-users@mcs.anl.gov
>   Subject: Re: [petsc-users] Using PetscPartitioner on WINDOWS
> 
> 
> Please switch to the latest PETSc version; it supports Metis and Parmetis on
> Windows.
>   Barry
> 
> 
>   On Mar 17, 2024, at 11:57 PM, 程奔 <202321009...@mail.scut.edu.cn> wrote:
> 
> 
> Hello,
> 
> Recently I tried to install PETSc with Cygwin since I'd like to use PETSc with
> Visual Studio on the Windows 10 platform. For the sake of clarity, I first list
> the software/packages used below:
> 1. PETSc: version 3.16.5
> 2. VS: version 2022 
> 3. Intel MPI: download Intel oneAPI Base Toolkit and HPC Toolkit
> 4. Cygwin
> 
> 
> On Windows, I then try to calculate a simple cantilever beam that uses a
> tetrahedral mesh, so it is an unstructured grid.
> I use DMPlexCreateFromFile() to create the dmplex.
> 
> And then I want to distribute the mesh using the PETSCPARTITIONERPARMETIS
> type (in my opinion this PetscPartitioner type may be the best for dmplex;
> see fig 1 from my work comparing different PetscPartitioner types for a
> cantilever beam on a Linux system.)
> 
> But unfortunately, when I try to use parmetis on Windows and configure PETSc
> as follows:
> 
> 
>  ./configure  --with-debugging=0  --with-cc='win32fe cl' --with-fc='win32fe 
> ifort' --with-cxx='win32fe cl'  
> 
> --download-fblaslapack=/cygdrive/g/mypetsc/petsc-pkg-fblaslapack-e8a03f57d64c.tar.gz
>   --with-shared-libraries=0 
> 
> --with-mpi-include=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/include
>  --with-mpi-lib=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/lib/release/impi.lib 
> --with-mpiexec=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/bin/mpiexec 
> --download-parmetis=/cygdrive/g/mypetsc/petsc-pkg-parmetis-475d8facbb32.tar.gz
>  
> --download-metis=/cygdrive/g/mypetsc/petsc-pkg-metis-ca7a59e6283f.tar.gz 
> 
> 
> 
> 
> it shows that 
> ***
> External package metis does not support --download-metis with Microsoft 
> compilers
> ***
> configure.log and make.log are attached.
> 
> 
> 
> If I use the PetscPartitioner Simple type, the computation time is much longer
> than with the PETSCPARTITIONERPARMETIS type.
> 
> So on a Windows system I want to use a PetscPartitioner like parmetis, or
> any other PetscPartitioner type that can do the same work as parmetis.
>
> Or I could just try to download parmetis separately on Windows (like this website,
> https://urldefense.us/v3/__https://boogie.inm.ras.ru/terekhov/INMOST/-/wikis/0204-Compilation-ParMETIS-Windows__;!!G_uCfscf7eWS!ZAg_b85bAvm8-TShDMHvxaXIu77pjwlDqU2g9AXQSNNw0gmk3peDktdf8MsGAq3jHLTJHo6WSPGyEe5Qgw3sA7A$
>  ) 
> 
> and then use Visual Studio to use its library. I don't know whether PETSc
> could use it successfully this way or not.
> 
> 
> So I write this email to report my problem and ask for your help.
>
> Looking forward to your reply!
> 
> 
> sincerely,
> Ben.
> 
> 
> 
> 
> 
>
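
For the DMPlex workflow described in this thread (reading a mesh with
DMPlexCreateFromFile() and distributing it with a ParMETIS partitioner), a minimal
sketch in C follows. It assumes a recent PETSc (PetscCall(), the plexname argument
to DMPlexCreateFromFile()); the mesh filename is a placeholder, and the partitioner
can also be selected at runtime with -petscpartitioner_type parmetis.

  #include <petscdmplex.h>

  /* Sketch: read an unstructured mesh and distribute it with ParMETIS. */
  int main(int argc, char **argv)
  {
    DM               dm, dmDist = NULL;
    PetscPartitioner part;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, "beam.msh", "beam", PETSC_TRUE, &dm));
    PetscCall(DMPlexGetPartitioner(dm, &part));
    PetscCall(PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS));
    PetscCall(PetscPartitionerSetFromOptions(part)); /* e.g. -petscpartitioner_type parmetis */
    PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist)); /* overlap 0; migration SF not needed here */
    if (dmDist) {
      PetscCall(DMDestroy(&dm));
      dm = dmDist;
    }
    /* ... set up and solve the FEM problem on dm ... */
    PetscCall(DMDestroy(&dm));
    PetscCall(PetscFinalize());
    return 0;
  }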