This is exactly the crux of the matter: with this precompile command, there is
no more error! Thanks for your patience with my many continuing questions.
Thank you very much!!
--Original--
The macros expand differently depending on the compiler being used. In this
case
#if defined(PETSC_HAVE_BUILTIN_EXPECT)
#define PetscUnlikely(cond) __builtin_expect(!!(cond), 0)
#define PetscLikely(cond) __builtin_expect(!!(cond), 1)
#else
#define PetscUnlikely(cond) (cond)
#define PetscLikely(cond) (cond)
#endif
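Where the compiler supports __builtin_expect, the error branch is hinted as cold so the common (success) path stays fast; otherwise the macros reduce to the bare condition. A minimal usage sketch (not from the thread; SomeFunction is a placeholder):

  PetscErrorCode ierr = SomeFunction();
  if (PetscUnlikely(ierr)) return ierr; /* error branch predicted not taken */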
On Tue, Jun 27, 2023 at 2:56 PM Vanella, Marcos (Fed) <
marcos.vane...@nist.gov> wrote:
> Thank you Matt. I'll try the flags you recommend for monitoring. Correct,
> I'm trying to see if GPU would provide an advantage for this particular
> Poisson solution we do in our code.
>
> Our grids are
Sorry, meant 100K to 200K cells.
Also, check the release page of SuiteSparse; the multi-GPU version of CHOLMOD
might be coming soon:
https://people.engr.tamu.edu/davis/SuiteSparse/index.html
From: Vanella, Marcos (Fed)
Sent: Tuesday, June 27, 2023 2:56 PM
Thank you Matt. I'll try the flags you recommend for monitoring. Correct, I'm
trying to see if GPU would provide an advantage for this particular Poisson
solution we do in our code.
Our grids are staggered with the Poisson unknown in cell centers. All my tests
for single mesh runs with 100K to
On Tue, Jun 27, 2023 at 2:20 PM Duan Junming via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Dear all,
>
>
> I am trying to create a compatible sparse MPI matrix A for a DMPlex global
> vector x, so that I can do the matrix-vector multiplication y = A*x.
>
> I think I can first get the local and global sizes
Dear all,
I am trying to create a compatible sparse MPI matrix A for a DMPlex global vector x,
so that I can do the matrix-vector multiplication y = A*x.
I think I can first get the local and global sizes of x on comm, say n and N,
and likewise the sizes of y, say m and M,
then create A using MatCreate(comm, &A), set the
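A minimal sketch of the approach described above; this is not from the thread, ApplyOperator is a hypothetical helper name, and PetscCall()/PETSC_SUCCESS assume a recent PETSc:

  #include <petscdmplex.h>

  static PetscErrorCode ApplyOperator(DM dm, Vec x, Vec y)
  {
    Mat      A;
    PetscInt m, n, M, N;

    PetscFunctionBeginUser;
    PetscCall(VecGetLocalSize(x, &n)); /* local/global column sizes from x */
    PetscCall(VecGetSize(x, &N));
    PetscCall(VecGetLocalSize(y, &m)); /* local/global row sizes from y */
    PetscCall(VecGetSize(y, &M));
    PetscCall(MatCreate(PetscObjectComm((PetscObject)dm), &A));
    PetscCall(MatSetSizes(A, m, n, M, N));
    PetscCall(MatSetType(A, MATAIJ));
    PetscCall(MatSetUp(A));
    /* ... fill entries with MatSetValues(), then assemble ... */
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatMult(A, x, y)); /* y = A*x */
    PetscCall(MatDestroy(&A));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

Note that DMCreateMatrix(dm, &A) is the usual shortcut: it returns a matrix whose layout (and preallocation) already match the DM's global vectors.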
On Tue, Jun 27, 2023 at 11:23 AM Vanella, Marcos (Fed) <
marcos.vane...@nist.gov> wrote:
> Hi Mark and Matt, I tried swapping the preconditioner to CHOLMOD and also
> hypre's BoomerAMG. They work just fine for my case. I also got my hands
> on a machine with NVIDIA GPUs in one of our AI
Hi Jed,
Thanks for your reply. I have sent the log files to petsc-ma...@mcs.anl.gov.
Zisheng
From: Jed Brown
Sent: Tuesday, June 27, 2023 1:02 PM
To: Zisheng Ye ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] GAMG and Hypre preconditioner
Regarding PetscCall(): it sounds like you are working with two different
versions of PETSc with different compilers? That isn't practical, since things
do change (improve, we hope) with newer versions of PETSc. You should just build
the latest version of PETSc with all the compiler suites you
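For reference, the error-checking idiom itself is one of the things that changed across versions; a side-by-side sketch, assuming a PETSc new enough to provide PetscCall():

  PetscErrorCode ierr;
  ierr = MatMult(A, x, y);CHKERRQ(ierr); /* older idiom, still widely seen */

  PetscCall(MatMult(A, x, y));           /* newer idiom, same effect */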
I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which adds
a couple more scaling applications of the inverse of the diagonal of A
On Mon, Jun 26, 2023 at 6:06 PM Alexander Lindsay
wrote:
> I guess that, similar to the discussions about selfp, the approximation of
> the velocity
Zisheng Ye via petsc-users writes:
> Dear PETSc Team
>
> We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG
> and Hypre preconditioners. We have encountered several issues that we would
> like to ask for your suggestions.
>
> First, we have a couple of questions when
Dear PETSc Team
We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG and
Hypre preconditioners. We have encountered several issues that we would like to
ask for your suggestions.
First, we have a couple of questions when working with a single MPI rank:
1. We have
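For context, once PETSc is built with CUDA, GPU execution of KSPSolve is typically requested with run-time options; a sketch of common ones (an assumption, since the thread's exact command line is not shown, and ./app stands for your executable):

  ./app -vec_type cuda -mat_type aijcusparse -ksp_type cg -pc_type gamg -log_view

With a CUDA-enabled hypre, -pc_type hypre -pc_hypre_type boomeramg selects BoomerAMG instead; -log_view reports how much of the work actually ran on the GPU.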
OK, great! I'll try it out soon.
Thank you,
Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science and Mathematics Division
Oak Ridge National Laboratory
From: Junchao Zhang
Hi Mark and Matt, I tried swapping the preconditioner to CHOLMOD and also
hypre's BoomerAMG. They work just fine for my case. I also got my hands on a
machine with NVIDIA GPUs in one of our AI clusters. I compiled PETSc to make
use of CUDA and CUDA-enabled Open MPI (with gcc).
I'm running the
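For reference, a configure line along these lines produces such a build (a sketch, not the exact command from the thread; paths and flags vary by system):

  ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --with-cuda=1 --download-hypre --download-suitesparse

Here --download-hypre and --download-suitesparse pull in BoomerAMG and CHOLMOD when they are not already installed.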
Hi, Philip,
It's my fault; I should have followed up earlier. This problem was fixed by
https://gitlab.com/petsc/petsc/-/merge_requests/6586.
Could you try petsc/main?
Thanks.
--Junchao Zhang
On Tue, Jun 27, 2023 at 9:30 AM Fackler, Philip wrote:
> Good morning Junchao! I'm following up
Good morning Junchao! I'm following up here to see if there is any update to
petsc to resolve this issue, or if we need to come up with a work-around.
Thank you,
Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science