Re: [petsc-users] Preconditioning of Liouvillian Superoperator

2024-01-31 Thread Barry Smith
For large problems, preconditioners have to take advantage of some underlying mathematical structure of the operator to perform well (require few iterations). Just black-boxing the system with simple preconditioners will not be effective. So, one needs to look at the Liouvillian
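A minimal sketch (not from the thread) of the "black box" baseline Barry refers to: leave the Krylov method and preconditioner configurable at run time and try simple preconditioners from the command line. It assumes an already-assembled Mat A and Vecs b, x; the function name is made up for illustration.

#include <petscksp.h>

/* Minimal "black box" solve: solver and preconditioner are chosen at run
   time, e.g. -ksp_type gmres -pc_type jacobi (or ilu, sor, gamg, ...). */
PetscErrorCode SolveBlackBox(Mat A, Vec b, Vec x)
{
  KSP ksp;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));  /* reads -ksp_type / -pc_type from the command line */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}

With this in place, -pc_type jacobi, -pc_type ilu, -pc_type gamg, etc. can be compared without recompiling; the point of the reply is that for a large Liouvillian none of these generic choices is likely to be effective on its own.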

[petsc-users] errors with hypre with MPI and multiple GPUs on a node

2024-01-31 Thread Yesypenko, Anna
Dear PETSc devs, I'm encountering an error running hypre on a single node with multiple GPUs. The issue is in the setup phase. I'm trying to troubleshoot, but don't know where to start. Are the system routines PetscCUDAInitialize and PetscCUDAInitializeCheck available in python? How do I verify

Re: [petsc-users] errors with hypre with MPI and multiple GPUs on a node

2024-01-31 Thread Junchao Zhang
Hi Anna, Since you said "The code works with pc-type hypre on a single GPU.", I was wondering if this is a problem with binding CUDA devices to MPI ranks. You can search the TACC documentation to find how its job scheduler binds GPUs to MPI ranks (usually by manipulating the CUDA_VISIBLE_DEVICES
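One scheduler-independent way to express the binding Junchao describes is to assign each node-local rank its own device before PETSc initializes. A hedged sketch in C, assuming one MPI process per GPU and the CUDA runtime available (schedulers typically achieve the same effect by setting CUDA_VISIBLE_DEVICES per rank):

#include <mpi.h>
#include <cuda_runtime.h>
#include <petscsys.h>

int main(int argc, char **argv)
{
  MPI_Comm node_comm;
  int      local_rank, ndev;

  MPI_Init(&argc, &argv);
  /* communicator of the ranks sharing this node, then the rank within it */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &node_comm);
  MPI_Comm_rank(node_comm, &local_rank);

  cudaGetDeviceCount(&ndev);
  if (ndev > 0) cudaSetDevice(local_rank % ndev);  /* round-robin ranks over the node's GPUs */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* ... create Mat/Vec, e.g. with -mat_type aijcusparse -vec_type cuda -pc_type hypre ... */
  PetscCall(PetscFinalize());

  MPI_Comm_free(&node_comm);
  MPI_Finalize();
  return 0;
}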

[petsc-users] Preconditioning of Liouvillian Superoperator

2024-01-31 Thread Niclas Götting
Hi all, I've been trying for the last couple of days to solve a linear system using iterative methods. The system size itself scales exponentially (64^N) with the number of components, so I receive sizes of * (64, 64) for one component * (4096, 4096) for two components * (262144, 262144) for

Re: [petsc-users] Preconditioning of Liouvillian Superoperator

2024-01-31 Thread Mark Adams
Iterative solvers have to be designed for your particular operator. You want to look in your field to see how people solve these problems. (e.g., zeros on the diagonal will need something like a block solver or maybe ILU with a particular ordering.) I don't personally know anything about this
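In PETSc terms, "ILU with a particular ordering" could look like the following sketch (not from the thread): it assumes ksp already has its operators set, and RCM ordering plus a nonzero shift are example choices, not a recommendation from the reply.

#include <petscksp.h>

PetscErrorCode ConfigureILUWithOrdering(KSP ksp)
{
  PC pc;

  PetscFunctionBeginUser;
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCILU));
  PetscCall(PCFactorSetMatOrderingType(pc, MATORDERINGRCM));  /* reorder before factoring */
  PetscCall(PCFactorSetShiftType(pc, MAT_SHIFT_NONZERO));     /* guard against zero pivots */
  PetscFunctionReturn(PETSC_SUCCESS);
}

The same can be requested at run time with -pc_type ilu -pc_factor_mat_ordering_type rcm -pc_factor_shift_type nonzero.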

Re: [petsc-users] Preconditioning of Liouvillian Superoperator

2024-01-31 Thread Matthew Knepley
On Wed, Jan 31, 2024 at 8:21 AM Mark Adams wrote: > Iterative solvers have to be designed for your particular operator. > You want to look in your field to see how people solve these problems. > (eg, zeros on the diagonal will need something like a block solver or maybe > ILU with a particular

Re: [petsc-users] PETSc init question

2024-01-31 Thread Pierre Jolivet
> On 31 Jan 2024, at 11:31 AM, Alain O' Miniussi wrote: > > Hi, > > It is indicated in: > https://petsc.org/release/manualpages/Sys/PetscInitialize/ > that the init function will call MPI_Init. > > What if MPI_Init was already called (as it is the case in my application) and > what about

Re: [petsc-users] PETSc init question

2024-01-31 Thread Matthew Knepley
On Wed, Jan 31, 2024 at 10:10 AM Alain O' Miniussi wrote: > Hi, > > It is indicated in: > https://petsc.org/release/manualpages/Sys/PetscInitialize/ > that the init function will call MPI_Init. > > What if MPI_Init was already called (as it is the case in my application) From the page: "

[petsc-users] PETSc init question

2024-01-31 Thread Alain O' Miniussi
Hi, It is indicated in https://petsc.org/release/manualpages/Sys/PetscInitialize/ that the init function will call MPI_Init. What if MPI_Init was already called (as is the case in my application), and what about MPI_Init_thread? Wouldn't it be more convenient to have a PETSc init function
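Per the PetscInitialize man page, PETSc calls MPI_Init only if MPI has not been initialized yet; if the application initializes MPI itself, PetscInitialize detects this, and PetscFinalize will then not call MPI_Finalize. A minimal sketch of that pattern (MPI_Init_thread works the same way as MPI_Init here):

#include <mpi.h>
#include <petscsys.h>

int main(int argc, char **argv)
{
  int provided;

  /* The application owns MPI start-up and shutdown. */
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

  /* Optional: PETSC_COMM_WORLD defaults to MPI_COMM_WORLD; set it before
     PetscInitialize if PETSc should run on a subcommunicator instead. */
  PETSC_COMM_WORLD = MPI_COMM_WORLD;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));  /* skips MPI_Init: MPI is already up */

  /* ... use PETSc ... */

  PetscCall(PetscFinalize());  /* does not call MPI_Finalize in this case */
  MPI_Finalize();
  return 0;
}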