As you note, the MPI linear solver server is for matrices whose entries are 
explicitly provided (so xAIJ sparse and dense); it doesn't have code that would 
help with shell matrices.
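
   For reference, the server wraps an otherwise sequential code purely through 
command-line options, roughly like this (exact option names should be checked 
against your PETSc release):

      mpiexec -n 16 ./yourprogram -mpi_linear_solver_server

   The sequential code assembles its matrix as usual and the solve is farmed 
out to the other ranks, so there is no hook through which a shell matrix's 
MatMult() could be parallelized.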

  You should write a parallel code, but have rank 0 do most of the work 
initially. You can, for example, have rank 0 do all the MatSetValues() calls; 
the entries are stashed and communicated to their owning ranks during the 
(collective) MatAssemblyBegin/End().
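
  A minimal sketch of that pattern (made-up tridiagonal system; assumes a 
recent PETSc with PetscCall()): every rank creates the parallel Mat, only 
rank 0 inserts entries, and the collective assembly moves the stashed entries 
to their owning ranks before all ranks call KSPSolve().

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         x, b;
  KSP         ksp;
  PetscMPIInt rank;
  PetscInt    n = 100; /* hypothetical global size */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

  /* all ranks create the distributed matrix */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));

  /* only rank 0 generates entries; off-process values are stashed */
  if (rank == 0) {
    for (PetscInt i = 0; i < n; i++) {
      PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
      if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
      if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    }
  }
  /* collective: ships rank 0's stash to the owning ranks */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* collective solve on all ranks */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

  Keep in mind that everything rank 0 inserts is buffered until assembly, so 
rank 0's memory use scales with the whole matrix; that is the price of this 
shortcut.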

   If you have any specific questions, feel free to ask.

   Barry

> On Aug 21, 2025, at 3:42 PM, Jasper Hatton via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> 
> Hi,
> 
> I am looking for advice on using PETSc in a situation where I don't want the 
> initial setup part of my application to run on multiple processes. The 
> solution stage is taking up most of the time, so I would like to avoid making 
> the setup stage a fully parallel code for now.
> 
> My solution stage takes a shell matrix, which includes an FFT as well as 
> multiple sparse mat-vec products. So I was planning to use the rank zero 
> process to set up the distributed FFT and put values into the distributed 
> matrices, then execute KSPSolve on all ranks.
> 
> However, I see there is also the MPI linear solver server option. It seems 
> mostly useful for cases where the input to KSPSolve is a sparse/dense matrix 
> rather than a shell matrix. Is it something I should consider for my case?
> 
> This is meant to be a first step that can at least run well on a 2-socket 
> node with many cores available, and could eventually be made a fully 
> distributed code.
> 
> Any advice would be appreciated!
> 
> Thanks,
> 
> Jasper
