The matrix in memory is in IJV (Spooles) or CSR3 (Pardiso) format. The 
application was written to use a variety of direct solvers, but Spooles and 
Pardiso are the ones I am most familiar with. 
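
To make that concrete, here is a minimal sketch (not code from my application; 
the 3x3 IJV data and variable names are made up) of what I have in mind: every 
rank creates the Mat, but only rank 0, which holds the assembled matrix, 
inserts entries, and PETSc ships the off-rank values to their owners during 
MatAssemblyBegin/End.

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscMPIInt    rank;
    PetscErrorCode ierr;
    /* made-up 3x3 IJV data standing in for the application's arrays */
    PetscInt       n = 3, nnz = 5, k;
    PetscInt       rows[5] = {0, 0, 1, 2, 2};
    PetscInt       cols[5] = {0, 1, 1, 1, 2};
    PetscScalar    vals[5] = {4.0, -1.0, 4.0, -1.0, 4.0};

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);   /* no preallocation; fine for a sketch */

    if (rank == 0) {                    /* only rank 0 holds the matrix data */
      for (k = 0; k < nnz; k++) {
        ierr = MatSetValue(A, rows[k], cols[k], vals[k], INSERT_VALUES);CHKERRQ(ierr);
      }
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

The insertion loop is serial on rank 0 by design, which is exactly the 
bottleneck in question; with no preallocation the inserts would be slow for a 
large matrix, so this is only meant to show the communication pattern.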

On Tuesday, December 7, 2021, 10:33:24 PM EST, Junchao Zhang 
<[email protected]> wrote: 

On Tue, Dec 7, 2021 at 9:06 PM Faraz Hussain via petsc-users 
<[email protected]> wrote:
> Thanks, I took a look at ex10.c in ksp/tutorials. It seems to do what you 
> described: "it efficiently gets the matrix from the file spread out over all 
> the ranks."
> 
> However, in my application I only want rank 0 to read and assemble the 
> matrix. I do not want the other ranks trying to get the matrix data. The 
> reason is that the matrix is already in memory when my application is ready 
> to call the PETSc solver.
What is the data structure of your matrix in memory?
> 
> So if I am running with multiple ranks, I don't want all ranks assembling 
> the matrix. That would require a total rewrite of my application, which is 
> not possible. I realize this may sound confusing. If so, I'll see if I can 
> create an example that shows the issue.
> 
> On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith <[email protected]> 
> wrote: 
> 
>   If you use MatLoad(), it never has the entire matrix on a single rank at 
> the same time; it efficiently gets the matrix from the file spread out over 
> all the ranks. 
> 
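[Noting for myself: a minimal MatLoad() sketch along the lines of ex10.c; the 
binary file name "matrix.dat" is just a placeholder for a matrix stored in 
PETSc binary format.]

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscViewer    viewer;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    /* "matrix.dat" stands in for the application's PETSc binary matrix file */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatLoad(A, viewer);CHKERRQ(ierr); /* rows end up spread over the ranks */
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }
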
>> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users 
>> <[email protected]> wrote:
>> 
>> I am studying the examples but it seems all ranks read the full matrix. Is 
>> there an MPI example where only rank 0 reads the matrix? 
>> 
>> I don't want all ranks to read my input matrix and consume a lot of memory 
>> allocating data for the arrays. 
>> 
>> I have worked with Intel's cluster sparse solver and their documentation 
>> states:
>> 
>> "Most of the input parameters must be set on the master MPI process only, 
>> and ignored on other processes. Other MPI processes get all required data 
>> from the master MPI process using the MPI communicator, comm."