If you use MatLoad(), the entire matrix is never held on a single rank at any time; it is read from the file and distributed efficiently across all the ranks.
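A minimal sketch of that workflow is below. It assumes the matrix was previously written with MatView() in PETSc binary format; the filename "matrix.dat" is a placeholder. MatLoad() reads the file in pieces and hands each rank its own row block, so no rank ever allocates the full matrix.

```c
/* Sketch: parallel load of a PETSc binary matrix.
   Assumes "matrix.dat" was written with MatView() in binary format. */
#include <petscmat.h>

int main(int argc, char **args)
{
  Mat            A;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &args, NULL, NULL); if (ierr) return ierr;

  /* Open the binary file for reading on the parallel communicator */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);

  /* Create the matrix on all ranks; MatLoad() streams the file and
     distributes the rows, so no single rank holds the whole matrix */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);

  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

Run with, e.g., `mpiexec -n 4 ./ex -mat_type mpiaij`; each rank ends up owning a contiguous block of rows.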
> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users <[email protected]> wrote:
>
> I am studying the examples but it seems all ranks read the full matrix. Is
> there an MPI example where only rank 0 reads the matrix?
>
> I don't want all ranks to read my input matrix and consume a lot of memory
> allocating data for the arrays.
>
> I have worked with Intel's cluster sparse solver and their documentation
> states:
>
> "Most of the input parameters must be set on the master MPI process only,
> and ignored on other processes. Other MPI processes get all required data
> from the master MPI process using the MPI communicator, comm."
