Thank you very much. Since I have millions of such linear equations to solve, efficiency is my big concern.
If I keep using the uncoupled small systems and use KSP to solve the linear equations sequentially, there is no way to take advantage of parallel computing. I considered the coupled system in the hope that efficiency could be improved, but it seems that is not a good approach. I have tried solving the small systems sequentially and it works fine; now I need to think of ways to optimize my code. Any hints on this aspect? By the way, thanks for all these communications.

On Mon, August 29, 2011 5:16 pm, Jed Brown wrote:
> On Mon, Aug 29, 2011 at 15:51, Likun Tan <likunt at andrew.cmu.edu> wrote:
>
>> Instead of solving Ax=b with different right-hand sides sequentially, we
>> can also form a sparse block-diagonal matrix A and a vector b composed
>> of all the elements. Then we can set values for each section of b
>> concurrently and solve the enlarged system in parallel. Is this an
>> efficient way?
>
> You can assemble a single AIJ matrix and then use MatCreateMAIJ() to make
> it apply to a multi-vector. You can then make a larger vector and run a
> normal Krylov method.
>
> The only downside is that you won't get the usual property that the
> Krylov method is spectrally adaptive to the right-hand side (because
> there is only one inner product for all the components), but this can
> work alright anyway. Preconditioning will take more effort.
>
> I think this is likely premature optimization for you. I recommend
> solving the systems by calling KSPSolve() multiple times for now. Later,
> when everything is working, you might experiment with ways to solve the
> systems together.
>
>> And also, I found MatCreateMPIBDiag()
>
> This format was removed from PETSc a few years ago, but I don't think
> it's what you want anyway.
