Sounds good, thanks.
I’ve also been looking into Elemental, but the documentation seems outdated and
I can’t find good examples on how to use it. I have the LLNL fork installed.
Thanks,
-Damyn
> On Oct 28, 2023, at 8:56 AM, Matthew Knepley wrote:
>
> On Fri, Oct 27, 2023 at 3:54 PM Damyn Chipman wrote:
> Yeah, I’ll make an issue and use a modified version of this test routine.
>
> Does anything change if I will be using MATSCALAPACK matrices instead of
> the built-in MATDENSE?
>
No, that is likely worse.
> Like I said, I will be computing Schur complements and need to use a
> parallel and dense matrix format.
Currently MATSCALAPACK does not support MatCreateSubMatrix(). I guess it would
not be difficult to implement.
Jose
> On Oct 27, 2023, at 21:53, Damyn Chipman wrote:
>
> Yeah, I’ll make an issue and use a modified version of this test routine.
>
> Does anything change if I will be using MATSCALAPACK matrices instead of
> the built-in MATDENSE?
Yeah, I’ll make an issue and use a modified version of this test routine.
Does anything change if I will be using MATSCALAPACK matrices instead of the
built-in MATDENSE? Like I said, I will be computing Schur complements and need
to use a parallel and dense matrix format.
-Damyn
On Wed, Oct 25, 2023 at 11:55 PM Damyn Chipman <damynchip...@u.boisestate.edu> wrote:
> Great thanks, that seemed to work well. This is something my algorithm
> will do fairly often (“elevating” a node’s communicator to a communicator
> that includes siblings). The matrices formed are dense but low rank.
More like smaller pieces that need to be combined. Combining them (merging)
means sharing the actual data across a sibling communicator and doing some
linear algebra to compute the merged matrices (it involves computing a Schur
complement of a combined system from the sibling matrices).
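For concreteness, that merge step can be sketched in serial NumPy; the block
names and sizes below are illustrative stand-ins, not anything from the PETSc
API:

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, n_g = 4, 3  # interior unknowns to eliminate; interface unknowns kept

# Combined system assembled from the sibling matrices, partitioned as
# [[A_ii, A_ig], [A_gi, A_gg]]; a diagonal shift keeps A_ii invertible.
n = n_i + n_g
A = rng.standard_normal((n, n)) + n * np.eye(n)
A_ii, A_ig = A[:n_i, :n_i], A[:n_i, n_i:]
A_gi, A_gg = A[n_i:, :n_i], A[n_i:, n_i:]

# The merged (parent) matrix is the Schur complement of A_ii in A.
S = A_gg - A_gi @ np.linalg.solve(A_ii, A_ig)

# Sanity check: eliminating the interior unknowns from A x = b leaves
# S x_g = b_g - A_gi A_ii^{-1} b_i for the interface unknowns x_g.
x = rng.standard_normal(n)
b = A @ x
assert np.allclose(S @ x[n_i:], b[n_i:] - A_gi @ np.linalg.solve(A_ii, b[:n_i]))
```

In the actual algorithm the factors would be distributed dense Mats on the
sibling communicator rather than local arrays, but the algebra is the same.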
If the matrices are stored as dense, it is likely that new code is the best way
to go.
What pieces live on the sub communicator? Is it an m by N matrix, where m is
the number of rows (on that rank) and N is the total number of columns in the
final matrix? Or are they smaller "chunks" that need to be combined?
Great thanks, that seemed to work well. This is something my algorithm will do
fairly often (“elevating” a node’s communicator to a communicator that includes
siblings). The matrices formed are dense but low rank. With MatCreateSubMatrix,
it appears I do a lot of copying from one Mat to another.
You can place it in a parallel Mat (that has rows or columns on only one rank
or a subset of ranks) and then MatCreateSubMatrix with all new rows/columns on
a different rank or subset of ranks.
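As a serial stand-in for that two-step pattern (PETSc adds the communicator
placement; this only shows the index-set extraction that MatCreateSubMatrix
performs logically, with made-up sizes and indices):

```python
import numpy as np

# The data starts in one place: a plain array standing in for a parallel Mat
# whose rows all live on one rank (or a subset of ranks).
A = np.arange(36.0).reshape(6, 6)

# Index sets for the rows/columns that should land on the other ranks;
# these play the role of PETSc's IS arguments to MatCreateSubMatrix.
rows = [1, 3, 5]
cols = [0, 2, 4]

# The extracted submatrix that would be migrated to the new rank(s).
sub = A[np.ix_(rows, cols)]
```

The extraction itself is cheap; in the parallel case the cost is the
communication implied by placing `sub`'s rows on different ranks than `A`'s.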
That said, you usually have a function that assembles the matrix, and you can
just call that on the new communicator.