> "If your goal is to replicate the matrix on every process, it might be
> easiest to create a data structure that collects all of the local
> contributions (local matrix, plus the dof indices array) and send that around
> between processes, which then all build their own matrix. That may be simpler
> than sending around the matrix because the latter is a PETSc object to which
> you don't easily have access to its internal representation."
Thank you, Dr. Bangerth, for this suggestion. As I am fairly new to
object-oriented programming, I had not thought of it in this light. I have a
related question: is there a function in deal.II to identify cells on the
boundaries of subdomains?
Regards,
Rahul
> On Sep 28, 2022, at 5:18 PM, Wolfgang Bangerth <[email protected]> wrote:
>
> On 9/27/22 23:50, Rahul Gopalan Ramachandran wrote:
>> Thank you for the clarification. Isend should be the way to patch this.
>> Alternatively, what is your opinion on copying the matrix to a single
>> processor (if that is possible) using MPI_Gather? Would the cell iterators
>> then no longer work? Ignoring scalability, would this also be an option?
>
> At least in principle, of course, we'd like to avoid writing programs that we
> know can't scale because each process stores data replicated everywhere --
> like the entire matrix. In practice, if your goal is to run on 10 or 20
> processes, this may still work, though you should recognize that the system
> matrix is the largest object you probably have in your program (even if you
> fully distribute it).
>
> If your goal is to replicate the matrix on every process, it might be easiest
> to create a data structure that collects all of the local contributions
> (local matrix, plus the dof indices array) and send that around between
> processes, which then all build their own matrix. That may be simpler than
> sending around the matrix because the latter is a PETSc object to which you
> don't easily have access to its internal representation.
>
> Best
> W.
>
> --
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: [email protected]
> www: http://www.math.colostate.edu/~bangerth/
>
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/dealii/03B9C662-0833-47DF-8EA3-95C06198863E%40gmail.com.