> On Apr 3, 2018, at 1:46 PM, Rob Falgout hypre Tracker 
> <[email protected]> wrote:
> 
> 
> Rob Falgout <[email protected]> added the comment:
> 
> Hi Barry,
> 
> It looks like the only time we call MPI_Comm_create is to build a 
> communicator for the coarsest grid solve using Gaussian elimination.  There 
> are probably alternatives that do not require creating a sub-communicator.

    When the sub-communicator is of size 1, you can use MPI_COMM_SELF instead of 
creating a new communicator each time.

>  Ulrike or someone else more familiar with the code should comment.
> 
> I don't see a need to do a Comm_dup() before calling hypre.

   If I have 1000 hypre solvers on the same communicator, how do I know that 
hypre won't send messages between the different solvers and hence get messed 
up? In other words, how do you handle tags to prevent conflicts between 
different matrices? What communicators do you actually do communication on, and 
where do you get them? From the hypre matrix?



> 
> Hope this helps.
> 
> -Rob
> 
> ----------
> status: unread -> chatting
> 
> ____________________________________________
> hypre Issue Tracker <[email protected]>
> <http://cascb1.llnl.gov/hypre/issue1595>
> ____________________________________________
