Re: [deal.II] Separated domains in load balancing

2020-09-15 Thread Jean-Paul Pelteret
Ah, I was just busy writing what Wolfgang said. You can find more info on 
custom partitions for parallel::shared::Triangulation here:
https://dealii.org/current/doxygen/deal.II/classparallel_1_1shared_1_1Triangulation.html#a686a3453dfec098eb64d1510aa1716e1
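
Roughly, the usage that page describes looks like the following. This is only a sketch against the deal.II 9.2 interface; the strip partitioner in the lambda is a made-up illustration (any rule that assigns subdomain ids will do), and I connect the same callback to both the create and post_refinement signals so the initial mesh gets partitioned too:

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/shared_tria.h>
#include <deal.II/grid/grid_generator.h>

#include <algorithm>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm comm = MPI_COMM_WORLD;

  // With partition_custom_signal, the triangulation does not partition
  // itself; it expects the user to assign subdomain ids via a signal.
  parallel::shared::Triangulation<2> tria(
    comm,
    Triangulation<2>::none,
    /*allow_artificial_cells=*/true,
    parallel::shared::Triangulation<2>::partition_custom_signal);

  // Hypothetical partitioner: strips along the x-axis, which keeps every
  // subdomain connected on this simple mesh.
  const auto repartition = [&tria, comm]() {
    const unsigned int n_ranks = Utilities::MPI::n_mpi_processes(comm);
    for (const auto &cell : tria.active_cell_iterators())
      {
        const auto strip =
          static_cast<unsigned int>(cell->center()[0] * n_ranks);
        cell->set_subdomain_id(std::min(strip, n_ranks - 1));
      }
  };
  tria.signals.create.connect(repartition);
  tria.signals.post_refinement.connect(repartition);

  GridGenerator::hyper_cube(tria);
  tria.refine_global(5); // triggers the signal above

  return 0;
}
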
You might also be able to “encourage” a different partitioning using cell 
weights, though that mechanism is aimed more at load balancing in an hp-FEM 
context:
https://dealii.org/current/doxygen/deal.II/classparallel_1_1CellWeights.html 

https://dealii.org/current/doxygen/deal.II/structTriangulation_1_1Signals.html#af58294e40c64257c9de55e78b6443e36
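
For the signal route, attaching weights looks roughly like this. Again only a sketch against the deal.II 9.2 API (where the signal is called cell_weight); the cost model in the lambda is entirely made up:

#include <deal.II/distributed/tria.h>

using namespace dealii;

// Make p4est's repartitioning balance summed cell weights instead of
// plain cell counts.
template <int dim>
void attach_cell_weights(parallel::distributed::Triangulation<dim> &tria)
{
  tria.signals.cell_weight.connect(
    [](const typename Triangulation<dim>::cell_iterator &cell,
       const typename Triangulation<dim>::CellStatus status) -> unsigned int {
      (void)status; // CELL_PERSIST / CELL_REFINE / CELL_COARSEN
      // Hypothetical cost model: cells with material_id 1 are ten times
      // as expensive as the rest.
      return (cell->material_id() == 1) ? 10000u : 1000u;
    });
  // The weights take effect at the next refinement or repartition() call.
}
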
I hope that this helps,
Jean-Paul

> On 15 Sep 2020, at 22:46, Wolfgang Bangerth  wrote:
> 
> On 9/14/20 7:29 PM, shahab.g...@gmail.com wrote:
>> I am using load balancing, and I have noticed that after load balancing 
>> the cells owned by each processor are sometimes separated from each 
>> other. In other words, a processor may own groups of cells that are not 
>> connected to each other.
>> As this increases the computational cost in my case, I was wondering 
>> whether it is possible to restrict the load balancing so that each 
>> processor owns only adjacent cells?
> 
> Not with parallel::distributed::Triangulation. That class uses a partitioning 
> algorithm that optimizes for the data structures used in storing 
> triangulations, sometimes at the expense of creating these kinds of 
> disconnected sub-domains. In practice, however, this has relatively little 
> effect on the performance of programs to the best of our knowledge: Yes, it 
> is not *optimal*, but it is good enough to not be a major problem in most 
> cases. You state that it increases the computational cost -- that's true, but 
> do you have evidence that that creates a bottleneck?
> 
> If you do need a different partitioning algorithm, you can use 
> parallel::shared::Triangulation or, since deal.II 9.2, the 
> parallel::fullydistributed::Triangulation class.
> 
> Best
> W.
> 



Re: [deal.II] Separated domains in load balancing

2020-09-15 Thread Wolfgang Bangerth

On 9/14/20 7:29 PM, shahab.g...@gmail.com wrote:
I am using load balancing, and I have noticed that after load balancing the 
cells owned by each processor are sometimes separated from each other. In 
other words, a processor may own groups of cells that are not connected to 
each other.
As this increases the computational cost in my case, I was wondering whether 
it is possible to restrict the load balancing so that each processor owns 
only adjacent cells?


Not with parallel::distributed::Triangulation. That class uses a partitioning 
algorithm that optimizes for the data structures used in storing 
triangulations, sometimes at the expense of creating these kinds of 
disconnected sub-domains. In practice, however, this has relatively little 
effect on the performance of programs to the best of our knowledge: Yes, it is 
not *optimal*, but it is good enough to not be a major problem in most cases. 
You state that it increases the computational cost -- that's true, but do you 
have evidence that that creates a bottleneck?


If you do need a different partitioning algorithm, you can use 
parallel::shared::Triangulation or, since deal.II 9.2, the 
parallel::fullydistributed::Triangulation class.
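
For the fully distributed case, the workflow is to partition a mesh yourself 
and hand the result over. A minimal sketch against the deal.II 9.2 interface 
(GridTools::partition_triangulation stands in here for whatever partitioner 
produces the subdomain ids you actually want):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/fully_distributed_tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria_description.h>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm comm = MPI_COMM_WORLD;

  // 1. Build a serial mesh on every rank and write subdomain ids into it;
  //    any partitioning scheme works at this point.
  Triangulation<2> serial_tria;
  GridGenerator::hyper_cube(serial_tria);
  serial_tria.refine_global(5);
  GridTools::partition_triangulation(Utilities::MPI::n_mpi_processes(comm),
                                     serial_tria);

  // 2. Convert the partitioned serial mesh into construction data, ...
  const auto construction_data = TriangulationDescription::Utilities::
    create_description_from_triangulation(serial_tria, comm);

  // 3. ... which the fully distributed triangulation consumes; each rank
  //    then stores only its locally owned cells plus ghost cells.
  parallel::fullydistributed::Triangulation<2> tria(comm);
  tria.create_triangulation(construction_data);

  return 0;
}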


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Separated domains in load balancing

2020-09-14 Thread shahab.g...@gmail.com
Dear all,
I am using load balancing, and I have noticed that after load balancing the 
cells owned by each processor are sometimes separated from each other. In 
other words, a processor may own groups of cells that are not connected to 
each other.
As this increases the computational cost in my case, I was wondering whether 
it is possible to restrict the load balancing so that each processor owns 
only adjacent cells?
Thank you for your help in advance.
Best regards,
Shahab
