Hello Prof. Wolfgang,

I've tried two things with single-constraint partitioning:

   1. Connect triangulation.signals.weight in the constructor of my class, 
   before GridGenerator is called. In this case, the partitioning balances 
   the cells according to the weighting function, but it only works when the 
   weighting function does not depend on the material_id (which makes sense, 
   as material ids are not yet set while the grid is being generated). 
   2. Accept the first partitioning as is (an even split across p 
   processes), connect triangulation.signals.weight after GridGenerator has 
   been called, and then call GridTools::partition_triangulation (a condensed 
   sketch of this approach follows the list). While this assigns the 
   subdomain_ids to the cells correctly, based on the partitioner's output, 
   the active_cell_indices are not updated, which results in the DoFHandler 
   showing unexpected behaviour.
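
For reference, approach 2 condenses to roughly the sketch below; dim = 2 and 
the material-id-based weights only stand in for my actual weighting function:

  #include <deal.II/distributed/shared_tria.h>
  #include <deal.II/grid/grid_tools.h>

  using namespace dealii;

  // Approach 2: the weight signal is connected only after the mesh (and the
  // material ids) exist, followed by an explicit repartitioning with METIS.
  void repartition_by_weight(parallel::shared::Triangulation<2> &triangulation,
                             const unsigned int                  n_partitions)
  {
    triangulation.signals.weight.connect(
      [](const auto &cell, const auto /*status*/) -> unsigned int {
        // Heavier weight where the element with more DoFs per cell lives.
        return (cell->material_id() == 1) ? 22u : 9u;
      });

    // Assigns subdomain_ids as expected, but the active_cell_indices are not
    // updated, which is where the DoFHandler trouble starts.
    GridTools::partition_triangulation(n_partitions, triangulation);
  }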

Therefore, I believe the problem reduces to this: how does one repartition a 
parallel::shared::Triangulation and update the active cell information so 
that it matches the new partition? 

Thanks and Regards,
Nihar

On Monday, 15 December 2025 at 22:40:59 UTC-8 Nihar Bhardwaj Darbhamulla 
wrote:

> Hello Prof. Wolfgang,
>
> Please find attached here a minimal example to illustrate what I've tried 
> to do. 
>
> I have different PDEs governing different parts of the domain, and I have 
> used the hp framework to create a collection of FESystem finite elements 
> which ensures a stable set is chosen. As an example, I have Q2-Q1 elements 
> on one part of the domain (22 DoFs per cell in 2D - domain 1) and Q1 
> elements on the other part (9 DoFs per cell in 2D - domain 2). This leads 
> to different numbers of DoFs on different parts of the domain. 
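>
> Schematically, the collection looks like the sketch below; the second entry 
> is only a stand-in with the same number of components and 9 DoFs per cell, 
> the actual element in the attached code may differ:
>
>   #include <deal.II/fe/fe_dgq.h>
>   #include <deal.II/fe/fe_q.h>
>   #include <deal.II/fe/fe_system.h>
>   #include <deal.II/hp/fe_collection.h>
>
>   using namespace dealii;
>
>   hp::FECollection<2> make_fe_collection()
>   {
>     hp::FECollection<2> fe_collection;
>     // Domain 1: Q2-Q1 system, 2 * 9 + 4 = 22 DoFs per cell in 2D.
>     fe_collection.push_back(FESystem<2>(FE_Q<2>(2), 2, FE_Q<2>(1), 1));
>     // Domain 2: Q1-based stand-in, 2 * 4 + 1 = 9 DoFs per cell in 2D.
>     fe_collection.push_back(FESystem<2>(FE_Q<2>(1), 2, FE_DGQ<2>(0), 1));
>     return fe_collection;
>   }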
>
> When I partition my domain with METIS using the single-constraint 
> framework, I obtain an equal split of cells across the processes, as 
> expected, since all cells are equally weighted. Now, when I assign cell 
> weights proportional to the number of degrees of freedom, I end up with 
> 9/31 of the processes holding the cells of domain 1 and 22/31 of the 
> processes holding the cells of domain 2. In this scenario, I see extensive 
> process idling during the linear solver phase. To mitigate this overhead, 
> I tried to set up a multi-constraint partitioning through METIS, weighting 
> each cell by the phase of computation during which it will be active. This 
> leads to some cells of domain 1 and domain 2 ending up on the same process, 
> which removes at least part of the process idling. The partitionings 
> obtained with METIS under the single constraint (right) and under multiple 
> constraints (left) are shown in the attached images.
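>
> Schematically, the multi-constraint weighting amounts to something like the 
> sketch below. The CSR graph arrays and the per-phase weight vectors are 
> placeholders here (in the attached code they are built from the cell 
> connectivity and the element sizes); this is not the attached code itself:
>
>   #include <metis.h>
>
>   #include <vector>
>
>   // Partition a cell graph with two balance constraints, one per
>   // computation phase. part[c] is the subdomain to assign to cell c.
>   std::vector<idx_t> multi_constraint_partition(
>     std::vector<idx_t>        xadj,            // CSR row pointers
>     std::vector<idx_t>        adjncy,          // CSR column indices
>     const std::vector<idx_t> &phase_1_weights, // e.g. DoFs active in phase 1
>     const std::vector<idx_t> &phase_2_weights, // e.g. DoFs active in phase 2
>     idx_t                     nparts)
>   {
>     idx_t nvtxs = static_cast<idx_t>(phase_1_weights.size());
>     idx_t ncon  = 2; // two balance constraints per cell
>
>     // Interleave the weights: vwgt[c * ncon + k] is constraint k of cell c.
>     std::vector<idx_t> vwgt(nvtxs * ncon);
>     for (idx_t c = 0; c < nvtxs; ++c)
>       {
>         vwgt[c * ncon + 0] = phase_1_weights[c];
>         vwgt[c * ncon + 1] = phase_2_weights[c];
>       }
>
>     std::vector<idx_t> part(nvtxs);
>     idx_t              objval = 0;
>     METIS_PartGraphKway(&nvtxs, &ncon, xadj.data(), adjncy.data(),
>                         vwgt.data(), nullptr, nullptr, &nparts,
>                         nullptr, nullptr, nullptr, &objval, part.data());
>     return part;
>   }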
>
> The issue now arises when I try to renumber my degrees of freedom in a 
> block-wise manner. Right after the multi-constraint partitioning, I can see 
> that the numbers of degrees of freedom each process has to deal with are 
> nearly identical, but I'm not sure the renumbering behaves the way it is 
> meant to (the two renumbering calls in question are sketched after the run 
> instructions below). I have attached the code along with sample output. The 
> code is built with deal.II/9.6.2, openmpi/4.1.1, and gcc/13.3. It can be 
> built using cmake and run using
>
> mpirun -n <np> MultiConstraint <n_grid> 
>
> where np is the number of processes and n_grid is the number of cells along 
> the x-direction. The code prints the partitioning information as well as 
> the DoF information before and after renumbering to the screen. 
> Furthermore, pvtu and vtu files are written illustrating the 
> single-constraint partitioning (Grid_BasePart_<n_grid>) and the 
> multi-constraint partitioning (Grid_MCPart_<n_grid>). Please let me know 
> if I should add further details regarding my build. 
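>
> For reference, the renumbering step in question boils down to the following 
> two deal.II calls (shown on a bare DoFHandler; the surrounding hp setup 
> lives in the attached code):
>
>   #include <deal.II/dofs/dof_handler.h>
>   #include <deal.II/dofs/dof_renumbering.h>
>
>   using namespace dealii;
>
>   template <int dim>
>   void renumber_dofs(DoFHandler<dim> &dof_handler)
>   {
>     // Cuthill-McKee within each processor's locally owned DoFs ...
>     DoFRenumbering::Cuthill_McKee(dof_handler);
>     // ... followed by grouping the DoFs block-wise.
>     DoFRenumbering::block_wise(dof_handler);
>   }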
>
> Thank you once again.
> Regards,
> Nihar
>
>
> On Friday, 5 December 2025 at 09:40:23 UTC-8 Wolfgang Bangerth wrote:
>
>> On 12/2/25 20:47, Nihar Bhardwaj Darbhamulla wrote: 
>> > 
>> > The reason for this partitioning is that the mesh undergoes computation in 
>> > phases. Given this partitioning, I am attempting to renumber dofs first by 
>> > Cutthill McKee followed by block wise renumbering. However on doing either 
>> > operation, my renumbering gets skewed with number of degrees of freedom far 
>> > exceeding the balance. I have attached the output of locally_owned_dofs() 
>> > below from each partition before and after renumbering. The number of dofs 
>> > associated with each block also appear to shuffle around. In this case, what 
>> > would be a viable way forward since my objective is to construct and use block 
>> > preconditioners for my problem. 
>>
>> Nihar: 
>> I'm not entirely sure I understand what you see. It would probably help if you 
>> created a small test case that showed how you ended up with the problem. 
>>
>> In any case, if I interpret things right, then you partition the mesh so that 
>> the two halves have roughly equal number of cells. That's how it should be. Do 
>> you have different numbers of degrees of freedom on cells, via the hp 
>> framework? If so, you may of course get different numbers of DoFs on each 
>> partition -- just because the number of cells in each partition is balanced 
>> does not mean that the number of DoFs is balanced if cells have different 
>> numbers of local DoFs. If that's not the case: How do you calculate the number 
>> of DoFs owned by each partition? 
>>
>> Best 
>> W. 
>>
>
