Hey there,
admittedly I didn't look closely into this, but one of the things I'm 
sceptical about is the mesh you are using, Rishabh (remember our 
discussion?). In your mesh there are cells that don't share a face but only 
an edge or a single vertex. The interior of your Triangulation is possibly 
disconnected. I don't know whether this is what you want, or how deal.II 
handles it. 
Usually one of the first sentences you write when you define a PDE problem 
is "Let Omega be a bounded domain with [...] boundary". A domain is a 
non-empty, *open* and *connected* set. The triangulation you posted here 
is not the triangulation of a domain. 
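
If you want to check this programmatically, here is a rough, untested sketch 
of my own (not something from deal.II; it assumes you still have the serial 
Triangulation at hand and that the mesh has no hanging-node refinement) that 
counts the face-connected components of the mesh by a breadth-first search 
over face neighbors. More than one component means the interior is 
disconnected in the sense above:

#include <deal.II/base/geometry_info.h>
#include <deal.II/grid/tria.h>

#include <iostream>
#include <queue>
#include <vector>

template <int dim>
unsigned int
count_face_connected_components(const dealii::Triangulation<dim> &tria)
{
  std::vector<bool> visited(tria.n_active_cells(), false);
  unsigned int      n_components = 0;

  for (const auto &seed : tria.active_cell_iterators())
    {
      if (visited[seed->active_cell_index()])
        continue;

      // Start a new component and flood-fill it through shared faces.
      ++n_components;
      std::queue<typename dealii::Triangulation<dim>::active_cell_iterator> queue;
      queue.push(seed);
      visited[seed->active_cell_index()] = true;

      while (!queue.empty())
        {
          const auto cell = queue.front();
          queue.pop();

          for (unsigned int f = 0; f < dealii::GeometryInfo<dim>::faces_per_cell; ++f)
            if (!cell->at_boundary(f))
              {
                const auto neighbor = cell->neighbor(f);
                if (neighbor->is_active() &&
                    !visited[neighbor->active_cell_index()])
                  {
                    visited[neighbor->active_cell_index()] = true;
                    queue.push(neighbor);
                  }
              }
        }
    }

  return n_components;
}

Cells that touch only along an edge or a single vertex do not show up as face 
neighbors, so they end up in different components.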

Rishabh Saxena wrote on Wednesday, December 11, 2024 at 15:45:09 UTC+1:

> I tried it this way as well, but it still doesn't work. 
>
> If you have any suggestions, they would be helpful for me. 
>
>
>
> // Fully distributed cell data
> TriangulationDescription::CellData<dim> cell_data;
> cell_data.id           = CellId(cell_id, {}).template to_binary<dim>();
> cell_data.subdomain_id = (i >= start_x_mpi && i <= end_x_mpi) ?
>                            mpi_rank :
>                            (i < start_x_mpi ? mpi_rank - 1 : mpi_rank + 1);
> cell_data.level_subdomain_id = cell_data.subdomain_id;
>
> // Assign boundary IDs for inlet and outlet
> if (i == 0)
>   cell_data.boundary_ids.emplace_back(0, 1); // Inlet boundary
> else if (i + 1 == n_subdivision[0])
>   cell_data.boundary_ids.emplace_back(1, 2); // Outlet boundary
>
> construction_data.cell_infos[0].push_back(cell_data);
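>
> (For reference, my reading of the linked test is that each entry of 
> cell_data.boundary_ids is a pair of local face number and boundary id, and 
> presumably an entry only makes sense for faces that actually lie on the 
> domain boundary. A small sketch of the snippet above with named variables, 
> just to make the meaning of the two numbers explicit:)
>
> // Sketch: boundary_ids holds (local face number, boundary id) pairs.
> const unsigned int       left_face_no  = 0; // hex face at its minimal x
> const unsigned int       right_face_no = 1; // hex face at its maximal x
> const types::boundary_id inlet_id      = 1;
> const types::boundary_id outlet_id     = 2;
>
> if (i == 0)
>   cell_data.boundary_ids.emplace_back(left_face_no, inlet_id);
> else if (i + 1 == n_subdivision[0])
>   cell_data.boundary_ids.emplace_back(right_face_no, outlet_id);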
>
> On Wed, Dec 11, 2024 at 3:24 PM Peter Munch <[email protected]> wrote:
>
>> Isn't 
>> https://github.com/dealii/dealii/blob/b2108aabb2ba014b6d35029db19034bded6f135a/tests/fullydistributed_grids/create_manually_01.cc#L106-L113
>>  
>> working?
>>
>> Best,
>> Peter
>>
>> On Wednesday, December 11, 2024 at 3:11:44 PM UTC+1 [email protected] 
>> wrote:
>>
>>> Hi Peter,
>>>
>>> Great to hear from you, and thank you so much for pointing this out! I 
>>> revisited the implementation and corrected the setup for ghost cells as you 
>>> suggested. Now, the DoFs between subdomains are handled correctly, and the 
>>> total number of DoFs remains consistent. Your observation was spot 
>>> on—thanks for catching that!
>>>
>>> One question: can I set the boundary_id as usual, or do we have to add it 
>>> in the construction_data?
>>> I am attaching the boundary_ids implementation here, along with 
>>> generate_mesh_from_matrix.
>>> template <int dim>
>>> void LaplaceProblem<dim>::set_boundary_ids(int Nx, int Ny, int Nz)
>>> {
>>>   // Boundary IDs
>>>   const types::boundary_id inlet_boundary_id  = 0;
>>>   const types::boundary_id outlet_boundary_id = 1;
>>>
>>>   // Iterate over all active cells
>>>   for (const auto &cell : triangulation_pft.active_cell_iterators())
>>>     {
>>>       // Only modify locally owned cells
>>>       if (cell->is_locally_owned())
>>>         {
>>>           for (unsigned int face = 0;
>>>                face < GeometryInfo<dim>::faces_per_cell;
>>>                ++face)
>>>             {
>>>               // Check if the face is at the inlet (x = 0)
>>>               if (std::abs(cell->face(face)->center()[0]) < 1e-12) // Near x = 0
>>>                 {
>>>                   cell->face(face)->set_boundary_id(inlet_boundary_id);
>>>                 }
>>>               // Check if the face is at the outlet (x = Nx)
>>>               else if (std::abs(cell->face(face)->center()[0] - Nx) < 1e-12) // Near x = Nx
>>>                 {
>>>                   cell->face(face)->set_boundary_id(outlet_boundary_id);
>>>                 }
>>>             }
>>>         }
>>>     }
>>> }
>>>
>>>
>>> ///// This one with ghost cells is important
>>> template <int dim>
>>> void LaplaceProblem<dim>::generate_mesh_from_matrix(int Nx, int Ny, int Nz)
>>> {
>>>   // Prepare the construction data object
>>>   TriangulationDescription::Description<dim, dim> construction_data;
>>>   construction_data.comm = _mpi_communicator;
>>>   construction_data.cell_infos.resize(1); // Level 0 cell info
>>>
>>>   // Define the number of subdivisions and domain size
>>>   std::array<unsigned int, dim> n_subdivision = {{static_cast<unsigned int>(Nx),
>>>                                                   static_cast<unsigned int>(Ny),
>>>                                                   static_cast<unsigned int>(Nz)}};
>>>
>>>   std::array<double, dim> sizes = {{1.0, 1.0, 1.0}}; // Assume a unit cube domain for simplicity
>>>
>>>   // Partition the mesh along the x-axis
>>>   const unsigned int mpi_rank = Utilities::MPI::this_mpi_process(_mpi_communicator);
>>>   const unsigned int mpi_size = Utilities::MPI::n_mpi_processes(_mpi_communicator);
>>>   const auto partitioning_x =
>>>     Utilities::create_evenly_distributed_partitioning(mpi_rank, mpi_size, n_subdivision[0]);
>>>
>>>   if (!partitioning_x.is_empty())
>>>     {
>>>       const int start_x_mpi = partitioning_x.nth_index_in_set(0);
>>>       const int end_x_mpi   = partitioning_x.nth_index_in_set(partitioning_x.n_elements() - 1);
>>>
>>>       // Generate vertices
>>>       std::vector<Point<dim>>                           vertices;
>>>       std::map<std::tuple<int, int, int>, unsigned int> local_vertex_map;
>>>       unsigned int                                      vertex_counter = 0;
>>>
>>>       for (int i = std::max(0, start_x_mpi - 1); i <= std::min<int>(n_subdivision[0], end_x_mpi + 1); ++i)
>>>         for (int j = 0; j <= n_subdivision[1]; ++j)
>>>           for (int k = 0; k <= n_subdivision[2]; ++k)
>>>             {
>>>               Point<dim> p(sizes[0] / n_subdivision[0] * i,
>>>                            sizes[1] / n_subdivision[1] * j,
>>>                            sizes[2] / n_subdivision[2] * k);
>>>               vertices.push_back(p);
>>>               local_vertex_map[std::make_tuple(i, j, k)] = vertex_counter++;
>>>             }
>>>
>>>       construction_data.coarse_cell_vertices = vertices;
>>>
>>>       // Generate local cells and a single layer of ghost cells
>>>       for (int i = std::max(0, start_x_mpi - 1); i <= std::min<int>(n_subdivision[0] - 1, end_x_mpi + 1); ++i)
>>>         for (int j = 0; j < n_subdivision[1]; ++j)
>>>           for (int k = 0; k < n_subdivision[2]; ++k)
>>>             {
>>>               int material_id = mat_matrix[j + k * n_subdivision[1]][i];
>>>               if (material_id == _mat_tau_1)
>>>                 {
>>>                   CellData<dim> coarse_cell;
>>>                   coarse_cell.vertices[0] = local_vertex_map[std::make_tuple(i, j, k)];
>>>                   coarse_cell.vertices[1] = local_vertex_map[std::make_tuple(i + 1, j, k)];
>>>                   coarse_cell.vertices[2] = local_vertex_map[std::make_tuple(i, j + 1, k)];
>>>                   coarse_cell.vertices[3] = local_vertex_map[std::make_tuple(i + 1, j + 1, k)];
>>>                   if (dim == 3)
>>>                     {
>>>                       coarse_cell.vertices[4] = local_vertex_map[std::make_tuple(i, j, k + 1)];
>>>                       coarse_cell.vertices[5] = local_vertex_map[std::make_tuple(i + 1, j, k + 1)];
>>>                       coarse_cell.vertices[6] = local_vertex_map[std::make_tuple(i, j + 1, k + 1)];
>>>                       coarse_cell.vertices[7] = local_vertex_map[std::make_tuple(i + 1, j + 1, k + 1)];
>>>                     }
>>>
>>>                   coarse_cell.material_id = material_id;
>>>                   construction_data.coarse_cells.push_back(coarse_cell);
>>>
>>>                   // Map the coarse cell index
>>>                   unsigned int cell_id = i + j * n_subdivision[0] + k * n_subdivision[0] * n_subdivision[1];
>>>                   construction_data.coarse_cell_index_to_coarse_cell_id.push_back(cell_id);
>>>
>>>                   // Fully distributed cell data
>>>                   TriangulationDescription::CellData<dim> cell_data;
>>>                   cell_data.id           = CellId(cell_id, {}).template to_binary<dim>();
>>>                   cell_data.subdomain_id = (i >= start_x_mpi && i <= end_x_mpi) ?
>>>                                              mpi_rank :
>>>                                              (i < start_x_mpi ? mpi_rank - 1 : mpi_rank + 1);
>>>                   cell_data.level_subdomain_id = 0;
>>>
>>>                   construction_data.cell_infos[0].push_back(cell_data);
>>>                 }
>>>             }
>>>     }
>>>
>>>   // Create the fully distributed triangulation
>>>   triangulation_pft.create_triangulation(construction_data);
>>> }
>>>
>>>
>>> However, I've run into a new issue with the solution: somehow, I'm 
>>> getting a zero solution despite following the approach in step-40. I 
>>> suspect that the system might not be assembled properly.
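>>>
>>> A minimal check along these lines (just a sketch, assuming the 
>>> step-40-style members system_matrix, system_rhs, and dof_handler; the 
>>> names in my code may differ) should show whether the assembled system is 
>>> actually nonzero:
>>>
>>> // Sketch: print the per-rank number of owned DoFs and the (global)
>>> // norms of the assembled system, right after assembly.
>>> std::cout << "rank "
>>>           << Utilities::MPI::this_mpi_process(_mpi_communicator)
>>>           << ": owned dofs = " << dof_handler.n_locally_owned_dofs()
>>>           << ", |A|_inf = "    << system_matrix.linfty_norm()
>>>           << ", |b|_2 = "      << system_rhs.l2_norm() << std::endl;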
>>>
>>> Do you have any insights or advice on working with parallel fully 
>>> distributed meshes? I would really appreciate your thoughts.
>>>
>>>
>>> Best Regards
>>> Rishabh 
>>>
>>> On Tue, Dec 10, 2024 at 9:35 PM Peter Munch <[email protected]> wrote:
>>>
>>>> Hi Rishabh,
>>>>
>>>> To me it looks like you are not setting up the ghost cells properly. Note 
>>>> that I loop over the local cells and a single layer of ghost cells: 
>>>> https://github.com/dealii/dealii/blob/b2108aabb2ba014b6d35029db19034bded6f135a/tests/fullydistributed_grids/create_manually_01.cc#L70-L74.
>>>>  
>>>> Once this is fixed, DoFs between subdomains will not be duplicated, so 
>>>> the number of DoFs should stay constant.
>>>>
>>>> Say hi to Thomas and the others!
>>>>
>>>> Best,
>>>> Peter
>>>>
>>>> On Tuesday, 10 December 2024 at 16:25:36 UTC+1 [email protected] 
>>>> wrote:
>>>>
>>>>> Dear Prof. Wolfgang,
>>>>>
>>>>> Thank you for your guidance regarding the use of 
>>>>> TriangulationDescription::CellData and for suggesting Peter Munch's 
>>>>> work on creating fully distributed triangulations. It was very 
>>>>> insightful, 
>>>>> and I reviewed the example provided in the repository:
>>>>>
>>>>> https://github.com/dealii/dealii/blob/master/tests/fullydistributed_grids/create_manually_01.cc
>>>>>  
>>>>>
>>>>> Based on this reference, I implemented a method to generate a fully 
>>>>> distributed triangulation for a small microstructure with dimensions 
>>>>> 20x20x20.
>>>>> template <int dim>
>>>>> void LaplaceProblem<dim>::generate_mesh_from_matrix(int Nx, int Ny, int Nz)
>>>>> {
>>>>>   // Prepare the construction data object
>>>>>   TriangulationDescription::Description<dim, dim> construction_data;
>>>>>   construction_data.comm = _mpi_communicator;
>>>>>   construction_data.cell_infos.resize(1); // Level 0 cell info
>>>>>
>>>>>   // Define the number of subdivisions and domain size
>>>>>   std::array<unsigned int, dim> n_subdivision = {{static_cast<unsigned int>(Nx),
>>>>>                                                   static_cast<unsigned int>(Ny),
>>>>>                                                   static_cast<unsigned int>(Nz)}};
>>>>>
>>>>>   //std::array<unsigned int, dim> n_subdivision = {{2, 2, 2}};
>>>>>   std::array<double, dim> sizes = {{1.0, 1.0, 1.0}}; // Assume a unit cube domain for simplicity
>>>>>
>>>>>   // Partition the mesh along the x-axis
>>>>>   const unsigned int mpi_rank = Utilities::MPI::this_mpi_process(_mpi_communicator);
>>>>>   const unsigned int mpi_size = Utilities::MPI::n_mpi_processes(_mpi_communicator);
>>>>>   const auto partitioning_x =
>>>>>     Utilities::create_evenly_distributed_partitioning(mpi_rank, mpi_size, n_subdivision[0]);
>>>>>
>>>>>   if (!partitioning_x.is_empty())
>>>>>     {
>>>>>       const int start_x_mpi = partitioning_x.nth_index_in_set(0);
>>>>>       const int end_x_mpi   = partitioning_x.nth_index_in_set(partitioning_x.n_elements() - 1);
>>>>>
>>>>>       // Print start_x_mpi and end_x_mpi
>>>>>       std::cout << "start_x: " << start_x_mpi << std::endl;
>>>>>       std::cout << "end_x: " << end_x_mpi << std::endl;
>>>>>
>>>>>       // Generate vertices
>>>>>       std::vector<Point<dim>>                           vertices;
>>>>>       std::map<std::tuple<int, int, int>, unsigned int> local_vertex_map;
>>>>>       unsigned int                                      vertex_counter = 0;
>>>>>
>>>>>       for (int i = std::max(0, start_x_mpi - 1); i <= std::min<int>(n_subdivision[0], end_x_mpi + 1); ++i)
>>>>>         for (int j = 0; j <= n_subdivision[1]; ++j)
>>>>>           for (int k = 0; k <= n_subdivision[2]; ++k)
>>>>>             {
>>>>>               Point<dim> p(sizes[0] / n_subdivision[0] * i,
>>>>>                            sizes[1] / n_subdivision[1] * j,
>>>>>                            sizes[2] / n_subdivision[2] * k);
>>>>>               vertices.push_back(p);
>>>>>               local_vertex_map[std::make_tuple(i, j, k)] = vertex_counter++;
>>>>>             }
>>>>>
>>>>>       construction_data.coarse_cell_vertices = vertices;
>>>>>
>>>>>       // Generate local cells
>>>>>       for (int i = start_x_mpi; i <= end_x_mpi; ++i)
>>>>>         for (int j = 0; j < n_subdivision[1]; ++j)
>>>>>           for (int k = 0; k < n_subdivision[2]; ++k)
>>>>>             {
>>>>>               int material_id = mat_matrix[j + k * n_subdivision[1]][i];
>>>>>               if (material_id == _mat_tau_1)
>>>>>                 {
>>>>>                   CellData<dim> coarse_cell;
>>>>>                   coarse_cell.vertices[0] = local_vertex_map[std::make_tuple(i, j, k)];
>>>>>                   coarse_cell.vertices[1] = local_vertex_map[std::make_tuple(i + 1, j, k)];
>>>>>                   coarse_cell.vertices[2] = local_vertex_map[std::make_tuple(i, j + 1, k)];
>>>>>                   coarse_cell.vertices[3] = local_vertex_map[std::make_tuple(i + 1, j + 1, k)];
>>>>>                   if (dim == 3)
>>>>>                     {
>>>>>                       coarse_cell.vertices[4] = local_vertex_map[std::make_tuple(i, j, k + 1)];
>>>>>                       coarse_cell.vertices[5] = local_vertex_map[std::make_tuple(i + 1, j, k + 1)];
>>>>>                       coarse_cell.vertices[6] = local_vertex_map[std::make_tuple(i, j + 1, k + 1)];
>>>>>                       coarse_cell.vertices[7] = local_vertex_map[std::make_tuple(i + 1, j + 1, k + 1)];
>>>>>                     }
>>>>>
>>>>>                   coarse_cell.material_id = material_id;
>>>>>                   construction_data.coarse_cells.push_back(coarse_cell);
>>>>>
>>>>>                   // Map the coarse cell index
>>>>>                   unsigned int cell_id = i + j * n_subdivision[0] + k * n_subdivision[0] * n_subdivision[1];
>>>>>                   construction_data.coarse_cell_index_to_coarse_cell_id.push_back(cell_id);
>>>>>
>>>>>                   // Fully distributed cell data
>>>>>                   TriangulationDescription::CellData<dim> cell_data;
>>>>>                   cell_data.id                 = CellId(cell_id, {}).template to_binary<dim>();
>>>>>                   cell_data.subdomain_id       = mpi_rank;
>>>>>                   cell_data.level_subdomain_id = 0;
>>>>>
>>>>>                   construction_data.cell_infos[0].push_back(cell_data);
>>>>>                 }
>>>>>             }
>>>>>     }
>>>>>
>>>>>   // Create the fully distributed triangulation
>>>>>   triangulation_pft.create_triangulation(construction_data);
>>>>>
>>>>>   // DataOut to handle distributed meshes
>>>>>   DataOut<dim> data_out;
>>>>>   data_out.attach_triangulation(triangulation_pft);
>>>>>
>>>>>   // Add dummy data to ensure mesh output is generated
>>>>>   Vector<double> dummy_solution(triangulation_pft.n_active_cells());
>>>>>   for (unsigned int i = 0; i < dummy_solution.size(); ++i)
>>>>>     dummy_solution[i] = 0.0;
>>>>>   data_out.add_data_vector(dummy_solution, "dummy");
>>>>>
>>>>>   // Build the output
>>>>>   data_out.build_patches();
>>>>>
>>>>>   // Output VTU file
>>>>>   std::string filename = "mesh-" + Utilities::int_to_string(mpi_rank, 4) + ".vtu";
>>>>>   std::ofstream output(filename);
>>>>>   data_out.write_vtu(output);
>>>>> }
>>>>>
>>>>>
>>>>>
>>>>> The VTU files generated for each processor appear correct and are 
>>>>> divided as expected. However, I am encountering an issue during the DoF 
>>>>> distribution phase. 
>>>>> Specifically, the number of DoFs increases as I use more processors. 
>>>>> For instance, using a single processor results in 7918 DoFs, while with 
>>>>> two 
>>>>> processors, the count rises to 8272, and with three processors, it 
>>>>> increases further to 8537.
>>>>>
>>>>> This suggests that there might be an overlap of cell faces, or that 
>>>>> ghost cells are being incorrectly accounted for during the DoF 
>>>>> distribution. I suspect the problem lies in the handling of shared faces 
>>>>> between processors or in inconsistencies in cell ownership. I am 
>>>>> currently reviewing the partitioning logic and cell indexing within 
>>>>> TriangulationDescription::Description.
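>>>>>
>>>>> As a quick sanity check (a rough sketch of my own; it only assumes the 
>>>>> triangulation_pft member used above), one could count the locally owned 
>>>>> and ghost cells on each rank. If the ghost layer between subdomains is 
>>>>> missing, the ghost count stays zero even on more than one processor:
>>>>>
>>>>> // Sketch: count locally owned and ghost cells on this rank.
>>>>> unsigned int n_owned = 0, n_ghost = 0;
>>>>> for (const auto &cell : triangulation_pft.active_cell_iterators())
>>>>>   {
>>>>>     if (cell->is_locally_owned())
>>>>>       ++n_owned;
>>>>>     else if (cell->is_ghost())
>>>>>       ++n_ghost;
>>>>>   }
>>>>> std::cout << "rank " << Utilities::MPI::this_mpi_process(_mpi_communicator)
>>>>>           << ": owned cells = " << n_owned
>>>>>           << ", ghost cells = " << n_ghost << std::endl;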
>>>>>
>>>>> If you have any further suggestions or recommendations to address this 
>>>>> issue, they would be greatly appreciated. Thank you once again for your 
>>>>> support, and I look forward to your feedback.
>>>>>
>>>>> Best regards,
>>>>> Rishabh
>>>>>
>>>>> On Tue, Dec 10, 2024 at 2:03 PM Wolfgang Bangerth <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> On 12/9/24 08:46, Rishabh Saxena wrote:
>>>>>> >                    cell.vertices = {vertex_map[std::make_tuple(i, 
>>>>>> j, k)],
>>>>>>
>>>>>>  >[...]
>>>>>>
>>>>>> > When I compile the code, I get the following errors:
>>>>>> > 
>>>>>> > /home/saxenar/local/programs/tortuosity_github/source/ 
>>>>>> > tortuosity_parallel_FDM_discriptor_MPI.cc:957:26: error: »struct 
>>>>>> > dealii::TriangulationDescription::CellData<3>« has no member 
>>>>>> > named »vertices«
>>>>>>
>>>>>> Rishabh:
>>>>>> the error message says that TriangulationDescription::CellData simply 
>>>>>> does not 
>>>>>> have member variables 'vertices', 'material_id', etc., and that is 
>>>>>> true:
>>>>>>
>>>>>>
>>>>>> https://dealii.org/developer/doxygen/deal.II/structTriangulationDescription_1_1CellData.html
>>>>>> The general description of the class also explains why it does not 
>>>>>> need to 
>>>>>> store the geometric information you are trying to describe.
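>>>>>>
>>>>>> (Schematically, and with placeholder names v0...v7, cell_index, and 
>>>>>> my_rank, the split is roughly the following: the geometry goes into an 
>>>>>> ordinary dealii::CellData<dim> stored in construction_data.coarse_cells, 
>>>>>> while TriangulationDescription::CellData only carries id and ownership 
>>>>>> information:)
>>>>>>
>>>>>> // Geometry: vertex indices and material id.
>>>>>> dealii::CellData<3> geometric_cell;
>>>>>> geometric_cell.vertices[0] = v0;
>>>>>> // ... vertices[1] to vertices[7] likewise ...
>>>>>> geometric_cell.material_id = 1;
>>>>>> construction_data.coarse_cells.push_back(geometric_cell);
>>>>>>
>>>>>> // Ownership: which rank owns the cell, identified by its CellId.
>>>>>> dealii::TriangulationDescription::CellData<3> ownership_cell;
>>>>>> ownership_cell.id                 = dealii::CellId(cell_index, {}).to_binary<3>();
>>>>>> ownership_cell.subdomain_id       = my_rank;
>>>>>> ownership_cell.level_subdomain_id = my_rank;
>>>>>> construction_data.cell_infos[0].push_back(ownership_cell);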
>>>>>>
>>>>>> I believe that Peter Munch posted a piece of code here on this forum 
>>>>>> a while 
>>>>>> ago that creates an xyz-uniform fully distributed triangulation like 
>>>>>> the one 
>>>>>> you are setting up here. You might want to search through the archive 
>>>>>> and see 
>>>>>> if you can find it!
>>>>>>
>>>>>> Best
>>>>>>   WB
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>
>
> -- 
> Thanks & Regards
> Rishabh Saxena
> Helmut Schmidt University || University of the Federal Armed Forces Hamburg
> Applied Mathematics
> Holstenhofweg 85, 22043 Hamburg, Germany
> Office: H 1, Room No. 1225
> Email: [email protected]
>

