Hi all,
I have a project that requires reading a large coarse mesh (> 1M DoFs) from
gmsh into deal.II. Most cells have their own characteristics, which means I
cannot merge them into a smaller coarse mesh.
Currently, I have implemented this using a shared-memory triangulation for
parallelization. Since I want to scale to a cluster and target a mesh with
100M cells (no mesh refinement needed), I have to use a distributed
triangulation via MPI (is there a better solution?). I found that the
initial cost is large because the triangulation and the p4est forest are
duplicated on every process. I was wondering whether there is any way to
remove part of the triangulation or p4est data, or whether I can build the
initial distributed triangulation myself. As I understand it, the vertices
and cells need a cross-reference for their IDs (the cell id acts like the
deal.II-p4est permutation vectors). However, such changes require a
comprehensive understanding of deal.II's whole triangulation system. I
would appreciate any ideas on this. Thanks!
Here is one simple trick in case you have a similar project. To read cells
with different characteristics, the cell id is a good way to identify each
cell and assign it its own parameters. However, since deal.II stores the
cell id in a char, at most 256 distinct ids are available. A simple fix is
to change the type of the cell id in types.h to unsigned int, along with
the corresponding instantiations.
On the other hand, if you are using SolutionTransfer with a shared
triangulation, you can add
"(all_out[0]).compress(VectorOperation::insert);" in solution_transfer.cc,
so that the interpolate function becomes:
template <int dim, typename VectorType, typename DoFHandlerType>
void SolutionTransfer<dim, VectorType, DoFHandlerType>::interpolate(
  const VectorType &in,
  VectorType       &out) const
{
  Assert(in.size() == n_dofs_old,
         ExcDimensionMismatch(in.size(), n_dofs_old));
  Assert(out.size() == dof_handler->n_dofs(),
         ExcDimensionMismatch(out.size(), dof_handler->n_dofs()));

  std::vector<VectorType> all_in(1);
  all_in[0] = in;
  std::vector<VectorType> all_out(1);
  all_out[0] = out;

  interpolate(all_in, all_out);

  all_out[0].compress(VectorOperation::insert); // this helps avoid an
                                                // interpolation error
  out = all_out[0];
}
Another simple trick to speed up the setup: you can turn off the mesh
reordering function, order the mesh yourself, and supply a dummy
permutation. The idea behind p4est is to limit communication, so you only
need to make sure that adjacent cells are grouped properly. This trick
gives a huge saving in setup time for large meshes (> 1M DoFs).
Best,
YC Chen
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
For more options, visit https://groups.google.com/d/optout.