YC,

I have a project that requires reading a large coarse mesh (about 1M
DoFs) from gmsh into deal.II. Most of the cells have their own
characteristics, which means I cannot merge them into a smaller coarse
mesh.
Currently I have implemented this with a shared-memory triangulation
(parallel::shared::Triangulation) for parallelization. Since I want to
scale to a cluster and target a 100M-cell mesh (no mesh refinement is
needed), I have to use a distributed triangulation
(parallel::distributed::Triangulation) via MPI (is there a better
solution?). I found that the initial cost is large because the
triangulation and the p4est forest are duplicated on every process. I
was wondering whether there is any way to remove part of the
triangulation or p4est data.
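
For context, my current distributed setup looks roughly like this (a
minimal sketch, assuming a recent deal.II release; the file name is a
placeholder):

  #include <deal.II/base/mpi.h>
  #include <deal.II/distributed/tria.h>
  #include <deal.II/grid/grid_in.h>

  #include <fstream>

  int main(int argc, char *argv[])
  {
    using namespace dealii;
    Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv);

    // The coarse mesh read here is replicated on every MPI rank;
    // p4est only distributes ownership of the active cells.
    parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);

    GridIn<3> grid_in;
    grid_in.attach_triangulation(triangulation);
    std::ifstream input("mesh.msh"); // placeholder file name
    grid_in.read_msh(input);

    return 0;
  }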

No, unfortunately there is not. p4est is built on the assumption that
the coarse mesh is replicated on all processors, and deal.II inherits
this assumption. If your coarse mesh has 1M cells, that may just barely
be tolerable, although it will likely lead to inefficient code in
places where you loop over all cells and almost all of them turn out to
be artificial. But I suspect that you will be in serious trouble if
your coarse mesh has 100M cells.
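
If you do stay with this approach, the usual idiom at least avoids
doing work on cells you do not own (a sketch; note that it does not
remove the cost of iterating over the replicated cells themselves):

  // Do work only on locally owned cells. The iteration still visits
  // every (mostly artificial) cell, which is the inefficiency
  // mentioned above.
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        // ... assemble / evaluate on this rank's cells only ...
      }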

You should really try to come up with a coarser coarse mesh that you can then refine.
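
In code, that approach might look like this (a sketch only; the coarse
file name and the number of refinement steps are placeholders):

  // Read a genuinely coarse mesh -- small enough to replicate -- and
  // let the distributed triangulation refine it.
  parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);

  GridIn<3> grid_in;
  grid_in.attach_triangulation(triangulation);
  std::ifstream input("coarse.msh"); // placeholder: a much coarser mesh
  grid_in.read_msh(input);

  // Each global refinement multiplies the cell count by 8 in 3d, so a
  // ~25k-cell coarse mesh reaches ~100M cells after 4 sweeps.
  triangulation.refine_global(4);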

Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/
