Pratik,
deal.II has the parallel::distributed::Triangulation class, which
provides the functionality to partition and distribute meshes using
p4est's space-filling-curve partitioning. As I understand from step-40,
using parallel::distributed::Triangulation one can automatically
partition the domain into equal (load-balanced) parts and distribute the
mesh to the separate processes so that no single process needs to store
the entire mesh (except the initial coarse mesh). I would like to extend
this to an overlapping decomposition in which each process has a certain
overlap with the neighboring processes, and possibly impose boundary
conditions on these overlap nodes.
I understand that in step-40 there are ghost cells, which are actually
owned by the neighboring processes but serve a similar purpose. However,
as mentioned here (
https://groups.google.com/forum/#!searchin/dealii/overlap|sort:date/dealii/e-V2ZaPed1c/WMsZGtT2wWkJ
) it seems you cannot control the width of the ghost layer, and I am not
sure whether one can impose boundary conditions on those cells.
Using METIS, I could probably use partMesh_Kway and then do a
breadth-wise inclusion of the neighboring nodes based on the desired
overlap, but I am not sure how to accomplish this with p4est in deal.II.
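For what it's worth, the breadth-wise inclusion I have in mind could look
like the following sketch. All names here are hypothetical, and a plain
adjacency list stands in for the mesh connectivity that METIS or p4est
would actually provide; it just grows each subdomain by a given number of
layers of neighboring cells:

```cpp
#include <set>
#include <vector>

// adjacency[c] lists the cells sharing a face with cell c;
// owner[c] is the rank the (non-overlapping) partitioner assigned to c.
// Returns the cells of `my_rank` plus `overlap` layers of neighbors.
std::set<int>
overlapped_subdomain(const std::vector<std::vector<int>> &adjacency,
                     const std::vector<int>              &owner,
                     const int                            my_rank,
                     const int                            overlap)
{
  std::set<int>    cells;
  std::vector<int> frontier;
  for (int c = 0; c < static_cast<int>(owner.size()); ++c)
    if (owner[c] == my_rank)
      {
        cells.insert(c);
        frontier.push_back(c);
      }

  // Breadth-first expansion: each pass adds one layer of overlap.
  for (int layer = 0; layer < overlap; ++layer)
    {
      std::vector<int> next;
      for (const int c : frontier)
        for (const int n : adjacency[c])
          if (cells.insert(n).second) // true only for newly added cells
            next.push_back(n);
      frontier.swap(next);
    }
  return cells;
}
```

The cells in the result that are not owned by `my_rank` would then be the
overlap region on which I would like to impose boundary conditions.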
In short, using p4est, I would like an overlapping decomposition in a
parallel distributed setting where I can impose boundary conditions on
the overlapped boundary nodes. These overlapped nodes would be part of
both the current process and the neighboring process.
Any suggestions or alternative recommendations would be really helpful.
You are correct that deal.II currently only allows one layer of ghost
cells around the locally owned region. I believe that this could be
changed, however, given that p4est (to the best of my knowledge) allows
widening the ghost layer. It would be a bit of work to figure out where
in deal.II one would need to call the corresponding p4est functions, but
I imagine that is feasible if you wanted to dig around a bit. (We'd be
happy to provide you with the respective pointers.)
The bigger problem is that with the approach you suggest, you would have
to enumerate the degrees of freedom that live on each processor
independently of the global numbering, so that you can build the linear
system on each processor's subdomain plus its layers of ghost cells.
There is no functionality for this at the moment. I suspect that you
could build it as a simple map from global DoFs to local DoFs, though,
so that would likely also be feasible.
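Such a map could be as simple as the following sketch. The names are
hypothetical, and the input list of locally relevant global DoF indices
stands in for whatever the enlarged ghost layer would give you; it simply
numbers these DoFs consecutively so that a per-process linear system can
be sized and assembled with local indices:

```cpp
#include <map>
#include <vector>

// Assign consecutive local indices 0, 1, 2, ... to the given global
// DoF indices, in the order in which they are listed.
std::map<long, int>
make_global_to_local(const std::vector<long> &locally_relevant_dofs)
{
  std::map<long, int> global_to_local;
  int                 local_index = 0;
  for (const long global_dof : locally_relevant_dofs)
    global_to_local[global_dof] = local_index++;
  return global_to_local;
}
```

During assembly, one would translate each global index through this map
before writing into the local matrix and right hand side; the inverse
direction is just the vector itself.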
I think the question I would ask you is why you want to do this. I know
that overlapping domain decomposition methods were popular in the 1990s
and early 2000s, primarily because they allowed the re-use of existing
sequential codes: each processor simply has to solve its own local PDE,
and all communication is restricted to exchanging boundary value
information. But we know today that (i) this does not scale very well to
large numbers of processors, and (ii) global methods, where you solve
one large linear system across many processors as we do in step-40 for
example, are a much better approach. In other words, the reason there is
currently little support for overlapping DD methods in deal.II is that,
as a community, we have recognized that these methods are not as good as
others that have been developed over the last 20 years.
Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth email: [email protected]
www: http://www.math.colostate.edu/~bangerth/