On Jan 12, 2009, at 10:05 AM, Roy Stogner wrote:

> Really, what we need is to properly parallelize MeshFunction, but I'm
> not sure what the right API is to do that with MPI.  If processor A
> wants to evaluate the function on a point in processor B's elements,
> processor B (which probably has a whole list of evaluations it needs
> itself) is going to need to be notified somehow to interrupt what it's
> doing and help A.

We had this capability in Sierra... the algorithm went something like  
this:

1.  Each processor builds a bounding box that encompasses the
elements it owns.
2.  Each processor communicates its bounding box to every other
processor.
3.  Each processor does a coarse search against the bounding boxes to
group the points it's interested in by processor.
4.  The points most likely to fall on each processor are transmitted
to that processor.
5.  A fine-grained search on every processor is done to ensure that
the points actually do fall in an element that processor owns.  If a
point doesn't, a second guess is made at which processor it belongs
to and another round of communication happens.
6.  After every processor has the set of points to be evaluated that
fall in its elements... it then does all of the evaluations.
7.  The evaluations are then sent back to the original processor
that needed the information.
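Just to make the coarse-search phase (steps 1-4) concrete, here's a
minimal serial sketch.  The names (BoundingBox, coarse_candidates) are
illustrative, not the actual Sierra API, and the MPI exchange of boxes
and points is omitted -- this only shows how each processor would bin
its query points by candidate owner once it has everyone's box:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Axis-aligned box around one processor's elements (step 1).
struct BoundingBox {
  std::array<double, 3> min, max;

  bool contains(const std::array<double, 3>& p) const {
    for (int d = 0; d < 3; ++d)
      if (p[d] < min[d] || p[d] > max[d]) return false;
    return true;
  }
};

// Given every processor's bounding box (gathered in step 2) and the
// points this processor wants evaluated, return for each point the
// ranks whose boxes contain it (step 3).  A point can land in several
// boxes because boxes overlap; the fine-grained search in step 5 is
// what resolves those ties.
std::vector<std::vector<std::size_t>>
coarse_candidates(const std::vector<BoundingBox>& boxes,
                  const std::vector<std::array<double, 3>>& points) {
  std::vector<std::vector<std::size_t>> owners(points.size());
  for (std::size_t i = 0; i < points.size(); ++i)
    for (std::size_t p = 0; p < boxes.size(); ++p)
      if (boxes[p].contains(points[i])) owners[i].push_back(p);
  return owners;
}
```

In the real thing you'd then pack each point into a send buffer keyed
by candidate rank (step 4), which is where the communication rounds
come in.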

Yes.... this is fairly involved.... and there might be a better way.   
There was quite a bit of code required to make all of this work (as  
you might imagine... this is Sierra of course!)... but it _did_ work.   
We used this capability in Encore frequently for our postprocessing  
needs.

Derek

_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users
