On Mon, 12 Jan 2009, Derek Gaston wrote:

> On Jan 12, 2009, at 10:05 AM, Roy Stogner wrote:
>
>> Really, what we need is to properly parallelize MeshFunction, but I'm
>> not sure what the right API is to do that with MPI. If processor A
>> wants to evaluate the function on a point in processor B's elements,
>> processor B (which probably has a whole list of evaluations it needs
>> itself) is going to need to be notified somehow to interrupt what it's
>> doing and help A.
>
> We had this capability in Sierra... the algorithm went something like this:
>
> 1. On each processor, build a bounding box that encompasses the elements
>    held by that processor.
> 2. Each processor communicates its bounding box to every other processor.
> 3. Each processor does a coarse search using the bounding boxes to group
>    the points it's interested in by processor.
> 4. The points most likely to fall on each processor are transmitted to
>    that processor.
> 5. A fine-grained search on every processor ensures that the points
>    actually do fall in an element that processor owns. If a point doesn't,
>    a second guess is made at which processor it fits on and another round
>    of communication happens.
> 6. Once every processor has the set of points that fall in its elements,
>    it does all of the evaluations.
> 7. The evaluations are then sent back to the original processor that
>    needed the information.
>
> Yes... this is fairly involved... and there might be a better way.
Actually, I suspect this is the best way to do things (with MPI, at least), and the amount of work involved isn't too scary after writing the ParallelMesh synchronization code.

What's held me off from writing it is that it's not a parallelization of the old API; it would require a wholly new API, so it's not a drop-in fix for old code. On the other hand, maybe that's a reason to write it now: put in the new API, write the quick serial implementation for it, and mark the individual evaluation function as deprecated() to encourage users to switch. Then write the parallel implementation later.
---
Roy

_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users
