We would like this capability as well.

Right now we just solve both physics over _all_ of the processors (one
after another), which isn't always ideal.  It would be great to have more
flexibility here - even if it comes at the cost of a bit more work anywhere
we make actual parallel calls...

Derek

On Wed, Oct 24, 2012 at 6:53 AM, Kirk, Benjamin (JSC-EG311) <
benjamin.kir...@nasa.gov> wrote:

> Let me first describe the use case I have before I get to the question,
> and what I think is a problem…
>
> I have two sets of physics, A & B, which are coupled through boundary data
> along their common boundary interface.  For a number of reasons, physics A
> and B are implemented in standalone codes.  In this case one or both
> happens to be a libMesh app.
>
> Now, I want to pass data between the two.  This is the genesis of my
> "would you be interested in a point cloud interpolation?" question some
> months ago.
>
> To pass data between the two, I of course want to use libMesh.  And I'd
> like to do this all in one app instead of through file I/O or anything like
> that.
>
> In pseudocode, what I envision is
>
>
> LibMeshInit init(argc, argv);
>
> PointCloudData shared_data;
>
> MPI_Comm SPLIT_COMM = /* MPI_Comm_split of MPI_COMM_WORLD */;
>
> PhysicsA physicsA(SPLIT_COMM, shared_data);
> PhysicsB physicsB(SPLIT_COMM, shared_data);
>
> while (time < tmax)
>   {
>     physicsA.advance(dt);
>     physicsB.advance(dt);
>   }
>
>
> Inside each advance method, each physics puts/gets data from the
> shared_data structure, and they each run on a subset of the total number
> of processors.
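>
> To make that concrete, the kind of split I have in mind is just a
> color-based MPI_Comm_split.  A minimal sketch - the half-and-half color
> assignment and the PhysicsA/PhysicsB/shared_data names are just the
> placeholders from the pseudocode above:
>
>   #include <mpi.h>
>
>   int world_rank, world_size;
>   MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
>   MPI_Comm_size(MPI_COMM_WORLD, &world_size);
>
>   // color 0 -> physics A on the lower half of the ranks,
>   // color 1 -> physics B on the rest
>   const int color = (world_rank < world_size/2) ? 0 : 1;
>
>   MPI_Comm SPLIT_COMM;
>   MPI_Comm_split(MPI_COMM_WORLD, color, /*key=*/world_rank, &SPLIT_COMM);
>
>   // each rank then constructs only the physics matching its color,
>   // handing it SPLIT_COMM instead of MPI_COMM_WORLD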
>
> But the problem:  main() and physicsA (resp. physicsB) will coexist on
> some of the processors.  And recall at least one of them also uses libMesh.
>  As I see it now, all the extern data we have (especially the MPI
> communicator!) makes this impossible.  The only way I can think of to do it
> properly is to move all that extern stuff into the LibMeshInit object, and
> have it proliferate through the object tree - in particular make it into
> the mesh, equation systems, etc…
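>
> Roughly what I'm picturing - and these are hypothetical signatures, not
> anything that exists in the library today - is that every
> communicator-aware object gets handed its communicator at construction
> time instead of reaching for a global:
>
>   LibMeshInit init(argc, argv, SPLIT_COMM); // init owns the comm, no global state
>
>   Mesh mesh(init.comm());                   // mesh lives on the sub-communicator
>   EquationSystems es(mesh);                 // inherits the mesh's communicator
>
>   // ...so two independent libMesh apps could coexist in one executable,
>   // each on its own disjoint sub-communicator.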
>
> Am I missing something?  This doesn't happen in the Queso integration
> because on any one processor there is at most one libMesh app, right?
>
> This is a pretty valuable use case, and I'm inclined to fix it, but it
> could be a major change and I don't want to jump into it without thinking
> it through…
>
> -Ben
>
>
>
>