> Multi-physics problems usually have physics with different length
> scales and different time scales. It is necessary to use appropriate
> meshes, depending on the physics, to resolve the evolution of the
> solution; a single mesh (the union of all physics meshes) will carry
> a much higher DoF count than needed, and I would literally be
> overkilling the problem.
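
(To put a rough number on that concern: if one field needs mesh spacing
h and another needs h/4, a single shared mesh at h/4 in 3-D carries
4^3 = 64 times the elements the coarse field actually requires.  The
factor of 4 is made up for illustration, but the scaling is real.)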

I don't disagree that the optimal mesh for each component would be great to
have, but I just hope you aren't underestimating the overhead involved in
seeking that composite mesh.  The overhead is *especially* severe in the
transient case.  When you add it all together, the
transient+adaptive+nonlinear+linear nested looping really can get out of
control.
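
To make that nesting concrete, here is a minimal sketch of the loop
structure I mean.  It is plain C++; every function in it is a hypothetical
stand-in for the real solver machinery, not libMesh API:

#include <cstdio>

// Hypothetical stand-ins for the real solver pieces.
bool solve_linear_system()            { return true; }     // innermost Krylov solve
bool nonlinear_converged(int it)      { return it >= 3; }  // e.g. Newton tolerance met
bool mesh_needs_refinement(int cycle) { return cycle < 2; }
void refine_reallocate_repartition_project() {}

int main()
{
  const int n_timesteps = 10;
  for (int step = 0; step < n_timesteps; ++step)            // transient loop
    {
      int amr_cycle = 0;
      while (mesh_needs_refinement(amr_cycle++))            // adaptive loop
        {
          for (int it = 0; !nonlinear_converged(it); ++it)  // nonlinear loop
            solve_linear_system();                          // linear loop, innermost
          refine_reallocate_repartition_project();          // the expensive part
        }
      std::printf("completed step %d\n", step);
    }
  return 0;
}

Chasing the "optimal" mesh means paying for the
refine/reallocate/repartition/project step on every pass through the
adaptive loop, every timestep.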

My experience with a number of transient multiphysics problems has shown
this repeatedly.  The "optimal" mesh in a transient reactive problem is
elusive -- it will be different at each timestep!  Sure, the DoF count may
be lower, but when you roll it all together, my tried-and-true approach is
to throw more mesh than you need at the current timestep and then go a
while before refining (and reallocating (and repartitioning (and projecting
...))).  I can almost guarantee much faster walltime with this approach.  If
the implicit system gets big you can run on a bigger machine, right? ;-)
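
In libMesh terms the strategy is simple to express.  The sketch below is
modeled on the transient adaptivity examples that ship with the library;
header paths and some signatures vary between libMesh versions, the
assembly routine for the physics is omitted entirely, and refine_every = 10
is an arbitrary choice, so treat this as an outline rather than a drop-in
program:

#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/equation_systems.h"
#include "libmesh/transient_system.h"
#include "libmesh/linear_implicit_system.h"
#include "libmesh/mesh_refinement.h"
#include "libmesh/kelly_error_estimator.h"
#include "libmesh/error_vector.h"
#include "libmesh/enum_order.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  Mesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 20, 20);

  EquationSystems equation_systems (mesh);
  TransientLinearImplicitSystem & system =
    equation_systems.add_system<TransientLinearImplicitSystem> ("Flow");
  system.add_variable ("u", FIRST);
  // system.attach_assemble_function (assemble_flow); // physics omitted here
  equation_systems.init ();

  // Throw more mesh than the first timestep strictly needs...
  MeshRefinement mesh_refinement (mesh);
  mesh_refinement.refine_fraction ()  = 0.80;  // refine aggressively
  mesh_refinement.coarsen_fraction () = 0.05;  // coarsen reluctantly

  const unsigned int n_timesteps  = 50;
  const unsigned int refine_every = 10;        // ...then amortize the AMR cost

  for (unsigned int step = 0; step < n_timesteps; ++step)
    {
      *system.old_local_solution = *system.current_local_solution;
      system.solve ();

      // Pay for estimate/flag/refine (and reallocate (and repartition
      // (and project ...))) only once every refine_every steps.
      if ((step + 1) % refine_every == 0)
        {
          ErrorVector error;
          KellyErrorEstimator error_estimator;
          error_estimator.estimate_error (system, error);

          mesh_refinement.flag_elements_by_error_fraction (error);
          mesh_refinement.refine_and_coarsen_elements ();
          equation_systems.reinit (); // reallocates and projects the solution
        }
    }

  return 0;
}

The DoF count in between refinements is higher than strictly necessary,
but you only pay the adaptivity overhead one step in ten.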

-Ben

