On May 11, 2010, at 3:01 PM, Roy Stogner wrote:

> 
> 
> On Tue, 11 May 2010, David Knezevic wrote:
> 
>>>> So now I'm getting mesh motion, though if GMV is to be believed it's
>>>> only the outermost layer of elements that are moving. I'll have a
>>>> closer look at what's happening...
>>> 
>>> Thanks!
>> 
>> Oh, my mistake, I just needed to add system.mesh_position_set() after 
>> system.time_solver->advance_timestep(). It works nicely now, cool!
> 
> No, that's not your mistake, that's mine - mesh_position_set() ought
> to be handled transparently by the library.  The trick is figuring
> out how to do that in the most modular way.  Doing it in
> advance_timestep() is out because we want it to work right in steady
> solves too.  Doing it in FEMSystem::solve() is probably most
> reasonable; I'm just not 100% sure that won't mess up the way
> TwoStepTimeSolver works.
> 
> But hey, since ALE is in a known broken state anyway we might as well
> move it to "less broken".  I'll add the mesh_position_set() to SVN
> now.
> 
> 
> Next step is getting it working with MPI on SerialMesh.  If we
> construct a SERIAL vector in mesh_position_set, localize the solution onto
> it, swap it with current_local_solution, and then swap it back and
> let it destruct when we're done, that ought to be a sufficient hack...
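> 
> Something like this, very roughly (untested, and the exact init()
> arguments are from memory, so take the details as a sketch):
> 
>   AutoPtr<NumericVector<Number> > serial_soln =
>     NumericVector<Number>::build();
>   serial_soln->init (this->n_dofs(), false, SERIAL);
> 
>   // Gather every dof onto every processor
>   this->solution->localize (*serial_soln);
> 
>   // Temporarily let current_local_solution be the fully serialized copy
>   this->current_local_solution->swap (*serial_soln);
> 
>   // ... set node positions from current_local_solution as usual ...
> 
>   // Put the real current_local_solution back; serial_soln cleans
>   // itself up when it goes out of scope
>   this->current_local_solution->swap (*serial_soln);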
> 
> But what I'd rather do is fix it for SerialMesh and ParallelMesh at
> once.  Change mesh_position_set to work only on local active elements,
> do a Parallel::sync_dofobject_data_by_id to grab the new locations for
> every other node, then loop over nonlocal nodes to set them.  I ought
> to have some time to try this out next week.
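> 
> In code, that might look something along these lines -- the functor
> name is invented and I haven't double-checked the sync interface, so
> treat this as a sketch:
> 
>   struct SyncNodalPositions
>   {
>     typedef Point datum;   // one position per node id
> 
>     SyncNodalPositions (MeshBase &m) : mesh(m) {}
> 
>     // Report the new positions of the nodes this processor has set
>     void gather_data (const std::vector<unsigned int> &ids,
>                       std::vector<datum> &data)
>     {
>       data.resize(ids.size());
>       for (std::size_t i = 0; i != ids.size(); ++i)
>         data[i] = *mesh.node_ptr(ids[i]);
>     }
> 
>     // Copy positions received from the owners onto our nonlocal nodes
>     void act_on_data (const std::vector<unsigned int> &ids,
>                       const std::vector<datum> &data)
>     {
>       for (std::size_t i = 0; i != ids.size(); ++i)
>         {
>           Node &node = *mesh.node_ptr(ids[i]);
>           for (unsigned int d = 0; d != LIBMESH_DIM; ++d)
>             node(d) = data[i](d);
>         }
>     }
> 
>     MeshBase &mesh;
>   };
> 
>   // ... after setting positions on local active elements only:
>   SyncNodalPositions sync_object(mesh);
>   Parallel::sync_dofobject_data_by_id
>     (mesh.nodes_begin(), mesh.nodes_end(), sync_object);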
> 
> 
> The final step is getting it working with Threads.  I suspect that the
> best way to do that is to actually construct copies of Elem and Node
> objects as they're being used for evaluations, and move the copies.
> This way you can have two neighboring elements moving independently on
> different threads.  This would require some extra hassle to handle
> neighboring elements if dof_map.use_coupled_neighbor_dofs(mesh) is
> true... but that could wait, and even that hassle would probably be
> exceeded by the hassle of doing locking properly and efficiently
> otherwise.
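> 
> Stripped of the libMesh details, the idea is just "evaluate and move a
> thread-private copy; touch the shared mesh only in a serialized update
> step", e.g. (illustration only, not real libMesh classes):
> 
>   struct ElemCopy
>   {
>     std::vector<Point> nodes;  // thread-private copies of the node locations
> 
>     // Each thread displaces only its own copies, so two neighboring
>     // elements can move on different threads without locking
>     void displace (const std::vector<Point> &du)
>     {
>       for (std::size_t i = 0; i != nodes.size(); ++i)
>         nodes[i] += du[i];
>     }
>   };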
> 
> Perhaps I should have done the TimeSolver integration on global
> vectors rather than per-element, so we'd be moving the whole mesh at
> once rather than one element at a time.  The way I do it uses less
> memory and is faster for non-ALE codes, but it's definitely made the
> code more convoluted.  Premature optimization is the root of something
> something...

"ALL EVIL" - Knuth

> ---
> Roy
> 


------------------------------------------------------------------------------

_______________________________________________
Libmesh-devel mailing list
Libmesh-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-devel
