Hi Roy,

So, I've added support for multiple SCALAR variables (see the attached patch), and that works fine. To do this I added a std::map member to DofMap that stores the Order of each SCALAR variable.
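
Roughly, the new member looks like this (simplified sketch; the name here is just illustrative and may differ from the patch):

    // In dof_map.h (sketch):
    #include <map>
    #include "enum_order.h"  // for Order

    // Inside class DofMap: record the Order of each SCALAR variable so
    // DofMap knows how many dofs that variable contributes.
    std::map<unsigned int, Order> _scalar_var_order;  // var number -> Order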

Also, I copied the SCALAR handling from DofMap::dof_indices into DofMap::old_dof_indices, with n_old_dofs() in place of n_dofs().
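
The idea is just the same loop as in dof_indices, counting back from the old dof total; schematically (n_scalar_dofs and scalar_offset stand in for whatever the patch actually computes):

    // In DofMap::old_dof_indices(): SCALAR dofs live at the end of the
    // old global numbering, so index back from n_old_dofs().
    for (unsigned int i = 0; i != n_scalar_dofs; ++i)
      di.push_back (this->n_old_dofs() - scalar_offset + i);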

However, I realized that a system with a SCALAR variable breaks on calling equation_systems.reinit() after doing (say) a uniform refinement, since the vector projection ends up calling into fe_scalar_shape_2D.C. I haven't looked at this in detail yet, but I guess we should just map each SCALAR value in the old vector to the corresponding entry in the projected vector...
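
I'm imagining something like this for that step (very rough, untested; dof_map, elem, scalar_var, old_vector and new_vector are whatever the projection routine already has in hand):

    // Copy each SCALAR value straight across: there is nothing to
    // interpolate for a SCALAR dof, so old and new values are identical.
    std::vector<unsigned int> old_di, new_di;
    dof_map.old_dof_indices (elem, old_di, scalar_var);
    dof_map.dof_indices     (elem, new_di, scalar_var);
    for (std::size_t i = 0; i != new_di.size(); ++i)
      new_vector.set (new_di[i], old_vector (old_di[i]));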

- Dave



Roy Stogner wrote:

On Fri, 24 Apr 2009, David Knezevic wrote:

Making it possible to add multiple SCALARs would not be difficult; I can take care of that. Getting it to work in parallel may be a bit trickier (at least for me). How about I take care of the first point and have another look at what would be required to get it going in parallel before you add the patch?

Hmm... add the multiple SCALARs stuff now if it's easy, add a check
that calls libmesh_error() if DofMap sees a SCALAR with
n_processors() > 1, and then we'll add the patch from there.  I'm not
averse to adding in-progress code to SVN, I just don't want it to
fail silently if anyone not on the mailing list notices and tries out
the capability.
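
Something along these lines, wherever it fits best in DofMap (just a
sketch; exact placement and wording up to you):

    // Bail out loudly rather than failing mysteriously in parallel.
    if (this->variable_type(var).family == SCALAR &&
        libMesh::n_processors() > 1)
      {
        libMesh::err << "ERROR: SCALAR variables are not yet supported "
                     << "in parallel!" << std::endl;
        libmesh_error();
      }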
---
Roy


Attachment: SCALAR_patch.gz
