On Oct 28, 2009, at 11:16 AM, Roy Stogner wrote:

> If only for backwards compatibility's sake, I'd rather not remove it.
> But since that seems to be a popular option, why not avoid caching it
> and just make it an O(N) operation?  Instead of looping over elements
> and counting unique subdomains during prepare_for_use(), we could do
> so during MeshBase::n_subdomains().  Then if nobody really uses that
> function they don't incur any cost, but if someone does need it it's
> there.
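For concreteness, the on-demand version could be a single O(N) pass
over the active elements, collecting unique ids into a std::set.
Just a sketch (written as a free function; the header paths, iterator
names, and typedefs follow libMesh conventions but haven't been
checked against the tree):

  #include <set>
  #include "libmesh/mesh_base.h"
  #include "libmesh/elem.h"

  using namespace libMesh;

  // One O(N) pass over the active elements, collecting unique
  // subdomain ids.  The real thing would live in
  // MeshBase::n_subdomains() rather than in a free function.
  unsigned int count_subdomains (const MeshBase & mesh)
  {
    std::set<subdomain_id_type> ids;

    MeshBase::const_element_iterator       el  = mesh.active_elements_begin();
    const MeshBase::const_element_iterator end = mesh.active_elements_end();

    for (; el != end; ++el)
      ids.insert ((*el)->subdomain_id());

    // On a distributed mesh this would also need a parallel union of
    // the per-processor sets before taking the size.
    return ids.size();
  }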
The one time I know that everyone's code calls n_subdomains() is in
mesh->print_info()... so we would incur an O(N) hit for that function.
I'm personally not worried by that (our current codes call it about
once per timestep, depending on adaptivity)... just thought I would
point it out.

>> 3. Arbitrary data (like a Parameters pointer) associated with
>> elements is BAD.
>
> I hate "pointer-to-void" data, since the whole point of C++ is
> supposed to be escaping that kind of low level stuff.
>
> But perhaps one day we might have a configure-time option associating
> a "pointer-to-ElemData" with each Elem and/or a "pointer-to-NodeData"
> on each Node.  Then we make ElemData and NodeData both Abstract Base
> classes, where we could add as many pure virtual methods as are
> necessary to let the library handle them, e.g.:
>
>> Anytime you head down this path you get yourself into trouble with
>> refinement and coarsening.  The crux of the problem is
>> understanding how to project the data down to the refined elements
>> or up to a coarsened one.
>
> Creating pure virtual methods like ElemData::coarsen() (takes a
> vector of Elem*, is responsible for fixing up their parent's data
> pointer) and refine() (takes a parent Elem*, is responsible for
> fixing up the children's data pointers) would solve this - the user
> literally wouldn't be able to get their code to compile without
> making a decision about how it should handle AMR/C.  I'd add
> serialize()/unserialize() methods too, to handle parallel
> communication and I/O.
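For concreteness, I imagine the interface Roy is describing would look
something like the sketch below (the class and method names are from
his proposal; the signatures are guesses, and none of it exists in the
library today):

  #include <vector>

  class Elem;  // stands in for libMesh's Elem

  // Hypothetical abstract base class for user data attached to an
  // element.  The pure virtual methods force the user to decide how
  // the data behaves under AMR/C and parallel communication.
  class ElemData
  {
  public:
    virtual ~ElemData () {}

    // Called on coarsening; takes the vector of children being
    // removed and is responsible for fixing up their parent's data
    // pointer.
    virtual void coarsen (const std::vector<Elem *> & children) = 0;

    // Called on refinement; takes the parent and is responsible for
    // fixing up the children's data pointers.
    virtual void refine (Elem * parent) = 0;

    // For repartitioning, parallel communication, and I/O.
    virtual void serialize (std::vector<char> & buffer) const = 0;
    virtual void unserialize (const std::vector<char> & buffer) = 0;
  };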
This is where the problem starts.  As soon as you start asking users
to do the refinement / coarsening projections... all hell breaks
loose.  99% of the problems with adaptivity in that other "LARGE
Framework" were in the _user_ defined arbitrary data projection
schemes.  A lot of people (typically from a time when AMR/C didn't
exist) wouldn't think about how to do those projections properly (or
would just throw junk into the element data structure willy-nilly)
and adaptivity would give blatantly wrong answers.

And then on top of that you have the problems with repartitioning and
serialization (requiring users to write that kind of code is fraught
with problems as well...).

I know it feels somewhat draconian to disallow this capability
altogether... but I truly believe that doing so provides a saner
environment for all.

> But there are two reasons why I'm not planning on writing user data
> support myself:
>
> You're absolutely right that users typically think they need this
> functionality when they really don't.  Subdomain ids are a better
> solution for piecewise homogeneous materials, and ExplicitSystem
> variables are a better solution for spatially varying fields.
>
> I haven't figured out the best way to let multiple codes with user
> data work together cleanly.  User one writes an app or framework
> where their ElemData subclass encapsulates pointers to overlapping
> elements in a separate overset grid.  User two writes an ElemData
> subclass which encapsulates multiscale geometry data.  User three
> wants to use both at once.  How?  My ideas involve various
> combinations of templates and multiple inheritance, and I'm usually
> wary of the former and avoid the latter completely.

This was also a problem in that other framework.  libMesh is elegant
in this regard: since every piece of data is supported by a
discretization space, it gives us a LOT of power and flexibility when
doing things like mesh to mesh projections and the aforementioned
adaptivity projections...
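For example, per-element data that would otherwise hide behind a
void* can just be a piecewise-constant (CONSTANT MONOMIAL) variable
on an ExplicitSystem.  A sketch, assuming a freshly-built
EquationSystems (header paths and API spellings follow current
libMesh conventions and may differ by version):

  #include "libmesh/equation_systems.h"
  #include "libmesh/explicit_system.h"
  #include "libmesh/dof_map.h"
  #include "libmesh/numeric_vector.h"
  #include "libmesh/elem.h"

  using namespace libMesh;

  // Store one value per active element as a piecewise-constant field.
  void attach_elem_field (EquationSystems & es, const MeshBase & mesh)
  {
    ExplicitSystem & field = es.add_system<ExplicitSystem> ("elem_data");
    const unsigned int v = field.add_variable ("my_value", CONSTANT, MONOMIAL);
    es.init ();

    // A CONSTANT MONOMIAL variable has exactly one dof per element.
    MeshBase::const_element_iterator       el  = mesh.active_local_elements_begin();
    const MeshBase::const_element_iterator end = mesh.active_local_elements_end();

    for (; el != end; ++el)
      {
        std::vector<dof_id_type> dof_indices;
        field.get_dof_map().dof_indices (*el, dof_indices, v);

        // Whatever this element's "user data" should be goes here.
        field.solution->set (dof_indices[0], 1.);
      }

    field.solution->close();
  }

From there, refinement, coarsening, and repartitioning carry the field
along through the library's standard projections... no user-written
transfer code involved.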
>> 4. There were statements to the effect of "subdomains don't change
>> through refinement"... that might not actually be true.
>
> I think I said that the total number of subdomains isn't changed by
> refinement... but I guess in the degenerate case your counterexample
> applies to that too.

It is possible... like with a thin layer that isn't represented by
the original mesh at all, but comes into view once you do some
adaptivity...

Derek