> But I suppose we can avoid that cost without killing the abstraction,
> if we're smart about it...
>
> If add_elem() inserted subdomain ids into a std::set, for example,
> then we'd just have to do a parallel union of that set during
> prepare_for_use, which is O(N_subdomains_per_proc * N_proc) instead of
> O(N_elem).  It would be cheap to keep the set around afterward, too,
> enabling O(log(N_subdomains)) MeshBase::have_subdomain(id) queries.
>
> Anyone doing element destruction would be in charge of un-screwing-up
> the subdomain count afterwards, but all our current delete_elem uses
> are in contexts like coarsening, all_tri, delete_remote_elems, and
> other such methods that won't change the subdomain count.
>
> I'd be tempted to use a vector<bool> instead of a set, but that could
> backfire nastily if someone decided that their two subdomains should
> be numbered "1" and "1000000000".
>
> Thoughts?
>   
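
For concreteness, here's roughly the bookkeeping I understand is being
proposed. This is only a sketch with made-up toy types (ToyElem,
ToyMesh), not real libMesh classes:

    #include <cstddef>
    #include <set>

    typedef int subdomain_id_type; // stand-in for the real typedef

    struct ToyElem { subdomain_id_type sbd_id; };

    class ToyMesh
    {
    public:
      void add_elem (const ToyElem & elem)
      {
        // O(log(N_subdomains)) insert per element added
        _subdomain_ids.insert(elem.sbd_id);
      }

      bool have_subdomain (subdomain_id_type id) const
      {
        // O(log(N_subdomains)) query, no O(N_elem) scan required
        return _subdomain_ids.count(id) != 0;
      }

      std::size_t n_subdomains () const { return _subdomain_ids.size(); }

      // prepare_for_use() would also take the union of _subdomain_ids
      // across processors: O(N_subdomains_per_proc * N_proc), not O(N_elem).

    private:
      std::set<subdomain_id_type> _subdomain_ids;
    };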

This seems like a good idea. But an element's subdomain ID can also be 
changed at any time, so we'd have to update the set whenever an 
element's subdomain ID is modified through elem->subdomain_id()...
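
Here's a minimal sketch of that problem, assuming (as in libMesh) that
the non-const accessor just returns a reference, so the mesh never sees
the write:

    // ToyElem2 is made up; only the accessor shape is taken from libMesh.
    struct ToyElem2
    {
      subdomain_id_type sbd_id;
      subdomain_id_type & subdomain_id () { return sbd_id; }
    };

    // ToyElem2 e;  e.sbd_id = 1;
    // mesh.add_elem_and_cache(e);  // hypothetical; cached set now has 1
    // e.subdomain_id() = 1000;     // cache is stale; the mesh never saw this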

But more to the point, why does MeshBase need an n_subdomains or 
have_subdomain_id function at all? They seem unnecessary to me (users 
must have a priori knowledge of the subdomains, since the user is the 
one who set the IDs in the first place), and keeping the MeshBase 
subdomain data in sync with the actual element subdomain IDs appears 
to be non-trivial.
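
And on the rare occasion an algorithm really does need the set of
subdomain ids, it can be built on demand with a one-off scan. A sketch,
reusing the toy types from above:

    #include <cstddef>
    #include <set>
    #include <vector>

    // Build the subdomain set only when someone actually asks for it,
    // at O(N_elem * log(N_subdomains)) cost per call.
    std::set<subdomain_id_type>
    collect_subdomain_ids (const std::vector<ToyElem> & elems)
    {
      std::set<subdomain_id_type> ids;
      for (std::size_t i = 0; i != elems.size(); ++i)
        ids.insert(elems[i].sbd_id);
      return ids;
    }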

- Dave
