Finally getting around to working on this... but it's actually much more
complicated than I thought.
My idea was to simply use something like:
counter + (std::numeric_limits<unsigned long int>::max() / n_processors())
* processor_id()
as the unique ID, where "counter" is incremented every time a DofObject gets
created.
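
In other words, something like this sketch (the typedef and function name
here are just illustrative, not proposed API):

#include <limits>

typedef unsigned long int unique_id_type;

// Sketch of the scheme: each processor owns a disjoint block of the id
// space and hands out ids from its block sequentially.  "counter" is the
// number of DofObjects this processor has created so far.
unique_id_type next_unique_id (unique_id_type & counter,
                               const unsigned int pid,      // processor_id()
                               const unsigned int n_procs)  // n_processors()
{
  const unique_id_type block_size =
    std::numeric_limits<unique_id_type>::max() / n_procs;

  return block_size * pid + counter++;
}
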
Unfortunately that doesn't work.
Here's the problem: DofObject is NOT a ParallelObject and doesn't know what
processor_id it's assigned to when it's created. Therefore we are missing
two critical pieces of information that are needed for creating a unique ID
at the time a DofObject is created:
1. How many total processors there are.
2. Which processor this DofObject is initially assigned to.
Not only that - but with ParallelMesh you have to be careful to assign the
same ID to a given object on every processor that holds a copy of it...
So - I started looking for the right place to assign a unique ID... and
it's looking like a callback into the Mesh from the Partitioner (in
partition()) might not be a bad idea. Maybe something like
virtual void MeshBase::assign_unique_dof_ids().
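
Roughly what I'm picturing (sketched with a stand-in class, not the real
mesh_base.h - none of this is existing code), with SerialMesh and
ParallelMesh each overriding it:

class MeshBase
{
public:
  virtual ~MeshBase () {}

  // Called back from Partitioner::partition() once processor ids are
  // assigned; each mesh type hands out unique ids however it likes.
  virtual void assign_unique_dof_ids () = 0;
};
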
In the case of SerialMesh this isn't a problem because it can just loop
over all DofObjects in the Mesh in the same order on every processor and
assign a unique ID using just a "serial" counter. If any new Elements are
added after partition() they will get the next number... etc. Easy.
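
Something along these lines (sketch only - set_unique_id() is a
hypothetical setter on DofObject, and the loops just use the usual
element/node iterators):

// Sketch of the SerialMesh override.  Every processor holds the whole mesh
// and walks it in the same order, so a plain counter gives identical ids
// on every processor.  set_unique_id() is hypothetical.
void SerialMesh::assign_unique_dof_ids ()
{
  unique_id_type next_id = 0;

  for (MeshBase::element_iterator it = this->elements_begin();
       it != this->elements_end(); ++it)
    (*it)->set_unique_id (next_id++);

  for (MeshBase::node_iterator it = this->nodes_begin();
       it != this->nodes_end(); ++it)
    (*it)->set_unique_id (next_id++);

  // Anything added after partition() just keeps counting from next_id.
}
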
However, for ParallelMesh things are not so easy. The processors can't all
loop over all the objects in the very same way - so the SerialMesh scheme
is out. The "counter" scheme (outlined at the beginning) seems like a good
idea - but there is still one problem: processor_id is assigned to elements
and nodes independently... and in a two-stage process (look in
Partitioner::partition()).
I can definitely assign the unique_ids for the elements _just_ after
set_parent_processor_ids(mesh) in Partitioner::partition(). The cool thing
is that redistribute() is called right after that, which will cause that
unique_id to get packed up along with the rest of the Elem and sent
wherever it needs to go... so everything will work fine.
The bad part is that the Nodes _also_ get packed up and sent during
redistribute()... which means that if we set the unique_id on the nodes
_after_ redistribute() we'll have to go through a second phase of
negotiation to make sure those ids are consistent across all processors.
However, it's not until _after_ redistribute() that the processor_id() gets
set for nodes! So we can't set the unique_id for nodes before
redistribute() either...
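
To make the ordering concrete, here's my rough reading of the flow (not the
actual source - the assign_*_unique_ids() helpers are made-up placeholders,
and set_node_processor_ids() stands for whatever currently sets the node
processor ids):

// Rough reading of Partitioner::partition(), not actual libMesh code:
void Partitioner::partition (MeshBase & mesh, const unsigned int n)
{
  this->_do_partition (mesh, n);            // elements get their processor_id

  Partitioner::set_parent_processor_ids (mesh);

  // Elements know their processor_id here, so element unique_ids could be
  // assigned now and travel with the Elems during redistribution:
  assign_element_unique_ids (mesh);         // hypothetical

  mesh.redistribute ();                     // packs and ships Elems *and* Nodes

  Partitioner::set_node_processor_ids (mesh);   // nodes only get processor_id here

  // Node unique_ids could only be assigned now, after the Nodes have
  // already been shipped, so they'd need a second round of synchronization:
  assign_node_unique_ids (mesh);            // hypothetical
}
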
Sigh.
Am I way off base here?
Derek
On Wed, Jul 10, 2013 at 11:49 PM, Roy Stogner <royst...@ices.utexas.edu> wrote:
>
> On Wed, 10 Jul 2013, Derek Gaston wrote:
>
>> On Jul 10, 2013, at 11:05 PM, Roy Stogner <royst...@ices.utexas.edu>
>> wrote:
>>
>>> I'd rather avoid adding a mandatory extra 8 bytes to every DofObject,
>>> but I don't see any insuperable obstacle to adding an optional 8
>>> bytes; this could even be run-time optional since we already have to
>>> have that variable-size _idx_buf in there. The only practical
>>> obstacle is that the _idx_buf code is *already* scary complicated.
>>>
>>
>> At this point I would just prefer a compile time option. We are going
>> to have code connected to this thing in quite a few places in MOOSE so
>> I would really like to be able to do elem->unique_id or some such.
>>
>
> Compile time option is okay with me unless others disagree. Just make
> it elem->unique_id() so that we can hide the underlying data in the
> index buffer and move to a runtime option sometime later.
>
>
>> Are we at the point where I can gin something up and put in a pull
>> request and we can discuss from there?
>>
>
> Fine by me. There are still issues we haven't discussed but I can't
> think of any really hairy problems.
>
>
>> I really don't think it's going to be much code to get started with.
>>
>
> Agreed. I also suspect your first crack is going to break my
> finally-passing-buildbot 64-bit-dof_id_type tests, but I'll fix that
> when we come to it. :-)
>
> You might want to strip out PackedElem and PackedNode while you're at
> it; that's been superseded by the Parallel::*packed_range stuff for a
> while now, and it's probably easier to delete the older code than to
> update it.
> ---
> Roy
>