(forked to libmesh-devel)

> On Thu, 15 Jan 2009, Tim Kroeger wrote:
> 
>> But there is another problem (things turn out to be more difficult than I
>> thought): In the ghost cell case, PETSc does not provide the appropriate
>> global-to-local mapping that would be required for e.g.
>> NumericVector::operator()(unsigned int).  I asked about this on the
>> petsc-users list.  Jed Brown commented, arguing that the natural thing
>> would be for libMesh's DofMap to work on local dof numbers.
> 
> Giving identical degrees of freedom different ids on different
> processors would not be natural for us, especially considering the
> changes we'd be forced to make to the DofMap (which would now need to
> talk to our numeric interfaces for the first time!), to our non-Petsc
> numeric interfaces, to code that stores non-DoF data associated with
> particular degrees of freedom...  I sympathize with their desire to
> have contiguous local dof numbers, but I think this would be too big a
> change for us right now.

What will be pretty easy is packing everything into

  [0 ... n_local_dofs) | [0 ... n_ghost_dofs)

that is, the locally owned block followed by the ghost block.

We number the degrees of freedom such that they are contiguous by processor
block, so each processor can easily store
[first_local_dof ... last_local_dof) at the front of the vector, as is done
now.
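
For concreteness, here is a minimal sketch of that layout (the struct and
member names are hypothetical, not existing libMesh API):

  #include <vector>

  // Owned block [first_local_dof, last_local_dof) is stored first,
  // with the ghost entries appended after it.
  struct GhostedVectorLayout
  {
    unsigned int first_local_dof;   // first dof owned by this processor
    unsigned int last_local_dof;    // one past the last owned dof
    std::vector<double> values;     // [owned block | ghost block]

    // Owned dofs map to the front of the vector by a simple offset;
    // ghost entries start at index (last_local_dof - first_local_dof).
    double & owned_entry (unsigned int global_dof)
    { return values[global_dof - first_local_dof]; }
  };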

>> (A local-to-global mapping seems to be provided by PETSc.)
> 
> Interesting - like a sparsity pattern for vectors.  Is it shared
> between multiple vectors?
> 
>> My idea would be that PetscVector creates the global-to-local mapping (for
>> the ghost cells only) itself (and stores it, e.g. as a std::map<unsigned int,
>> unsigned int>).  This should still save a lot of memory compared with the
>> serial vector version.
> 
> Sounds like the best we can do.  Maybe typedef the container to make
> it easier for us to play with map vs. hash_map performance for it.
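
For reference, the typedef Roy suggests might look something like this
minimal sketch (GlobalToLocalMap is a hypothetical name):

  #include <map>

  // Hide the container choice behind a typedef, so we can benchmark
  // std::map against a hash-based map without touching client code.
  typedef std::map<unsigned int, unsigned int> GlobalToLocalMap;
  // typedef std::tr1::unordered_map<unsigned int, unsigned int>
  //   GlobalToLocalMap;  // hash-based alternative to compare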

I believe this should be handled by the send_list.  The send_list, minus the
local components, should be exactly the remote ghost dofs we need.  Even
better, it is already sorted.  In any case, this should generally be a small
number of (global) integer indices (except after refinement/redistribution).
So we can take that part of the send_list, pack it into
[minimum_global_ghost_idx ... maximum_global_ghost_idx),
and then use a binary search to find the 'local' ghost dof index.
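
Under those assumptions, the lookup might look like this minimal sketch
(ghost_local_index and its argument names are hypothetical; sorted_ghosts
is the send_list with the locally owned dofs removed):

  #include <algorithm>
  #include <cassert>
  #include <vector>

  // Map a ghost dof's global index to its 'local' index: its position
  // in the sorted ghost array, offset past the locally owned block.
  unsigned int ghost_local_index (const std::vector<unsigned int> & sorted_ghosts,
                                  unsigned int n_local_dofs,
                                  unsigned int global_dof)
  {
    std::vector<unsigned int>::const_iterator it =
      std::lower_bound (sorted_ghosts.begin(), sorted_ghosts.end(), global_dof);
    assert (it != sorted_ghosts.end() && *it == global_dof);
    return n_local_dofs + (it - sorted_ghosts.begin());
  }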

The main advantages here are that (i) the data already exist, and (ii) the
integer index array will be small.  As Roy asks/suggests, though, this
information could *still* be shared among several vectors with the same
partitioning if desired.

BTW, I think I promised this a while ago, so I am delinquent in my delivery.
Roy, I'd be happy to help.  I think we should copy off NumericVector and
work on it in parallel (pun intended?) with the existing implementation.
It should then be a drop-in replacement when we are done.

-Ben

