On Wed, Dec 10, 2014 at 7:32 PM, Garth N. Wells <[email protected]> wrote:

>
>
> On Wed, 10 Dec, 2014 at 6:23 PM, Johan Hake <[email protected]> wrote:
>
>>
>>>>>>> The concept of ‘local_size’ is a bit vague (the PETSc docs caution
>>>>>>> against the use of VecGetLocalSize), e.g. should it contain ghost
>>>>>>> values or not?
>>>>>>>
>>>>>>
>>>>>> To me it seems like VecGetLocalSize only returns the number of local dofs.
>>>>>>
>>>>>
>>>>> It returns PETSc's concept of the local size. It breaks abstractions
>>>>> because it makes an assumption about the underlying storage that is
>>>>> not necessarily valid for all implementations.
>>>>>
>>>>
>>>> What do you mean by different implementations? Different usage?
>>>>
>>>
>>> Implementations. The 'feature' of PETSc, Tpetra, etc is that they
>>> abstract away details of the internal representation/storage. For example,
>>> the concept of 'local size' is different for Tpetra and PETSc.
>>>
>>
>> Ok, so your concept of implementation is between PETSc and Trilinos, not
>> within present DOLFIN, as Tpetra is not in DOLFIN yet. You had me pretty
>> confused for a while there.
>>
>
> We're adding Tpetra/Trilinos back at the moment.


Nice!

>>>> But the local_range is always fixed (as long as we do not repartition),
>>>> right?
>>>>
>>>
>>> We don't have a consistent concept of local_range. We could go with 'all
>>> values present on a process'. This would include ghosts/shared entries.
>>>
>>
>> I thought we had one, and that GenericTensor::local_range and
>> GenericDofMap::ownership_range gave these values.
>>
>
> Yes. These were added early, and it's no longer clear (a) what they
> really mean; and (b) whether or not they are desirable.


Ok...
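For what it's worth, my mental model of ownership_range/local_range has always been a plain contiguous block partition of the global indices. A minimal sketch of that reading (a hypothetical helper, not the actual DOLFIN or PETSc API; the distribution of the remainder entries is an assumption):

```python
def local_range(rank, size, n):
    # Contiguous block partition of n global entries over `size` processes.
    # Assumption: the first n % size ranks each own one extra entry
    # (a PETSc-style default layout).
    base, extra = divmod(n, size)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop
```

With this reading, local_size is simply stop - start and never counts ghosts, which is exactly where the ambiguity above comes from.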

>> I am talking about the present implementation of local_values in DOLFIN,
>> not any future definitions. I am aiming to implement a feature before the
>> 1.5 release ;)
>>
>>>> If so, we can assume that only dofs within the local range can be set,
>>>> at least from the numpy interface. These dofs are set using local
>>>> numbering by the method set_local. Aren't ghost values, at least for
>>>> now, stored on top of the local dof range? If so, keeping our indices
>>>> below local_range[1] - local_range[0] should be fine.
>>>>
>>>>>> You need to first get the ghosted vector with VecGhostGetLocalForm
>>>>>> and then call VecGetSize on that to get the size of the local +
>>>>>> ghosted vector. It also seems like local_size is the same as
>>>>>> local_range[1] - local_range[0] regardless of the presence of any
>>>>>> ghosted dofs.
>>>>>>
>>>>>> Can you give an example where the ghost dofs are set using set_local?
>>>>>>
>>>>>>
>>>>> During assembly. set_local(const double*, std::size_t, const
>>>>> dolfin::la_index*) takes local indices - the actual data can reside
>>>>> elsewhere.
>>>>>
>>>>
>>>> So the local size will vary during assembly?
>>>>
>>>>  When a vector is created, a local-to-global map for the vector is set.
>>>>>
>>>> Isn't that an easy way of providing the local size, and then fixing
>>>> the local_size?
>>>>
>>>
>>>
>>> No, because the local-to-global map can contain off-process entries. The
>>> local indices are in [0, n), but the target entry may reside on a different
>>> process, e.g. [0, m) entries on the process, and [m, n) on another process.
>>>
>>
>> Sure.
>>
>>>>>>> My understanding of Tpetra is that it doesn’t have a concept of
>>>>>>> ‘ownership’, i.e. vector entries can be stored on more than one
>>>>>>> process and are updated via a function call. No one process is
>>>>>>> designated as the ‘owner’.
>>>>>>>
>>>>>>
>>>>>> So you have shared dofs instead of dedicated owned and ghosted dofs?
>>>>>>
>>>>>
>>>>> Yes.
>>>>>
>>>>>  That will of course make things more complicated...
>>>>>>
>>>>>
>>>>> In terms of interfacing to NumPy, yes. At a lower level in DOLFIN I
>>>>> think it will make things simpler.
>>>>>
>>>>
>>>> Ok, are we going to change DOLFIN's concept of owning a dof (think of
>>>> what we do in the dofmap) when Tpetra is added?
>>>>
>>>
>>> This is a different topic because it's not pure linear algebra - we can
>>> decide how dof ownership is defined.
>>>
>>> I'm leaning towards 'local' meaning all available entries on a process,
>>> which in some cases will mean duplication for shared/ghost values.
>>>
>>
>> Yeah, I figured that. I do not care, as long as it is transparent for the
>> user and we provide a means to figure out the local indices of the local
>> ghost values.
>>
>
> Forgetting the pure linear algebra abstraction for a moment, I think a
> user would expect a local NumPy array to hold values for all the dofs for
> the part of the mesh that is on a process. This would point to considering
> all values (including ghosts) that are on a process.
>
>> Then a user can set vector values using v[indices], which eventually just
>> calls GenericVector::set_local.
>>
>
> Bear in mind we have two GenericVector::set_local functions.


Well, there are three, actually. But I am talking about:

set_local(const double*, std::size_t, const dolfin::la_index*);

as this is the only one we can use to provide our own indices.
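To make the indexing discussion concrete, here is a rough sketch of what I understand this overload to do: rows are local indices in [0, n), entries below the owned size write straight into local storage, and the rest resolve through the local-to-global map to entries owned by another process. All names here are hypothetical, and the communication step is not modeled:

```python
def set_local(owned, send_queue, local_to_global, block, rows):
    # `rows` are local indices in [0, n). Indices below len(owned) target
    # locally owned storage; the remainder map, via the local-to-global
    # map, to entries owned elsewhere and are queued for later
    # communication (not modeled here).
    m = len(owned)
    for value, r in zip(block, rows):
        if r < m:
            owned[r] = value
        else:
            send_queue.append((local_to_global[r], value))
```

This is why the local indices alone do not tell you where the data ends up: the target entry may reside on a different process.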


>> We also need to provide a robust way of figuring out the local_size
>> (whatever that means), via for example GenericVector::local_size.
>>
>
> Yes. We might need to add a new function.
>

Ok. But if anything is going into the 1.5 release I need an abstraction now.
One solution is to rely on local_size for now. If that limits it to only
setting local, non-ghosted dofs for the moment, I am fine with that (no
changes from the present NumPy interface, at least). We can then extend the
NumPy access when Tpetra has been added.
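Concretely, the interim rule I have in mind for the NumPy interface is just a bounds check against local_size (a sketch; check_owned_indices is a made-up name, not a proposed DOLFIN function):

```python
import numpy as np

def check_owned_indices(indices, local_size):
    # Interim guard: only indices of locally owned (non-ghosted) dofs are
    # accepted, i.e. indices below local_range[1] - local_range[0], which
    # for now equals local_size.
    indices = np.asarray(indices)
    if indices.size and (indices.min() < 0 or indices.max() >= local_size):
        raise IndexError("index outside the locally owned range")
    return indices
```

Anything beyond that (ghost or shared entries) would then be rejected until we settle on a proper abstraction.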

Johan




>
> Garth
>
>> Johan
>>
>>
>>
>
_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
