On Mon, Mar 9, 2015 at 7:02 PM, Anders Logg <[email protected]> wrote:
> My interpretation was to template over a type that supports operator[] and
> size() in which case we could send in both std::vector and dolfin::array
> (which would be able to wrap a raw pointer), but perhaps that extra wrapping
> could be costly.
>

We have dolfin::ArrayView, which is super lightweight and supports
operator[] and size().
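
For illustration, a minimal sketch (not the actual DOLFIN class) of the kind
of interface this enables: anything exposing operator[] and size(), whether a
std::vector or a lightweight view over a raw pointer, can be passed to the
same templated function. The ViewSketch type below is a hypothetical stand-in
for dolfin::ArrayView.

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical stand-in for a lightweight view such as dolfin::ArrayView:
// it wraps a raw pointer plus a size and exposes only operator[] and size().
template <typename T>
struct ViewSketch
{
  ViewSketch(T* data, std::size_t size) : data(data), n(size) {}
  T& operator[](std::size_t i) const { return data[i]; }
  std::size_t size() const { return n; }
  T* data;
  std::size_t n;
};

// Function templated over the container type; any container providing
// operator[] and size() can be passed in.
template <typename Container>
double sum_entries(const Container& x)
{
  double s = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i)
    s += x[i];
  return s;
}

int main()
{
  std::vector<double> v = {1.0, 2.0, 3.0};
  double raw[3] = {4.0, 5.0, 6.0};
  std::cout << sum_entries(v) << " "
            << sum_entries(ViewSketch<double>(raw, 3)) << std::endl;
  return 0;
}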

Garth

> --
> Anders
>
>
> Mon 9 March 2015 at 19:57, Garth N. Wells <[email protected]> wrote:
>>
>> On Mon, Mar 9, 2015 at 6:51 PM, Anders Logg <[email protected]> wrote:
>> > What would that look like?
>>
>> Just template over the container type and use operator[] to access.
>>
>> > Could you give an example? Could it be made to support both
>> > std::vector and raw pointers?
>>
>> Yes, but we wouldn't have access to all the features of a std::vector
>> and we'd still need to pass the array size.
>>
>> > Or are you thinking about using dolfin::Array?
>>
>> I hadn't considered using DOLFIN types in the interface, but
>> dolfin::Array and dolfin::ArrayView could be used.
>>
>> > If so, that might be a good idea actually to have a consistent
>> > interface.
>> >
>>
>> I think templates at this level could/would be good, but we'd have to
>> check that the need to template in DOLFIN doesn't go viral.
>>
>> Garth
>>
>> > --
>> > Anders
>> >
>> >
>> > Mon 9 March 2015 at 19:49, Garth N. Wells <[email protected]> wrote:
>> >>
>> >> On Mon, Mar 9, 2015 at 6:39 PM, Anders Logg <[email protected]>
>> >> wrote:
>> >> > This was the reason we decided to use foo* in the initial version
>> >> > of the UFC interface. There was likely also a suspicion that it
>> >> > might be more efficient than std::vector, but even if that suspicion
>> >> > was wrong, the other argument still holds: it gives maximal
>> >> > flexibility for the user of the interface (DOLFIN).
>> >> >
>> >> > Another point would be to say that DOLFIN is the only user of the
>> >> > interface, so we might as well use whatever is most convenient from
>> >> > the DOLFIN side, but that would lead to a messy mix of containers.
>> >> >
>> >>
>> >> Not necessarily if the interface is templated. There is merit to
>> >> making it templated, e.g. support for std::complex or support for
>> >> Eigen data structures.
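
A rough sketch (hypothetical names, not generated UFC code) of how templating
over the scalar type would let one kernel body serve both double and
std::complex<double>; Eigen types could be handled in the same way on the
container side.

#include <complex>
#include <iostream>
#include <vector>

// Hypothetical element-kernel sketch templated over the scalar type T.
// The same body then serves double and std::complex<double>.
template <typename T>
void tabulate_sketch(std::vector<T>& A, const std::vector<T>& w)
{
  // Toy "tensor": A[i] = 2*w[i]; a real kernel would be generated code.
  A.resize(w.size());
  for (std::size_t i = 0; i < w.size(); ++i)
    A[i] = T(2) * w[i];
}

int main()
{
  std::vector<double> wr = {1.0, 2.0};
  std::vector<double> Ar;
  tabulate_sketch(Ar, wr);

  std::vector<std::complex<double>> wc = {{1.0, 1.0}, {0.0, 2.0}};
  std::vector<std::complex<double>> Ac;
  tabulate_sketch(Ac, wc);

  std::cout << Ar[0] << " " << Ac[0] << "\n";  // prints 2 and (2,2)
  return 0;
}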
>> >>
>> >> Garth
>> >>
>> >>
>> >> > --
>> >> > Anders
>> >> >
>> >> >
>> >> > Mon 9 March 2015 at 19:26, Garth N. Wells <[email protected]> wrote:
>> >> >>
>> >> >> I'm not a fan of foo*, but I think it's the better choice. The
>> >> >> upside is that it places few constraints on the storage container
>> >> >> on the DOLFIN side. The downsides are that it's less safe, the
>> >> >> function called can't do the resizing for us, and we need an extra
>> >> >> function to return the size of the array.
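
A minimal sketch of the trade-off (hypothetical helper names, not the UFC
signatures): with a raw pointer the caller must size the storage up front and
pass the length separately, while with std::vector the callee can resize and
the size travels with the container.

#include <cstddef>
#include <vector>

// Raw-pointer style: the caller owns the storage, must know the size in
// advance (or query it via a separate function), and nothing stops an
// out-of-bounds write.
void fill_raw(double* A, std::size_t n)
{
  for (std::size_t i = 0; i < n; ++i)
    A[i] = 0.0;
}

// std::vector style: the callee can resize, and size() is always available.
void fill_vector(std::vector<double>& A, std::size_t n)
{
  A.assign(n, 0.0);
}

int main()
{
  std::size_t n = 6;            // e.g. obtained from a separate size query
  std::vector<double> buffer(n);
  fill_raw(buffer.data(), n);   // pointer into caller-managed storage

  std::vector<double> A;
  fill_vector(A, n);            // callee resizes for us
  return 0;
}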
>> >> >>
>> >> >> Garth
>> >> >>
>> >> >> On Mon, Mar 9, 2015 at 1:34 PM, Martin Sandve Alnæs
>> >> >> <[email protected]> wrote:
>> >> >> > The overhead occurs when the caller must construct or resize a
>> >> >> > vector. If that must happen in a function called from a loop, the
>> >> >> > penalty is significant.
>> >> >> >
>> >> >> > On 9 March 2015 at 14:05, "Anders Logg" <[email protected]> wrote:
>> >> >> >
>> >> >> >> Another inconsistency is use of std::vector<foo> vs foo*.
>> >> >> >>
>> >> >> >> Did we make a decision on which to use?
>> >> >> >>
>> >> >> >> Here the issue is not so much speed of access (my tests indicated
>> >> >> >> they are equally fast - as they should be) when compiling with
>> >> >> >> optimization, but how easily these data structures can be filled
>> >> >> >> in from the DOLFIN side. There's no way I know of to pass raw
>> >> >> >> pointer data on as a std::vector, but the opposite is possible.
>> >> >> >>
>> >> >> >> --
>> >> >> >> Anders
>> >> >> >>
>> >> >> >>
>> >> >> >> Fri 6 March 2015 at 14:51, Martin Sandve Alnæs
>> >> >> >> <[email protected]> wrote:
>> >> >> >>>
>> >> >> >>> Let's go for all in flat packed arrays then.
>> >> >> >>>
>> >> >> >>> UFC can easily provide the expected offsets into packed input
>> >> >> >>> arrays:
>> >> >> >>>
>> >> >> >>> vector<size_t> coefficient_dof_offsets(); // indexed by coefficient id
>> >> >> >>> vector<size_t> coordinate_dof_offsets();  // indexed by domain id
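
A small sketch of how such offsets could be used on the caller side.
Hypothetical layout: the offsets here include a trailing entry for the total
size, which is an assumed convention, not something the functions above
promise.

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
  // Flat packed dof array for two coefficients: the first has 3 dofs, the
  // second has 4. The offsets mark where each coefficient starts; here a
  // trailing entry gives the total size (an assumed convention).
  std::vector<double> w_flat = {1, 1, 1, 2, 2, 2, 2};
  std::vector<std::size_t> offsets = {0, 3, 7};

  // Slice out coefficient i without copying: [offsets[i], offsets[i+1]).
  for (std::size_t i = 0; i + 1 < offsets.size(); ++i)
  {
    const double* wi = w_flat.data() + offsets[i];
    const std::size_t ni = offsets[i + 1] - offsets[i];
    std::cout << "coefficient " << i << ": " << ni
              << " dofs, first = " << wi[0] << "\n";
  }
  return 0;
}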
>> >> >> >>>
>> >> >> >>> Martin
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> On 6 March 2015 at 13:39, Garth N. Wells <[email protected]>
>> >> >> >>> wrote:
>> >> >> >>>>
>> >> >> >>>> On Fri, Mar 6, 2015 at 12:02 PM, Martin Sandve Alnæs
>> >> >> >>>> <[email protected]> wrote:
>> >> >> >>>> > Is that the case for vertex coordinates? (Or coordinate dofs
>> >> >> >>>> > in a more generic parameterized geometry setting.)
>> >> >> >>>> >
>> >> >> >>>> > Then using double** coordinate_dofs, like for w, could be a
>> >> >> >>>> > good choice allowing for an arbitrary number of cells. In
>> >> >> >>>> > particular because the cells can have a different number of
>> >> >> >>>> > coordinate dofs, the packed array is not rectangular, same as
>> >> >> >>>> > with w.
>> >> >> >>>> >
>> >> >> >>>>
>> >> >> >>>> I think it's still better to flatten the data and pass an
>> >> >> >>>> integer array that points into the flat array. I expect that
>> >> >> >>>> this would be easier to vectorise, and if necessary to pad data.
>> >> >> >>>>
>> >> >> >>>> > We can also easily generate ufc code for packing and
>> >> >> >>>> > unpacking.
>> >> >> >>>> >
>> >> >> >>>>
>> >> >> >>>> A possible problem with this is that UFC doesn't know how the
>> >> >> >>>> data is stored in DOLFIN so won't be able to apply certain
>> >> >> >>>> optimisations.
>> >> >> >>>>
>> >> >> >>>> Garth
>> >> >> >>>>
>> >> >> >>>> > Martin
>> >> >> >>>> >
>> >> >> >>>> > On 6 March 2015 at 12:13, "Anders Logg"
>> >> >> >>>> > <[email protected]> wrote:
>> >> >> >>>> >
>> >> >> >>>> >> An additional point is that run-time performance may also be
>> >> >> >>>> >> affected by needing to copy stuff into flattened arrays on
>> >> >> >>>> >> the DOLFIN side, so in some cases flattening may not be the
>> >> >> >>>> >> most efficient option.
>> >> >> >>>> >>
>> >> >> >>>> >> --
>> >> >> >>>> >> Anders
>> >> >> >>>> >>
>> >> >> >>>> >>
>> >> >> >>>> >> Fri 6 March 2015 at 11:52, Garth N. Wells
>> >> >> >>>> >> <[email protected]> wrote:
>> >> >> >>>> >>>
>> >> >> >>>> >>> On Fri, Mar 6, 2015 at 10:38 AM, Anders Logg
>> >> >> >>>> >>> <[email protected]> wrote:
>> >> >> >>>> >>> > For the code generation we should pick the signature that
>> >> >> >>>> >>> > is most efficient (from a run-time point of view), so
>> >> >> >>>> >>> > testing is needed. When this was last brought up
>> >> >> >>>> >>> > (Cambridge, Jan two years ago) I made some rudimentary
>> >> >> >>>> >>> > tests (see attachment) that indicated flattening is good.
>> >> >> >>>> >>> >
>> >> >> >>>> >>> > Regarding the custom_integral interface, we need to use
>> >> >> >>>> >>> > one flattened array instead of two cells (as for
>> >> >> >>>> >>> > interior_facet_integral) since there can be more than two
>> >> >> >>>> >>> > cells (perhaps hundreds...).
>> >> >> >>>> >>> >
>> >> >> >>>> >>>
>> >> >> >>>> >>> I agree that at this level runtime performance should be
>> >> >> >>>> >>> the priority. All testing I've ever done points to
>> >> >> >>>> >>> flattened arrays being better. We can add helper code on
>> >> >> >>>> >>> the DOLFIN side to ease populating the arrays.
>> >> >> >>>> >>>
>> >> >> >>>> >>> Garth
>> >> >> >>>> >>>
>> >> >> >>>> >>>
>> >> >> >>>> >>> > --
>> >> >> >>>> >>> > Anders
>> >> >> >>>> >>> >
>> >> >> >>>> >>> >
>> >> >> >>>> >>> >
>> >> >> >>>> >>> > Thu 5 March 2015 at 15:38, Martin Sandve Alnæs
>> >> >> >>>> >>> > <[email protected]> wrote:
>> >> >> >>>> >>> >
>> >> >> >>>> >>> >> The tabulate_tensor signatures are inconsistent in how
>> >> >> >>>> >>> >> the different arguments are treated in the face of
>> >> >> >>>> >>> >> multiple cells.
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> In the interior_facet_integral, there are explicitly
>> >> >> >>>> >>> >> named arguments vertex_coordinates_0 and
>> >> >> >>>> >>> >> vertex_coordinates_1, while in custom_integral, a single
>> >> >> >>>> >>> >> flat vertex_coordinates array is used with coordinates
>> >> >> >>>> >>> >> from two cells being packed into that array in the
>> >> >> >>>> >>> >> MultiMeshAssembler.
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> In all tabulate_tensor signatures, the dofs are passed
>> >> >> >>>> >>> >> in a single "double** w" where the first dimension is
>> >> >> >>>> >>> >> the function id and the second is the dof numbering,
>> >> >> >>>> >>> >> with dofs from two cells in interior_facet_integral
>> >> >> >>>> >>> >> packed contiguously.
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> I don't intend to go through everything and make it
>> >> >> >>>> >>> >> consistent in one go, but I think that for changes that
>> >> >> >>>> >>> >> will happen in the near and far future we should aim for
>> >> >> >>>> >>> >> a single philosophy and move towards that when we add
>> >> >> >>>> >>> >> something new or modify something old.
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> From the code generation perspective I think it doesn't
>> >> >> >>>> >>> >> matter a lot; it's more important to keep the dolfin
>> >> >> >>>> >>> >> side clean and easy to edit. Packing every array flat
>> >> >> >>>> >>> >> keeps the ufc signatures flexible but moves complexity
>> >> >> >>>> >>> >> over to documentation and conventions. The
>> >> >> >>>> >>> >> implementation in dolfin may or may not be more complex,
>> >> >> >>>> >>> >> because flat arrays are easy to create and copy but
>> >> >> >>>> >>> >> perhaps harder to populate with more manual indexing.
>> >> >> >>>> >>> >> This can also be a question of performance; we should
>> >> >> >>>> >>> >> avoid unnecessary work in the inner loops of assemblers.
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> Here are the candidates with dimensions in comments
>> >> >> >>>> >>> >> (consts removed for clarity):
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> // element tensor(s)
>> >> >> >>>> >>> >> double* A  // [sum of packed element tensor size for each domain]
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> // dofs of coefficient functions (num_dofs_for_this_coefficient varies)
>> >> >> >>>> >>> >> double** w  // [num_coefficients][num_dofs_for_this_coefficient*num_domains]
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> // coordinates of cell vertices (should also be generalized to coordinate_dofs)
>> >> >> >>>> >>> >> double* vertex_coordinates    // [num_domains*num_cell_vertices*gdim]
>> >> >> >>>> >>> >> double* vertex_coordinates_0  // [num_cell_vertices*gdim]
>> >> >> >>>> >>> >> double* vertex_coordinates_1  // ditto
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> // quadrature rules
>> >> >> >>>> >>> >> double* quadrature_points   // [num_points*gdim]
>> >> >> >>>> >>> >> double* quadrature_weights  // [num_points]
>> >> >> >>>> >>> >>
>> >> >> >>>> >>> >> // geometric quantities
>> >> >> >>>> >>> >> double* facet_normals  // [num_points*gdim]?
>> >> >> >>>> >>> >> Martin
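
As a rough illustration of the flat-packing convention sketched above
(hypothetical code, not the MultiMeshAssembler implementation), packing
per-cell vertex coordinates into a single flat vertex_coordinates array might
look like this on the caller side.

#include <cstddef>
#include <vector>

int main()
{
  // Assumed sizes for a triangle in 2D.
  const std::size_t gdim = 2;
  const std::size_t num_cell_vertices = 3;

  // Per-cell coordinates as the caller might hold them (one block per cell).
  std::vector<std::vector<double>> cell_coords = {
      {0, 0, 1, 0, 0, 1},   // cell 0
      {1, 0, 1, 1, 0, 1}};  // cell 1

  // Pack into the flat layout [num_cells*num_cell_vertices*gdim] expected
  // by a tabulate_tensor taking a single flat vertex_coordinates array.
  std::vector<double> vertex_coordinates;
  vertex_coordinates.reserve(cell_coords.size() * num_cell_vertices * gdim);
  for (const std::vector<double>& c : cell_coords)
    vertex_coordinates.insert(vertex_coordinates.end(), c.begin(), c.end());

  // vertex_coordinates.data() could now be passed as the double* argument.
  return 0;
}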
_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
