Another inconsistency is the use of std::vector<foo> vs foo*.

Did we make a decision on which to use?

Here the issue is not so much speed of access (my tests indicated they are
equally fast when compiling with optimization - as they should be), but how
easily these data structures can be filled in from the DOLFIN side. There's
no way I know of to pass raw pointer data on as a std::vector, but the
opposite is possible.
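
To illustrate the asymmetry, here is a minimal sketch (tabulate_something is
a made-up stand-in for a generated tabulate_tensor, not an actual UFC/DOLFIN
function): a std::vector can always be handed to code expecting a raw pointer
via data(), but raw pointer data only becomes a std::vector by copying it
into a new one:

  #include <cstddef>
  #include <vector>

  // A UFC-style callee expecting a raw pointer (hypothetical signature).
  void tabulate_something(const double* coords, std::size_t n) { /* ... */ }

  int main()
  {
    // DOLFIN side holding a std::vector: passing it on as a raw pointer is free.
    std::vector<double> coords(12, 0.0);
    tabulate_something(coords.data(), coords.size());

    // The reverse direction: raw pointer data has to be copied into a new vector.
    const double* raw = coords.data();
    std::vector<double> copy(raw, raw + coords.size());
    tabulate_something(copy.data(), copy.size());
    return 0;
  }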

--
Anders


On Fri, 6 March 2015 at 14:51, Martin Sandve Alnæs <[email protected]> wrote:

>  Let's go all in on flat packed arrays then.
>
>  UFC can easily provide the expected offsets into packed input arrays:
>
>  vector<size_t> coefficient_dof_offsets(); // indexed by coefficient id
>  vector<size_t> coordinate_dof_offsets(); // indexed by domain id
>
>  Martin
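
To make the offsets idea concrete, here is a minimal sketch of how such an
offset table could be used to index a packed coefficient array; everything
except the coefficient_dof_offsets name is invented for illustration:

  #include <cstddef>
  #include <vector>

  // Hypothetical packed layout: all coefficient dofs in one flat array, with
  // coefficient_dof_offsets[i] giving the position where coefficient i starts.
  double get_coefficient_dof(const std::vector<double>& w_flat,
                             const std::vector<std::size_t>& coefficient_dof_offsets,
                             std::size_t coefficient, std::size_t local_dof)
  {
    return w_flat[coefficient_dof_offsets[coefficient] + local_dof];
  }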
>
>
>  On 6 March 2015 at 13:39, Garth N. Wells <[email protected]> wrote:
>
>> On Fri, Mar 6, 2015 at 12:02 PM, Martin Sandve Alnæs <[email protected]>
>> wrote:
>> > Is that the case for vertex coordinates? (Or coordinate dofs in a more
>> > generic parameterized geometry setting).
>> >
>> > Then using double ** coordinate_dofs, like for w, could be a good
>> > choice, allowing for an arbitrary number of cells. In particular
>> > because the cells can have a different number of coordinate dofs,
>> > the packed array is not rectangular, same as with w.
>> >
>>
>> I think it's still better to flatten the data and pass an integer
>> array that points into the flat array. I expect that this would be
>> easier to vectorise and, if necessary, to pad the data.
>>
>> > We can also easily generate UFC code for packing and unpacking.
>> >
>>
>> A possible problem with this is that UFC doesn't know how the data is
>> stored in DOLFIN, so it won't be able to apply certain optimisations.
>>
>> Garth
>>
>> > Martin
>> >
>> > On 6 Mar 2015 at 12:13, "Anders Logg" <[email protected]> wrote:
>> >
>> >> An additional point is that run-time performance may also be affected
>> >> by needing to copy data into flattened arrays on the DOLFIN side, so
>> >> in some cases flattening may not be the most efficient option.
>> >>
>> >> --
>> >> Anders
>> >>
>> >>
>> >> On Fri, 6 March 2015 at 11:52, Garth N. Wells <[email protected]> wrote:
>> >>>
>> >>> On Fri, Mar 6, 2015 at 10:38 AM, Anders Logg <[email protected]>
>> >>> wrote:
>> >>> > For the code generation we should pick the signature that is most
>> >>> > efficient (from a run-time point of view), so testing is needed.
>> >>> > When this was last brought up (Cambridge, Jan two years ago) I made
>> >>> > some rudimentary tests - see the attachment, which indicated that
>> >>> > flattening is good.
>> >>> >
>> >>> > Regarding the custom_integral interface, we need to use one
>> >>> > flattened array instead of two per-cell arrays (as for
>> >>> > interior_facet_integral), since there can be more than two cells
>> >>> > (perhaps hundreds...).
>> >>> >
>> >>>
>> >>> I agree that at this level runtime performance should be the priority.
>> >>> All testing I've ever done points to flattened arrays being better. We
>> >>> can add helper code on the DOLFIN side to ease populating the arrays.
>> >>>
>> >>> Garth
>> >>>
>> >>>
>> >>> > --
>> >>> > Anders
>> >>> >
>> >>> >
>> >>> >
>> >>> > On Thu, 5 March 2015 at 15:38, Martin Sandve Alnæs
>> >>> > <[email protected]> wrote:
>> >>> >
>> >>> >> The tabulate_tensor signatures are inconsistent in how the
>> >>> >> different arguments are treated in the face of multiple cells.
>> >>> >>
>> >>> >> In interior_facet_integral, there are explicitly named arguments
>> >>> >> vertex_coordinates_0 and vertex_coordinates_1, while in
>> >>> >> custom_integral, a single flat vertex_coordinates array is used,
>> >>> >> with coordinates from two cells being packed into that array in
>> >>> >> the MultiMeshAssembler.
>> >>> >>
>> >>> >> In all tabulate_tensor signatures, the dofs are passed in a single
>> >>> >> "double** w", where the first dimension is the function id and the
>> >>> >> second is the dof numbering, with dofs from the two cells in
>> >>> >> interior_facet_integral packed contiguously.
>> >>> >>
>> >>> >> I don't intend to go through everything and make it consistent in
>> >>> >> one go, but I think that for changes that will happen in the near
>> >>> >> and far future we should aim for a single philosophy and move
>> >>> >> towards that when we add something new or modify something old.
>> >>> >>
>> >>> >> From the code generation perspective I think it doesn't matter
>> >>> >> much; it's more important to keep the DOLFIN side clean and easy
>> >>> >> to edit. Packing every array flat keeps the UFC signatures
>> >>> >> flexible, but moves complexity over to documentation and
>> >>> >> conventions. The implementation in DOLFIN may or may not become
>> >>> >> more complex: flat arrays are easy to create and copy, but perhaps
>> >>> >> harder to populate because of the extra manual indexing. This can
>> >>> >> also be a question of performance; we should avoid unnecessary
>> >>> >> work in the inner loops of assemblers.
>> >>> >>
>> >>> >> Here are the candidates, with dimensions in comments (consts
>> >>> >> removed for clarity):
>> >>> >>
>> >>> >> // element tensor(s)
>> >>> >> double* A // [sum of packed element tensor size for each domain]
>> >>> >>
>> >>> >> // dofs of coefficient functions (num_dofs_for_this_coefficient varies)
>> >>> >> double** w // [num_coefficients][num_dofs_for_this_coefficient*num_domains]
>> >>> >>
>> >>> >> // coordinates of cell vertices (should also be generalized to
>> >>> >> // coordinate_dofs)
>> >>> >> double* vertex_coordinates // [num_domains*num_cell_vertices*gdim]
>> >>> >> double* vertex_coordinates_0 // [num_cell_vertices*gdim]
>> >>> >> double* vertex_coordinates_1 // ditto
>> >>> >>
>> >>> >> // quadrature rules
>> >>> >> double* quadrature_points // [num_points*gdim]
>> >>> >> double* quadrature_weights // [num_points]
>> >>> >>
>> >>> >> // geometric quantities
>> >>> >> double* facet_normals // [num_points*gdim]?
>> >>> >>
>> >>> >> Martin
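
As a rough illustration of how the flat vertex_coordinates candidate above
could be filled in on the DOLFIN side (the helper and its arguments are
invented for this sketch, not an actual DOLFIN API):

  #include <cstddef>
  #include <vector>

  // Pack vertex coordinates for several cells into one flat array with the
  // [num_domains*num_cell_vertices*gdim] layout suggested above.
  // 'cell_coords' holds one pointer per cell to that cell's
  // num_cell_vertices*gdim coordinate values (a hypothetical source).
  std::vector<double> pack_vertex_coordinates(
      const std::vector<const double*>& cell_coords,
      std::size_t num_cell_vertices, std::size_t gdim)
  {
    std::vector<double> flat;
    flat.reserve(cell_coords.size() * num_cell_vertices * gdim);
    for (const double* c : cell_coords)
      flat.insert(flat.end(), c, c + num_cell_vertices * gdim);
    return flat;
  }

Generated code would then read cell c, vertex v, component i as
flat[(c*num_cell_vertices + v)*gdim + i].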
>> >>> >
>> >>> >
>> >
>>
>