On Fri, 15 Aug 2014 09:07:30 +0200
Kristian Ølgaard <[email protected]> wrote:

> On 14 August 2014 11:09, Martin Sandve Alnæs <[email protected]>
> wrote:
> 
> > On 14 August 2014 10:38, Garth N. Wells <[email protected]> wrote:
> >
> >> On Tue, 12 Aug, 2014 at 7:51 PM, Nico Schlömer
> >> <[email protected]> wrote:
> >>
> >>> Hi all,
> >>>
> >>> recently, a discussion [1] came up about a long-standing weirdness
> >>> with Dolfin `Expression`s:
> >>> ```
> >>> from dolfin import *
> >>> mesh = UnitIntervalMesh(1)
> >>> e1 = Expression("x[0] * x[0]")
> >>> print assemble(e1 * dx(mesh))
> >>> ```
> >>> This looks like it's integrating the function x^2 over the
> >>> interval [0, 1], but the result – surprisingly to some – is 0.5
> >>> instead of 0.33333. The reason for this behavior is that `e1`
> >>> gets interpolated as a linear function before the quadrature is
> >>> applied. (The optional argument `degree` of `Expression` defaults
> >>> to 1.)
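> >>>
> >>> For comparison, a quick sketch (using the same optional `degree`
> >>> argument to state the degree explicitly) that should give the
> >>> expected 1/3:
> >>> ```
> >>> from dolfin import *
> >>> mesh = UnitIntervalMesh(1)
> >>> # with the degree stated explicitly, x^2 is represented exactly
> >>> e2 = Expression("x[0] * x[0]", degree=2)
> >>> print assemble(e2 * dx(mesh))  # expected: 0.333...
> >>> ```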
> >>>
> >>
> >>
> >> This is a good example of where the current approach breaks down
> >> and needs to be fixed.
> >>
> >> At present, FFC uses some heuristics to interpolate an Expression
> >> so as not to mess up the order of the method. It does this by
> >> looking at the polynomial order of the test and trial functions.
> >> In the above case there are no test or trial functions, so it
> >> should throw an error rather than do something that is completely
> >> arbitrary.
> >
> >
> >
> > Just to inform the discussion:
> > For technical implementation reasons these heuristics are now
> > located in UFL preprocessing, more specifically in
> > ufl/algorithms/compute_form_data.py:_auto_select_degree, which
> > looks like this:
> >
> >     # Use max degree of all elements, at least 1 (to work with Lagrange elements)
> >     return max({ e.degree() for e in elements } - { None } | { 1 })
> >
> > Here `elements` includes all elements (including subelements of
> > mixed elements) of both test/trial and coefficient functions.
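> >
> > For Nico's functional above, this heuristic presumably reduces to
> > the following (a sketch with the element degrees written out by
> > hand; only the Expression's default degree of 1 is present):
> >
> >     degrees = {1}                      # no test/trial functions
> >     print max(degrees - {None} | {1})  # -> 1, integrand treated as linear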
> >
> >
> >
> >>> While most would agree that this is surprising and that probably
> >>> numerous FEniCS codes suffer from bugs related to this, it is
> >>> unclear what we should best do about this.
> >>>
> >>> The options include:
> >>>
> >>>  * improve the documentation
> >>>  * add a log message/warning when `Expression` interpolation
> >>> happens
> >>>  * add an extra mandatory argument to `Expression`, specifying
> >>>     - the element or
> >>>     - the function space;
> >>>
> >>
> >> This used to be the case. A concern fed back to us was that this
> >> complicates code. My opinion at this moment is that the argument
> >> should be mandatory. I like explicit, and it would allow a cleaner
> >> design of some components, e.g. UFL.
> >>
> >>
> >>>  * introduce functions that evaluate pointwise ("Expressions") to
> >>> UFL/UFC/FFC and let dolfin Expression inherit from those;
> >>>
> >>
> >> Not sure what you mean above.
> >>
> >
> > That was my wording from the issue discussion: basically
> > introducing to UFL the concept of functions without a finite
> > element that can be evaluated at arbitrary points. The "quadrature
> > element" approach is flawed and something else is needed (see
> > below).
> >
> >
> >
> >> In the absence of an argument indicating the finite element type,
> >> I think most users would expect an Expression to be evaluated at
> >> quadrature points. I'd have to look back at logs (which I don't
> >> want to do) to see how developed FFC quadrature representation was
> >> at the time the design decisions w.r.t. Expressions were made. If
> >> Expressions are evaluated at quadrature points by default, then
> >> most DOLFIN demos would not be able to use the FFC tensor
> >> contraction representation. This isn't an argument either way,
> >> just an observation.
> >
> >
> > History is past; no reason to dig that deep. I would argue it's
> > sensible to require a degree on expressions if a user wants to make
> > tensor representation possible. In auto-representation, we can look
> > at all functions and skip tensor representation if a function
> > without a finite element is found. We can document this in the
> > Expression docstring: providing the degree allows for optimizations
> > in assembly.
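> >
> > For reference, a user can already opt out of tensor representation
> > by hand, roughly via the usual DOLFIN form compiler parameters:
> >
> >     from dolfin import *
> >     # force quadrature representation for all generated code
> >     parameters["form_compiler"]["representation"] = "quadrature"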
> >
> >
> >> If Expressions are evaluated at quadrature points by default, FFC
> >> could/should throw an error if it has no information to determine
> >> automatically a quadrature scheme, e.g. for a functional.
> >
> >
> > With this notation for integration degree (there's an issue for
> > this in UFL):
> >
> >   M = expression*dx(degree=3)
> >
> > that will still be easy to use.
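> >
> > Applied to Nico's example, that would presumably read something
> > like this (the degree keyword on dx is the proposed notation, not
> > an existing feature):
> >
> >     e1 = Expression("x[0] * x[0]")   # no degree attached
> >     M = e1*dx(mesh, degree=3)        # quadrature degree on the measure
> >     assemble(M)                      # integrates x^2 with a degree-3 rule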
> >
> >
> >
> >>>  * ...
> >>>
> >>> Arguments in favor of adding a mandatory argument include
> >>>   * The Python Zen: "Explicit is better than implicit." -- It's
> >>> just too easy to make a mistake here.
> >>>
> >>> Arguments for leaving the API as it is include:
> >>>   * The user cannot expect arbitrary Expressions to be integrated
> >>> exactly. Why should he/she expect it for any particular function?
> >>>
> >>> What do you think about this? Does the above `assemble()` result
> >>> surprise you at all? Do you think we should force user code to be
> >>> explicit here or would you think that adding documentation is
> >>> sufficient?
> >>>
> >>
> >> I favour (a) explicit provision of the element/function space; or
> >> (b) evaluation at quadrature points, with errors for cases where
> >> no data is available for deciding on a sensible quadrature scheme.
> >> Using quadrature points would fix some other awkward issues, like
> >> boundary conditions specified on polygon faces 'creeping' around
> >> corners if subdomains are not marked.
> >>
> >
> >
> > I agree with both (a) and (b).
> >
> 
> I vote for a); there's no easy way to determine that "x[0]*x[0]"
> should be represented by a quadratic function.

Yes, I think the only sensible way of using b) is that the polynomial
degree of this 'pointwise expression' is still prescribed.

Jan

> Defining an expression using a quadrature element will result in the
> expression being evaluated at those points in tabulate_tensor().
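>
> Roughly like this, as a sketch (assuming the UFL quadrature element
> and the `element` keyword of Expression; the quadrature degree of
> the form then presumably has to match the element's degree):
>
>     from dolfin import *
>     from ufl import FiniteElement, interval
>
>     # dofs of a quadrature element live at the quadrature points, so
>     # tabulate_tensor() receives the expression's values at exactly
>     # those points
>     QE = FiniteElement("Quadrature", interval, 3)
>     f = Expression("x[0] * x[0]", element=QE)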
> 
> 
> >
> > The problem with the quadrature element approach is that a
> > quadrature element has a fixed set of quadrature points, and cannot
> > be used both in a cell and on a facet, where a "boundary quadrature
> > element" is needed. It's possible that previous quadrature point
> > evaluation efforts have stalled on these complications. What's
> > needed for evaluation of expressions at quadrature points in
> > generated code is one of two approaches:
> >
> 
> Exactly, it's even a bug on Launchpad:
> https://bugs.launchpad.net/ffc/+bug/488661
> 
> 
> >
> > 1) Let the integral object return quadrature points at run time
> > such that the expression can be evaluated at those points and the
> > values passed into tabulate_tensor as dofs of a "quadrature element".
> > or
> > 2) Let tabulate_tensor take a vector of ufc::function pointers (or
> > a new ufc::expression interface) and make calls through those. For
> > efficiency, ufc::expression can have a vectorized eval version that
> > gets all quadrature points.
> >
> > The latter will pave the way for an apply operator in UFL, e.g.
> >   f = Expression(...)
> >   M = apply(f, u**2)*dx
> > meaning
> >   \int_Omega f(u^2) dx
> >
> >
> If option 2) enables more features in UFL to be implemented easily,
> I think that should be our choice, unless there are performance
> issues to consider in implementing 1) and 2) given that the points
> are not fixed at compile time.
> 
> Kristian
> 
> 
> 
> 
> >
> > Martin
> >
> >

_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
