I object to changing definitions on the grounds that it would work out nicely
for one particular equation. The current definition yields a scalar jump for
both scalar- and vector-valued quantities, and that definition was chosen for
a reason. I'm pretty sure it's in use. Adding a tensor_jump, on the other
hand, wouldn't break any older programs.
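
To make the suggestion concrete, here is a minimal sketch of what such a
tensor-valued jump could look like, assuming UFL's restriction syntax
v('+') / v('-') and its outer() operator; the name tensor_jump is only
illustrative and not an existing UFL function:

    from ufl import outer

    def tensor_jump(v, n):
        # Rank-one (outer product) jump of a vector field v across a
        # facet with normal n: v('+') (x) n('+') + v('-') (x) n('-').
        # This would live alongside, not replace, the current scalar jump().
        return outer(v('+'), n('+')) + outer(v('-'), n('-'))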

Maybe Kristian has an opinion here, cc to get his attention.

Martin
On 9 June 2014 at 20:16, "Anders Logg" <[email protected]> wrote:

> On Mon, Jun 09, 2014 at 11:30:09AM +0200, Jan Blechta wrote:
> > On Mon, 9 Jun 2014 11:10:12 +0200
> > Anders Logg <[email protected]> wrote:
> >
> > > For vector elements, the jump() operator in UFL is defined as follows:
> > >
> > >   dot(v('+'), n('+')) + dot(v('-'), n('-'))
> > >
> > > I'd like to argue that it should instead be implemented like so:
> > >
> > >   outer(v('+'), n('+')) + outer(v('-'), n('-'))
> >
> > This inconsistency has already been encountered by users:
> > http://fenicsproject.org/qa/359/discontinuous-galerkin-jump-operators
>
> Interesting! I hadn't noticed.
>
> Are there any objections to changing this definition in UFL?
>
> --
> Anders
>