On 03/02/2018 03:36 AM, 曾元圆 wrote:
Hi, I'm currently learning about the goal-oriented error estimator and I
read the step-14 tutorial
(http://www.dealii.org/developer/doxygen/deal.II/step_14.html). I'm
confused about a few points and hope you can help me with them:

1. When deriving the error with respect to the functional, why must we
change J(e)=a(e,z) to J(e)=a(e,z-z_h)? I know the dual solution z must be
approximated in a richer space than the primal solution, otherwise J(e)
will be 0. But why not just solve the dual problem in a richer space
without subtracting its interpolation into the primal space? I don't see
the necessity of introducing z_h into the formula.

You are correct: the values you get from both of these formulas are exactly
the same. So it is not *necessary* to introduce z_h if you are interested in
computing the *error*.

We introduce the interpolant because z-z_h is a quantity that is large only
where the dual solution is rough, whereas z itself may be large in other
places as well. Doing so ensures that the error estimator is *localized*:
it is large exactly where the primal and dual solutions are rough, i.e.,
where we expect the error to originate. As mentioned above, if you sum the
contributions of all cells, you will get the same value whether you
introduce z_h or not, but the contribution of each cell is going to be
different. If you want the contributions of the individual cells to serve
as a good mesh refinement criterion, then it turns out that you need to
introduce z_h.
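The point that a(e,z) and a(e,z-z_h) give the same value rests on Galerkin
orthogonality: a(e,v_h)=0 for every v_h in the primal space. Here is a
small self-contained 1D sketch (plain Python, not deal.II code; all names
are invented for the illustration) that checks this numerically for
-u''=1 on (0,1) with linear elements:

```python
# Hedged sketch: verify Galerkin orthogonality a(e, v_h) = 0 in 1D, which
# is why a(e, z) and a(e, z - z_h) yield the same value of J(e).
import numpy as np

n = 8                                   # number of cells on [0, 1]
x = np.linspace(0.0, 1.0, n + 1)        # mesh nodes
h = 1.0 / n

# -u'' = 1, u(0) = u(1) = 0  =>  exact solution u(x) = x(1-x)/2.
u  = lambda t: t * (1.0 - t) / 2.0
du = lambda t: (1.0 - 2.0 * t) / 2.0    # exact derivative

# With linear elements in 1D, the FE solution is the nodal interpolant,
# so u_h' on cell [x_i, x_{i+1}] is simply the difference quotient of u:
duh = (u(x[1:]) - u(x[:-1])) / h

# a(e, v_h) = sum over cells of  integral (u' - u_h') v_h' dx.
# Evaluate it for every hat basis function via 2-point Gauss quadrature.
q, w = np.polynomial.legendre.leggauss(2)

def cell_integral(i, slope_vh):
    """Integral over cell i of (u' - u_h') * v_h' dx, with v_h' constant."""
    t = 0.5 * (x[i] + x[i + 1]) + 0.5 * h * q   # map Gauss points to cell
    return 0.5 * h * np.sum(w * (du(t) - duh[i]) * slope_vh)

for j in range(1, n):                   # interior hat function at node j
    a_e_vh = cell_integral(j - 1, 1.0 / h) + cell_integral(j, -1.0 / h)
    assert abs(a_e_vh) < 1e-12          # Galerkin orthogonality holds
print("a(e, v_h) = 0 for every v_h, hence a(e, z) = a(e, z - z_h)")
```

Subtracting z_h therefore changes nothing about the total, only about how
it is distributed among the cells.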


2. Why must we change J(e) = ∑_K (f+Δu_h, z-z_h)_K - (∂_n u_h, z-z_h)_∂K to J(e) = ∑_K (f+Δu_h, z-z_h)_K - (1/2 [∂_n u_h], z-z_h)_∂K? Is it just an implementation consideration to save computational effort?

For the same kind of reason. For a smooth (primal) solution, the term
  (∂_n u_h, z-z_h)
may be large simply because the normal derivative of the primal solution is what it is -- think, for example, of a linear exact solution u that can be approximated perfectly by u_h. If you leave this term as is, it suggests that the error on this cell is large. But that's wrong -- the error is actually quite small, because linear functions can be approximated well.

On the other hand, if you do the rewrite (which, again, leaves the *sum* over all cells the same but changes the contributions of each cell), then
  (1/2 [∂_n u_h], z-z_h)
is going to be small: while the normal derivative of u_h may be large, the *jump* of the normal derivative is small if the solution is linear or nearly so.

Another way of seeing this is to think of both of the terms in J(e) as
  residual times dual weight.
The residuals here are f+Δu_h and 1/2 [∂_n u_h]. You want to define these residuals in such a way that they are zero for the exact solution. That is true for both: f+Δu is zero because u satisfies the equation, and 1/2 [∂_n u] is zero because solutions of the Laplace equation have continuous gradients (if f is smooth enough).

On the other hand, the term ∂_n u is not zero, even for the exact solution.
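A tiny 1D sketch makes this concrete (plain Python, not deal.II; the
variable names are invented for the demo): for the linear exact solution
u(x)=x, interpolated exactly by u_h, the jump residual vanishes while the
plain derivative does not.

```python
# Hedged 1D illustration: residuals vanish for a linear exact solution,
# but the un-jumped derivative term does not.
import numpy as np

x  = np.linspace(0.0, 1.0, 5)       # mesh nodes on [0, 1]
uh = x.copy()                        # u_h = u = x exactly (linear)

# Cell residual f + u_h'':  f = -u'' = 0 and u_h'' = 0 on each cell,
# so the cell residual is identically zero.

slopes = np.diff(uh) / np.diff(x)    # u_h' on each cell, all equal to 1
jumps  = np.diff(slopes)             # [u_h'] at each interior node

print("u_h' per cell:       ", slopes)  # all 1 -- not zero!
print("jump [u_h'] at nodes:", jumps)   # all 0 -- residual vanishes
assert not np.allclose(slopes, 0.0) and np.allclose(jumps, 0.0)
```

The jump [u_h'] is zero node by node, while u_h' itself is not -- which is
exactly why the jump, and not the plain derivative, is the right face
residual.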


Can this kind of rewriting be adopted generally in other kinds of
problems (e.g., in advection problems, where the face integrals in J(e)
rely on upstream information)?

Yes.

Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en