Thank you so much, Bangerth!

Now I understand why we need to rewrite the error formula on a cell as 
residual times dual weight. But I'm still a little confused about why we 
must introduce z_h. 
As you mentioned, once we introduce z_h, the difference z-z_h is a quantity 
that is only large where the dual solution is rough. But why do we need to 
care about the accuracy of z here? I think the only thing we need to care 
about is the value of z on that cell, because z is a quantity that 
represents how important the residual on that cell is.
  
My understanding is: the dual weight z-z_h no longer only represents how 
important the residual on a certain cell is, but also tells us something 
about how well the dual solution is resolved on that cell. But then another 
question arises: does z-z_h still have the same tendency as z? If not, how 
can z-z_h represent the importance of a certain cell the way z can?
 
I'm not sure whether my understanding is correct. I tried running the code 
using only z as dual_weights, and I found the result almost the same as 
that using z-z_h.
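To convince myself of what happens there, I also tried a small sketch of the identity on a 1D Poisson model problem. This is plain NumPy, not deal.II code; the mesh sizes, the load f = 1, and the functional J(v) = ∫v are arbitrary choices of mine. It shows that the total of the nodewise error contributions is the same with weight z and with weight z-z_h (Galerkin orthogonality kills the z_h part), while the individual contributions are distributed differently:

```python
import numpy as np

def stiffness(n):
    # P1 stiffness matrix of -u'' on n uniform cells of (0,1),
    # zero Dirichlet boundary values, interior nodes only
    h = 1.0 / n
    return (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

nc, nf = 8, 16                        # coarse (primal) and fine ("richer") meshes
Ac, Af = stiffness(nc), stiffness(nf)
bc = np.full(nc - 1, 1.0 / nc)        # right-hand side for the load f = 1
bf = np.full(nf - 1, 1.0 / nf)

# prolongation matrix: evaluate the coarse hat functions at the fine nodes
xf, xc = np.arange(1, nf) / nf, np.arange(1, nc) / nc
P = np.maximum(0.0, 1.0 - np.abs(xf[:, None] - xc[None, :]) * nc)

u_c = np.linalg.solve(Ac, bc)         # primal solution in the coarse space
z = np.linalg.solve(Af, bf)           # dual solution in the richer space,
                                      # for the functional J(v) = integral of v
z_c = z[1::2]                         # nodal interpolation z_h of z (coarse nodes
                                      # coincide with every other fine node)

r = bf - Af @ (P @ u_c)               # residual of u_c, tested in the fine space
with_z  = r * z                       # nodewise error contributions, weight z
with_zh = r * (z - P @ z_c)           # nodewise contributions, weight z - z_h

print(with_z.sum(), with_zh.sum())    # identical totals, up to rounding
print(np.abs(with_z - with_zh).max()) # but the local contributions differ
```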

Finally, I would certainly be glad to submit patches to deal.II and make my 
own contribution. But I haven't forked deal.II on my GitHub account yet, 
and this is a relatively small issue, so I would appreciate it if you could 
do it for the moment. 

On Sunday, March 4, 2018 at 1:42:27 AM UTC+8, Wolfgang Bangerth wrote:
>
> On 03/02/2018 03:36 AM, 曾元圆 wrote: 
> > Hi, nowadays I'm learning about the goal-oriented error estimator and I 
> > read the tutorial of step-14 
> > (http://www.dealii.org/developer/doxygen/deal.II/step_14.html). But I'm 
> > confused about some problems and hope you can help me with these: 
> > 1. When deriving the error with respect to the functional, why must we 
> > change J(e)=a(e,z) to J(e)=a(e,z-z_h)? I know the dual solution z must 
> > be approximated in a richer space than the primal solution, otherwise 
> > J(e) will be 0. But why not just solve the dual problem in a richer 
> > space without subtracting its interpolation to the primal space? I 
> > didn't see the necessity to introduce z_h into the formula. 
>
> You are correct: the values you get from both of these formulas are 
> exactly the same. So it is not *necessary* to introduce z_h if you are 
> interested in computing the *error*. 
>
> We introduce the interpolant because z-z_h is a quantity that is only 
> large where the dual solution is rough, whereas z may be large also in 
> other places. Doing so ensures that the error estimator is *localized*: 
> it is large exactly where the primal and dual solutions are rough, i.e., 
> where we expect the error to be caused. As mentioned above, if you sum 
> the contributions of all cells, you will get the same value whether you 
> introduce z_h or not, but the contribution of each cell is going to be 
> different. If you want the contributions of each cell to serve as a good 
> mesh refinement criterion, then it turns out that you need to introduce 
> z_h. 
>
>
> > 2. Why must we change J(e)=∑(f+Δu_h, z-z_h)-(∂_n u_h, z-z_h) to 
> > J(e)=∑(f+Δu_h, z-z_h)-(1/2 [∂_n u_h], z-z_h)? Is it just an 
> > implementation consideration for saving computational effort? 
>
> For the same kind of reason. For a smooth (primal) solution, the term 
>    (∂_n u_h, z-z_h) 
> may be large because the normal derivative of the primal solution may 
> simply be what it is -- think of, for example, a linear exact solution u 
> that can be perfectly approximated by u_h. So if you leave this term as 
> is, this would suggest that the error on this cell is large. But that's 
> wrong -- the error is actually quite small because you can approximate 
> linear functions well. 
>
> On the other hand, if you do the rewrite (which again leaves the *sum* 
> over all cells the same, but changes the contributions of each cell), then 
>    (1/2 [∂_n u_h], z-z_h) 
> is going to be small because while the normal derivative of u_h may be 
> large, the *jump* of the normal derivative is small if the solution is 
> linear or nearly so. 
>
> Another way of seeing this is to think of both of the terms in J(e) as 
>    residual times dual weight. 
> The residuals here are f+Δu_h and 1/2 [∂_n u_h]. You want to define these 
> residuals in such a way that they are zero for the exact solution. That 
> is true for these two residuals: f+Δu is zero because u satisfies the 
> equation, and 1/2 [∂_n u] is zero because solutions of the Laplace 
> equation have continuous gradients (if f is smooth enough). 
>
> On the other hand, the term ∂_n u is not zero, even for the exact solution. 
>
>
> > Can this kind of rewriting be generally adopted in other kinds of 
> > problems (e.g. in advection problems where the face integrals in J(e) 
> > rely on upstream information)? 
>
> Yes. 
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
>
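P.S. To convince myself of the point about the jump residual in the quoted reply, I tried a tiny sketch (plain NumPy, not deal.II; the mesh and the particular linear function are arbitrary choices of mine). For the P1 interpolant of a linear u, the cellwise derivative is nonzero at every interior face, but its jump across faces vanishes, so only the jump form correctly reports zero error:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 9)          # uniform mesh on (0,1)
u_h = 3.0 * x + 1.0                   # P1 interpolant of the linear u(x) = 3x + 1
slopes = np.diff(u_h) / np.diff(x)    # du_h/dx, constant on each cell

# the raw derivative seen at the interior faces is not small ...
print(slopes)                         # all equal to 3
# ... but the jump of the derivative across interior faces is zero:
print(np.diff(slopes))                # all zeros
```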

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
