[ https://issues.apache.org/jira/browse/MATH-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13147240#comment-13147240 ]

Luc Maisonobe edited comment on MATH-705 at 11/9/11 7:58 PM:
-------------------------------------------------------------

The reason for this strange behavior is that g function evaluations are based 
on the integrator-specific interpolator.

Each integration method has its own algorithm and preserves a rich internal 
data set. From this data set, it is possible to build an interpolator which is 
specific to the integrator and in fact shares part of the data set (they 
reference the same arrays). So integrator and interpolator are tightly bound 
together.

For embedded Runge-Kutta methods like Dormand-Prince 8(5,3), this data set 
corresponds to one state vector value and several state vector derivatives 
sampled throughout the step. When the step is accepted after error estimation, 
the state value is set to the value at the end of the step and the interpolator 
is then called. So the equations of the interpolator are written in such a way 
that interpolation is backward: we start from the end state and roll back 
towards the beginning of the step. This explains why, when we roll all the way 
back to the step start, we may find a state that is not exactly the one we 
started from, due to both the integration order and the interpolation order.

For Gragg-Bulirsch-Stoer, the data set corresponds to one state vector value 
and derivatives at several orders, all taken at the step middle point. When the 
step is accepted after error estimation, the interpolator is called before the 
state value is set to the value at the end of the step. So the equations of the 
interpolator are written in such a way that interpolation is forward: we start 
from the start state and go on towards the end of the step. This explains why, 
when we go all the way to the step end, we may find a state that is not exactly 
the one that will be used for the next step, due to both the integration order 
and the interpolation order.

So one integrator type is more consistent at step start and has more error at 
step end, while the other has the reverse behavior.
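
This asymmetry can be observed directly from a step handler, by comparing the 
interpolated state at a step boundary with the state actually used there. Here 
is a minimal sketch, assuming the 3.x package layout 
(org.apache.commons.math3); the ODE, tolerances and class name are invented 
for illustration:

{code:java}
import org.apache.commons.math3.ode.FirstOrderDifferentialEquations;
import org.apache.commons.math3.ode.nonstiff.DormandPrince853Integrator;
import org.apache.commons.math3.ode.sampling.StepHandler;
import org.apache.commons.math3.ode.sampling.StepInterpolator;

public class EndpointMismatchCheck {

    public static void main(String[] args) {

        // toy ODE invented for illustration: y' = y
        FirstOrderDifferentialEquations ode = new FirstOrderDifferentialEquations() {
            public int getDimension() {
                return 1;
            }
            public void computeDerivatives(double t, double[] y, double[] yDot) {
                yDot[0] = y[0];
            }
        };

        DormandPrince853Integrator integ =
            new DormandPrince853Integrator(1.0e-8, 100.0, 1.0e-10, 1.0e-10);

        integ.addStepHandler(new StepHandler() {

            private double[] previousEnd;

            public void init(double t0, double[] y0, double t) {
                previousEnd = y0.clone();
            }

            public void handleStep(StepInterpolator interpolator, boolean isLast) {
                // roll the interpolator back to the start of the accepted step
                interpolator.setInterpolatedTime(interpolator.getPreviousTime());
                double mismatch =
                    interpolator.getInterpolatedState()[0] - previousEnd[0];
                System.out.println("boundary mismatch: " + mismatch);
                // remember the interpolated end state for the next comparison
                interpolator.setInterpolatedTime(interpolator.getCurrentTime());
                previousEnd = interpolator.getInterpolatedState().clone();
            }

        });

        double[] y0 = { 1.0 };
        double[] y  = new double[1];
        integ.integrate(ode, 0.0, y0, 2.0, y);
    }

}
{code}

Since each comparison pits the interpolated end of one step against the 
interpolated start of the next, the same printout captures the step-start 
inconsistency for Dormand-Prince 8(5,3) and the step-end inconsistency for 
Gragg-Bulirsch-Stoer (just swap in GraggBulirschStoerIntegrator).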

In any case, the interpolation that is used (and in fact the integration data 
set it is based upon) is not error free. The error is related to the step size.

We could perhaps rewrite some interpolators by preserving both the start state 
s(t[k]) and the end state s(t[k+1]), and switching between two half models as 
follows:
  i(t) = s(t[k])   + forwardModel(t[k], t)    if t <= (t[k] + t[k+1])/2
and
  i(t) = s(t[k+1]) + backwardModel(t, t[k+1]) if t > (t[k] + t[k+1])/2
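
As a toy illustration of the switching logic only (everything here is 
invented: p stands in for the existing single interpolation model, and the 
1.0e-12 offset plays the role of the integration error at the step end):

{code:java}
public class TwoHalfModels {

    static final double T0 = 0.0, T1 = 1.0;  // step boundaries
    static final double Y0 = 1.0;            // preserved start state s(t[k])
    static final double Y1 = 3.25 + 1.0e-12; // preserved end state s(t[k+1]),
                                             // off from p(T1) by the step error

    // stand-in for the existing single interpolation model
    static double p(double t) {
        return 1.0 + 2.0 * t + 0.25 * t * t;  // p(T0) = 1.0, p(T1) = 3.25
    }

    // forward half model, anchored on the start state: exact at t = T0
    static double forwardModel(double t0, double t) {
        return p(t) - p(t0);
    }

    // backward half model, anchored on the end state: exact at t = T1
    static double backwardModel(double t, double t1) {
        return p(t) - p(t1);
    }

    static double interpolate(double t) {
        double tMid = 0.5 * (T0 + T1);
        return (t <= tMid) ? Y0 + forwardModel(T0, t)
                           : Y1 + backwardModel(t, T1);
    }

    public static void main(String[] args) {
        System.out.println(interpolate(T0) - Y0); // exactly 0.0
        System.out.println(interpolate(T1) - Y1); // exactly 0.0
        // the price: a small jump at the junction point
        double tMid = 0.5 * (T0 + T1);
        System.out.println(interpolate(Math.nextUp(tMid)) - interpolate(tMid));
    }

}
{code}

Both endpoint checks print exactly 0.0, while the junction check prints a 
value around 1.0e-12: the small junction error discussed just below.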

This would make the interpolator more consistent with the integrator at both 
step start and step end, and perhaps reduce this problem. It would however not 
be perfect, as it would introduce a small error at the junction point. I'm not 
sure whether it would be easy or not; we would have to review all interpolators 
and all integrators for that. All models are polynomial ones.

Note that the problem should not appear for Adams methods (once they are 
considered validated ...), because in this case it is the interpolator that is 
built first, and the integrator is in fact an application of the interpolator 
at the step end! So interpolator and integrator are by definition always 
perfectly consistent with each other.

What do you think?

Should we leave this problem alone and consider that we are in the grey zone of 
expected numerical inaccuracy due to integration/interpolation orders, or 
should we attempt the two half-models trick?

> DormandPrince853 integrator leads to revisiting of state events
> ---------------------------------------------------------------
>
>                 Key: MATH-705
>                 URL: https://issues.apache.org/jira/browse/MATH-705
>             Project: Commons Math
>          Issue Type: Bug
>    Affects Versions: 3.0
>         Environment: Commons Math trunk, Java 6, Linux
>            Reporter: Dennis Hendriks
>         Attachments: ReappearingEventTest.java, ReappearingEventTest.out
>
>
> See the attached ReappearingEventTest.java. It has two unit tests, which use 
> either the DormandPrince853 or the GraggBulirschStoer integrator, on the same 
> ODE problem. It is a problem starting at time 6.0, with 7 variables, and 1 
> state event. The state event was previously detected at time 6.0, which is 
> why I start there now. I provide an end time of 10.0. Since I start at the 
> state event, I expect to integrate all the way to the end (10.0). For the 
> GraggBulirschStoer this is what happens (see the attached 
> ReappearingEventTest.out). For the DormandPrince853Integrator, it detects a 
> state event and stops integration at 6.000000000000002.
> I think the problem becomes clear by looking at the output in 
> ReappearingEventTest.out, in particular these lines:
> {noformat}
> computeDerivatives: t=6.0                  y=[2.0, 2.0, 2.0, 4.0, 2.0, 7.0, 15.0]
> (...)
> g                 : t=6.0                  y=[1.9999999999999996, 1.9999999999999996, 1.9999999999999996, 4.0, 1.9999999999999996, 7.0, 14.999999999999998]
> (...)
> final result      : t=6.000000000000002    y=[2.0000000000000013, 2.0000000000000013, 2.0000000000000013, 4.000000000000002, 2.0000000000000013, 7.000000000000002, 15.0]
> {noformat}
> The initial value of the last variable in y, the one that the state event 
> refers to, is 15.0. However, the first time it is given to the g function, 
> the value is 14.999999999999998. This value is less than 15 and, more 
> importantly, it is a value from the past (as all functions are increasing), 
> *before* the state event. This makes the state event re-appear immediately, 
> and integration stops at 6.000000000000002 because of the detected state 
> event.
> I find it puzzling that for the DormandPrince853Integrator the y array that 
> is given to the first evaluation of the g function has different values than 
> the y array that is the input to the problem. For GraggBulirschStoer it can 
> be seen that the y arrays have identical values.
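
For readers without the attachments, here is a minimal sketch of the kind of 
setup being discussed. This is not the attached ReappearingEventTest.java: the 
ODE, tolerances and class name are invented, this single linear variable will 
most likely not trigger the inconsistency, and the 3.x package layout 
(org.apache.commons.math3) is assumed; it only shows how the g function and 
event handling plug into the integrator.

{code:java}
import org.apache.commons.math3.ode.FirstOrderDifferentialEquations;
import org.apache.commons.math3.ode.events.EventHandler;
import org.apache.commons.math3.ode.nonstiff.DormandPrince853Integrator;

public class ReappearingEventSketch {

    public static void main(String[] args) {

        // invented stand-in for the reported problem: one increasing
        // variable with y' = 1, starting exactly on the event surface
        FirstOrderDifferentialEquations ode = new FirstOrderDifferentialEquations() {
            public int getDimension() {
                return 1;
            }
            public void computeDerivatives(double t, double[] y, double[] yDot) {
                yDot[0] = 1.0;
            }
        };

        DormandPrince853Integrator integ =
            new DormandPrince853Integrator(1.0e-8, 100.0, 1.0e-7, 1.0e-7);

        integ.addEventHandler(new EventHandler() {

            public void init(double t0, double[] y0, double t) {
            }

            public double g(double t, double[] y) {
                // g is evaluated on interpolated states; in the reported
                // 7-variable problem the value passed here at t = 6.0 was
                // 14.999999999999998, re-triggering the event
                return y[0] - 15.0;
            }

            public Action eventOccurred(double t, double[] y, boolean increasing) {
                System.out.println("event detected at t = " + t);
                return Action.STOP;
            }

            public void resetState(double t, double[] y) {
            }

        }, 0.5, 1.0e-9, 100);

        double[] y0 = { 15.0 };
        double[] y  = new double[1];
        double tEnd = integ.integrate(ode, 6.0, y0, 10.0, y);
        System.out.println("integration stopped at t = " + tEnd);
    }

}
{code}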
