On Mon, 2004-07-12 at 23:54, Bruce Jackson wrote:

> In my experience, almost all real-time simulation facilities use first-order linear 
> interpolation (equivalent to your polynomial/1 example) for piloted simulations.
> 
Yes, although increasing computational hardware capability is likely
to change this in future.  Our interest is not only in real-time
simulation; the specific model that prompted this requirement includes
a fairly sparsely tabulated function which the manufacturer specifies
as linear in one variable and third-order in three others.  We have
been using this sort of model with linear interpolation in simulations
running significantly faster than real time, and we expect our fairly
unexceptional hardware to keep us faster than real time if we
implement low-order polynomial interpolation as well.
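
To put rough numbers on the cost: once the fit coefficients are
precomputed offline, evaluating a cubic piece is only a few more
multiply-adds than a linear one.  A minimal Python sketch (table
values and names purely hypothetical, just for illustration):

    import bisect

    def linear_interp(x, xs, ys):
        # First-order (polynomial/1) interpolation between the
        # bracketing breakpoints, clamped at the table edges.
        i = bisect.bisect_right(xs, x) - 1
        i = max(0, min(i, len(xs) - 2))
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1.0 - t) * ys[i] + t * ys[i + 1]

    def horner(coeffs, x):
        # Evaluate a precomputed polynomial piece (highest-order
        # coefficient first); a cubic is three multiply-adds.
        result = 0.0
        for c in coeffs:
            result = result * x + c
        return result

The breakpoint lookup is common to both paths, which is part of why we
expect the polynomial case to stay comfortably ahead of real time.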

> But, this standard is intended to be used by more than just real-time 
> man-in-the-loop simulations; my personal interest is for flight control design and 
> stability analysis of the same vehicle models. We have often seen a need for 
> smoother interpolations (generally cubic splines) to improve the fidelity of the 
> linearized models that are extracted to do control design.
> 
Yes.  We're also looking at using it for performance modelling, and it
seems to suit that application well.  However, since some of our
comparative performance data is fairly sparse, the higher-order
capability is important there too (we're trying to avoid V-shaped drag
polars).
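
For example (hypothetical polar values, and I'm using Python's scipy
here purely as a sketch), a cubic spline through a sparse polar keeps
dCD/dCL continuous where linear interpolation would leave a slope
break at every tabulated point:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Sparse, made-up drag polar: CD as a function of CL.
    cl = np.array([-0.4, 0.0, 0.4, 0.8, 1.2])
    cd = 0.02 + 0.05 * cl**2

    spline = CubicSpline(cl, cd)
    # Linear interpolation gives a piecewise-linear ("V-shaped")
    # polar with a derivative discontinuity at each breakpoint; the
    # spline's first derivative is continuous, so linearisations
    # extracted near the polar minimum stay well behaved.
    print(spline(0.2), spline.derivative()(0.2))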

> I guess my concern is, does this interpolation specification need to be captured 
> with the model? It would seem to me that the specific application of the model 
> dictates how smooth to make the tabular non-linear data. So, the flight control 
> designer might put a smooth curve through every table, while the real-time folks 
> stick with 1st order for speed.
> 
The choice of tabulation interval, or of the positions of tabulated
points for ungridded data, can be affected by the order of
interpolation expected.  The interpolation and tabulation processes
are therefore interrelated, and I think both should be defined in the
XML dataset.  If the same dataset is used for different applications,
the interpolation-related attributes can be used or ignored as each
modeller prefers, but including them at least lets the modeller see
the assumptions on which the data compilation was based.
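
As a concrete sketch of what I mean (element and attribute names
purely illustrative, not from any draft DTD), an application that
ignores the attribute still reads exactly the same table:

    <unGriddedTable interpolationType="cspline">
      <!-- tabulated points as before; a real-time consumer may
           ignore interpolationType and interpolate linearly -->
    </unGriddedTable>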

> However, to show our willingness to accept thoughtful suggestions, I'd be glad to 
> add this to the next DTD. Would you prefer the interpolationType to be a closed list:
> 
>     interpolationType (poly | cspline | legendre) #IMPLIED
> 
> or an open-ended CDATA with implementation details spelled out in the reference 
> manual?
> 
I'm still experimenting with how best to do this.  The closed list
would do for a first cut, but we might need to change it as we
progress, and we might also choose to add other basis functions (much
later).  I'll have to think through the specifics, but you're right
that additional detail on the fitting technique will be required,
particularly for higher-order functions.  As we work out what suits us
best, I'll develop the relevant pieces of the spec and post them to
the list for comments and suggestions.
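
In the meantime, for reference, the two forms side by side (the CDATA
variant per your suggestion; the comments are my reading of the
trade-off):

    <!-- closed list: validating parsers reject unknown methods -->
    interpolationType (poly | cspline | legendre) #IMPLIED

    <!-- open-ended: new methods need only a reference-manual
         update, but typos pass validation -->
    interpolationType CDATA #IMPLIED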

> Also, a complete spec should (IMHO) give additional details on how to fit a 
> polynomial through the data. Just specifying the data and giving the order of the 
> fit does not guarantee the same polynomial coefficients will be generated, I don't 
> think, even for piece-wise polynomials.  How about Legendre basis functions - what 
> information is required to fit these?
> 
> We are aware of some efforts to encode an entire non-linear function using a single 
> multi-order polynomial, but that's not what you're suggesting, are you?
> 
I'm not suggesting that, but it might be possible to define such a
function using this technique.  The main benefit would not be a
reduction in the number of values tabulated in the dataset, but the
ability to obtain continuous derivatives over the function space.
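
On your Legendre question: as far as I can see, the degree and any
domain mapping would both have to be stated for two implementations to
reproduce the same coefficients, which rather supports your point.  A
small sketch using numpy's Legendre routines (sample values made up):

    import numpy as np
    from numpy.polynomial import legendre as L

    # Hypothetical sparse tabulation of a coefficient vs. alpha.
    alpha = np.linspace(-4.0, 12.0, 9)
    cz = 0.1 * alpha - 0.002 * alpha**2

    # One 3rd-order Legendre series over the whole range: the fit
    # and all of its derivatives are continuous everywhere in the
    # function space, which is the benefit we are after.
    coeffs = L.legfit(alpha, cz, 3)
    print(L.legval(5.0, coeffs))              # interpolated value
    print(L.legval(5.0, L.legder(coeffs)))    # continuous derivative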

> Let's discuss this further. I think text-based information exchange may rapidly 
> become somewhat limiting :)
> 
I agree, but the DSTO guys involved (Geoff Brian and Jan Drobik) didn't
immediately see the cost-benefit advantage of a trip for me to Langley.

Dan

