Re: proposed addition to DAVE-ML

2004-07-12 Thread Dan Newman
On Mon, 2004-07-12 at 23:54, Bruce Jackson wrote:

> In my experience, almost all real-time simulation facilities use first-order linear 
> interpolation (equivalent to your polynomial/1 example) for piloted simulations.
> 
Yes, although the increasing capability of computational hardware is
likely to change this in future.  Our interest is not only in
real-time simulation; the specific model that prompted this proposal
includes a fairly sparsely tabulated function which the manufacturer
specifies as linear in one variable and third-order in three others.
We have been running this sort of model with linear interpolation
significantly faster than real time, and we expect our fairly
unexceptional hardware to keep us faster than real time once we
implement low-order polynomial interpolation as well.
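
For what it's worth, here is a minimal sketch of the kind of
low-order scheme we have in mind, in the C++ layer that walks the
DOM.  This is illustrative only, not our actual class; order 1
reproduces the usual linear lookup and order 0 gives discrete values.

// Sketch only (not our implementation): one-dimensional polynomial
// interpolation of configurable order through tabulated breakpoints,
// in Lagrange form.  Assumes bp is strictly ascending and
// bp.size() >= order + 1; the extrapolate attribute is not handled.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

double interpolate1D(const std::vector<double>& bp,   // breakpoints
                     const std::vector<double>& val,  // tabulated values
                     double x, std::size_t order)
{
    const std::size_t n = bp.size();
    assert(n == val.size() && n >= order + 1);

    // Index of the interval containing x, then a window of order+1
    // points roughly centred on it, clamped at the table ends.
    std::size_t i = std::upper_bound(bp.begin(), bp.end(), x) - bp.begin();
    std::size_t lo = (i > order / 2 + 1) ? i - order / 2 - 1 : 0;
    lo = std::min(lo, n - order - 1);

    if (order == 0)                  // discrete value over the interval
        return val[lo];

    // Lagrange form: sum of val[j] * L_j(x) over the order+1 points.
    double sum = 0.0;
    for (std::size_t j = lo; j <= lo + order; ++j) {
        double lj = 1.0;
        for (std::size_t k = lo; k <= lo + order; ++k)
            if (k != j)
                lj *= (x - bp[k]) / (bp[j] - bp[k]);
        sum += val[j] * lj;
    }
    return sum;
}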

> But, this standard is intended to be used by more than just real-time 
> man-in-the-loop simulations; my personal interest is for flight control design and 
> stability analysis of the same vehicle models. We have often seen a need for 
> smoother interpolations (generally cubic splines) to improve the fidelity of the 
> linearized models that are extracted to do control design.
> 
Yes.  We're also looking at using it for performance modelling, and
it seems to suit that application well.  However, some of our
comparative performance data is fairly sparse, so the higher-order
capability matters there too: linear interpolation of a sparse drag
polar produces a V-shaped curve with a spurious corner at minimum
drag, rather than the smooth parabola the data actually represents.

> I guess my concern is, does this interpolation specification need to be captured 
> with the model? It would seem to me that the specific application of the model 
> dictates how smooth to make the tabular non-linear data. So, the flight control 
> designer might put a smooth curve through every table, while the real-time folks 
> stick with 1st order for speed.
> 
The choice of tabulation interval, or of the positions of tabulated
points for ungridded data, can be affected by the order of
interpolation expected.  Tabulation and interpolation are therefore
interrelated, and I think both should be defined in the XML dataset.
If the same dataset is used for different applications, the
interpolation attributes can be used or ignored as each modeller
prefers, but including them at least lets the modeller see the
assumptions on which the data compilation was based.
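
To make that concrete, a function in the dataset might then carry
something like the following (the variable name here is invented for
illustration):

<independentVarRef varID="ANGLE_OF_ATTACK"
                   min="-4.0" max="20.0"
                   extrapolate="neither"
                   interpolationType="polynomial"
                   interpolationOrder="3"/>

A real-time user could ignore the last two attributes and fall back
to linear lookup; a performance modeller would honour them.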

> However, to show our willingness to accept thoughtful suggestions, I'd be glad to 
> add this to the next DTD. Would you prefer the interpolationType to be a closed list:
> 
> interpolationType (poly | cspline | legendre) #IMPLIED
> 
> or an open-ended CDATA with implementation details spelled out in the reference 
> manual?
> 
I'm still experimenting with how best to do this.  The closed list
would do for a first cut, but we might need to extend it as we
progress, and we might (much later) choose to add other basis
functions.  I'll have to think through the details of the spec, but
you're right that some additional detail on the fitting technique
will be required, particularly for higher-order functions.  As we
work out what suits us best, I'll draft the relevant pieces of the
spec and post them to the list for comments and suggestions.
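
As a straw man, an implementation reading the open-ended CDATA form
need do no more than the following to stay robust (illustrative
only; unrecognised types degrade to linear so no application is
blocked):

// Straw man: map the interpolationType/interpolationOrder attributes
// to an internal method, falling back to linear for anything the
// implementation does not recognise.
#include <string>

enum InterpMethod { kDiscrete, kLinear, kPolynomial, kCubicSpline };

InterpMethod selectMethod(const std::string& type, int order)
{
    if (type == "polynomial") {
        if (order == 0) return kDiscrete;
        if (order == 1) return kLinear;
        return kPolynomial;
    }
    if (type == "c-spline") return kCubicSpline;
    return kLinear;   // unknown type: degrade gracefully
}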

> Also, a complete spec should (IMHO) give additional details on how to fit a
> polynomial through the data. Just specifying the data and the order of the fit
> does not, I suspect, guarantee that the same polynomial coefficients will be
> generated, even for piecewise polynomials.  How about Legendre basis functions -
> what information is required to fit these?
> 
> We are aware of some efforts to encode an entire non-linear function using a single 
> multi-order polynomial, but that's not what you're suggesting, are you?
> 
I'm not suggesting that, but it might be possible to define such a
function using this technique.  The main benefit of such a function
would not be a reduction in the number of values tabulated in the
dataset, but the ability to obtain continuous derivatives over the
whole function space.
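
To spell out the continuity point: a piecewise-linear interpolant is
continuous but its slope jumps at every breakpoint, whereas a cubic
spline is constructed by matching, at each interior breakpoint x_i,

  S(x_i-) = S(x_i+),   S'(x_i-) = S'(x_i+),   S''(x_i-) = S''(x_i+),

so the interpolant and its first two derivatives are all continuous;
a single global polynomial is smooth everywhere by construction.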

> Let's discuss this further. I think text-based information exchange may rapidly 
> become somewhat limiting :)
> 
I agree, but the DSTO guys involved (Geoff Brian and Jan Drobik) didn't
immediately see the cost-benefit advantage of a trip for me to Langley.

Dan




Re: proposed addition to DAVE-ML

2004-07-12 Thread Bruce Jackson
At 2:29 PM +1000 7/12/04, Dan Newman wrote:
>I'm working with the Australian DSTO Flight Systems Branch on development
>of generic aircraft flight models.  We are basing the datasets for this
>work on the DAVE-ML DTD, with reasonable success so far.  Our datasets
>are used by loading directly into a DOM encapsulated in a C++ class
>which performs function evaluations based on the XML dataset and returns
>the results to the calling function.
>
>However, our work to date indicates an addition to the DTD would be
>useful.  I propose that the independentVarRef element be modified as
>shown below:
>
><!ATTLIST independentVarRef
>  varID IDREF   #REQUIRED
>  min   CDATA   #IMPLIED
>  max   CDATA   #IMPLIED
>  extrapolate   (neither | min | max | both) #IMPLIED
>  interpolationType  CDATA #IMPLIED
>  interpolationOrder CDATA #IMPLIED
>>
>
>The justifications for this change are:
>
>For tabulated data in any form, the type of interpolation and
>extrapolation applicable should depend on the data, not on the
>software used to interpret it.  It is therefore appropriate to
>include interpolation instructions in the XML dataset.
>
>A variable may be interpolated differently in different functions within
>the same dataset.  Interpolation instructions therefore belong in the
>independentVarRef rather than in the variableDef.  An alternative would
>be to include the interpolation instruction in the bpRef attributes for
>each function, if that was thought preferable.
>
>Examples of the most common entries for these attributes are:
>
>interpolationType="polynomial"
>interpolationOrder="1"
>
>which results in linear interpolation in the relevant degree of freedom,
>with continuity of the function, but not of its derivatives, across the
>breakpoints.  Setting order to "0" allows discrete values, while higher
>order can be chosen if required for rapidly-changing data.
>
>Alternative interpolation types can represent different methods, such as
>"c-spline", or different choice of basis functions, such as "legendre".
>
>In future, it might also be useful to add further options to the
>"extrapolate" attribute to determine whether the extrapolating basis
>function is the same as that used for interpolation.
>
>We await with interest any comments or suggestions.

Dan,

This is a good idea we could easily adopt if there is sufficient interest.

In my experience, almost all real-time simulation facilities use first-order linear 
interpolation (equivalent to your polynomial/1 example) for piloted simulations.

But, this standard is intended to be used by more than just real-time man-in-the-loop 
simulations; my personal interest is for flight control design and stability analysis 
of the same vehicle models. We have often seen a need for smoother interpolations 
(generally cubic splines) to improve the fidelity of the linearized models that are 
extracted to do control design.
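
The mechanics are worth spelling out: we typically extract stability
derivatives by perturbing the table lookups numerically, something
like the fragment below (names are illustrative only).  With
first-order interpolation the extracted derivative is
piecewise-constant and jumps whenever the trim point crosses a
breakpoint; a spline makes it vary smoothly.

// Illustrative fragment: extracting a pitching-moment derivative by
// central difference about a trim point.  evalCm() stands in for
// whatever interpolated table lookup the model performs.
double extractCmAlpha(double (*evalCm)(double alpha),
                      double alphaTrim, double dAlpha)
{
    return (evalCm(alphaTrim + dAlpha) - evalCm(alphaTrim - dAlpha))
           / (2.0 * dAlpha);
}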

I guess my concern is, does this interpolation specification need to be captured with 
the model? It would seem to me that the specific application of the model dictates how 
smooth to make the tabular non-linear data. So, the flight control designer might put 
a smooth curve through every table, while the real-time folks stick with 1st order for 
speed.

However, to show our willingness to accept thoughtful suggestions, I'd be glad to add 
this to the next DTD. Would you prefer the interpolationType to be a closed list:

interpolationType (poly | cspline | legendre) #IMPLIED

or an open-ended CDATA with implementation details spelled out in the reference manual?

Also, a complete spec should (IMHO) give additional details on how to fit a
polynomial through the data. Just specifying the data and the order of the fit
does not, I suspect, guarantee that the same polynomial coefficients will be
generated, even for piecewise polynomials.  How about Legendre basis functions -
what information is required to fit these?

We are aware of some efforts to encode an entire non-linear function using a single 
multi-order polynomial, but that's not what you're suggesting, are you?

Let's discuss this further. I think text-based information exchange may rapidly become 
somewhat limiting :)

-- Bruce
-- 
Bruce Jackson  mailto:[EMAIL PROTECTED]
Dynamics and Control Branch, Airborne Systems Competency
NASA Langley Research Center, 18C West Taylor Street, MS 132
Hampton, Virginia 23681
More info about DAVE-ML: 
Simulation standards discussion listserv: mailto:[EMAIL PROTECTED]