hi Urs,

On 6/23/14 11:36 AM, Urs Heckmann wrote:
On 23.06.2014, at 16:37, robert bristow-johnson <r...@audioimagination.com> wrote:

because it was claimed that a finite (and small) number of iterations was 
sufficient.
Well, to be precise, all I claimed was an *average* of 2 iterations for a given 
purpose, and with given means to optimise (e.g. vector registers). I did so to 
underline that an implementation for real time use is possible. I had no 
intention of saying that any finite (and small) number of iterations was 
sufficient in any arbitrary case and condition - I can only speak about the 
models that we have implemented and observed.

okay, so the above consequently brings us back to the original issue:

On 6/22/14 1:20 PM, Urs Heckmann wrote:
On 22.06.2014, at 19:04, robert bristow-johnson <r...@audioimagination.com> wrote:
On 6/22/14 7:11 AM, Urs Heckmann wrote:

2. Get the computer to crunch numbers by iteratively predicting, evaluating and 
refining values using the actual non-linear equations until a solution is found.
perhaps in analysis.  i would hate to see such iterative processing in sample processing 
code.  (it's also one reason i stay away from terminology like "zero-delay 
feedback" in a discrete-time system.)
We're doing this a lot. It shifts the problem from implementation to 
optimisation.

now, i know that real-time algorithms that run native have to deal with the I/O latency issues of the Mac or PC, and i am not sure (nor am i concerned) about how you guys deal with that and the operating system. i know Apple deals with it with Audio Units, and i seem to remember that this was a fright with the PC and the Windoze OS. but there is no physical reason it can't be dealt with on either platform, i just dunno the details.

but this i *do* know about *any* hardware realization of a real-time processing algorithm: you must place an outer maximum limit on the processing time (the worst case), *even* if the *average* processing time is what is salient. the average processing time becomes the salient measure when buffering is used, but buffering introduces delay (if you buffer both input and output, the delay is two block lengths).

in a non-real-time application, we might not worry about how many iterations of the processing loop are needed to converge acceptably to a consistent output value. you just run the code, wait a few seconds if necessary (in the olden days, we might get a cup of coffee or something), and get your results. but in a real-time process, whether it's buffered or not, you *must* put a lid on the number of iterations. so, regarding "the models that [you] have implemented and observed": if it's "an implementation for real time use", you *must* put a maximum number of iterations on the loop, or else suffer the risk of a hiccup in your live real-time processing that might not sound very friendly. this is a normal and basic issue about doing live, real-time DSP for *any* application, not just for audio.

then, i will go back to my original point about hating "to see such iterative processing in [live, real-time] processing code." to make it safe, you must impose a finite and known limit on the number of iterations. then, in the worst case, you may as well just always do it for that number of iterations. so your normal case is also the worst case, your processing-cycle budget for the algorithm is known, you've made an allocation for it, and no hiccups will occur for that reason.
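
to make that concrete, here's a minimal C sketch of what i mean by putting a lid on it, using the same tanh feedback example that gets quoted further down. solve_sample and MAX_ITER are just placeholder names of mine, and the cap of 4 stands in for whatever count you have verified covers the worst case:

   #include <math.h>

   #define MAX_ITER 4   /* fixed cap: the normal case *is* the worst case */

   /* one sample of the iterated solve, always doing exactly MAX_ITER passes */
   static float solve_sample(float x, float g, float s, float y_prev)
   {
       float y = y_prev;                   /* initial guess, e.g. y[n-1] */
       for (int i = 0; i < MAX_ITER; i++)
           y = g * (x - tanhf(y)) + s;     /* one fixed-point refinement */
       return y;                           /* this becomes y[n] */
   }

the processing-cycle cost of that is a known constant, which is the whole point.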

then, if you are running this iteration a known number of times, you can unroll it into linear code *exactly* as i have said. i cannot fathom the problem you have said you're having about this:

On 6/23/14 4:45 AM, Urs Heckmann wrote:
On 23.06.2014, at 06:37, robert bristow-johnson <r...@audioimagination.com> wrote:

...
Regarding the iterative method, unrolling like you did

   y0 = y[n-1]
   y1 = g * ( x[n] - tanh( y0 ) ) + s
   y2 = g * ( x[n] - tanh( y1 ) ) + s
   y3 = g * ( x[n] - tanh( y2 ) ) + s
   y[n] = y3
is *not* what I described in general.

it *is* precisely equivalent to the example you were describing with one more iteration than you were saying was necessary.

  It's a subset that won't ever converge in 3 iterations :^)

Thing is, while starting with y[n-1] is a possibility, it's not the only one 
and in my experience it's hardly ever a good one.

okay, so 3 iterations is not enough for the worst case. then increase it to 4 or 5 or 10. fine.

so y[n-1] is not a good initial guess. fine. then set y0 to zero. (then it is explicit that this is a function *only* of x[n], with g and s as parameters of the function, not as arguments.)
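
to spell that out (again, just a sketch with made-up names, and the fixed count of 8 is a placeholder): with y0 = 0 the whole solve collapses to a pure function of x[n], with g and s riding along as parameters:

   #include <math.h>

   /* the net mapping y[n] = F(x[n]; g, s).  with a zero initial guess,
      nothing from past outputs enters the iteration at all. */
   static float net_map(float x, float g, float s)
   {
       float y = 0.0f;                 /* y0 = 0, not y[n-1] */
       for (int i = 0; i < 8; i++)     /* some verified fixed count */
           y = g * (x - tanhf(y)) + s;
       return y;
   }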

my points are,

1. that you *must*, for live and real-time operation, get a grip on the number of iterations that will work for all possible inputs.

2. that if there is a solid, fixed, and finite maximum number of iterations needed, the iterated process can be rolled out into some "linear" code (code with a beginning and an end).

3. and this code does *not* use the output y[n] (which is what it is computing) as an input. it has *only* past outputs and the current and past inputs as possible arguments. no zero-delay assumptions on anything other than the current input sample, x[n].

4. finally, since ultimately the process maps an x[n] to a y[n], and in this 
example, it's *only* that (i.e. a memoryless mapping), then why not, offline, 
do your iterative thing to define the net function and implement that net 
function in a manner that is:

  4a) computationally more efficient, like table-lookup with a very big table (see the sketch after this list).

  4b) or in a manner that allows some theoretical understanding of the nature
      of the function to allow one to determine what oversampling ratio is
      needed to keep aliasing at bay.  for me, normally that's a finite-order
      polynomial, but maybe you'll figger something else out.  maybe you'll
      implement it as that finite-order polynomial or maybe not (like you'll
      use table lookup or even the iterative loop with a maximum on the number
      of iterations).  but at least you'll have an idea what you need to do
      to sufficiently upsample.
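
here's a sketch of the 4a) idea (all the names, the table size, the u range, and the iteration count are placeholders of mine, and it assumes the table is rebuilt whenever g changes). since y = g*(x - tanh(y)) + s can be rewritten as y = (g*x + s) - g*tanh(y), the result depends on x and s only through u = g*x + s for a given g, so a one-dimensional table in u is enough:

   #include <math.h>

   #define TABLE_SIZE 4096
   #define U_MIN     -8.0f        /* placeholder range for u = g*x + s */
   #define U_MAX      8.0f

   static float table[TABLE_SIZE];

   /* offline, or whenever g changes: tabulate the converged solution
      of  y = u - g*tanh(y)  over a grid of u values */
   static void build_table(float g)
   {
       for (int k = 0; k < TABLE_SIZE; k++)
       {
           float u = U_MIN + (U_MAX - U_MIN) * k / (TABLE_SIZE - 1);
           float y = 0.0f;
           for (int i = 0; i < 64; i++)    /* offline, so iterate generously */
               y = u - g * tanhf(y);
           table[k] = y;
       }
   }

   /* per sample: linear interpolation, no iteration at run time */
   static float lookup(float x, float g, float s)
   {
       float u   = g * x + s;
       float pos = (u - U_MIN) / (U_MAX - U_MIN) * (TABLE_SIZE - 1);
       if (pos < 0.0f)                     pos = 0.0f;
       if (pos > (float)(TABLE_SIZE - 1))  pos = (float)(TABLE_SIZE - 1);
       int   k = (int)pos;
       float f = pos - (float)k;
       float next = (k + 1 < TABLE_SIZE) ? table[k + 1] : table[k];
       return table[k] + f * (next - table[k]);
   }

whether 4096 entries and linear interpolation are accurate enough is exactly the sort of thing you'd check offline against the full iteration, and the same offline fit is where the finite-order polynomial of 4b) would come from.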

this is getting us back to the original central issue i've been having in this thread.  the 
"zero-delay feedback" is an ancillary issue and Andy's "trapezoid rule 
integration" to implement filters is another ancillary issue.


--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
