On Fri, Feb 03, 2012 at 12:35:07PM +0100, Dave Long wrote:
> >    main(a,b){for(;;)putchar(b+=16+(a+=(b&256?1:-1)*getchar()-a/512)/1024);}
> 
> after writing some python (vide infra), here's my interpretation of
> that line:
>       b is the local oscillator, which produces a triangle wave ramping
> between (0-255)
>       a is the feedback control
>       the +(b&256?1:-1) term chops the input[0,1] from getchar(), but at
> only half of b's frequency (hence the doubled output)
>       the -a/512 term produces a "leaky integrator" to low-pass the feedback

Oh, indeed.  I should have explained it to that level in my mail, as I
did with a slightly earlier version at
<http://stackoverflow.com/questions/39485/software-phase-locked-loop-example-code-needed/9118612#9118612>.

I should additionally point out that the ratio between 16 and 256
determines the default time between switches of b&256, i.e. 256/16 = 16
samples, for a full period of 32 samples, which is 250 Hz at 8000
samples per second.
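
Here, for concreteness, is a transliteration of the one-liner into
Python with the pieces named the way you named them.  The variables a
and b and all the constants are straight from the C; tdiv is just
C-style truncating division, and the initial values of a and b (which
in the C version are whatever argc and argv happen to be) are zeroed.

    def tdiv(n, d):
        "C-style integer division: truncate toward zero rather than flooring."
        q = abs(n) // abs(d)
        return q if (n >= 0) == (d >= 0) else -q

    def pll(samples):
        "Yield the one-liner's output bytes for an iterable of input bytes 0..255."
        a = b = 0
        for x in samples:  # x = getchar()
            # type I detector: chop the input by the oscillator's square wave
            chop = x if b & 256 else -x
            # leaky integrator low-passes the chopped product
            a += chop - tdiv(a, 512)
            # oscillator: 16/sample by default, pulled by the filtered error
            b += 16 + tdiv(a, 1024)
            # putchar(b): a sawtooth at the controlled frequency
            yield b & 255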

The questions I have a hard time with are things like:

- What's its capture range?  That is, how close in frequency to the
  oscillator's current frequency does the target signal need to be in
  order to be detected?  Camenzind's book makes me think that the answer
  is that the beat frequency needs to be within the band accepted by the
  low-pass filter on the feedback path, but clearly it depends on the
  amplitude of the signal being picked up.  Which brings me to:
- What's the rolloff frequency of the simple a+=x-a/512 IIR filter?
  Clearly it has to be in the neighborhood of 1/(512 samples) (8 Hz).
  To me this suggests a capture range of around 250-8 to 250+8 Hz, a Q
  of about 16.  (There's a quick numeric check of this after this
  list.)
- What's its lock range?  How far will it follow a signal once it locks
  onto it?
- How do the capture range and lock range interact with noise?  Clearly
  more noise will cause the frequency to wander around more at random,
  which seems like it should increase the capture range and decrease the
  lock range.
- How do these performance parameters vary with the input parameters 512
  (the low-pass filter integration time parameter) and 1024 (the ratio
  between the phase-detector output and the variation it produces in
  angular frequency)?  This is complicated further by the fact that the
  new samples aren't divided by 512 when they are low-pass filtered, so
  changing 512 also changes the absolute magnitude of `a` for a given
  amplitude and phase offset.  I think `a`'s maximum possible value at
  the moment should be something like 512*255, i.e. 131072-512, which
  suggests that the possible range of angular frequencies for `b` should
  be between 16-127 and 16+127; but in practice it doesn't seem to vary
  nearly that much.
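
Here, for what it's worth, is the numeric check mentioned above.  It
uses nothing beyond the textbook one-pole-IIR formulas, so treat it as
a sanity check rather than an analysis; it also checks the 512*255
bound on `a` from the last question.  The corner frequency it gives is
somewhat narrower than my 8 Hz guess.

    from math import pi, acos

    fs = 8000.0        # samples per second
    alpha = 1 / 512.0  # the leak: a[n] = (1 - alpha)*a[n-1] + x[n]
    p = 1 - alpha      # pole of the one-pole low-pass 1/(1 - p/z)

    # -3 dB corner: exact from |H|^2 = 1/(1 - 2 p cos w + p^2); for p
    # near 1 this is approximately w_c = alpha radians per sample.
    wc = acos(1 - (1 - p)**2 / (2*p))
    print("time constant: 512 samples = %.0f ms" % (512000 / fs))
    print("-3 dB corner: %.2f Hz (approx fs*alpha/2pi = %.2f Hz)"
          % (wc * fs / (2*pi), fs * alpha / (2*pi)))

    # steady-state bound on |a|: the chopped input is at most 255 per
    # sample, so the leaky integrator settles at |a| <= 255/alpha
    a_max = 255 / alpha
    print("max |a| = %d, so b's step ranges over 16 +/- %d"
          % (a_max, a_max / 1024))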
  
> [0] this is a Horowitz & Hill type I (dot product) phase comparison;
> the python below uses an edge-based lead-lag[2] (type II) phase
> comparison.  H&H think that the problem most people have with PLL's
> is attempting to "cut & try" as we have both done, instead of
> calculating the feedback loop properly; they point out that this is
> somewhat trickier than usual because one measures phase but controls
> frequency, hence the feedback loop is already lagging by at least a
> quarter-wave.

Sounds like I should read Horowitz & Hill in addition to Camenzind.  And
maybe write a program to graph input parameters vs. output parameters by
testing various combinations of PLL parameters: if [cut-and-try can
compete with calculation, as Bret Victor advocates
lately](http://worrydream.com/KillMath/), it can only do so if repeated
a large number of times and graphed or otherwise analyzed.
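
The inner loop of such a harness might look like the following sketch,
which leans on the pll() transliteration above.  It sweeps the input
tone rather than the PLL's own constants (parameterizing pll() over 512
and 1024 would be the obvious next step), and the yardstick for the
output frequency is a crude one I just made up: count wraps of the
output sawtooth over the second half of the run, which is only valid
while b keeps moving forward.

    from math import sin, pi

    def measure(freq, n=16000, fs=8000.0):
        "Feed pll() a clean 0..255 sine at freq; estimate where it ends up."
        samples = (int(127.5 + 127.5 * sin(2 * pi * freq * i / fs))
                   for i in range(n))
        out = list(pll(samples))
        tail = out[n//2:]  # let the loop settle over the first half
        # each wrap of the low byte of b is half a cycle of the b&256
        # square wave, so frequency = wraps * fs / (2 * samples)
        wraps = sum(1 for prev, cur in zip(tail, tail[1:]) if cur < prev)
        return wraps * fs / (2 * len(tail))

    for f in range(150, 351, 10):  # sweep the test tone through 250 Hz
        print("%d Hz in -> %.1f Hz out" % (f, measure(float(f))))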

> [1] cf lock-in amplification, a related hack.  Could dreaming be an
> epiphenomenon of an evolved "lock-in amplification" for learning (in
> which the brain's inputs are chopped between "reality" while awake
> and "dreams" while asleep) in order to cope with the fact that the
> SNR of reality often leaves much to be desired?

The page I found about LIAs makes them sound like they're just
what I'm doing in the longer C pitch-detection program: taking a
low-pass-filtered "dot product" between the input and a second
90°-offset waveform generated by the oscillator.
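
That is, something like the sketch below, which is just the textbook
I/Q version of the idea rather than the actual code from that program;
the function name and the choice of a 1/512 leak to match the one-liner
are mine.

    from math import sin, cos, pi, atan2, hypot

    def lock_in(samples, f_ref, fs=8000.0, leak=1/512.0):
        "Amplitude and phase of the component of samples at f_ref."
        w = 2 * pi * f_ref / fs
        i_acc = q_acc = 0.0
        for n, x in enumerate(samples):
            # multiply by two references 90 degrees apart and low-pass
            # each product with a leaky integrator; components far from
            # f_ref beat against the references and average away
            i_acc += x * cos(w * n) - leak * i_acc
            q_acc += x * sin(w * n) - leak * q_acc
        amplitude = 2 * leak * hypot(i_acc, q_acc)  # undo DC gain 1/leak
        phase = -atan2(q_acc, i_acc)  # input phase vs. the cosine reference
        return amplitude, phase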

> [2] presumably one could make simple adjustments to "zeroph" to deal
> with noisier (read "real") input.  cf o-scope trigger parameters

Hmm, that's a good analogy.  I should finish reading my oscilloscope
book.  I seem to remember from experiment that there isn't a universal
set of oscilloscope trigger parameters that works for every signal.

It seems like zero-crossings will always mostly be due to the
largest-amplitude signal, whether or not it's in the frequency band
you're looking for?  Which I suppose means your capture band is
effectively everything from DC up to the Nyquist frequency.

> # cdelta: -ve in lead (0-1), +ve in lag (1-0), 0 when matched (0-0) or (1-1)
> # cprime: accumulate signal for each sample that lag or lead is present
> # compare: Horowitz & Hill type II phase detector (sum all lags and/or leads)
> cdelta  = lambda (i,j):   j-i
> cprime  = lambda ds,ll:   ds+[ds[-1] + cdelta(ll)]
> compare = lambda ref,sig: sum(reduce(cprime,zip(zeroph(ref),zeroph(sig)),[0]))

Here I feel like I must do type inference: cdelta takes a tuple of
two numbers, so ll must be a tuple of two numbers, and then upon
encountering the zeroph arguments I finally understand the cdelta
comment!  This makes me wonder if Darius's (and Bob Martin's) rule of
thumb that callers should be presented before callees, top-down, is
sensible.

Upon further examination of the code, I conclude that reduce() passes
its last argument (and thereafter the reduction function's return
values) as the first argument to the reduction function, and so we are
actually computing the partial sums of the lead-lag function, which
tell us at each point how many more cycles have started on one signal
than on the other.  (I guess how many more have started on sig than on
ref.)

This is probably a good example of Guido's observation about the
readability of reduce on non-associative arguments: almost always, an
explicit loop would be easier to read.
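
To check that I've read it right, here is the explicit-loop version I
have in mind, using your zeroph unchanged (so it assumes zeroph yields
the same 0/1 cycle-start streams as in your code):

    def compare(ref, sig):
        "Same computation as the reduce() version above."
        lead_lag = 0  # cumulative (cycle starts on sig) - (cycle starts on ref)
        total = 0     # sum of that running count over the whole record
        for i, j in zip(zeroph(ref), zeroph(sig)):
            lead_lag += j - i
            total += lead_lag
        return total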

I think this is the kind of PLL that Massalin, around p. 99 of her
dissertation, calls an "FLL", for "frequency-locked loop".

> # odelta: calculate the adjustment by comparing lead/lag of cycle starts
> # oprime: scan-update the local oscillator value
> odelta  = lambda osc, sig: compare(wav(osc),wav(sig))/4096.0

This looks like a bit of cheating --- for every "sample", you generate a
whole new signal as if that oscillation were continued for 1024 samples?
Maybe I didn't understand.

Kragen
-- 
To unsubscribe: http://lists.canonical.org/mailman/listinfo/kragen-discuss
