In message <20091102234609.161535d7dnp7b...@webmail.pdx.edu> you wrote:
> It was suggested off the list that I consider a decimation
> filter to downsample the sample rate. This sounds
> reasonable, but I implemented the iir filter at a 100 Hz
> sample rate and still saw a significant offset (500 lsb),
> with many of the same fixed point scaling problems
> (resolution less than 16 bits). I would guess that I could
> get close by cutting the sample rate to 10 Hz. I think
> that leads me to pre-warping the coefficients since 2*PI
> is not << 10Hz, but that's just math.

I'm not sure why you're seeing so much DC offset; that
sounds like it could be just an implementation bug?  In my
limited experience the errors we're worried about show up
more often as saturation of the filter...
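
One guess, sketched below (my hypothetical reconstruction, not your
code): in a Q15 single-pole IIR with a 1Hz cutoff at 1000 SPS, the
coefficient is so tiny that the update term truncates to zero for
small errors, so the filter stalls short of the input and leaves a
permanent DC offset — and the dead zone shrinks fast as you lower
the sample rate:

```python
# Sketch of a possible failure mode (assumed, not your actual code):
# a single-pole low-pass y += alpha*(x - y) done in Q15 fixed point.
# When alpha is tiny, the update (alpha_q15 * err) >> 15 truncates
# to zero for small err, leaving a "dead zone" of permanent offset.
import math

def dead_zone_lsb(cutoff_hz, sample_rate_sps):
    """Largest error |x - y| (in LSB) whose Q15 update truncates to 0."""
    alpha = 2 * math.pi * cutoff_hz / sample_rate_sps  # approx, fc << fs
    alpha_q15 = round(alpha * 32768)
    # update is (alpha_q15 * err) >> 15, which stays zero while
    # alpha_q15 * err < 32768, i.e. err < 32768 / alpha_q15
    return 32768 // alpha_q15

# 1 Hz cutoff at 1000 SPS: alpha_q15 is about 206, dead zone ~159 LSB
print(dead_zone_lsb(1.0, 1000))
# same 1 Hz cutoff at 10 SPS: alpha_q15 is about 20589, dead zone ~1 LSB
print(dead_zone_lsb(1.0, 10))
```

That ~159 LSB figure is at least the right order of magnitude for
the offset you reported, which is why I suspect implementation
rather than theory.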

> I 'think' the decimation filter needs to remove content
> above 5 Hz (nyquist), then cut the samples to 10
> Hz. Wouldn't that be about a 1Hz filter anyway?

Yeah, you're going to have to decimate iteratively, I think.
At least 1000->100->10->1, and maybe more aggressively than
that.

(I'm going to be careful here about Hz vs. SPS; it's
sometimes easier to talk about this stuff when those
units are kept distinct.)

You can't really low-pass filter a 1000 SPS signal at 5Hz
and then decimate to 10 SPS, for several reasons.  First of
all, the transition band is going to mess you up, so you
should probably plan on not decimating quite so
aggressively.  Second of all, the filter you need for that
is still almost impossible to build.

Instead, if you filter-and-decimate in successive stages,
each stage can be pretty accurate.  The price you pay is
that filtering each stage costs.
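
The cost is smaller than it sounds, though, since each later stage
runs at a lower rate. Some back-of-the-envelope arithmetic (tap
counts below are my illustrative assumptions):

```python
# Back-of-the-envelope cost comparison: multiply-accumulates per
# second.  Assumes ~30-tap FIR stages and that each stage only
# computes the output samples it keeps (the standard decimator trick).

def fir_decimator_cost(in_rate, factor, taps):
    """MACs/second for one FIR stage decimating by `factor`."""
    out_rate = in_rate / factor
    return out_rate * taps

# one giant direct filter: ~1000 taps running at 1000 SPS
single_stage = 1000 * 1000        # 1,000,000 MACs/s

# three decimate-by-10 stages, 30 taps each: 1000 -> 100 -> 10 -> 1 SPS
multi_stage = (fir_decimator_cost(1000, 10, 30)
               + fir_decimator_cost(100, 10, 30)
               + fir_decimator_cost(10, 10, 30))   # 3,330 MACs/s

print(single_stage, multi_stage)
```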

> I think the FIR structure has the most promise, but I have no  
> experience with those, especially on a fixed point machine.

Any FIR to directly achieve the result you want is likely to
be very, very long.  Think 1000 taps.
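
To put a rough number on "very, very long": one common rule of
thumb (fred harris's approximation; the attenuation and band edges
below are my assumptions, not from this thread) estimates FIR
length from the normalized transition width:

```python
# fred harris's rule of thumb for FIR length:
#   N ~= atten_dB / (22 * (transition_width / sample_rate))
# The specific numbers here are illustrative assumptions.

def harris_taps(atten_db, transition_hz, sample_rate_sps):
    return round(atten_db / (22 * (transition_hz / sample_rate_sps)))

# 60 dB stopband, pass edge 1 Hz, stop edge 5 Hz, at 1000 SPS:
print(harris_taps(60, 5 - 1, 1000))   # ~682 taps
# squeeze the transition band down to 1 Hz and it gets much worse:
print(harris_taps(60, 1, 1000))       # ~2727 taps
```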

> Quoting Barton C Massey <b...@cs.pdx.edu>:
> 
> > Caveat: I have no idea what I'm talking about.  Act
> > accordingly.
> 
> I have no idea how to interpret this response :) This
> could be sarcasm, and you *do* know what you are talking
> about, but I simply don't understand. This could be truth,
> and I should go another route.

Somewhere in between.  My professional DSP days are many
many years behind me now, and so I hate to pose as an
expert.  OTOH, I did have professional DSP days once.

> > You can't do a 1Hz cutoff against a 1000Hz sampling rate
> > with any sane single-stage design, AFAIK.
> 
> I'm not sure what you mean by a single stage... I think of
> a stage as a memory element when using an IIR filter, in
> which case I think I'm using 3 stages.

No, I'm talking here not about poles or taps, but about a
different structure altogether: iterative downsampling by
filter-and-decimate.  Think F->D->F->..->D->F where F is
some low-pass filter with a reasonable cutoff (not 0.1% :-)
and D is a decimation stage.
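
In sketch form (pure Python; a boxcar moving average stands in for
whatever F you'd actually use — a real design wants a sharper
cutoff than a boxcar gives):

```python
# F->D->F->D->... in miniature: a boxcar (moving-average) low-pass
# as F, keep-every-Nth as D.

def boxcar(signal, width):
    """Moving-average low-pass: the F stage (a stand-in filter)."""
    return [sum(signal[i:i + width]) / width
            for i in range(len(signal) - width + 1)]

def decimate(signal, factor):
    """Keep every `factor`-th sample: the D stage."""
    return signal[::factor]

def downsample_chain(signal, factor, stages, width=None):
    """Run `stages` rounds of filter-and-decimate."""
    width = width or factor
    for _ in range(stages):
        signal = decimate(boxcar(signal, width), factor)
    return signal

# 1000 -> 100 -> 10 -> 1 "SPS": three decimate-by-10 stages.
# A DC input should pass through the chain unchanged.
out = downsample_chain([1.0] * 1000, factor=10, stages=3)
print(out)
```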

> > The usual plan would be to downsample the signal in a
> > series of stages; maybe 5 downsample-by-four stages,
> > which would get you to about 1Hz very nicely.
> 
> Again I'm not up to speed on your terminology, and my
> googling is not clearing things up. I see some stuff on
> downsampling, but I've thought of this in the context of
> throwing out data, not combining it. Perhaps more akin to
> down-conversion.

Downsampling is, as far as I know, the preferred name in DSP
for downconversion via filter-and-decimate.  The term
downconversion is used more in the analog world, especially
in RF, and often refers to approaches like downconversion
mixing.  (Mixing turns out to normally be a bad idea in the
DSP world; it's expensive and there are better tricks
available.)

The key idea of using filter-and-decimate downsampling in
this application is that digital filters really hate having
the transition band so far from the sampling rate.  By
reducing the sampling rate, we move the filter transition
band up to a better place.

Note that the data you "throw out" in a downsampling step is
hopefully some of the same data your direct anti-aliasing
filter would have thrown out anyhow.  Remember, once you
know the maximum frequency in your digital signal, you can
downsample to twice that frequency (its Nyquist rate) by
simple decimation; you don't really "throw out" data in the
decimation, as it could all be reconstructed by
interpolating the result back up to the original sample rate
and low-pass filtering again.
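
A concrete instance of that (toy numbers of mine): a signal already
band-limited to 1Hz, sampled at 1000 SPS, survives plain decimation
to 10 SPS intact — the kept samples are exactly what sampling at
10 SPS would have produced in the first place:

```python
import math

# A 1 Hz sine sampled at 1000 SPS has no content above 1 Hz, so
# keeping every 100th sample (1000 SPS -> 10 SPS) loses nothing:
# the kept samples match direct sampling at 10 SPS.

fs_in, fs_out, f = 1000, 10, 1.0
x = [math.sin(2 * math.pi * f * n / fs_in) for n in range(1000)]

decimated = x[::fs_in // fs_out]   # every 100th sample -> 10 samples
direct = [math.sin(2 * math.pi * f * k / fs_out) for k in range(10)]

# ~0 (at most a few ulps of float rounding)
print(max(abs(a - b) for a, b in zip(decimated, direct)))
```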

> > (It also tells you that as a rule of thumb you'd
> > need about a 10-pole Butterworth to do this directly; ...
> 
> Did you mean 10 *tap* Butterworth? I was originally only
> trying to get 2 poles, so 10 poles would be overkill?!?!

Yeah, I meant a 10-pole (10th-order) Butterworth; "poles" is
the word for an IIR, "taps" for an FIR.  Either way, if
you're going to try to build a digital antialiasing filter
with 0.1% cutoff bandwidth, 10 poles is probably
insufficient.

Think of it this way: any FIR filter that's going to retain
1Hz components in a signal sampled at 1000 SPS is going to
have to work on a window at least 1000 samples long;
otherwise there's not enough history to know what the signal
was previously doing.  An IIR filter tries to get around
this by using its own output as a fulcrum to leverage the
input to the right place.  However, the more leverage you
get, the more that numerical and signal errors will screw up
the result.  0.1% is an overwhelmingly aggressive bit of
leveraging.

> I'll have to show off (Tuesday) an integrating device I'm
> running now.  It has zero offset, and filter-like
> response.

What it doesn't have, I suspect, is 0.1% cutoff
bandwidth. :-)  I'll look forward to seeing it!

    Bart

_______________________________________________
psas-avionics mailing list
psas-avionics@lists.psas.pdx.edu
http://lists.psas.pdx.edu/mailman/listinfo/psas-avionics
