Hi Cinaed,

On 08/16/2017 04:14 AM, Cinaed Simson wrote:
I would replace the 2 serial rational resamplers with one, namely,

   (8/1)*(400000/614400) = (8/1)*(400/614.4) = 5.208333333333333/1

or

   15.625/3

That is, interpolate by 15.625 decimate by 3.
By which you mean "interpolate by 15625 and decimate by 3000", because a rational resampler can only resample up and down by integer amounts :)

If I'm not doing this wrong in my head, that is equivalent to interpolating by 125 and decimating by 24; that is, in fact, what the rational resampler will do internally if you parameterize it with interp=15625, decim=3000. Still, it's usually a good idea to reduce the fraction to lowest terms yourself, just so one sees how bad the computational effort will be.
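To double-check that reduction, one can let Python do the arithmetic (a quick sanity check, not GNU Radio code):

```python
from fractions import Fraction
from math import gcd

# Combined ratio of the two cascaded resamplers: (8/1) * (400000/614400)
ratio = Fraction(8, 1) * Fraction(400000, 614400)
print(ratio)  # 125/24 -- Fraction reduces to lowest terms automatically

# Same reduction, starting from interp=15625, decim=3000:
interp, decim = 15625, 3000
g = gcd(interp, decim)
print(interp // g, decim // g)  # 125 24
```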

The larger of the two numbers (125, 24) dictates how sharp the anti-aliasing or anti-imaging filter in the resampler must be, and that sharpness is what makes the filter long, and potentially hard to do in real time. 1/125 isn't really the nicest of all (Nyquist rate-relative) transition widths, but it should still be quite manageable.
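To get a feel for "long", harris' rule of thumb for FIR length, N ≈ attenuation / (22 · Δf), gives a ballpark. The concrete numbers below (60 dB stopband attenuation, a transition band of one 125th of the Nyquist band) are my own illustrative assumptions, not derived from the actual flowgraph:

```python
# harris' rule of thumb: N ~ atten_dB / (22 * delta_f), where delta_f is
# the transition width as a fraction of the filter's sampling rate.
# All numbers here are assumptions for illustration only.
atten_db = 60.0            # assumed stopband attenuation
delta_f = 1.0 / (2 * 125)  # transition width of Nyquist/125, normalized to the sample rate
ntaps = atten_db / (22.0 * delta_f)
print(round(ntaps))  # 682
```

A few hundred taps is long but, as said above, still quite manageable for a decimating polyphase implementation.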

In extreme cases, and I don't think this is one just yet, but I haven't done the math or tested it, you'd actually switch from using a rational resampler to using an arbitrary resampler:

While rational resamplers are based on the idea that there's a rational ratio between in- and output rate, so that you can just interpolate and decimate by integer factors, other resamplers do exist. Essentially, they are based on the idea that since the input signal, as Nyquist tells us, must be band-limited, there's a "theoretical" continuous signal that is equivalent to the digital signal. Find a formula for that continuous signal, calculate arbitrary signal values between the original sample points, and you can do *any* resampling ratio, including ∛2, e, π⁄2, whatever, any real number.
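As a toy illustration of that idea, here's a resampler that uses plain linear interpolation between input samples as a (very crude!) model of the underlying continuous signal; nothing like the quality of a proper band-limited interpolator, but it happily takes any real-valued ratio:

```python
import math

def resample_linear(x, ratio):
    """Resample x by an arbitrary (even irrational) ratio, using linear
    interpolation as a crude model of the underlying continuous signal."""
    n_out = int(len(x) * ratio)
    out = []
    for m in range(n_out):
        t = m / ratio                # output sample position, in input-sample units
        i = int(t)
        frac = t - i
        nxt = x[i + 1] if i + 1 < len(x) else x[i]
        out.append((1.0 - frac) * x[i] + frac * nxt)
    return out

sig = [math.sin(2 * math.pi * 0.01 * n) for n in range(100)]
res = resample_linear(sig, math.pi / 2)  # an irrational ratio, just to make the point
print(len(res))  # 157
```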

If you look at the description closely, "calculate arbitrary … between original sample points", those are interpolators. There are different approaches to interpolation functions; one would be to have a bank of filters that delay the input signal by a fraction of the sample period, and then pick (and if necessary, somehow combine) the outputs closest to the new sampling point we actually need. That's what the PFB arbitrary resamplers do – take the two delayed versions "closest" to that fractional delay we need, and linearly interpolate between them¹.
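A simplified sketch of just the branch-picking step (the function name and the blend-factor return are my own for illustration, not the actual GNU Radio implementation):

```python
def pick_branches(frac_delay, nfilts):
    """Given a desired fractional delay in [0, 1) and a bank of nfilts
    fractionally-delayed filters, return the two bracketing branch
    indices and the linear blend factor between their outputs."""
    pos = frac_delay * nfilts
    k = int(pos)
    return k % nfilts, (k + 1) % nfilts, pos - k

lo, hi, blend = pick_branches(0.37, 32)
print(lo, hi, round(blend, 2))  # 11 12 0.84
```

The resampler's output sample is then (1 − blend) · branch_lo + blend · branch_hi, i.e., the linear interpolation mentioned above.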

Of course, this comes at the cost of continuously having to feed a whole bank of filters with the input signal. And that's a large cost. But before someone implements a 1023/1000-rational resampler with thousands of taps, having maybe 64 filters to have enough fractionally delayed versions of the input signal to keep the jitter tolerably low isn't all that bad.

Best regards,
Marcus

¹ one could be smarter than linear, but I didn't do the math on whether that is a good idea. Didn't read any papers, but IIRC, harris' book only does the math for linear interpolation; intuitively, sinc interpolation would be "more correct", but there is beauty in the zeros of the sinc²(t) transform of the triangle.

_______________________________________________
Discuss-gnuradio mailing list
Discuss-gnuradio@gnu.org
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio