Hi,

I'm comparing two cases that, in theory (at least in my understanding
of things), should yield the same result but don't :)

1) I'm sending data to the USRP at sample_rate N with
master_clock_rate N and the ADI has FIR 4x, HB 2x,2x,2x
2) I'm sending data to the USRP at sample_rate N with
master_clock_rate 2*N and the ADI has FIR 2x, HB 2x,2x,2x
    In this second case I'm also using a patched FPGA image that
essentially disables the FPGA's half-band filters, so that it just
inserts a 0 between each sample instead of doing the half-band
interpolation.

In both cases, the ADI FIR is programmed with my own taps (the same
taps in both cases).
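
Just to pin down the two configurations, here is a rough sketch with
the UHD Python API. The rate and device arguments are placeholders,
and loading the custom FIR taps / patched FPGA image is done
separately (not shown here):

    import uhd

    N = 15.36e6  # placeholder sample rate

    def open_usrp(master_clock_rate):
        # open the device at the given master clock rate, stream at rate N
        usrp = uhd.usrp.MultiUSRP(f"master_clock_rate={master_clock_rate}")
        usrp.set_tx_rate(N)
        return usrp

    # Case 1: master_clock_rate = N   -> ADI does 4x (FIR 4x, HB 2x,2x,2x)
    # Case 2: master_clock_rate = 2*N -> FPGA zero-stuffs 2x (patched image),
    #                                    ADI does 2x (FIR 2x, HB 2x,2x,2x)
    usrp = open_usrp(N)        # or open_usrp(2 * N) for the second case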

Now I would expect the results to be the same, since essentially the
only thing that changes is whether the ADI does the full 4x
interpolation or the FPGA does 2x and then the ADI does 2x. In both
cases the signal ends up at the same rate, filtered by the same taps,
and the final DAC rate is also the same.

(i.e. in both cases the samples before the FIR filtering are "a 0 0 0
b 0 0 0 c 0 0 0 d 0 0 0 ...". Yes, I know that's probably not how it's
done internally in the ADI, but it should be mathematically equivalent
to this.)
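
To illustrate the "mathematically equivalent" claim, here is a quick
NumPy check (random signal and made-up taps, just to show the
identity between the two chains):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(256) + 1j * rng.standard_normal(256)  # baseband
    taps = rng.standard_normal(128)                               # stand-in FIR taps

    def zero_stuff(sig, factor):
        # insert (factor - 1) zeros between consecutive samples
        out = np.zeros(len(sig) * factor, dtype=sig.dtype)
        out[::factor] = sig
        return out

    # Case 1: one 4x zero-stuff, then the FIR
    y1 = np.convolve(zero_stuff(x, 4), taps)

    # Case 2: 2x zero-stuff (FPGA), another 2x zero-stuff (ADI), then the FIR
    y2 = np.convolve(zero_stuff(zero_stuff(x, 2), 2), taps)

    print(np.max(np.abs(y1 - y2)))  # 0.0 -> the two chains are identical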

But it turns out the results are not the same. Not a catastrophic
difference, but a 0.5% to 1% EVM change.

What I'm wondering is whether changing the master_clock_rate has an
influence on other parts of the system that I'm not seeing and that
could explain this. At first I thought maybe the analog filters were
configured based on the master_clock_rate, but calling
get_tx_bandwidth reports the full 56 MHz bandwidth in both cases.
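
For completeness, the bandwidth check could look roughly like this
with the UHD Python API (rate and device arguments are placeholders
again); it prints the reported TX bandwidth under each clock rate:

    import uhd

    N = 15.36e6  # placeholder sample rate

    for mcr in (N, 2 * N):
        usrp = uhd.usrp.MultiUSRP(f"master_clock_rate={mcr}")
        usrp.set_tx_rate(N)
        print(f"mcr={mcr/1e6:.2f} MHz -> tx_bw={usrp.get_tx_bandwidth(0)/1e6:.1f} MHz")
        del usrp  # release the device before re-opening with a new clock rate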

Any theory?


Cheers,

    Sylvain Munaut

