Hi,

Yes, it is quite intensive. You can try increasing the decimation/interpolation if you see that the CPU is not keeping up with the load. However, you have to be careful about the frequency offset between the transmitter and the receiver, as you might get too large a misalignment when you use narrow channels. 434 MHz for one device is not necessarily 434 MHz for the other device. I use a spectrum analyzer to figure that difference out.
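As a rough back-of-the-envelope check, you can work out how narrow your subcarriers are for a given decimation, and therefore how much residual frequency offset the link can tolerate. A minimal sketch, assuming the USRP2's 100 MS/s ADC clock and the parameters from the message below (these numbers are illustrative, not a guarantee of what your demodulator will lock to):

```python
# Sketch: estimate OFDM subcarrier spacing for the parameters in this
# thread, assuming a USRP2 with a 100 MS/s ADC clock. A common rule of
# thumb is to keep residual carrier-frequency offset well below one
# subcarrier spacing, otherwise demodulation degrades quickly.

adc_rate = 100e6        # USRP2 ADC sample rate, Hz (assumption)
decim = 32              # -d 32 from the benchmark invocation below
fft_length = 128        # fft-length=128

sample_rate = adc_rate / decim                 # baseband sample rate
subcarrier_spacing = sample_rate / fft_length  # Hz per subcarrier

print("sample rate: %.3f MHz" % (sample_rate / 1e6))
print("subcarrier spacing: %.1f kHz" % (subcarrier_spacing / 1e3))
```

Doubling the decimation halves both numbers, so the CPU load drops but the same absolute oscillator offset between the two USRP2s becomes a larger fraction of a subcarrier. That is the trade-off to watch.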
Cheers,
Veljko

2010/6/16 rick rick <[email protected]>:
> Hi
>
> I have some questions and observations on an OFDM-based transceiver.
>
> I adopted the modified code provided by Veljko Pejovic (link:
> www.cs.ucsb.edu/~veljko/downloads/ofdm_example.tar.gz). After trying
> different combinations of parameters, I managed to get the benchmark to work
> with most packets received correctly. However, it turned out the CPU usage
> was very high, almost 90% for the receiver. Here are the specs: USRP2, WBX,
> Ubuntu 10.04, GNU Radio 3.3, Intel Core 2 Duo 2.66 GHz, 2 GB RAM. Can anyone
> share insight into why the OFDM receiver is so computationally intensive? (-f
> 434M, fft-length=128, occupied-tones=80, -d/-i 32, cp-length=32.)
>
> In addition, I also tried to build a new tunnel.py based on the revised OFDM
> code. I got continuous Timeouts (preamble received but not the entire data
> frame) and SSS... (missing packets), even if only one USRP was turned on.
> However, it supposedly should not receive any signals. Has anyone
> successfully built an OFDM-based tunnel?
>
> An
>
> _______________________________________________
> Discuss-gnuradio mailing list
> [email protected]
> http://lists.gnu.org/mailman/listinfo/discuss-gnuradio
