Dear all,

I've been chasing the cause of some nasty artefacts in my spectra recently. The 
first turned out to be a known bug in the Xilinx compiler, triggered when the 
BRAM_sharing optimisation in the FFT is enabled. The artefacts I've been seeing 
since, however, seem to be triggered by the RAM blocks in the delay_bram block 
being configured for "Speed" rather than the default "Area".

Below are links to two spectra, both recorded from the Lovell telescope's 
L-band receiver while the telescope was pointing at the zenith. The logic used 
in both cases is identical, except that the delay_brams are configured for 
Speed in the design that produced the first spectrum and for Area (the default) 
in the second. All operating parameters were the same in both cases.

https://dl.dropboxusercontent.com/u/38103354/figure_1.png
https://dl.dropboxusercontent.com/u/38103354/r13a_libs_18-09_2207.PNG

Aside from how ugly the first spectrum looks (there are signals in there which 
simply don't exist), even the accumulated power is different. The vertical 
scale is logarithmic, so the total power is a factor of 100 or so higher with 
the brams configured for Speed. I don't understand how the two could be so 
different, nor why configuring the brams for Speed should make any difference 
at all, let alone a difference of this magnitude.
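
To put a number on the difference, something like the quick numpy comparison 
below should do. It's only a rough sketch: it assumes the accumulated spectra 
can be dumped to disk as arrays, and the .npy filenames and the 3 dB threshold 
are just placeholders.

import numpy as np

# Accumulated spectra from the two builds (hypothetical filenames).
spec_speed = np.load('accum_speed.npy').astype(float)  # delay_brams set to Speed
spec_area = np.load('accum_area.npy').astype(float)    # delay_brams set to Area (default)

# Overall power ratio between the two builds.
print('total power ratio (Speed / Area): %.1f' % (spec_speed.sum() / spec_area.sum()))

# Per-channel ratio in dB; large excursions mark the spurious signals.
ratio_db = 10 * np.log10(spec_speed / spec_area)
bad = np.flatnonzero(np.abs(ratio_db) > 3)  # 3 dB threshold, arbitrary
print('%d of %d channels differ by more than 3 dB' % (bad.size, spec_speed.size))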

The most worrying thing for me now is that configuring the brams for Speed was 
key to getting my larger designs (16k and 32k channels) to meet timing. If I 
need to reconfigure them for Area, I'm going to have to brawl with PAR and 
PlanAhead all over again... it actually took a few goes to get even the 
4k-channel design to compile.

I'd appreciate ideas/suggestions!

Thanks
Michael