Just a quick note, really. Thinking that I ought to try to optimise the
_sbc_analyze_eight()/_sbc_analyze_four() functions, since these are called
very frequently, I rewrote them to use DSP intrinsics, dual-MACs, etc.
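
For reference, the shape of the conversion is roughly this (just a sketch,
not the real sbc.c code: the sample/coefficient names and the Q15 scaling
are assumptions on my part):

    /* Sketch only: a generic 8-tap MAC loop using the cl55 compiler's
     * built-in saturating fractional multiply-accumulate intrinsic.
     * `samples', `coeffs' and the Q15 scaling are illustrative, not
     * the actual sbc.c data layout. */
    long analyze_tap_q15(const short *samples, const short *coeffs)
    {
        long acc = 0;
        int i;

        /* _smac() maps onto a single SMAC instruction: fractional
         * multiply with saturating accumulate.  Unrolling by two gives
         * the compiler a chance to schedule the pair across both MAC
         * units. */
        for (i = 0; i < 8; i += 2) {
            acc = _smac(acc, samples[i],     coeffs[i]);
            acc = _smac(acc, samples[i + 1], coeffs[i + 1]);
        }
        return acc;
    }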

I hasten to add that my code doesn't actually work yet: I get what I
understand to be saturated output - loud white noise. So there's still
some sort of problem in there (not unexpected for the first run after
conversion). The real issue is that the rewrite has made no measurable
difference to the overall decode time.

So either my optimisations are rubbish (quite possible, but as far as I
can tell I've done what needs to be done) or the code is being held up
somewhere else. This is where the problem comes in: SPRU376A, the
TMS320C55x DSP Programmer's Guide (Rev. A), chapter 3.3, says to use the
clock() function to time individual parts of the code.
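
The pattern I'm following is roughly the one the guide gives - calibrate
the overhead of clock() itself, then subtract it (sketch only; the call
under test and the printf are placeholders):

    #include <stdio.h>
    #include <time.h>

    void time_section(void)
    {
        clock_t start, stop, overhead;

        /* Measure the cost of the clock() call itself first, so it can
         * be subtracted from the real measurement, as SPRU376A
         * suggests. */
        start = clock();
        stop = clock();
        overhead = stop - start;

        start = clock();
        /* ... call under test, e.g. _sbc_analyze_eight(), goes here ... */
        stop = clock();

        printf("cycles: %ld\n", (long)(stop - start - overhead));
    }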

I've tried this, and the function always returns the same value: each
16-bit half (high and low) of the 32-bit result prints as -1792 with %d as
a signed short (the dbg() function can only output 16-bit values), i.e.
the whole word looks like 0xF900F900.
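
(The split itself is just shift-and-mask, something like the below - dbg()
is my own print routine, and the call style shown is only illustrative:)

    #include <time.h>

    extern void dbg(const char *fmt, ...);  /* 16-bit-only print; signature assumed */

    void dump_clock(void)
    {
        clock_t t = clock();

        /* Split the 32-bit clock value into two 16-bit halves. */
        dbg("clk hi: %d", (short)(((unsigned long)t >> 16) & 0xFFFF));
        dbg("clk lo: %d", (short)((unsigned long)t & 0xFFFF));
    }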

Any clues? I'm not sure how to go about timing the code without some  
sort of clock, other than to just look at the code and guess at what  
might be optimisable.

Cheers,


Simon
