Hi all,
Looking into this once again (I've had this problem for 10 years or more
now), I just found out that gcc has a -ffast-math flag that prevents denormals
from slowing the code down, as long as the CPU has SSE instructions. I
don't know if the Geode does or not, though!
On Linux, at any rate.
According to Wikipedia, it does not.
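(A hedged aside on how that flag works, as I understand it: on x86, gcc's
-ffast-math links in a startup object that sets the flush-to-zero mode in
the SSE control register for the whole program, so it's roughly

    gcc -O2 -ffast-math -o mypatch mypatch.c

with the file names made up here. Note that -ffast-math also relaxes IEEE
semantics in other ways, so it's a blunt instrument, and it does nothing
on x87-only CPUs.)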
Yep, I remember trying the MXCSR thing once and it not working, but I
forget what processor it was on. If indeed it's equivalent to using this
gcc flag, I'm happier using the compiler flag because it keeps the code
cleaner.
OTOH, it sounds like non-SSE processors will always need code to check
for denormals explicitly.
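(For reference, the MXCSR trick usually looks something like the sketch
below. The intrinsics are the standard SSE ones from xmmintrin.h; the
function names and the threshold constant are my own, not from any
particular codebase:

    #include <xmmintrin.h>   /* SSE intrinsics */

    /* Turn on FTZ (flush-to-zero) in the MXCSR register.  This is
       per-thread and only affects SSE arithmetic -- x87 code keeps
       producing denormals, which may explain "tried it and it didn't
       work" on builds that weren't compiled for SSE math. */
    static void enable_ftz(void)
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        /* SSE2-and-later CPUs also have a DAZ (denormals-are-zero)
           bit; setting it on a plain-SSE CPU faults, so it's omitted
           here. */
    }

    /* Portable fallback for non-SSE CPUs: squash tiny values by hand
       in each feedback path (threshold is arbitrary but far below
       anything audible). */
    static float undenormalize(float x)
    {
        return (x < 1e-15f && x > -1e-15f) ? 0.0f : x;
    }

The fallback has to be called inside the DSP loop, which is the "code to
check" overhead mentioned above.)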
On Fri, Oct 31, 2008 at 04:27:08PM -0400, Bill Gribble wrote:
I have a patch of medium complexity, with a handful of instruments~ and
a bunch of sequencing and arranging-type message handling. On my speedy
Intel laptop it has no problem and barely notches the CPU usage.
However, when I run this patch on my teeny Geode-based UMPC it pegs CPU
at 100%.
On Sun, 2008-11-02 at 22:07, [EMAIL PROTECTED] wrote:
pardon, but what does the word 'denormals' mean? ..never heard it
http://en.wikipedia.org/wiki/Denormal
roman
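(To make that concrete: denormals are the floating-point values squeezed
in between the smallest normal number and zero. A tiny illustration in C,
not from the thread:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        float smallest_normal = FLT_MIN;   /* ~1.18e-38 for float */
        float denormal = FLT_MIN / 16.0f;  /* below FLT_MIN: stored
                                              as a denormal */
        printf("normal: %g  denormal: %g\n", smallest_normal, denormal);
        return 0;
    }

Many CPUs handle arithmetic on such values in microcode or software
traps, which is why a patch whose signals decay into that range can
suddenly eat the CPU.)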
A useful debug trick, to make sure denormals are the problem,
is to inject a wee bit of noise into the path and see if it
speeds up. My experience of them in the past is that they
take a while to show up, maybe many seconds or even minutes
after the patch seems silent (rather than being present from
the start).
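(For what it's worth, the noise-injection test can be as simple as mixing
an inaudibly small offset into the feedback path. A rough sketch of the
idea in C, with made-up names and an arbitrary constant:

    #include <stddef.h>

    /* Anti-denormal test: add a tiny alternating offset to a feedback
       path.  At ~1e-18 (roughly -360 dBFS) it is far below audibility
       but keeps the state out of the denormal range (< ~1.18e-38 for
       float).  If CPU use drops once this is in, denormals were the
       culprit. */
    void feedback_process(float *buf, size_t n, float *state, float fb)
    {
        static float offset = 1e-18f;
        for (size_t i = 0; i < n; i++) {
            offset = -offset;   /* alternate sign to avoid DC buildup */
            *state = buf[i] + fb * (*state) + offset;
            buf[i] = *state;
        }
    }

The alternating sign keeps the injected signal from accumulating as DC
in long feedback chains.)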