Christopher Barker wrote:
> Tim Hochberg wrote:
>
>> I've been told that operations on NaNs are slow because they aren't
>> always implemented in the FPU hardware. Instead they are trapped and
>> implemented in software or firmware or something or other.
>>
>
> which still doesn't make sense -- doesn't ANY operation with a NaN
> return NaN? how hard could that be?
>
Well, it's not free. You still have to recognize the particular bit pattern
corresponding to a QNaN and then copy it through to the result in that case.
All of that probably burns more silicon for a case that most people don't
care about. Do NaNs make Half-Life run any faster? No, I didn't think so.
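For what it's worth, here's a rough sketch (plain Python, nothing NumPy-specific)
of what "recognizing the QNaN bit pattern" amounts to for an IEEE 754 double.
The quiet/signaling split via the top fraction bit is the common convention,
not something guaranteed on every architecture:

    # Illustrative only: classify a double by its raw binary64 bit pattern.
    import struct

    def classify_double(x):
        bits = struct.unpack('<Q', struct.pack('<d', x))[0]
        exponent = (bits >> 52) & 0x7FF      # 11-bit exponent field
        fraction = bits & ((1 << 52) - 1)    # 52-bit fraction field
        if exponent != 0x7FF:
            return 'finite'                  # normal, subnormal, or zero
        if fraction == 0:
            return 'inf'                     # all-ones exponent, zero fraction
        # All-ones exponent, nonzero fraction: NaN. On most platforms the
        # top fraction bit distinguishes quiet from signaling NaNs.
        return 'qnan' if fraction >> 51 else 'snan'

    print(classify_double(1.0))           # finite
    print(classify_double(float('inf')))  # inf
    print(classify_double(float('nan')))  # qnan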
I suspect that instead they just look for the case where the exponent is all
1s, in which case the result is either +/-Inf, an SNaN, or a QNaN, and then
drop into software to handle those infrequent cases.

> I share in the disappointment that NaNs are not first-class citizens in
> common hardware....
>
It would be nice to see IEEE 754 supported more faithfully, but in practice
there seems to be more of a problem on the software side than on the hardware
side. I still can't help feeling that using NaNs for mask values would be
abuse even if it weren't slow, and that eventually it would reach around and
bite you someplace tender.

-tim
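A small sketch of why NaN makes an awkward stand-in for a mask even before
performance enters into it (ordinary NumPy assumed, nothing exotic):

    # Illustrative only: NaN-as-mask pitfalls versus a separate mask.
    import numpy as np

    a = np.array([1.0, np.nan, 3.0])

    # NaN != NaN, so == can't find masked entries; you need isnan().
    print(a == np.nan)    # [False False False]
    print(np.isnan(a))    # [False  True False]

    # NaN silently propagates through reductions instead of being skipped.
    print(a.sum())        # nan
    print(np.nansum(a))   # 4.0 -- only if you remember the nan-aware variant

    # Integer arrays can't hold NaN at all; numpy.ma keeps the mask
    # separate from the data instead.
    m = np.ma.array([1, 2, 3], mask=[False, True, False])
    print(m.sum())        # 4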