Okay, so this was a dumb question ;-) It all comes down to the IEEE 754 representation and how printf renders it on any given platform. The compiler does the math correctly; the MSVC runtime just prints a quiet NaN differently from gcc. I changed the code that consumes the values (Cricket) to recognise "-1.#IND" as NaN, and the problem went away.
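For reference, here is a minimal sketch in C (not the actual Cricket/Perl change) of the kind of normalisation the consuming side can do when it reads values back as strings. The function name parse_rrd_value() and the exact set of strings handled are my own assumptions for illustration, not anything taken from rrdtool:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sketch only: map the MSVC runtime's textual NaN/infinity forms
 * ("-1.#IND", "1.#QNAN", "1.#INF", "-1.#INF") onto proper double
 * values before doing arithmetic on them. glibc prints "nan"/"inf"
 * instead, so those spellings are accepted too.
 */
static double parse_rrd_value(const char *s)
{
    if (strstr(s, "#IND") || strstr(s, "#QNAN") ||
        strncmp(s, "nan", 3) == 0 || strncmp(s, "-nan", 4) == 0)
        return NAN;                      /* quiet NaN */
    if (strcmp(s, "1.#INF") == 0 || strncmp(s, "inf", 3) == 0)
        return HUGE_VAL;                 /* +infinity */
    if (strcmp(s, "-1.#INF") == 0 || strncmp(s, "-inf", 4) == 0)
        return -HUGE_VAL;                /* -infinity */
    return strtod(s, NULL);              /* ordinary number */
}

int main(void)
{
    const char *samples[] = { "-1.#IND", "1.#INF", "42.5", "nan" };
    for (size_t i = 0; i < sizeof samples / sizeof *samples; i++) {
        double v = parse_rrd_value(samples[i]);
        printf("%-10s -> %s\n", samples[i], isnan(v) ? "NaN" : "number/inf");
    }
    return 0;
}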
----- Original Message -----
From: "Eastbarn" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Monday, July 19, 2004 5:08 PM
Subject: [rrd-users] NaN representation on win32

All, hope someone can help me with a win32 newbie question. Alternatively, send me across to the developer mailing list if I'm in the wrong place ;-)

I've compiled a snapshot version of 1.1 on win32 using VC6 (w2k, ActiveState Perl 5.8). The representation I'm seeing for NaN is rather different from my previous experience on Solaris/Linux: the Perl shared bindings return "-1.#IND" (the printf representation of a quiet NaN) rather than the usual "nan". This is causing me a few problems further down the line. Can anyone point me in the right direction - math never was my best subject.

Thanks in advance .... Jonathan

--
Unsubscribe mailto:[EMAIL PROTECTED]
Help mailto:[EMAIL PROTECTED]
Archive http://www.ee.ethz.ch/~slist/rrd-users
WebAdmin http://www.ee.ethz.ch/~slist/lsg2.cgi
