>
> Assuming you're talking about simple binary, I think your output
> should be a tad bigger.  I get 21,987.
>
My mistake, I guess "double" is different from decimal.  Sorry.

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg



Yep, in retrospect I probably can't say anything like that without
qualifying it: a double-precision floating-point number, in C#, on Windows XP.....

I'd say this is one of the few cases where basic C would be the best fit.
C# (like Java) won't let you reinterpret-cast, and Perl requires all this
complication, whereas since you're really on your own hardware either way,
you could just read the value in binary as a void * and cast it to whatever
precision type you want. I ran into this same problem after writing the code
for a protocol on a microcontroller and then writing the testbench/GUI in C#:
you can't just *cast* a value the way you want to, even though this is the one
case where you really want none of that safety. Of course, then you need a
bunch of if/then statements to check the values beforehand to make sure you're
not doing an illegal cast, and we're back to where we started. Ahh....gcc.....


-T

p.s. a "decimal" type actually exists in some places as a true base-10
representation, e.g. in COBOL-based accounting software, where you can't just
round away all those decimal-place inaccuracies created by binary
floating-point representation....


--
-Thomas Gal
http://www.enigmatecha.com/thomasgal.html
