Changes by Eric Snow ericsnowcurren...@gmail.com:
--
nosy: +eric.snow
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue1580
___
___
Python-bugs-list
Ole Laursen o...@iola.dk added the comment:
Just came across this bug. I don't want to reopen this or anything, but
regarding the SSE2 code, I couldn't help wondering: why can't you just detect
the presence of SSE2 when the interpreter starts up and then switch
implementations based on
Mark Dickinson dicki...@gmail.com added the comment:
[Raymond]
Is there a way to use SSE when available and x86 when it's not.
I guess it's possible in theory, but I don't know of any way to do this in
practice. I suppose one could trap the SIGILL generated by the attempted
use of an SSE2
Antoine Pitrou pit...@free.fr added the comment:
Hello folks,
IIUC, autoconf tries to enable SSE2 by default without asking. Isn't it
a problem for people distributing Python binaries (e.g. Linux vendors)
and expecting these binaries to work on legacy systems even though the
system on which the
Mark Dickinson dicki...@gmail.com added the comment:
Yes, I think you're right.
Perhaps the SSE2 support should be turned into an --enable-sse2 configure
option, that's disabled by default? One problem with this is that I don't
know how to enable SSE2 instructions for compilers other than
Mark Dickinson dicki...@gmail.com added the comment:
Perhaps better to drop the SSE2 bits completely. Anybody who
actually wants SSE2 instructions in their binary can do a
CC="gcc -msse2 -mfpmath=sse" ./configure ...
Unless there are objections, I'll drop everything involving SSE2 from
the
Mark Dickinson dicki...@gmail.com added the comment:
SSE2 detection and flags removed in r71723. We'll see how the buildbots
fare...
Antoine Pitrou pit...@free.fr added the comment:
Is there a way to use SSE when available and x86 when it's not.
Probably, but I don't think there is any point doing so. The main
benefit of SSE2 is to get higher performance on floating point intensive
code, which no pure Python code could be
Raymond Hettinger rhettin...@users.sourceforge.net added the comment:
The advantage is accuracy. No double rounding. This will also help the
math.fsum() function that is also susceptible to double rounding.
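Raymond's accuracy point is easy to see with `math.fsum`, which tracks partial-sum error terms and returns a correctly rounded total where naive summation accumulates one rounding error per addition (a quick check on any IEEE 754 build):

```python
import math

xs = [0.1] * 10

naive = sum(xs)        # one rounding error per addition
exact = math.fsum(xs)  # tracks error terms; correctly rounded total

assert naive != 1.0
assert exact == 1.0
```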
Mark Dickinson dicki...@gmail.com added the comment:
Closing this. There are a few known problems remaining, but they've all
got their own issue numbers: see issue 5780, issue 4482.
--
resolution: - accepted
stage: - committed/rejected
status: open - closed
Mark Dickinson dicki...@gmail.com added the comment:
The py3k-short-float-repr branch has been merged to py3k in two parts:
r71663 is mostly concerned with the inclusion of David Gay's code into the
core, and the necessary floating-point fixups to allow Gay's code to be
used (SSE2 detection,
Eric Smith e...@trueblade.com added the comment:
My changes on the py3k-short-float-repr branch include:
- Create a new function PyOS_double_to_string. This will replace
PyOS_ascii_formatd. All existing internal uses of PyOS_ascii_formatd
follow this pattern: printf into a buffer to build up
Mark Dickinson dicki...@gmail.com added the comment:
Changing target Python versions.
I'll upload a patchset to Rietveld sometime soon (later today, I hope).
--
versions: +Python 3.1 -Python 2.6, Python 3.0
Mark Dickinson dicki...@gmail.com added the comment:
I've uploaded the current version to Rietveld:
http://codereview.appspot.com/33084/show
Mark Dickinson dicki...@gmail.com added the comment:
The Rietveld patch set doesn't show the three new files, which are:
Python/dtoa.c
Include/dtoa.h
Lib/test/formatfloat_testcases.txt
Mark Dickinson dicki...@gmail.com added the comment:
So work on the py3k-short-float-repr branch is nearing completion, and
we (Eric and I) would like to get approval for merging these changes
into the py3k branch before this month's beta.
A proposal: I propose that the short float
Mark Dickinson dicki...@gmail.com added the comment:
Those three missing files have now been added to Rietveld.
Just for reference, in case anyone else encounters this: the reason those
files were missing from the initial upload was that after I svn merge'd
from py3k-short-float-repr to py3k,
Guido van Rossum gu...@python.org added the comment:
On Tue, Apr 7, 2009 at 3:10 AM, Mark Dickinson rep...@bugs.python.org wrote:
A proposal: I propose that the short float representation should be
considered an implementation detail for CPython, not a requirement for
Python the language.
Mark Dickinson dicki...@gmail.com added the comment:
Historically, we've had a stronger requirement: if you print repr(x)
and ship that string to a different machine, float() of that string
returns the same value, assuming both systems use the same internal FP
representation (e.g. IEEE).
Jared Grubb pyt...@jaredgrubb.com added the comment:
I think ANY attempt to rely on eval(repr(x))==x is asking for trouble,
and it should probably be removed from the docs.
Example: The following C code can vary *even* on an IEEE 754 platform,
even in two places in the same source file (so same
Mark Dickinson dicki...@gmail.com added the comment:
I think ANY attempt to rely on eval(repr(x))==x is asking for trouble,
and it should probably be removed from the docs.
I disagree. I've read the paper you refer to; nevertheless, it's still
perfectly possible to guarantee eval(repr(x))
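Mark's position is checkable by brute force on one machine: with correctly rounded conversions in both directions (what CPython provides today), repr round-trips for arbitrary finite doubles. A small sketch using random bit patterns:

```python
import math
import random
import struct

random.seed(0)
for _ in range(100_000):
    # Reinterpret 64 random bits as an IEEE 754 double.
    x, = struct.unpack('<d', struct.pack('<Q', random.getrandbits(64)))
    if math.isnan(x):
        continue  # nan != nan, so equality is the wrong test here
    assert float(repr(x)) == x
print('round-trip held for all sampled doubles')
```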
Jared Grubb pyt...@jaredgrubb.com added the comment:
The process that you describe in msg85741 is a way of ensuring
memcmp(x, y, sizeof(x))==0, and it's portable and safe and is the
Right Thing that we all want and expect. But that's not x==y, as that
Sun paper explains. It's close, but not
Changes by Mark Dickinson dicki...@gmail.com:
--
nosy: +eric.smith
Mark Dickinson dicki...@gmail.com added the comment:
Eric and I have set up a branch of py3k for work on this issue. URL for
(read-only) checkout is:
http://svn.python.org/projects/python/branches/py3k-short-float-repr
Mark Dickinson dicki...@gmail.com added the comment:
Would it be acceptable to use shorter float repr only on big-endian and
little-endian IEEE 754 platforms, and use the full 17-digit repr on other
platforms? This would greatly simplify the adaptation and testing of
Gay's code.
Notable
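CPython can make the platform split Mark describes at runtime: `float.__getformat__('double')` (a CPython-specific introspection hook) reports whether C doubles are big- or little-endian IEEE 754, or unknown.

```python
# CPython-specific: returns 'IEEE, little-endian', 'IEEE, big-endian',
# or 'unknown' on non-IEEE / undetected platforms.
fmt = float.__getformat__('double')

# The proposed policy: short repr on IEEE platforms, 17-digit fallback elsewhere.
use_short_repr = fmt.startswith('IEEE')
print(fmt, '->', 'short repr' if use_short_repr else '17-digit fallback')
```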
Guido van Rossum gu...@python.org added the comment:
Sounds good to me.
Raymond Hettinger rhettin...@users.sourceforge.net added the comment:
+1 on the fallback strategy for platforms we don't know how to handle.
Noam Raphael noamr...@gmail.com added the comment:
Do you mean msg58966?
I'm sorry, I still don't understand what the problem is with returning
f_15(x) if eval(f_15(x)) == x, and otherwise returning f_17(x). You said
(msg69232) that you don't care if float(repr(x)) == x isn't
cross-platform.
Guido van Rossum gu...@python.org added the comment:
I changed my mind on the cross-platform requirement.
Noam Raphael noamr...@gmail.com added the comment:
I'm sorry, but it seems to me that the conclusion of the discussion in
2008 is that the algorithm should simply use the system's
binary-to-decimal routine, and if the result is like 123.456, round it
to 15 digits after the 0, check if the result
Guido van Rossum gu...@python.org added the comment:
I tried that, and it was more subtle than that in corner cases.
Another argument against it is that on Windows the system input routine
doesn't correctly round unless 17 digits of precision are given. One of
Tim Peters's responses should
Mark Dickinson dicki...@gmail.com added the comment:
I'd be interested in working with Preston on adapting David Gay's code.
(I'm interested in looking at this anyway, but I'd much prefer to do it
in collaboration with someone else.)
It would be nice to get something working before the 3.1
Changes by Skip Montanaro s...@pobox.com:
--
nosy: -skip.montanaro
Mark Dickinson dicki...@gmail.com added the comment:
The GNU library's float<->string routines are based on David Gay's.
Therefore you can compare those to Gay's originals
Sounds reasonable.
(which accounts for the extreme length and complexity of Gay's code).
Looking at the code, I'm
Tim Peters tim.pet...@gmail.com added the comment:
Mark, extreme complexity is relative to what's possible if you don't
care about speed; e.g., if you use only bigint operations very
straightforwardly, correct rounding amounts to a dozen lines of
obviously correct Python code.
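Tim's observation can be made concrete: exact rational arithmetic (Python's bignum-backed `Fraction`) is enough to *verify* correct rounding in a few lines, even though a fast implementation like Gay's is far more involved. A sketch (the function name is mine; `math.nextafter` needs Python 3.9+):

```python
import math
from fractions import Fraction

def is_correctly_rounded(s):
    """True if float(s) is the double closest to the exact decimal value
    of s, judged using only exact bigint/rational arithmetic."""
    exact = Fraction(s)                # exact value of the decimal string
    f = float(s)                       # the conversion under test
    lo = math.nextafter(f, -math.inf)  # adjacent doubles
    hi = math.nextafter(f, math.inf)
    err = abs(Fraction(f) - exact)
    return err <= abs(Fraction(lo) - exact) and err <= abs(Fraction(hi) - exact)

print(all(is_correctly_rounded(s) for s in ('0.07', '0.56', '1.1', '0.1')))
```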
Mark Dickinson dicki...@gmail.com added the comment:
So is it worth trying to come up with a patch for this? (Where this =
making David Gay's code for strtod and dtoa usable from Python.)
Tim Peters tim.pet...@gmail.com added the comment:
Is it worth it? To whom ;-) ? It was discussed several times before on
various Python mailing lists, and nobody was willing to sign up for the
considerable effort required (both to update Gay's code and to fight
with shifting platform quirks
Guido van Rossum gu...@python.org added the comment:
On Thu, Feb 26, 2009 at 1:01 PM, Tim Peters rep...@bugs.python.org wrote:
Is it worth it? To whom ;-) ? It was discussed several times before on
various Python mailing lists, and nobody was willing to sign up for the
considerable effort
Tim Peters tim.pet...@gmail.com added the comment:
Huh. I didn't see Preston volunteer to do anything here ;-)
One bit of software engineering for whoever does sign on: nothing kills
porting a language to a new platform faster than needing to get an
obscure but core subsystem working. So
Preston Briggs prest...@google.com added the comment:
This all started with email to Guido that y'all didn't see,
wherein I wondered if Python was interested in such a thing.
Guido said: Sure, in principle, please see the discussion associated
with this change.
I probably don't have all the
Preston Briggs prest...@google.com added the comment:
In all this discussion, it seems that we have not discussed the
possibility of adapting David Gay's code, dtoa.c, which nicely handles
both halves of the problem. It's also free and has been well exercised
over the years.
It's available
Mark Dickinson dicki...@gmail.com added the comment:
It'd probably have to be touched up a bit.
This may be an understatement. :-)
In the first 50 lines of the 3897-line dtoa.c file, I see this warning:
/* On a machine with IEEE extended-precision registers, it is
* necessary to specify
Preston Briggs prest...@google.com added the comment:
It'd probably have to be touched up a bit.
This may be an understatement. :-)
Probably so. Nevertheless, it's got to be easier
than approaching the problem from scratch.
And considering that this discussion has been
going on for over a
Mark Dickinson dicki...@gmail.com added the comment:
I would consider compiling the library with flags appropriate to forcing
64-bit IEEE arithmetic if possible.
Using the right compiler flags is only half the battle, though. You
should really be setting the rounding precision dynamically:
Raymond Hettinger rhettin...@users.sourceforge.net added the comment:
Even if someone devoted the time to possibly get this right, it
would be somewhat difficult to maintain.
Guido van Rossum gu...@python.org added the comment:
What maintenance issues are you anticipating?
Raymond Hettinger rhettin...@users.sourceforge.net added the comment:
Gay's code is 3800+ lines and includes many ifdef paths that we need to
get right. Mark points out that the code itself needs additional work.
The discussions so far also get into setting compiler flags on
different systems
Tim Peters tim.pet...@gmail.com added the comment:
The GNU library's float<->string routines are based on David Gay's.
Therefore you can compare those to Gay's originals to see how much
effort was required to make them mostly portable, and can look at the
history of those to get some feel for the
Mark Dickinson [EMAIL PROTECTED] added the comment:
Mildly off-topic: it seems that currently eval(repr(x)) == x isn't
always true, anyway. On OS X 10.5.4/Intel, I get:
>>> x = (2**52-1)*2.**(-1074)
>>> x
2.2250738585072009e-308
>>> y = eval(repr(x))
>>> y
2.2250738585072014e-308
>>> x == y
False
This is
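For reference, the value in Mark's session is the largest subnormal double; his platform's input routine was rounding it incorrectly at the 17th digit. With correctly rounded string<->float conversions the round-trip must hold, which can be checked on current CPython:

```python
x = (2**52 - 1) * 2.0**-1074  # largest subnormal double, exactly representable
s = repr(x)
# Failed on OS X 10.5 / Python 2.5; succeeds with correctly rounded
# conversions in both directions.
assert float(s) == x
print(s)
```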
Tim Peters [EMAIL PROTECTED] added the comment:
About (2**52-1)*2.**(-1074): same outcome under Cygwin 2.5.1, which is
presumably based on David Gay's perfect rounding code. Cool ;-)
Under the native Windows 2.5.1:
>>> x = (2**52-1)*2.**(-1074)
>>> x
2.2250738585072009e-308
>>> y = eval(repr(x))
>>> y
Mark Dickinson [EMAIL PROTECTED] added the comment:
[Tim]
If you think using 16 (when possible) will stop complaints, think again
;-) For example, ...
Aha! But using *15* digits would be enough to eliminate all 1, 2, 3, 4,
..., 15 digit 'surprises', wouldn't it?! 16 digits doesn't quite
Mark Dickinson [EMAIL PROTECTED] added the comment:
Here's the 'proof' that 15 digits should be enough:
Suppose that x is a positive (for simplicity) real number that's exactly
representable as a decimal with <= 15 digits. We'd like to know that
'%.15g' % (nearest_float_to_x) recovers x.
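The claim is easy to spot-check: any decimal literal with at most 15 significant digits survives a decimal -> double -> decimal round-trip through %.15g.

```python
# Each string has <= 15 significant digits; '%.15g' of the nearest
# double must recover it exactly.
for s in ('0.07', '0.56', '1.1', '3.3', '123.456', '1.23456789012345'):
    assert '%.15g' % float(s) == s
print('15-digit round-trip held')
```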
Mark Dickinson [EMAIL PROTECTED] added the comment:
For what it's worth, I'm -0.1 (or should that be -0.10000000000000001?) on
this change. It seems better to leave the problems caused by binary
floating-point out in the open than try to partially hide them, and the
proposed change just
Guido van Rossum [EMAIL PROTECTED] added the comment:
Here's a fixed patch, float2.diff. (The previous one tasted of an
earlier attempt.)
Added file: http://bugs.python.org/file10840/float2.diff
Guido van Rossum [EMAIL PROTECTED] added the comment:
I'd like to reopen this. I'm still in favor of something like this
algorithm:
def float_repr(x):
    s = "%.16g" % x
    if float(s) != x:
        s = "%.17g" % x
    s1 = s
    if s1.startswith('-'):
        s1 = s[1:]
    if s1.isdigit():
        s += '.0' #
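The message is cut off mid-comment; a runnable version of the same 16-then-17-digit idea is below (the handling after the truncated `#` — appending '.0' so integral floats still look like floats — is a reconstruction, not Guido's verbatim code):

```python
def float_repr(x):
    # Prefer 16 significant digits; fall back to 17 only when 16
    # do not round-trip.
    s = "%.16g" % x
    if float(s) != x:
        s = "%.17g" % x
    # '10' -> '10.0': keep float reprs visually distinct from ints.
    t = s[1:] if s.startswith('-') else s
    if t.isdigit():
        s += '.0'
    return s

print(float_repr(10.0), float_repr(0.1), float_repr(-3.0))
```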
Tim Peters [EMAIL PROTECTED] added the comment:
If you think using 16 (when possible) will stop complaints, think again
;-) For example,
>>> for x in 0.07, 0.56:
...     putatively_improved_repr = "%.16g" % x
...     assert float(putatively_improved_repr) == x
...     print putatively_improved_repr
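Tim's session in Python 3 syntax, with the outputs that prompted Guido's reply: the 16-digit strings do round-trip, they are just no prettier than 17-digit ones.

```python
for x in (0.07, 0.56):
    s = '%.16g' % x
    assert float(s) == x  # round-trips fine...
    print(s)              # ...but still ugly, e.g. 0.07000000000000001
```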
Guido van Rossum [EMAIL PROTECTED] added the comment:
That is truly maddening! :-(
I guess Noam's proposal to return str(x) if float(str(x)) == x makes
more sense then. I don't really care as much about 1.234567890123 vs.
1.234567890122 as I care about 1.2345 vs. 1.2344.
(This
Changes by Mark Dickinson [EMAIL PROTECTED]:
--
nosy: +marketdickinson
Changes by Alexandre Vassalotti [EMAIL PROTECTED]:
--
nosy: +alexandre.vassalotti
Amaury Forgeot d'Arc added the comment:
If someone has a more recent version of MS's compiler,
I'd be interested to know what this does:
Visual Studio 2008 Express Edition gives the same results:
['1024', '1024', '1024', '1024', '1024', '1024', '1024.0001']
(Tested release and debug
Tim Peters added the comment:
If someone has a more recent version of MS's compiler, I'd be interested
to know what this does:
inc = 2.0**-43
base = 1024.0
xs = ([base + i*inc for i in range(-4, 0)] +
      [base] +
      [base + 2*i*inc for i in (1, 2)])
print xs
print ["%.16g" % x for x in xs]
Noam Raphael added the comment:
I think that we can give up float(repr(x)) == x across different
platforms, since we don't guarantee something more basic: We don't
guarantee that the same program doing only floating point operations
will produce the same results across different 754 platforms,
Christian Heimes added the comment:
Tim Peters wrote:
This has nothing to do with what will or won't satisfy me, either. I'm
happy with what Python currently does, which is to rely on #3 above.
That's explainable (what's hard about understanding %.17g?), and
relies only on what the 754
Raymond Hettinger added the comment:
ISTM shorter reprs are inherently misleading and will make it harder
to diagnose why 1.1 * 3 != 3.3 or why round(1.0 % 0.1, 1) == 0.1,
which is *very* far off what you might expect.
The 17 digit representation is useful in that it suggests where
Noam Raphael added the comment:
2007/12/18, Raymond Hettinger [EMAIL PROTECTED]:
The 17 digit representation is useful in that it suggests where the
problem lies. In contrast, showing two numbers with reprs of different
lengths will strongly suggest that the shorter one is exactly
Raymond Hettinger added the comment:
Right, there are plenty of exceptions to the suggestion of exactness.
Still, I find the current behavior to be more helpful than not
(especially when trying to explain the examples I gave in the previous
post).
I'm concerned that the tone of the recent
Noam Raphael added the comment:
About the educational problem. If someone is puzzled by 1.1*3 !=
3.3, you could always use '%.50f' % 1.1 instead of repr(1.1). I don't
think that trying to teach people that floating points don't always do
what they expect them to do is a good reason to print
Guido van Rossum added the comment:
[Tim: when I said bugs I just meant non-correct rounding. Sorry.]
On the educational issue: it's still embarrassingly easy to run into
situations where *arithmetic* using floats produces educational
results. Simplest case I could find quickly: 0.1+0.2 !=
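The truncated example is presumably the classic one, which stays surprising under any repr policy:

```python
a = 0.1 + 0.2
# Each literal is rounded to the nearest double, and the addition rounds
# again, so the sum is not the double nearest to 0.3.
assert a != 0.3
print(repr(a))  # '0.30000000000000004' with shortest-repr conversions
```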
Skip Montanaro added the comment:
Guido> ... trying to explain why two numbers both print the same but
Guido> compare unequal ...
This is not a Python-specific issue. The notion of limited precision was
pounded into our heads in the numerical analysis class I took in college,
1980-ish. I'm
Tim Peters added the comment:
Guido, right, for that to work reliably, double->str and str->double
must both round correctly on the platform doing the repr(), and
str->double must round correctly on the platform reading the string.
It's quite easy to understand why at a high level: a simple (but
Noam Raphael added the comment:
Ok, I think I have a solution!
We don't really need always the shortest decimal representation. We just
want that for most floats which have a nice decimal representation, that
representation will be used.
Why not do something like that:
def newrepr(f):
r
Guido van Rossum added the comment:
This is what I was thinking of before, although I'd use "%.16g" % f and
"%.17g" % f instead of str(f) and repr(f), and I'd use float() instead
of eval().
I suspect that it doesn't satisfy Tim Peters though, because this may
depend on a rounding bug in the local
Tim Peters added the comment:
It's not a question of bugs. Call the machine writing the string W and
the machine reading the string R. Then there are 4 ways R can get back
the double W started with when using the suggested algorithm:
1. W and R are the same machine. This is the way that's
Noam Raphael added the comment:
2007/12/13, Guido van Rossum [EMAIL PROTECTED]:
Ok, so if I understand correctly, the ideal thing would be to
implement decimal to binary conversion by ourselves. This would make
str -> float conversion do the same thing on all platforms, and would
make
Noam Raphael added the comment:
Ok, so if I understand correctly, the ideal thing would be to
implement decimal to binary conversion by ourselves. This would make
str -> float conversion do the same thing on all platforms, and would
make repr(1.1)=='1.1'. This would also allow us to define
Guido van Rossum added the comment:
Ok, so if I understand correctly, the ideal thing would be to
implement decimal to binary conversion by ourselves. This would make
str -> float conversion do the same thing on all platforms, and would
make repr(1.1)=='1.1'. This would also allow us to
Noam Raphael added the comment:
The Tcl code can be found here:
http://tcl.cvs.sourceforge.net/tcl/tcl/generic/tclStrToD.c?view=markup
What Tim says gives another reason for using that code - it means that
currently, the compilation of the same source code on two platforms can
result in a code
Christian Heimes added the comment:
It's really a shame. It was a nice idea ...
Could we at least use the new formatting for str(float) and the display
of floats? In Python 2.6 floats are not displayed with repr(). They seem
to use yet another hook.
repr(11./5)
'2.2'
11./5
2.2000000000000002
Noam Raphael added the comment:
I think that for str(), the current method is better - using the new
repr() method will make str(1.1*3) == '3.3000000000000003', instead of
'3.3'. (The repr is right - you can check, and 1.1*3 != 3.3. But for
str() purposes it's fine.)
But I actually think that
Noam Raphael added the comment:
If I think about it some more, why not get rid of all the float
platform-dependencies and define how +inf, -inf and nan behave?
I think that it means:
* inf and -inf are legitimate floats just like any other float.
Perhaps there should be a builtin Inf, or at
Noam Raphael added the comment:
That's right, but the standard also defines that 0.0/0 -> nan, and
1.0/0 -> inf, but instead we raise an exception. It's just that in
Python, every object is expected to be equal to itself. Otherwise, how
can I check if a number is nan?
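The semantics that eventually stuck are the IEEE ones, plus the identity fast path in containers that comes up later in the thread; `math.isnan` is the supported nan test:

```python
import math

nan = float('nan')

assert nan != nan       # IEEE 754: nan is unequal to everything, itself included
assert math.isnan(nan)  # the reliable way to test for nan

# Sequence membership short-circuits on identity, so the very same
# nan object is still "in" a list:
assert nan in [nan]
```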
Christian Heimes added the comment:
Noam Raphael wrote:
* nan is an object of type float, which behaves like None, that is:
nan == nan is true, but nan < nan and nan < 3 will raise an
exception.
No, that's not correct. The standard defines that nan is always unequal
to nan.
False
float(inf)
Christian Heimes added the comment:
I propose that we add three singletons to the float implementation:
PyFloat_NaN
PyFloat_Inf
PyFloat_NegInf
The singletons are returned from PyFloat_FromString() for nan, inf
and -inf. The other PyFloat_* methods must return the singletons, too.
It's easy to
Guido van Rossum added the comment:
(1) Despite Tim's grave language, I don't think we'll need to write our
own correctly-rounding float input routine. We can just say that Python
won't work correctly unless your float input routine is rounding
correctly; a unittest should detect whether this
Christian Heimes added the comment:
Guido van Rossum wrote:
(1a) Perhaps it's better to only do this for Python 3.0, which has a
smaller set of platforms to support.
+1
Does Python depend on a working, valid and non-broken IEEE 754 floating
point arithmetic? Could we state that Python's float
Guido van Rossum added the comment:
(1a) Perhaps it's better to only do this for Python 3.0, which has a
smaller set of platforms to support.
+1
Does Python depend on a working, valid and non-broken IEEE 754 floating
point arithmetic? Could we state that Python's float type depends on
Raymond Hettinger added the comment:
Of course the latter isn't guaranteed to help for
non-IEEE-754 platforms -- some platforms don't have
NaNs at all!
ISTM that years of toying with Infs and NaNs has not
yielded a portable, workable solution. I'm concerned
that further efforts will
Tim Peters added the comment:
[Raymond]
...
NaNs in particular are a really
difficult case because our equality testing routines
have a fast path where identity implies equality.
Works as intended in 2.5; this is Windows output:
1.#INF
nan = inf - inf
nan # really is a NaN
-1.#IND
nan
Tim Peters added the comment:
[Guido]
... We can just say that Python
won't work correctly unless your float input routine is rounding
correctly; a unittest should detect whether this is the case.
Sorry, but that's intractable. Correct rounding is a property that
needs to be proved, not
Christian Heimes added the comment:
Guido van Rossum wrote:
No, traditionally Python has just used whatever C's double provides.
There are some places that benefit from IEEE 754, but few that require
it (dunno about optional extension modules).
I asked Thomas Wouters about IEEE 754:
I
Guido van Rossum added the comment:
Do you know of any system that supports Python and floats but doesn't
have IEEE 753 semantics?
(Assuming you meant 754.)
I'm pretty sure the VAX doesn't have IEEE FP, and it used to run Unix
and Python. Ditto for Crays -- unsure if we still support that
Guido van Rossum added the comment:
Correct rounding is a property that needs to be proved, not tested.
I take it your position is that this can never be done 100% correctly so
it shouldn't go in? That's disappointing, because the stream of
complaints that round is broken won't stop (we had
Tim Peters added the comment:
[Guido]
I take it your position is that this can never be done 100% correctly
No. David Gay's code is believed to be 100% correctly-rounded and is
also reasonably fast in most cases. I don't know of any other open
string<->float code that achieves both (I expect
Guido van Rossum added the comment:
I'd be willing to require eval(repr(x)) == x only for platforms whose
float input routine is correctly rounding. That would make the current
patch acceptable I believe -- but I believe you think there's a better
way in that case too? What way is that?
Also,
Noam Raphael added the comment:
If I understand correctly, there are two main concerns: speed and
portability. I think that they are both not that terrible.
How about this:
* For IEEE-754 hardware, we implement decimal/binary conversions, and
define the exact behaviour of floats.
* For
Guido van Rossum added the comment:
Sounds okay, except that I think that for some folks (e.g. numeric
Python users) I/O speed *does* matter, as their matrices are very
large, and their disks and networks are very fast.
Noam Raphael added the comment:
If I were in that situation I would prefer to store the binary
representation. But if someone really needs to store decimal floats,
we can add a method fast_repr which always calculates 17 decimal
digits.
Decimal to binary conversion, in any case, shouldn't be
Guido van Rossum added the comment:
If I were in that situation I would prefer to store the binary
representation. But if someone really needs to store decimal floats,
we can add a method fast_repr which always calculates 17 decimal
digits.
They can just use "%.17g" % x
Decimal to binary
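Guido's suggestion is the portable escape hatch: 17 significant digits always identify an IEEE 754 double uniquely, so "%.17g" % x round-trips regardless of the repr policy. A quick check:

```python
# 17 significant decimal digits are always enough to recover a double.
for x in (0.1, 1.1 * 3, 2.0**-1074, 1e308):
    assert float('%.17g' % x) == x
print('17-digit formatting round-tripped all samples')
```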
Guido van Rossum added the comment:
I've tracked my problem to the GCC optimizer. The default optimizer
setting is -O3. When I edit the Makefile to change this to -O1 or -O0
and recompile (only) doubledigits.c, repr(1e5) starts returning
'100000.0' again. -O2 behaves the same as -O3.
Now, don't
Guido van Rossum added the comment:
I like this; but I don't have time for a complete, thorough review.
Maybe Tim can lend a hand?
If Tim has no time, I propose that if it works correctly without leaks
on at least Windows, OSX and Linux, we check it in, and worry about more
review later.