On Apr 07, 2009 at 02:10AM, Steven D'Aprano st...@pearwood.info wrote:
On the other hand, I'm with Guido when he wrote that "it is certainly not
right to choose speed over correctness". This is especially a problem
for floating-point optimizations, and I urge Cesare to be conservative
in any f.p. optimizations.
On 07/04/2009, at 7:27 AM, Guido van Rossum wrote:
On Mon, Apr 6, 2009 at 7:28 AM, Cesare Di Mauro
cesare.dima...@a-tono.com wrote:
The Language Reference says nothing about the effects of code
optimizations. I think it's a very good thing, because we can do some
work here with constant folding.
2009/4/7 Cesare Di Mauro cesare.dima...@a-tono.com:
The principle I followed in doing constant folding was: produce exactly
what Python would produce with constant folding disabled.
So if Python generates
LOAD_CONST 1
LOAD_CONST 2
BINARY_ADD
the constant folding code will simply replace them with a single
LOAD_CONST 3
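Cesare's before/after can be inspected directly with the dis module. A minimal sketch, assuming a recent CPython whose compiler already performs this folding (the exact bytecode names vary between versions):

```python
import dis

# Compile "1 + 2" and look at the resulting code object. On a recent
# CPython the compiler folds the expression, so the constant 3 appears
# in co_consts and the bytecode is a single constant load plus return.
code = compile("1 + 2", "<expr>", "eval")
print(code.co_consts)   # the folded constant 3 is stored here
dis.dis(code)           # no BINARY_ADD remains
```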
Cesare> The only difference at this time concerns invalid operations,
Cesare> which will raise exceptions at compile time, not at run
Cesare> time.
Cesare> So if you write:
Cesare> a = 1 / 0
Cesare> an exception will be raised at compile time.
I think I have to call ...
On 07 April 2009 at 17:19:25, s...@pobox.com wrote:
Cesare> The only difference at this time concerns invalid operations,
Cesare> which will raise exceptions at compile time, not at run
Cesare> time.
Cesare> So if you write:
Cesare> a = 1 / 0
Well I'm sorry Cesare, but this is unacceptable. As Skip points out,
there is plenty of code that relies on this. Also, consider what
problem you are trying to solve here. What is the benefit to the
user of moving this error to compile time? I cannot see any.
--Guido
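The behaviour being defended here is easy to check: 1/0 compiles without complaint, and the ZeroDivisionError only appears when the code actually runs, which is exactly what deliberate uses of 1/0 (e.g. provoking an exception in a test) rely on. A minimal sketch:

```python
# Compilation succeeds: the compiler does not evaluate 1/0.
code = compile("1/0", "<expr>", "eval")

# The error surfaces only at run time, so code can catch it on purpose.
try:
    eval(code)
except ZeroDivisionError:
    print("raised at run time, not at compile time")
```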
On Tue, Apr 7, 2009 06:25PM, Guido van Rossum wrote:
Well I'm sorry Cesare, but this is unacceptable. As Skip points out,
there is plenty of code that relies on this.
Guido, as I already said, in the final code the normal Python behaviour
will be kept, and the stricter one will be enabled solely ...
On Tue, Apr 7, 2009 at 9:46 AM, Cesare Di Mauro
cesare.dima...@a-tono.com wrote:
[...]
On Tue, Apr 7, 2009 07:22PM, Guido van Rossum wrote:
In my experience it's better to discover a bug at compile time rather
than at run time.
That's my point though, which you seem to be ignoring: if the user
explicitly writes 1/0 it is not likely to be a bug. That's very
different than ...
On 7 Apr 2009, at 11:59, Alexandru Moșoi wrote:
Not necessarily. For example, C/C++ doesn't define the order of the
operations inside an expression (and AFAIK neither does Python), and
therefore folding 2 * 3 is OK whether b is an integer or an arbitrary
object with the mul operator overloaded. Moreover ...
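Why reassociation is unsafe can be made concrete: Python evaluates b * 2 * 3 left to right as (b * 2) * 3, so folding it to b * 6 is observable whenever __mul__ is overloaded; only a constant prefix such as 2 * 3 * b, which evaluates as (2 * 3) * b, folds safely to 6 * b. A sketch using a hypothetical Tracker class that records its __mul__ calls:

```python
class Tracker:
    """Hypothetical object whose __mul__ records each right operand,
    making the grouping of the multiplications observable."""
    def __init__(self, calls=None):
        self.calls = calls if calls is not None else []
    def __mul__(self, other):
        return Tracker(self.calls + [other])

b = Tracker()
print((b * 2 * 3).calls)   # [2, 3] -- evaluated as (b * 2) * 3
print((b * 6).calls)       # [6]    -- what unsafe folding would produce
```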
On Tue, Apr 7, 2009 at 8:59 PM, Alexandru Moșoi brtz...@gmail.com wrote:
[...]
From: Cesare Di Mauro cesare.dima...@a-tono.com
So if Python generates
LOAD_CONST 1
LOAD_CONST 2
BINARY_ADD
the constant folding code will simply replace them with a single
LOAD_CONST 3
When working with this kind of optimization, the temptation is to
apply it at ...
Cesare Di Mauro wrote:
[...]
Alexandru Moșoi wrote:
[...]
On Mar 29, 2009 at 05:36PM, Guido van Rossum gu...@python.org wrote:
- Issue #5593: code like 1e16+2. is optimized away and its result stored
as a constant (again), but the result can vary slightly depending on the
internal FPU precision.
I would just not bother constant folding involving FP, or only if the
values involved have an exact ...
Cesare Di Mauro cesare.dimauro at a-tono.com writes:
def f(): return ['a', ('b', 'c')] * (1 + 2 * 3)
[...]
With proper constant folding code, both functions can be reduced
to a single LOAD_CONST and a RETURN_VALUE (or, ultimately, to a
single instruction with an advanced peephole optimizer ...
Cesare> At this time with Python 2.6.1 we have these results:
Cesare> def f(): return 1 + 2 * 3 + 4j
...
Cesare> def f(): return ['a', ('b', 'c')] * (1 + 2 * 3)
Guido can certainly correct me if I'm wrong, but I believe the main point of
his message was that you aren't going to ...
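Cesare's example mixes foldable and unfoldable parts: on a recent CPython the arithmetic 1 + 2 * 3 and the all-constant tuple ('b', 'c') are stored as constants, but the list display is mutable and must still be built at run time, so the whole expression cannot collapse to a single LOAD_CONST without the more aggressive folding being discussed here. A sketch, assuming a recent CPython:

```python
import dis

code = compile("['a', ('b', 'c')] * (1 + 2 * 3)", "<expr>", "eval")

# The arithmetic is folded to 7 and the constant tuple is stored
# whole, but the mutable list is still constructed at run time.
print(code.co_consts)
dis.dis(code)
```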
On Mon, Apr 6, 2009 16:43, Antoine Pitrou wrote:
[...]
On Mon, Apr 6, 2009 18:57, s...@pobox.com wrote:
[...]
[Antoine]
- Issue #5593: code like 1e16+2. is optimized away and its result stored
as a constant (again), but the result can vary slightly depending on the
internal FPU precision.
[Guido]
I would just not bother constant folding involving FP, or only if the
values involved have an exact ...
+1 for removing constant folding for floats (besides conversion of a
negated literal). There are just too many things to worry about:
FPU rounding mode and precision, floating-point signals and flags,
the effect of compiler flags, and the potential benefit seems small.
If you're talking about the ...
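The "exact" criterion can be checked mechanically: Fraction arithmetic never rounds, so comparing a float operation against its Fraction counterpart reveals whether rounding occurred. A sketch with a hypothetical exact_sum helper (2.0 ** 53 is a power of two, hence exactly representable; 1e16 + 1.0 must round, and that rounding step is where x87 extended precision could historically interfere):

```python
from fractions import Fraction

def exact_sum(a: float, b: float) -> bool:
    """Hypothetical check: True when the float result of a + b equals
    the mathematical sum (Fraction arithmetic is exact)."""
    return Fraction(a + b) == Fraction(a) + Fraction(b)

print(2.0 ** 53 == 9007199254740992.0)   # True: the power itself is exact
print(exact_sum(2.0 ** 53, 1024.0))      # True: still exactly representable
print(exact_sum(1e16, 1.0))              # False: the sum is rounded
```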
On Mon, Apr 6, 2009 at 9:05 PM, Raymond Hettinger pyt...@rcn.com wrote:
The code for the lsum() recipe is more readable with a line like:
exp = long(mant * 2.0 ** 53)
than with:
exp = long(mant * 9007199254740992.0)
It would be a shame if code written like the former suddenly
started ...
On Mon, Apr 6, 2009 at 7:28 AM, Cesare Di Mauro
cesare.dima...@a-tono.com wrote:
The Language Reference says nothing about the effects of code optimizations.
I think it's a very good thing, because we can do some work here with constant
folding.
Unfortunately the language reference is not the only thing we have to
worry about.
On Mon, Apr 6, 2009 at 1:22 PM, Mark Dickinson dicki...@gmail.com wrote:
On Mon, Apr 6, 2009 at 9:05 PM, Raymond Hettinger pyt...@rcn.com wrote:
[...]
On Mon, Apr 6, 2009 at 2:22 PM, Mark Dickinson dicki...@gmail.com wrote:
Well, I'd say that the obvious solution here is to compute
the constant 2.0**53 just once, somewhere outside the
inner loop. In any case, that value would probably be better
written as 2.0**DBL_MANT_DIG (or something ...
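Mark's suggestion translates directly: Python spells C's DBL_MANT_DIG as sys.float_info.mant_dig, so the constant can be hoisted once, outside any inner loop, under a self-documenting name. A sketch with a hypothetical scale_all loop in the spirit of the lsum() recipe:

```python
import sys

# Computed once, outside the loop; sys.float_info.mant_dig is C's
# DBL_MANT_DIG (53 for IEEE-754 doubles).
TWO_POW_MANT_DIG = 2.0 ** sys.float_info.mant_dig

def scale_all(mants):
    # Hypothetical inner loop: the power of two is not recomputed
    # (or re-folded) on every iteration.
    return [int(m * TWO_POW_MANT_DIG) for m in mants]

print(TWO_POW_MANT_DIG)        # 9007199254740992.0 on IEEE-754 doubles
print(scale_all([0.5, 0.25]))  # [4503599627370496, 2251799813685248]
```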
On Mon, Apr 6, 2009 at 5:10 PM, Steven D'Aprano st...@pearwood.info wrote:
On Tue, 7 Apr 2009 07:27:29 am Guido van Rossum wrote:
Unfortunately the language reference is not the only thing we have to
worry about. Unlike languages like C++, where compiler writers have
the moral right to modify ...
Hello,
There are a couple of ancillary portability concerns due to optimizations which
store system-dependent results of operations between constants in pyc files:
- Issue #5057: code like '\U00012345'[0] is optimized away and its result stored
as a constant in the pyc file, but the result depends on the unicode width of
the build that generated the pyc file.
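For context on why the '\U00012345'[0] result was system-dependent: before Python 3.3, "narrow" (UCS-2) builds stored the character as a surrogate pair, so indexing yielded a lone surrogate, while "wide" (UCS-4) builds yielded the character itself; since PEP 393 every build behaves like a wide build. A sketch, assuming Python 3.3+:

```python
s = '\U00012345'

# On Python 3.3+ (PEP 393) the literal is one code point on every build,
# and s[0] is the character itself.
print(len(s), hex(ord(s[0])))   # 1 0x12345

# On a pre-3.3 narrow build the same literal was a surrogate pair:
# len(s) was 2 and s[0] was the lone surrogate '\ud808', so folding
# s[0] into a .pyc baked a build-dependent value into the file.
```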
On Sun, Mar 29, 2009 at 9:42 AM, Antoine Pitrou solip...@pitrou.net wrote:
[...]