Raymond Hettinger wrote:
Armin Rigo wrote:
[...]
At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
the tests I try to time, and even going into an infinite loop consuming
all my memory - since the NFS sprint. Am I allowed to be grumpy here,
and repeat that speed should not be used to justify bugs?
Anthony Baxter [EMAIL PROTECTED] writes:
On Friday 02 June 2006 02:21, Jack Diederich wrote:
The CCP Games CEO said they have trouble retaining talent from more
moderate latitudes for this reason. 18 hours of daylight makes
them a bit goofy and when the Winter Solstice rolls around they are
On Wed, May 31, 2006 at 09:10:47PM -0400, Tim Peters wrote:
[Martin Blais]
I'm still looking for a benchmark that is not amazingly uninformative
and crappy. I've been looking around all day, I even looked under the
bed, I cannot find it.

I've also been looking around all day as well,
[Martin Blais]
I'm still looking for a benchmark that is not amazingly uninformative
and crappy. I've been looking around all day, I even looked under the
bed, I cannot find it.

I've also been looking around all day as well, even looked for it
shooting out of the Iceland geysirs, of all
Hi Fredrik,
On Tue, May 30, 2006 at 07:48:50AM +0200, Fredrik Lundh wrote:
since 'abc'.find('', 0) == 0, I would have thought that a program that
searched for an empty string in a loop wouldn't get anywhere at all.
Indeed. And when this bug was found in the program in question, a
natural fix was
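The program in question isn't shown, but the failure mode suggests a scan loop like the one below (a hypothetical reconstruction; the guard shown is one plausible shape for the fix, not necessarily the one that was applied):

```python
# Hypothetical reconstruction of the loop pattern under discussion:
# collect successive match positions with str.find().  With pat == '',
# find() keeps returning the current index, i never advances, and the
# loop spins forever -- unless empty patterns are rejected up front.
def occurrences(s, pat):
    if not pat:
        return []  # guard against the empty-pattern infinite loop
    hits = []
    i = s.find(pat)
    while i != -1:
        hits.append(i)
        i = s.find(pat, i + len(pat))
    return hits

print(occurrences('abcabc', 'bc'))  # [1, 4]
```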
Tim Peters wrote:
>>> 'abc'.count('', 100)
1
>>> u'abc'.count('', 100)
1
which is the same as
>>> 'abc'[100:].count('')
1
>>> 'abc'.find('', 100)
100
>>> u'abc'.find('', 100)
100
today, although the idea that find() can return an index that doesn't
exist in the string is particularly jarring. Since we also have:
On 5/29/06, Raymond Hettinger [EMAIL PROTECTED] wrote:
If it is really 0.5%, then we're fine. Just remember that PyStone is an
amazingly uninformative and crappy benchmark.
I'm still looking for a benchmark that is not amazingly uninformative
and crappy. I've been looking around all day, I even looked under the
bed, I cannot find it.
Hi all,
I've finally come around to writing a patch that stops dict lookup from
eating all exceptions that occur during lookup, like rare bugs in user
__eq__() methods. After another 2-hour-long debugging session that
turned out to be caused by that, I had a lot of motivation.
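The class of bug Armin describes can be reproduced in a few lines (hypothetical `Key` class; shown with the post-patch semantics, where the exception propagates out of the lookup instead of being swallowed):

```python
# A buggy __eq__ that raises during dict lookup.  Pre-patch, lookdict
# would silently swallow this error; post-patch it propagates, which is
# what makes such bugs debuggable.
class Key(object):
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 0  # force hash collisions so __eq__ actually runs
    def __eq__(self, other):
        raise RuntimeError('rare bug in __eq__')

d = {Key(1): 'value'}  # single insert: no collision, no __eq__ call
try:
    d[Key(2)]  # collides with Key(1), so __eq__ is called and raises
except RuntimeError as exc:
    print('lookup surfaced the bug:', exc)
```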
From: Armin Rigo [EMAIL PROTECTED]
I've finally come around to writing a patch that stops dict lookup from
eating all exceptions that occur during lookup, like rare bugs in user
__eq__() methods.
Is there a performance impact?
Raymond
On 5/29/06, Armin Rigo [EMAIL PROTECTED] wrote:
Hi all,
I've finally come around to writing a patch that stops dict lookup from
eating all exceptions that occur during lookup, like rare bugs in user
__eq__() methods. After another 2-hour-long debugging session that
turned out to be caused by that, I had a lot of motivation.
Hi Guido,
On Mon, May 29, 2006 at 12:34:30PM -0700, Guido van Rossum wrote:
+1, as long as (as you seem to imply) PyDict_GetItem() still swallows
all exceptions.
Yes.
Fixing PyDict_GetItem() is a py3k issue, I think. Until then, there
are way too many uses. I wouldn't be surprised if after
Hi Raymond,
On Mon, May 29, 2006 at 12:20:44PM -0700, Raymond Hettinger wrote:
I've finally come around to writing a patch that stops dict lookup from
eating all exceptions that occur during lookup, like rare bugs in user
__eq__() methods.
Is there a performance impact?
I believe that
Armin Rigo wrote:
As it turns out, I measured only 0.5% performance loss in Pystone.
umm. does Pystone even call lookdict?
/F
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
On Mon, May 29, 2006 at 12:20:44PM -0700, Raymond Hettinger wrote:
I've finally come around to writing a patch that stops dict lookup from
eating all exceptions that occur during lookup, like rare bugs in user
__eq__() methods.
Is there a performance impact?
I believe that this patch is
Hi Raymond,
On Mon, May 29, 2006 at 02:02:25PM -0700, Raymond Hettinger wrote:
Please run some better benchmarks and do more extensive assessments on the
performance impact.
At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
the tests I try to time, and even going into an infinite loop consuming
all my memory - since the NFS sprint.
Armin Rigo wrote:
At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
the tests I try to time, and even going into an infinite loop consuming
all my memory - since the NFS sprint. Am I allowed to be grumpy here,
and repeat that speed should not be used to justify bugs?
Re-hi,
On Mon, May 29, 2006 at 11:34:28PM +0200, Armin Rigo wrote:
At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
the tests I try to time, and even going into an infinite loop consuming
all my memory
Ah, it's a corner case of str.find() whose behavior just changed.
Armin Rigo wrote:
Ah, it's a corner case of str.find() whose behavior just changed.
Previously, 'abc'.find('', 100) would return -1, and now it returns 100.
Just to confuse matters, the same test with unicode returns 100, and has
always done so in the past. (Oh well, one of these again...)
Hi Fredrik,
On Tue, May 30, 2006 at 12:01:46AM +0200, Fredrik Lundh wrote:
not unless you can produce some code. unfounded accusations don't
belong on this list (it's not like the sprinters didn't test the code on
a whole bunch of platforms), and neither do lousy benchmarks (why are
you
Armin Rigo wrote:
Hi Raymond,
On Mon, May 29, 2006 at 02:02:25PM -0700, Raymond Hettinger wrote:
Please run some better benchmarks and do more extensive assessments on the
performance impact.
At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
the tests I try to time, and even going into an infinite loop consuming
all my memory - since the NFS sprint.
Hi Fredrik,
On Tue, May 30, 2006 at 12:23:04AM +0200, Fredrik Lundh wrote:
well, the empty string is a valid substring of all possible strings
(there are no null strings in Python). you get the same behaviour
from slicing, the in operator, replace (this was discussed on the
list last week), count, etc.
Raymond Hettinger wrote:
If it is really 0.5%, then we're fine. Just remember that PyStone is an
amazingly uninformative and crappy benchmark.
Since Armin seems to not like having to justify his patch with any
performance testing, I wrote a handful of dict insertion exercises and
could find
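Raymond's exercises aren't shown in the thread, but a micro-benchmark in that spirit might look like this (hypothetical; uses the stdlib timeit module). Note that Fredrik's question applies: in CPython 2.x an all-string dict uses the specialized lookdict_string probe, and only non-string keys fall back to the general lookdict that the patch touches.

```python
# Hypothetical dict-insertion exercises in the spirit of Raymond's
# tests; timeit is the standard tool for this kind of micro-benchmark.
import timeit

def insert_str_keys(n=1000):
    d = {}
    for i in range(n):
        d['key%d' % i] = i  # all-string dict: specialized probe path
    return d

def insert_int_keys(n=1000):
    d = {}
    for i in range(n):
        d[i] = i  # non-string keys exercise the general lookup path
    return d

for fn in (insert_str_keys, insert_int_keys):
    secs = timeit.timeit(fn, number=200)
    print('%-16s %.3f s' % (fn.__name__, secs))
```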
Fredrik Lundh wrote:
well, the empty string is a valid substring of all possible strings
(there are no null strings in Python). you get the same behaviour
from slicing, the in operator, replace (this was discussed on the
list last week), count, etc.
Although Tim pointed out that
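Fredrik's consistency argument is easy to check in a current interpreter (a Python 3 sketch; Python 3 kept the unicode semantics under discussion):

```python
# The empty string behaves as a substring everywhere, as Fredrik says:
# membership, slicing, and find() all agree.
s = 'abc'
assert '' in s             # the in operator
assert s[1:1] == ''        # every slice boundary yields ''
assert s.find('') == 0     # find agrees...
assert s.find('', 2) == 2  # ...at any valid start index
print('empty-string semantics are consistent')
```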
On 5/29/06, Greg Ewing [EMAIL PROTECTED] wrote:
Fredrik Lundh wrote:
well, the empty string is a valid substring of all possible strings
(there are no null strings in Python). you get the same behaviour
from slicing, the in operator, replace (this was discussed on the
list last week), count, etc.
[Greg Ewing]
Although Tim pointed out that replace() only regards
n+1 empty strings as existing in a string of length
n. So for consistency, find() should only find them
in those places, too.
[Guido]
And 'abc'.count('') should return 4.
And it does, but too much context was missing in Greg's
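Greg and Guido's n+1 point can be verified directly (a Python 3 sketch): count() and replace() agree on exactly where the empty strings live.

```python
# A string of length n contains n+1 empty substrings, one at each
# character boundary; count() and replace() both reflect this.
s = 'abc'
assert s.count('') == len(s) + 1         # Guido's example: 4
assert s.replace('', '-') == '-a-b-c-'   # one insertion per boundary
```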
[Armin Rigo]
...
...
Am I allowed to be grumpy here, and repeat that speed should not be
used to justify bugs?
As a matter of fact, you are. OTOH, nobody at the sprint made that
argument, so nobody actually feels shame on that count :-)
I apologize for the insufficiently reviewed
Guido van Rossum wrote:
well, the empty string is a valid substring of all possible strings
(there are no null strings in Python). you get the same behaviour
from slicing, the in operator, replace (this was discussed on the
list last week), count, etc.
Although Tim pointed out that
Armin Rigo wrote:
I know this. These corner cases are debatable and different answers
could be seen as correct, as I think is the case for find(). My point
was different: I was worrying that the recent change in str.find() would
needlessly send existing and working programs into infinite loops.