Re: [Python-Dev] test_unicode failure on MIPS

2006-06-02 Thread Fredrik Lundh
Neal Norwitz wrote: Any ideas? this is a recent change, so it looks like the box simply didn't get around to rebuilding the unicodeobject module. (I'm beginning to wonder if I didn't forget to add some header file dependencies somewhere during the stringlib refactoring, but none of the other

[Python-Dev] valgrind report

2006-06-02 Thread Neal Norwitz
Looks pretty good, except for 1 cjk problem: test_codecencodings_jp

    Invalid read of size 1
       at 0x110AEBC3: shift_jis_2004_decode (_codecs_jp.c:642)
       by 0xBFCBDB7: mbidecoder_decode (multibytecodec.c:839)
    Address 0xAEC376B is 0 bytes after a block of size 3 alloc'd
       at 0x4A19B7E:
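
A one-byte read past a 3-byte block is consistent with the decoder peeking beyond a truncated multibyte sequence. A sketch of the kind of input that could trigger it (the exact bytes are an assumption; the trace shows only the over-read):

    # Hypothetical reproducer: end the buffer with a lone
    # shift_jis_2004 lead byte, so a buggy decoder reads one
    # byte past the allocation looking for the trail byte.
    data = '\x82\xa0\x85'   # one valid two-byte char, then a bare lead byte
    try:
        data.decode('shift_jis_2004')
    except UnicodeDecodeError, e:
        print e             # a correct codec raises cleanly instead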

Re: [Python-Dev] test_unicode failure on MIPS

2006-06-02 Thread Neal Norwitz
On 6/1/06, Fredrik Lundh [EMAIL PROTECTED] wrote: Neal Norwitz wrote: Any ideas? this is a recent change, so it looks like the box simply didn't get around to rebuild the unicodeobject module. That shouldn't be. make distclean should be called (it was make clean until recently). However,

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
Fredrik Lundh wrote: M.-A. Lemburg wrote: Seriously, I've been using and running pybench for years and even though tweaks to the interpreter do sometimes result in speedups or slow-downs where you wouldn't expect them (due to the interpreter using the Python objects), they are reproducible

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
M.-A. Lemburg wrote: Of course, but then changes to try-except logic can interfere with the performance of setting up method calls. This is what pybench then uncovers. I think the only thing PyBench has uncovered is that you're convinced that it's always right, and everybody else is always

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
Fredrik Lundh wrote: M.-A. Lemburg wrote: Of course, but then changes to try-except logic can interfere with the performance of setting up method calls. This is what pybench then uncovers. I think the only thing PyBench has uncovered is that you're convinced that it's always right, and

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Andrew Dalke
M.-A. Lemburg: The approach pybench is using is as follows: ... The calibration step is run multiple times and is used to calculate an average test overhead time. One of the changes that occurred during the sprint was to change this algorithm to use the best time rather than the average.
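
The difference between the two strategies is easy to sketch; the names below are illustrative, not pybench's actual code:

    import time

    def calibrate(overhead, rounds=10):
        # Time the empty test harness several times, then compare the
        # two aggregates under discussion: the historical average
        # versus the sprint change to the best (minimum) time, which
        # is less sensitive to context-switch noise.
        timings = []
        for _ in range(rounds):
            t0 = time.clock()
            overhead()
            timings.append(time.clock() - t0)
        return sum(timings) / len(timings), min(timings)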

Re: [Python-Dev] Removing Mac OS 9 cruft

2006-06-02 Thread Guido van Rossum
Just and Jack have confirmed that you can throw away everything except possibly Demo/*. (Just even speculated that some cruft may have been accidentally revived by the cvs -> svn transition?) --Guido On 6/1/06, Neal Norwitz [EMAIL PROTECTED] wrote: I was about to remove Mac/IDE scripts, but it

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
Andrew Dalke wrote: M.-A. Lemburg: The approach pybench is using is as follows: ... The calibration step is run multiple times and is used to calculate an average test overhead time. One of the changes that occurred during the sprint was to change this algorithm to use the best time

Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-06-02 Thread Michael Hudson
Anthony Baxter [EMAIL PROTECTED] writes: On Friday 02 June 2006 02:21, Jack Diederich wrote: The CCP Games CEO said they have trouble retaining talent from more moderate latitudes for this reason. 18 hours of daylight makes them a bit goofy and when the Winter Solstice rolls around they are

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
M.-A. Lemburg wrote: I believe that using wall-clock timers for benchmarking is not a good approach due to the high noise level. Process time timers typically have a lower resolution, but give a better picture of the actual run-time of your code and also don't exhibit as much noise as the
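
The trade-off is between resolution and noise; a sketch of the two timer choices, assuming a Unix-like platform:

    import os, time

    def wallclock():
        # High resolution, but counts time stolen by other processes.
        return time.time()

    def process_time():
        # CPU time charged to this process only: immune to other load,
        # but typically only 10ms resolution on Linux.
        t = os.times()
        return t[0] + t[1]   # user + system time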

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread M.-A. Lemburg
M.-A. Lemburg wrote: That's why the timers being used by pybench will become a parameter that you can then select to adapt pybench to the OS you're running it on. Wasn't that decision a consequence of the problems found during the sprint? It's a consequence of a discussion I had with

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Michael Chermside
Marc-Andre Lemburg writes: Using the minimum looks like the way to go for calibration. I wonder whether the same is true for the actual tests; since you're looking for the expected run-time, the minimum may not necessarily be the right choice. No, you're not looking for the expected run-time. The

Re: [Python-Dev] test_ctypes failures on ppc64 debian

2006-06-02 Thread Tim Peters
[Thomas Heller] test_ctypes fails on the ppc64 machine. I don't have access to such a machine myself, so I would have to do some trial and error, or try to print some diagnostic information. This should not be done in the trunk, so the question is: can the buildbots build branches? Yes.

Re: [Python-Dev] test_ctypes failures on ppc64 debian

2006-06-02 Thread Thomas Heller
Tim Peters wrote: [Thomas Heller] test_ctypes fails on the ppc64 machine. I don't have access to such a machine myself, so I would have to do some trial and error, or try to print some diagnostic information. This should not be done in the trunk, so the question is: can the buildbots

Re: [Python-Dev] SF patch #1473257: Add a gi_code attr to generators

2006-06-02 Thread Brett Cannon
On 6/1/06, Guido van Rossum [EMAIL PROTECTED] wrote: On 6/1/06, Phillip J. Eby [EMAIL PROTECTED] wrote: I didn't know it was assigned to me. I guess SF doesn't send any notifications, and neither did Georg, so your email is the very first time that I've heard of it. This is a longstanding SF bug.

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Terry Reedy
M.-A. Lemburg [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Granted, I hit a couple of web pages while doing this and my spam filter processed my mailbox in the background... Hardly a setting in which to run comparison tests, seems to me. Using the minimum looks like the way to

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
Terry Reedy wrote: But even better, the way to go to run comparison timings is to use a system with as little other stuff going on as possible. For Windows, this means rebooting in safe mode, waiting until the system is quiescent, and then run the timing test with *nothing* else active

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Fredrik Lundh
M.-A. Lemburg wrote: I just had an idea: if we could get each test to run inside a single time slice assigned by the OS scheduler, then we could benefit from the better resolution of the hardware timers while still keeping the noise to a minimum. I suppose this could be achieved by: *

[Python-Dev] Some more comments re new uriparse module, patch 1462525

2006-06-02 Thread John J Lee
[Not sure whether this kind of thing is best posted as tracker comments (but then the tracker gets terribly long and is mailed out every time a change happens) or posted here. Feel free to tell me I'm posting in the wrong place...] Some comments on this patch (a new module, submitted by Paul

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Andrew Dalke
On 6/2/06, Terry Reedy [EMAIL PROTECTED] wrote: Hardly a setting in which to run comparison tests, seems to me. The point though was to show that the time distribution is non-Gaussian, so intuition based on that doesn't help. Using the minimum looks like the way to go for calibration. Or

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Andrew Dalke
On 6/2/06, M.-A. Lemburg [EMAIL PROTECTED] wrote: It's interesting that even pressing a key on your keyboard will cause forced context switches. When niceness was first added to multiprocessing OSes, people found their CPU-intensive jobs would go faster by pressing enter a lot.

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Tim Peters
[MAL] Using the minimum looks like the way to go for calibration. [Terry Reedy] Or possibly the median. [Andrew Dalke] Why? I can't think of why that's more useful than the minimum time. A lot of things get mixed up here ;-) The _mean_ is actually useful if you're using a poor-resolution
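
The three aggregates serve different purposes, which a sketch makes concrete (illustrative helper, not from pybench):

    def summarize(timings):
        # minimum: least-noise estimate of the true cost, since
        #   interference only ever adds time;
        # median: robust against outliers;
        # mean: recovers sub-tick resolution from a coarse timer,
        #   where any single run reads as 0 or 1 tick.
        s = sorted(timings)
        return min(s), s[len(s) // 2], sum(s) / float(len(s))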

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread A.M. Kuchling
On Fri, Jun 02, 2006 at 07:44:07PM -0400, Tim Peters wrote: Fortran code could scream. Test times were reproducible to the nanosecond with no effort. Running on a modern box for a few microseconds at a time is a way to approximate that, provided you measure the minimum time with a

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Greg Ewing
Tim Peters wrote: I liked benchmarking on Crays in the good old days. ... Test times were reproducible to the nanosecond with no effort. Running on a modern box for a few microseconds at a time is a way to approximate that, provided you measure the minimum time with a high-resolution

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Greg Ewing
A.M. Kuchling wrote: (At work we're trying to move toward this approach for doing realtime audio: devote one CPU to the audio computation and use other CPUs for I/O, web servers, and whatnot.) Speaking of creative uses for multiple CPUs, I was thinking about dual-core Intel Macs the other

Re: [Python-Dev] Python Benchmarks

2006-06-02 Thread Josiah Carlson
Greg Ewing [EMAIL PROTECTED] wrote: Tim Peters wrote: I liked benchmarking on Crays in the good old days. ... Test times were reproducible to the nanosecond with no effort. Running on a modern box for a few microseconds at a time is a way to approximate that, provided you