Neal Norwitz wrote:
Any ideas?
this is a recent change, so it looks like the box simply didn't get
around to rebuild the unicodeobject module.
(I'm beginning to wonder if I didn't forget to add some header file
dependencies somewhere during the stringlib refactoring, but none of the other
Looks pretty good, except for one CJK problem:
test_codecencodings_jp
Invalid read of size 1
at 0x110AEBC3: shift_jis_2004_decode (_codecs_jp.c:642)
by 0xBFCBDB7: mbidecoder_decode (multibytecodec.c:839)
Address 0xAEC376B is 0 bytes after a block of size 3 alloc'd
at 0x4A19B7E:
On 6/1/06, Fredrik Lundh [EMAIL PROTECTED] wrote:
Neal Norwitz wrote:
Any ideas?
this is a recent change, so it looks like the box simply didn't get
around to rebuild the unicodeobject module.
That shouldn't be. make distclean should be called (it was make clean
until recently). However,
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
Seriously, I've been using and running pybench for years
and even though tweaks to the interpreter do sometimes
result in speedups or slow-downs where you wouldn't expect
them (due to the interpreter using the Python objects),
they are reproducible
M.-A. Lemburg wrote:
Of course, but then changes to try-except logic can interfere
with the performance of setting up method calls. This is what
pybench then uncovers.
I think the only thing PyBench has uncovered is that you're convinced
that it's always right, and everybody else is always
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
Of course, but then changes to try-except logic can interfere
with the performance of setting up method calls. This is what
pybench then uncovers.
I think the only thing PyBench has uncovered is that you're convinced
that it's always right, and
M.-A. Lemburg:
The approach pybench is using is as follows:
...
The calibration step is run multiple times and is used
to calculate an average test overhead time.
One of the changes that occurred during the sprint was to change this algorithm
to use the best time rather than the average.
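A minimal sketch of the calibration scheme being described, with hypothetical names (measure, _timed) and the modern time.perf_counter in place of 2006's timers; pybench's real code differs:

import time

def _timed(func, iterations=100000):
    # Total wall-clock time for `iterations` calls of func.
    t0 = time.perf_counter()
    for _ in range(iterations):
        func()
    return time.perf_counter() - t0

def measure(func, calibration_runs=20):
    def empty():
        pass
    # Run the empty test several times and subtract the *minimum*
    # (the sprint change) rather than the average as the overhead.
    overhead = min(_timed(empty) for _ in range(calibration_runs))
    return _timed(func) - overhead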
Just and Jack have confirmed that you can throw away everything except
possibly Demo/*. (Just even speculated that some cruft may have been
accidentally revived by the cvs-to-svn transition?)
--Guido
On 6/1/06, Neal Norwitz [EMAIL PROTECTED] wrote:
I was about to remove Mac/IDE scripts, but it
Andrew Dalke wrote:
M.-A. Lemburg:
The approach pybench is using is as follows:
...
The calibration step is run multiple times and is used
to calculate an average test overhead time.
One of the changes that occurred during the sprint was to change this
algorithm to use the best time
Anthony Baxter [EMAIL PROTECTED] writes:
On Friday 02 June 2006 02:21, Jack Diederich wrote:
The CCP Games CEO said they have trouble retaining talent from more
moderate latitudes for this reason. 18 hours of daylight makes
them a bit goofy and when the Winter Solstice rolls around they are
M.-A. Lemburg wrote:
I believe that using wall-clock timers
for benchmarking is not a good approach due to the high
noise level. Process time timers typically have a lower
resolution, but give a better picture of the actual
run-time of your code and also don't exhibit as much noise
as the
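For illustration, a small sketch contrasting the two timer families under discussion, using the modern time-module names (in 2006 the usual process timer was time.clock):

import time

def busy(n=10**6):
    # Some CPU-bound work to time.
    s = 0
    for i in range(n):
        s += i
    return s

w0, p0 = time.perf_counter(), time.process_time()
busy()
wall = time.perf_counter() - w0   # wall clock: includes scheduler noise
cpu = time.process_time() - p0    # process time: only CPU charged to us
print("wall: %.4fs  cpu: %.4fs" % (wall, cpu))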
M.-A. Lemburg wrote:
That's why the timers being used by pybench will become a
parameter that you can then select to adapt pybench to the
OS you're running it on.
Wasn't that decision a consequence of the problems found during
the sprint?
It's a consequence of a discussion I had with
Marc-Andre Lemburg writes:
Using the minimum looks like the way to go for calibration.
I wonder whether the same is true for the actual tests; since
you're looking for the expected run-time, the minimum may
not necessarily be the right choice.
No, you're not looking for the expected run-time. The
[Thomas Heller]
test_ctypes fails on the ppc64 machine. I don't have access to such
a machine myself, so I would have to do some trial and error, or try
to print some diagnostic information.
This should not be done in the trunk, so the question is: can the buildbots
build branches?
Yes.
Tim Peters wrote:
[Thomas Heller]
test_ctypes fails on the ppc64 machine. I don't have access to such
a machine myself, so I would have to do some trial and error, or try
to print some diagnostic information.
This should not be done in the trunk, so the question is: can the buildbots
On 6/1/06, Guido van Rossum [EMAIL PROTECTED] wrote:
On 6/1/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
I didn't know it was assigned to me. I guess SF doesn't send any notifications,
and neither did Georg, so your email is the very first time that I've heard of it.
This is a longstanding SF bug.
M.-A. Lemburg [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Granted, I hit a couple of web pages while doing this and my spam
filter processed my mailbox in the background...
Hardly a setting in which to run comparison tests, seems to me.
Using the minimum looks like the way to
Terry Reedy wrote:
But even better, the way to go to run comparison timings is to use a system
with as little other stuff going on as possible. For Windows, this means
rebooting in safe mode, waiting until the system is quiescent, and then
running the timing test with *nothing* else active
M.-A. Lemburg wrote:
I just had an idea: if we could get each test to run
inside a single time slice assigned by the OS scheduler,
then we could benefit from the better resolution of the
hardware timers while still keeping the noise to a
minimum.
I suppose this could be achieved by:
*
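One plausible reading of that idea on a modern Linux box (my guess, not something the thread settled on): pin the process to a single CPU and raise its priority, so fewer context switches land inside a measurement:

import os

if hasattr(os, "sched_setaffinity"):   # Linux-only
    os.sched_setaffinity(0, {0})       # pin this process to CPU 0
if hasattr(os, "nice"):
    try:
        os.nice(-10)                   # raise priority; usually needs root
    except OSError:
        pass                           # fall back to the default priority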
M.-A. Lemburg wrote:
That's why the timers being used by pybench will become a
parameter that you can then select to adapt pybench to the
OS you're running it on.
Wasn't that decision a consequence of the problems found during
the sprint?
It's a consequence of a discussion I had with
[Not sure whether this kind of thing is best posted as tracker comments
(but then the tracker gets terribly long and is mailed out every time a
change happens) or posted here. Feel free to tell me I'm posting in the
wrong place...]
Some comments on this patch (a new module, submitted by Paul
On 6/2/06, Terry Reedy [EMAIL PROTECTED] wrote:
Hardly a setting in which to run comparison tests, seems to me.
The point though was to show that the time distribution is non-Gaussian,
so intuition based on that doesn't help.
Using the minimum looks like the way to go for calibration.
Or
On 6/2/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
It's interesting that even pressing a key on your keyboard
will cause forced context switches.
When niceness was first added to multiprocessing OSes, people found their
CPU-intensive jobs would go faster by pressing enter a lot.
[MAL]
Using the minimum looks like the way to go for calibration.
[Terry Reedy]
Or possibly the median.
[Andrew Dalke]
Why? I can't think of why that's more useful than the minimum time.
A lot of things get mixed up here ;-) The _mean_ is actually useful
if you're using a poor-resolution
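To make the poor-resolution case concrete (a toy simulation, not from the thread): a duration shorter than one timer tick quantizes each single reading to 0 or 1 tick, so the minimum tells you nothing, while the mean over many runs recovers the true value:

import random

TICK = 0.01         # pretend the timer only resolves 10 ms
TRUE_COST = 0.0037  # hypothetical true run time, below one tick

def coarse_reading():
    # With uniform phase, a sub-tick duration reads as one tick
    # with probability TRUE_COST / TICK, else as zero.
    return TICK if random.random() < TRUE_COST / TICK else 0.0

samples = [coarse_reading() for _ in range(100000)]
print("min: ", min(samples))                 # 0.0 -- useless
print("mean:", sum(samples) / len(samples))  # ~0.0037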
On Fri, Jun 02, 2006 at 07:44:07PM -0400, Tim Peters wrote:
Fortran code could scream. Test times were reproducible to the
nanosecond with no effort. Running on a modern box for a few
microseconds at a time is a way to approximate that, provided you
measure the minimum time with a
Tim Peters wrote:
I liked benchmarking on Crays in the good old days. ...
Test times were reproducible to the
nanosecond with no effort. Running on a modern box for a few
microseconds at a time is a way to approximate that, provided you
measure the minimum time with a high-resolution
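The stdlib timeit module follows this minimum-of-several-runs pattern; a short example (statement and counts arbitrary):

import timeit

# repeat() returns one total time per run; the minimum is the run
# least disturbed by other system activity.
runs = timeit.repeat("x = 'a'.join(['b'] * 100)", repeat=5, number=100000)
print("best per-call time: %.1f ns" % (min(runs) / 100000 * 1e9))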
A.M. Kuchling wrote:
(At work we're trying to move toward this approach for doing realtime
audio: devote one CPU to the audio computation and use other CPUs for
I/O, web servers, and whatnot.)
Speaking of creative uses for multiple CPUs, I was thinking
about dual-core Intel Macs the other
Greg Ewing [EMAIL PROTECTED] wrote:
Tim Peters wrote:
I liked benchmarking on Crays in the good old days. ...
Test times were reproducible to the
nanosecond with no effort. Running on a modern box for a few
microseconds at a time is a way to approximate that, provided you