On Friday, Jun 2, 2006, John J Lee writes:
>[Not sure whether this kind of thing is best posted as tracker comments
>(but then the tracker gets terribly long and is mailed out every time a
>change happens) or posted here. Feel free to tell me I'm posting in the
>wrong place...]
I think this is
On 6/3/06, Georg Brandl <[EMAIL PROTECTED]> wrote:
Georg Brandl wrote:
> I've worked on two patches for NeedForSpeed, and would like someone
> familiar with the areas they touch to review them before I check them
> in, breaking all the buildbots which aren't broken yet ;)
>> They are:
>> http://python.
Georg Brandl wrote:
> I've worked on two patches for NeedForSpeed, and would like someone
> familiar with the areas they touch to review them before I check them
> in, breaking all the buildbots which aren't broken yet ;)
>
> They are:
>
> http://python.org/sf/1346214
> Better dead code elimi
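[The first patch above (http://python.org/sf/1346214) concerns dead-code elimination in the bytecode compiler. As a purely illustrative probe, not part of the patch, disassembling a small function shows which unreachable branches the compiler of a given CPython version already drops:]

    import dis

    def f(x):
        if 0:                 # constant-false branch: classic dead code
            x = never_called()
        return x
        x = x + 1             # statically unreachable after the return

    dis.dis(f)                # inspect which of the above survives compilation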
On 6/3/06, Georg Brandl <[EMAIL PROTECTED]> wrote:
> Collin Winter wrote:
> > Idea: what if Python's -O option caused PySequence_Contains() to
> > convert all errors into False return values?
>
> It would certainly give me an uneasy feeling if a command-line switch
> caused such a change in semanti
On 6/3/06, Collin Winter <[EMAIL PROTECTED]> wrote:
> My question is this: maybe set/frozenset.__contains__ (as well as
> dict.__contains__, etc) should catch such TypeErrors and convert them
> to a return value of False? It makes sense that "{} in frozenset([(1,
> 2, 3)])" should be False, since un
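[What Collin proposes amounts to treating "unhashable, so it can never be a member" as an ordinary False result. A hypothetical pure-Python rendering of that semantics, sketched here only for illustration:]

    def lenient_contains(container, item):
        # Proposed behaviour, sketched: swallow the "unhashable type"
        # TypeError that hash-based containers raise and report False.
        try:
            return item in container
        except TypeError:
            return False

    print(lenient_contains((1, 2, 3), {}))               # False today as well
    print(lenient_contains(frozenset([(1, 2, 3)]), {}))  # False instead of TypeError

[The catch is that a blanket except of this kind also hides TypeErrors raised for unrelated reasons inside a user-defined __eq__, which is presumably part of what makes the change delicate.]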
Collin Winter wrote:
> I recently submitted a patch that would optimise "in (5, 6, 7)" (ie,
> "in" ops on constant tuples) to "in frozenset([5, 6, 7])". Raymond
> Hettinger rejected (rightly) the patch since it's not semantically
> consistent. Quoth:
>
>>> Sorry, this enticing idea has already bee
I recently submitted a patch that would optimise "in (5, 6, 7)" (ie,
"in" ops on constant tuples) to "in frozenset([5, 6, 7])". Raymond
Hettinger rejected (rightly) the patch since it's not semantically
consistent. Quoth:
>> Sorry, this enticing idea has already been explored and
>> rejected. Thi
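[The inconsistency is easy to reproduce by hand; a minimal demonstration of why the literal tuple and the substituted frozenset are not interchangeable:]

    x = []                          # an unhashable candidate member
    print(x in (5, 6, 7))           # False: tuple membership only needs ==
    try:
        print(x in frozenset([5, 6, 7]))
    except TypeError:
        print("TypeError")          # frozenset membership needs hash(x)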
Hi,
I'm going over the possible tasks for the Arlington Sprint.
Documentation for wsgiref looks like something I could handle. My
friend Joe Griffin and I did something similar for Tim Peters'
FixedPoint module.
Is anyone already working on this?
--
Doug Fort, Consulting Programmer
http://www.dou
On 6/3/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> - I would average the timings of runs instead of taking the minimum value as
> sometimes benchmarks could be running code that is not deterministic in its
> calculations (could be using random numbers that affect convergence).
I would rewr
On 5/29/06, Tim Peters <[EMAIL PROTECTED]> wrote:
> [Neal Norwitz]
> > * hash values
> > Include/abstract.h: long PyObject_Hash(PyObject *o); // also in
> > object.h
> > Include/object.h:typedef long (*hashfunc)(PyObject *);
>
> We should leave these alone for now. There's no real connectio
Here are my suggestions:
- While running benchmarks don't listen to music, watch videos, use the
keyboard/mouse, or run anything other than the benchmark code. Seems like
common sense to me.
- I would average the timings of runs instead of taking the minimum value as
sometimes benchmarks c
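[For comparing the two summary statistics argued about here, timeit's repeat() makes both available from the same runs; a small sketch with a placeholder workload:]

    import timeit

    t = timeit.Timer("sum(range(1000))")        # placeholder workload
    runs = t.repeat(repeat=5, number=10000)     # five independent timings
    print("min  %.6f" % min(runs))              # least affected by other load
    print("mean %.6f" % (sum(runs) / len(runs)))  # pulled up by interference

[For deterministic code the minimum is the least noisy estimate; averaging mainly earns its keep when, as suggested above, the benchmarked code itself is randomized.]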
Tim Peters wrote:
> Without the sleep, it gets charged 6 CPU seconds. With the sleep, 0
> CPU seconds.
>
> But life would be more boring if people believed you the first time ;-)
This only proves that it uses clock ticks for the accounting, and not
something with higher resolution. To find out w
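[One way to see how coarse the accounting actually is, sketched here for illustration rather than taken from the thread, is to busy-loop and watch the size of the jumps in the reported process time:]

    import os

    # Busy-loop and record the first few distinct increments in the user
    # time reported by os.times(); their spacing hints at the accounting tick.
    steps = []
    last = os.times()[0]
    while len(steps) < 5:
        now = os.times()[0]
        if now != last:
            steps.append(now - last)
            last = now
    print(steps)     # typically a handful of roughly equal steps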
Tim Peters wrote:
>> then
>> process time *is* measured, not sampled, on any modern operating
>> system: it is updated whenever the scheduler schedules a different
>> thread.
>
> That doesn't seem to agree with, e.g.,
>
>http://lwn.net/2001/0412/kernel.php3
>
> under "No more jiffies?":
[..
[Fredrik Lundh]
> but it's always the thread that runs when the timer interrupt
> arrives that gets the entire jiffy time. for example, this script runs
> for ten seconds, usually without using any process time at all:
>
> import time
> for i in range(1000):
> for i in rang
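[The quoted script is cut off above; a sketch in the same spirit, not necessarily Fredrik's exact code, is a loop that does a short burst of work and then sleeps, so the process is rarely the one running when the accounting tick lands:]

    import os
    import time

    start = time.time()
    for i in range(1000):
        for j in range(1000):    # a short burst of real work
            pass
        time.sleep(0.01)         # then get off the CPU before the next tick
    user, system = os.times()[:2]
    print("wall clock %.1f s, charged CPU %.2f s"
          % (time.time() - start, user + system))

    # On a kernel that charges whole ticks to whoever is running when the
    # timer interrupt fires, the charged CPU time can stay near zero even
    # though the script runs for roughly ten seconds of wall-clock time.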
Tim Peters wrote:
> Maybe this varies by Linux flavor or version? While the article above
> was published in 2001, Googling didn't turn up any hint that Linux
> jiffies have actually gone away, or become better loved, since then.
well, on x86, they have changed from 10 ms in 2.4 to 1 ms in early
[Fredrik Lundh]
>> ...
>> since process time is *sampled*, not measured, process time isn't exactly in-
>> vulnerable either.
[Martin v. Löwis]
> I can't share that view. The scheduler knows *exactly* what thread is
> running on the processor at any time, and that thread won't change
> until the s
Martin v. Löwis wrote:
> Sure: when a thread doesn't consume its entire quantum, accounting
> becomes difficult. Still, if the scheduler reads the current time
> when scheduling, it measures the time consumed.
yeah, but the point is that it *doesn't* read the current time: all the
system does it
Tim:
> A lot of things get mixed up here ;-) The _mean_ is actually useful
> if you're using a poor-resolution timer with a fast test.
In which case discrete probability distributions are better than my assumption
of a continuous distribution.
I looked at the distribution of times for 1,000 repe
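[Tim's point about the mean can be seen with a toy model, purely illustrative and not from the thread: quantize a "true" 3 ms cost to a clock that only advances in 10 ms ticks and compare the two statistics.]

    import random

    TICK = 0.010         # clock resolution: 10 ms
    TRUE_COST = 0.003    # the operation being timed really takes 3 ms

    def one_sample():
        # A coarse timer reports the number of tick boundaries crossed,
        # which depends on where inside a tick the measurement started.
        phase = random.uniform(0.0, TICK)
        return ((phase + TRUE_COST) // TICK) * TICK

    samples = [one_sample() for _ in range(100000)]
    print("min  %.4f" % min(samples))                   # almost certainly 0.0
    print("mean %.4f" % (sum(samples) / len(samples)))  # close to 0.003

[With such a timer the minimum says nothing about a sub-tick operation, while the mean of many samples converges on the true cost.]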
Fredrik Lundh wrote:
>> it is updated whenever the scheduler schedules a different thread.
>
> updated with what? afaik, the scheduler doesn't have to wait for a
> timer interrupt to reschedule things (think blocking, or interrupts that
> request rescheduling, or new processes, or...) -- but it
Martin v. Löwis wrote:
>> since process time is *sampled*, not measured, process time isn't exactly in-
>> vulnerable either.
>
> I can't share that view. The scheduler knows *exactly* what thread is
> running on the processor at any time, and that thread won't change
> until the scheduler makes
Greg Ewing <[EMAIL PROTECTED]> writes:
> Tim Peters wrote:
>
>> I liked benchmarking on Crays in the good old days. ...
>> Test times were reproducible to the
>> nanosecond with no effort. Running on a modern box for a few
>> microseconds at a time is a way to approximate that, provided you
Thomas Heller wrote:
> I have already mailed him asking if he can give me interactive access
> to this machine ;-). He has not yet replied - I'm not sure if this is because
> he's been shocked to see such a request, or if he already is in holidays.
I believe it's a machine donated to Debian. They
Guido van Rossum wrote:
> Just and Jack have confirmed that you can throw away everything except
> possibly Demo/*. (Just even speculated that some cruft may have been
> accidentally revived by the cvs -> svn transition?)
No, they had been present when cvs was converted:
http://python.cvs.sourcef
Fredrik Lundh wrote:
> since process time is *sampled*, not measured, process time isn't exactly in-
> vulnerable either.
I can't share that view. The scheduler knows *exactly* what thread is
running on the processor at any time, and that thread won't change
until the scheduler makes it change. So