[Brett Cannon]
My [EMAIL PROTECTED] SSH key should be removed since my internship is now
over.
Thank you for being conscientious. While it feared death, your key
didn't complain about being deleted, and right before it vanished I
saw the most astonishing look of profound peace passing over its
[Bruce Christensen]
We seem to have stumbled upon some strange behavior in cPickle's memo
use when pickling instances.
Here's the repro:
[mymodule.py]
class C:
    def __getstate__(self): return ('s1', 's2', 's3')
[interactive interpreter]
Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC
[Neal Norwitz]
I'm wondering if the following change should be made to
Include/weakrefobject.h:
Yes.
-PyAPI_FUNC(long) _PyWeakref_GetWeakrefCount(PyWeakReference *head);
+PyAPI_FUNC(Py_ssize_t) _PyWeakref_GetWeakrefCount(PyWeakReference *head);
And the 2 other files which use this
[Armin Rigo]
There is an oversight in the design of __index__() that only just
surfaced :-( It is responsible for the following behavior, on a 32-bit
machine with >= 2GB of RAM:
>>> s = 'x' * (2**100)  # works!
>>> len(s)
2147483647
This is because PySequence_Repeat(v, w) works
[Tim]
...
This is a mess :-)
[Nick Coghlan]
I've been trawling through the code a bit, and I don't think it's as bad as
all that.
[also Nick, but older wiser ;-)]
Damn, it really is a mess. . . nb_index returns the Py_ssize_t directly,
Bingo. It's a /conceptual/ mess. Best I can make
[Martin v. Löwis]
Didn't you know that you signed in to run arbitrary viruses, worms, and
trojan horses when you added your machine to the buildbot infrastructure
:-?
Hey, I signed up for that when I bought a Windows box :-)
You just haven't seen buildbot erasing your hard disk and filling
Georg discovered that test_uuid didn't run any tests, and fixed that
on Thursday. A number of buildbots have failed that test since then.
My XP box appears unique among the Windows buildbots in failing. It
always fails like so:
AssertionError: different sources disagree on node:
from
[Tim]
... uuid.getnode() tries things in this order on Windows:
getters = [_windll_getnode, _netbios_getnode, _ipconfig_getnode]
It's only the first one that returns the bogus 0x00038a15; both of
the latter return 0x00B2B7BF [the correct MAC address for my
network card].
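The contract Tim is relying on is easy to check from Python; this small sketch (not from the thread) just verifies that getnode() returns a 48-bit integer, which is the MAC address when one of the platform getters succeeds:

```python
import uuid

# getnode() returns a 48-bit integer: the MAC address if a platform
# getter worked, else a random node with the multicast bit set.
node = uuid.getnode()
print(hex(node))
assert 0 <= node < 2**48
```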
Rarely I'll be running the Python tests in my sandbox from a DOS box,
and the test run will just end. Like so:
C:\Code\python\PCbuild>python -E -tt ../lib/test/regrtest.py -uall -rw
test_softspace
test_codecmaps_kr
...
test_float
test_userdict
C:\Code\python\PCbuild>
No indication of success or
...
[Raymond]
Even then, we need to drop the concept of having the flags as counters
rather than booleans.
[Georg Brandl]
Yes. Given that even Tim couldn't imagine a use case for counting the
exceptions, I think it's sensible.
That's not it -- someone will find a use for anything. It's
[Raymond Hettinger]
...
If the current approach gets in their way, the C implementers should feel
free to
make an alternate design choice.
I expect they will, eventually. Converting this to C is a big job,
and at the NFS sprint we settled on an incremental strategy allowing
most of the
[Neal Norwitz]
...
That leaves 1 unexplained failure on a Windows bot.
It wasn't my Windows bot, but I believe test_profile has failed
(rarely) on several of the bots, and in the same (or very similar)
way. Note that the failure went away on the Windows bot in question
the next time the tests
[Neal Norwitz]
There hasn't been much positive response (in the original thread or
here).
Do note that there was little response of any kind, but all it got was
positive. It's not sexy, but is essential for debugging deadlocks.
If you ask for positive response, you'll get some -- the use is
As noted in
http://mail.python.org/pipermail/python-dev/2006-May/065478.html
it looks like we need a new Python C API function to make new warnings
from the struct module non-useless. For example, running
test_zipfile on Windows now yields
test_zipfile
C:\Code\python\lib\struct.py:63:
[Raymond]
FWIW, I think this patch should go in. The benefits are
obvious and real.
[Anthony Baxter]
Yep. I'm going to check it in, unless someone else beats me to it in
the next couple of hours before the b2 freeze.
I'll merge it from my branch right after I send this email. It still
needs
[Scott Dial]
Wouldn't this function be better named sys._getframes since we already
have a sys._getframe for getting the current frame?
http://mail.python.org/pipermail/python-dev/2005-March/051887.html
The first (and only) name suggested won. As it says there, I usually have
no appetite for
[Anthony Baxter]
Hm. Would it be a smaller change to expose head_mutex so that the
external module could use it?
No, in part because `head_mutex` may not exist (depends on the build
type). What an external module would actually need is 3 new public C
API functions, callable workalikes for
[Neil Schemenauer]
The bug was reported by Armin in SF #1333982:
the literal -2147483648 (i.e. the value of -sys.maxint-1) gives
a long in 2.5, but an int in <= 2.4.
That actually depends on how far back you go. It was also a long at
the start. IIRC, Fred or I added hackery to make
Just to make life harder ;-), I should note that code, docs and tests
for sys._current_frames() are done, on the tim-current_frames branch.
All tests pass, and there are no leaks in the new code. It's just a
NEWS blurb away from being just another hectic release memory :-)
Back in:
http://mail.python.org/pipermail/python-dev/2005-March/051856.html
I made a pitch for adding:
sys._current_frames()
to 2.5, which would return a dict mapping each thread's id to that
thread's current (Python) frame. As noted there, an extension module
exists along these lines
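The pitched API did land in 2.5 and is still available; a minimal usage sketch (the loop body here is just illustrative):

```python
import sys
import threading

# Snapshot mapping each thread's id to its current topmost Python frame.
frames = sys._current_frames()

# Our own thread is always present in the snapshot.
assert threading.get_ident() in frames

for tid, frame in frames.items():
    print(tid, frame.f_code.co_name)
```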
[Neal Norwitz]
In import.c starting around line 1210 (I removed a bunch of code that
doesn't matter for the problem):
if (PyUnicode_Check(v)) {
    copy = PyUnicode_Encode(PyUnicode_AS_UNICODE(v),
                            PyUnicode_GET_SIZE(v),
[Neal]
Then later on we do PyString_GET_SIZE and PyString_AS_STRING. That doesn't
work, does it? What am I missing?
[Tim]
The conceptual type of the object returned by PyUnicode_Encode().
[Neal]
Phew, I sure am glad I was missing that. :-)
I saw as the first line in PyUnicode_Encode
[Jack Diederich]
PyObject_MALLOC does a good job of reusing small allocations but it
can't quite manage the same speed as a free list, especially for things that
have some extra setup involved (tuples have a free list for each length).
[Martin v. Löwis]
I would question that statement, for
[Tim Peters]
Note that this is quite unlike Scheme, in which declaration must
appear before use (ignoring fancy letrec cases),
[Greg Ewing]
I think that's overstating things a bit --
So do I :-), but I don't really care about Scheme here.
mutually recursive functions are quite easy to write
[Giovanni Bajo]
>>> a = []
>>> for i in range(10):
...     a.append(lambda: i)
...
>>> print [x() for x in a]
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
This subtle semantic of lambda is quite confusing, and still forces people to
use the i=i trick.
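The i=i trick mentioned above, as a runnable sketch: binding the loop variable as a default argument captures its value at definition time, instead of looking it up when the lambda finally runs.

```python
a = []
for i in range(10):
    a.append(lambda i=i: i)   # default argument freezes the current i
print([f() for f in a])       # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

b = []
for i in range(10):
    b.append(lambda: i)       # late binding: every lambda sees the final i
print([f() for f in b])       # [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
```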
[Greg Ewing]
This has *nothing* to do with the
[Tim]
Don't recall what that was, but creating a new scope on each iteration
sounds hard to explain in Python.
[Andrew Koenig]
I don't think it's particularly hard to explain. For example, one way to
explain it is
to say that
for i in stuff:
body
is equivalent
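One hedged way to spell out the equivalence being gestured at: a for loop is roughly sugar for driving the iterator protocol by hand (ignoring else clauses and other details), which also makes it plain that no new scope is created per iteration.

```python
stuff = [10, 20, 30]
result = []

# Rough desugaring of: for i in stuff: result.append(i)
it = iter(stuff)
while True:
    try:
        i = next(it)
    except StopIteration:
        break
    result.append(i)   # stand-in for "body"

print(result)  # [10, 20, 30]
print(i)       # 30 -- i stays bound after the loop, exactly as with "for"
```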
[Ka-Ping Yee, on
http://www.python.org/dev/peps/pep-0356/
]
Among them is this one:
Incorrect LOAD/STORE_GLOBAL generation
http://python.org/sf/1501934
The question is, what behaviour is preferable for this code:
g = 1
def f():
    g += 1
f()
Should this
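What current CPython does with that code: the augmented assignment makes g local to f at compile time, so reading it fails at call time.

```python
g = 1

def f():
    g += 1   # the assignment compiles g as a local of f

try:
    f()
except UnboundLocalError as exc:
    print(type(exc).__name__)  # UnboundLocalError
```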
[Bruce Christensen]
So just to be clear, is it something like this?
I hope you've read PEP 307:
http://www.python.org/dev/peps/pep-0307/
That's where __reduce_ex__ was introduced (along with all the rest of
pickle protocol 2).
class object:
    def __reduce__(self):
        return
[Tim Peters]
I hope you've read PEP 307:
[Bruce Christensen]
I have. Thanks to you and Guido for writing it! It's been a huge help.
You're welcome -- although we were paid for that, so thanks aren't needed ;-)
The implementation is more like:
[snip]
Thanks! That helps a lot. PEP 307
[Andrew Koenig]
I saw messages out of sequence and did not realize that this would be a
change in behavior from 2.4. Sigh.
[Ka-Ping Yee]
Yes, this is not a good time to change it.
I hope Py3000 has lexical scoping a la Scheme...
Me too -- that would be really nice.
[Guido]
That's not a
[Andrew Koenig]
...
Incidentally, I think that lexical scoping would also deal with the problem
that people often encounter in which they have to write things like lambda
x=x: where one would think lambda x: would suffice.
They _shouldn't_ encounter that at all anymore. For example,
def
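A sketch of the point being made: since nested scopes arrived (2.2), an inner function sees enclosing locals directly, so the x=x default is only needed to freeze a value that will change later (as in the loop case), not merely to see it.

```python
def make_adder(n):
    # n is read from the enclosing scope; no "n=n" default is needed
    return lambda x: x + n

add5 = make_adder(5)
print(add5(3))  # 8
```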
[Andrew Koenig]
Almost. What I really want is for it to be possible to determine the
binding of every name by inspecting the source text of the program. Right
now, it is often possible to do so, but sometimes it isn't.
Local names are always determined at compile-time in Python. What you
[Giovanni Bajo]
Yes but:
>>> a = []
>>> for i in range(10):
...     a.append(lambda: i)
...
>>> print [x() for x in a]
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
This subtle semantic of lambda is quite confusing, and still forces people to
use the i=i trick.
So stay away from excruciating abuses of lexical
Only one gripe:
[Anthony Baxter]
...
Once we hit release candidate 1, the trunk gets branched to
release25-maint.
Save the branch for 2.5 final (i.e., the 2.5final tag and the
release25-maint branch start life exactly the same). Adding a new
step before it's possible to fix rc1 critical bugs
[Kevin Jacobs]
...
A good place to start: You mentioned earlier that there were some
nonsensical things in floatobject.c. Can you list some of the most serious
of these?
I suspect Nick spends way too much time reading standards ;-) What he said is:
If you look at floatobject.c, you
[/F]
SC22WG14? is that some marketing academy? not a very good one, obviously.
That's because it's European ;-) The ISO standards process has highly
visible layers of bureaucracy, and, in full, JTC1/SC22/WG14 is just
the Joint ISO/IEC Technical Committee 1's SubCommittee 22's Working
Group 14
[Gerhard Häring]
...
Also, somebody please add me as Python developer on Sourceforge (I cannot
assign items to myself there).
If you still can't, scream at me ;-)
[Gerhard]
...
Also, somebody please add me as Python developer on Sourceforge (I cannot
assign items to myself there).
[Tim]
If you still can't, scream at me ;-)
[Gerhard]
Bwah!!! :-P
I still cannot see myself in the Assigned to dropdown ...
Screaming apparently helped! I
[Brett]
Looks like Tim's XP box is crapping out on a header file included from
Tcl/Tk. Did the Tcl/Tk folk just break something and we are doing an
external svn pull and thus got bit by it?
[Martin]
No, that comes straight out of
FYI, the tests all pass on my box again. Going offline now to check the disk.
...
I probably left the 2.4 buildbot tree in a broken state,
BTW -- if I don't remember to fix that, somebody poke me :-)
I should clarify that that's _my_ 2.4 buildbot tree, only on my
machine. I didn't break
[Gerhard Häring]
...
Until recently, SQLite was buggy and it was only fixed in
http://www.sqlite.org/cvstrac/chngview?cn=2981
that callbacks can throw errors that are usefully returned to the
original caller.
The tests for the sqlite3 module currently assume a recent version of
SQLite
[georg.brandl]
Author: georg.brandl
Date: Fri Jun 9 20:45:48 2006
New Revision: 46795
Log:
RFE #1491485: str/unicode.endswith()/startswith() now accept a
tuple as first argument.
[Neal Norwitz]
What's the reason to not support any sequence and only support tuples?
It can't support any
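A quick illustration of the new tuple form that checkin added:

```python
# startswith/endswith accept a tuple of candidate prefixes/suffixes (2.5+).
print("setup.py".endswith((".py", ".pyw")))      # True
print("README.txt".startswith(("lib", "doc")))   # False
```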
[Guido]
Here's how I interpret PEP 237. Some changes to hex() and oct() are
warned about in B1 and to be implemented in B2. But I'm pretty sure
that was about the treatment of negative numbers, not about the
trailing 'L'. I believe the PEP authors overlooked the trailing 'L'
for hex() and
[Brett]
But I don't think this is trying to say they don't care. People just want
to lower the overhead of maintaining the distro.
[Fredrik]
well, wouldn't the best way to do that be to leave all non-trivial
maintenance of a
given component to an existing external community?
(after all,
[Tim]
In addition, not shown above is that I changed test_wsgiref.py to stop
a test failure under -O. Given that we're close to the next Python
release, and test_wsgiref was the only -O test failure, I wasn't going
to let that stand. I did wait ~30 hours between emailing about the
problem
[Fredrik Lundh]
I just ran the PIL test suite using the current Python trunk, and the
tests for a user-contributed plugin raised an interesting exception:
ValueError: can't unpack IEEE 754 special value on non-IEEE platform
fixing this is easy, but the error is somewhat confusing: since when
[Phillip J. Eby]
Actually, I started out with "please" -- twice, after having previously
asked "please" in advance. I've also seen lots of messages on Python-Dev
where Tim Peters wrote about having wasted time due to other folks not
following established procedures, and I tried to emulate his
[Raymond Hettinger]
I think the note is still useful, but the "rather small" wording
should be replaced by something more precise (such as the
value of n=len(x) where n! > 2**19937).
Note that I already removed it, and I'm not putting it back. The
period of W-H was so short you could get into
[Ka-Ping Yee]
I did this earlier:
>>> hex(9999999999999)
'0x9184e729fffL'
and found it a little jarring, because i feel there's been a general
trend toward getting rid of the 'L' suffix in Python.
Literal long integers don't need an L anymore; they're automatically
made into longs
[Terry Jones]
The code below uses a RNG with period 5, is deterministic, and has one
initial state. It produces 20 different outcomes.
Well, I'd call the sequence of 20 numbers it produces one outcome.
From that view, there are at most 5 outcomes it can produce (at most 5
distinct 20-number
Just noticed that, at least on Windows, test_wsgiref fails when Python
is run with -O (but passes without -O):
$ python -O -E -tt ../Lib/test/regrtest.py -v test_wsgiref
test_wsgiref
testAbstractMethods (test.test_wsgiref.HandlerTests) ... ok
testBasicErrorOutput (test.test_wsgiref.HandlerTests)
[Alex Martelli]
...claims:
Note that for even rather small len(x), the total number of
permutations of x is larger than the period of most random number
generators; this implies that most permutations of a long
sequence can never be generated.
Now -- why would the behavior of most random
[Terry Jones]
That doc note should surely be removed. Perhaps it's an artifact from some
earlier shuffle algorithm.
No, it's an artifact from an earlier PRNG. The shuffle algorithm
hasn't changed.
The current algorithm (which is simple, well known,
Both true.
and which produces all
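For the curious, the crossover point is easy to compute: the smallest list length whose permutation count exceeds the Twister's period of 2**19937 - 1 (a sketch, not from the thread).

```python
from math import factorial

period = 2**19937 - 1   # the Mersenne Twister's period

# Grow n! incrementally until it first exceeds the period.
f, n = 1, 1
while f <= period:
    n += 1
    f *= n

print(n)   # smallest n with n! > period
assert factorial(n) > period and factorial(n - 1) <= period
```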
[Greg Ewing]
But isn't the problem with the Twister that for *some
initial states* the period could be much *shorter* than
the theoretical maximum?
Or is the probability of getting such an initial state
too small to worry about?
The Twister's state is held in a vector of 624 32-bit words.
[Brett Cannon]
I discovered last night that if you run ``./python.exe -Wi`` the interpreter
exits rather badly::
Fatal Python error: PyThreadState_Get: no current thread
Anyone else seeing this error on any other platforms or have an inkling of
what checkin would cause this?
See
[EMAIL PROTECTED]
Maybe this belongs in the dev faq. I didn't see anything there or in the
Subversion book.
I have three Python branches, trunk, release23-maint and release24-maint.
In the (for example) release24-maint, what svn up command would I use to get
to the 2.4.2 version? In cvs
FYI, here's the minimal set of failing tests:
$ python_d ../Lib/test/regrtest.py test_file test_optparse
test_file
test_optparse
test test_optparse failed -- Traceback (most recent call last):
File "C:\Code\python\lib\test\test_optparse.py", line 1042, in
test_filetype_noexist
[Tim]
FYI, here's the minimal set of failing tests:
$ python_d ../Lib/test/regrtest.py test_file test_optparse
test_file
test_optparse
test test_optparse failed -- Traceback (most recent call last):
File "C:\Code\python\lib\test\test_optparse.py", line 1042, in
test_filetype_noexist
...
[Tim]
What revision was your laptop at before the update? It could help a
lot to know the earliest revision at which this fails.
[Brett]
No clue. I had not updated my local version in quite some time since most
of my dev as of late has been at work.
A good clue is to look at the
Well, this sure sucks. This is the earliest revision at which the tests fail:
r46752 | georg.brandl | 2006-06-08 10:50:53 -0400 (Thu, 08 Jun 2006) | 3 lines
Changed paths:
M /python/trunk/Lib/test/test_file.py
Convert test_file to unittest.
If _that's_ not a reason for using doctest, I
[Tim]
Well, this sure sucks. This is the earliest revision at which the
tests fail:
r46752 | georg.brandl | 2006-06-08 10:50:53 -0400 (Thu, 08 Jun
2006) | 3 lines
Changed paths:
M /python/trunk/Lib/test/test_file.py
Convert test_file to unittest.
If _that's_ not a reason for using
[Tim, gets different results across whole runs of
python_d ../Lib/test/regrtest.py -R 2:40: test_filecmp test_exceptions
]
I think I found the cause for test_filecmp giving different results
across runs, at least on Windows. It appears to be due to this test
line:
[moving to python-dev]
[Tim, gets different results across whole runs of
python_d ../Lib/test/regrtest.py -R 2:40: test_filecmp test_exceptions
]
Does that make any sense? Not to me -- I don't know of a clear reason
other than wild loads/stores for why such runs should ever differ.
[Andrew MacIntyre]
In reviewing the buildbot logs after committing this patch, I see 2
issues arising that I need advice about...
1. The Solaris build failure in thread.c has me mystified as I can't
find any _sysconf symbol - is this in a system header?
The patch's
#if THREAD_STACK_MIN
[Fredrik Lundh]
...
since process time is *sampled*, not measured, process time isn't exactly
invulnerable either.
[Martin v. Löwis]
I can't share that view. The scheduler knows *exactly* what thread is
running on the processor at any time, and that thread won't change
until the scheduler
[Fredrik Lundh]
but it's always the thread that runs when the timer interrupt
arrives that gets the entire jiffy time. for example, this script runs
for ten seconds, usually without using any process time at all:
import time
for i in range(1000):
    for i in
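A hedged reconstruction of that kind of script (the exact loop bounds are guesses, scaled down here): it does a tiny burst of work and then sleeps, so the sampling tick rarely lands while the process is actually on the CPU, and the reported process time stays near zero.

```python
import os
import time

wall0 = time.time()
cpu0 = os.times()

for i in range(100):
    sum(range(200))    # a little real work...
    time.sleep(0.01)   # ...then yield the CPU before the next timer tick

print("wall time: %.2fs" % (time.time() - wall0))
print("user CPU:  %.2fs" % (os.times()[0] - cpu0[0]))
```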
[Thomas Heller]
test_ctypes fails on the ppc64 machine. I don't have access to such
a machine myself, so I would have to do some trial and error, or try
to print some diagnostic information.
This should not be done in the trunk, so the question is: can the buildbots
build branches?
Yes.
[MAL]
Using the minimum looks like the way to go for calibration.
[Terry Reedy]
Or possibly the median.
[Andrew Dalke]
Why? I can't think of why that's more useful than the minimum time.
A lot of things get mixed up here ;-) The _mean_ is actually useful
if you're using a poor-resolution
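A sketch of the take-the-minimum calibration style with timeit (the snippet timed is arbitrary): interference from other processes only ever adds time, so the min of several repeats is the best estimate of the true cost.

```python
import timeit

# Five independent timings of the same snippet; report the minimum.
timings = timeit.repeat("sum(range(100))", number=10000, repeat=5)
print("best of 5: %.6f sec" % min(timings))
```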
[Neal]
This is still in Lib/test/string_tests.py:
#EQ("A", "", "replace", "", "A")
# That was the correct result; this is the result we actually get
# now (for str, but not for unicode):
#EQ("", "", "replace", "", "A")
Is this going to be fixed?
Done. I had to comment out
[Bob]
The warning is correct, and so is the size. Only native formats have
native sizes; l and i are exactly 4 bytes on all platforms when using
=, <, >, or !. That's what "std size and alignment" means.
[Neal]
Ah, you are correct. I see this is the behaviour in 2.4. Though I
wouldn't call 4
[Nick Coghlan]
What if we appended unexpected skips to the list of bad tests so that they get
rerun in verbose mode and the return value becomes non-zero?
print count(len(surprise), "skip"), \
      "unexpected on", plat + ":"
printlist(surprise)
# Add the next
[Ronald Oussoren, hijacking the test_struct failure on 64 bit platforms
thread]
The really annoying part of the new struct warnings is that the
warning line mentions a line in struct.py instead the caller of
struct.pack. That makes it hard to find the source of the
warning without telling the
I'm afraid a sabbatical year isn't long enough to understand what the
struct module did or intends to do by way of range checking <0.7
wink>.
Is this intended? This is on a 32-bit Windows box with current trunk:
>>> from struct import pack as p
>>> p("I", 2**32 + 2343)
C:\Code\python\lib\struct.py:63:
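For comparison, the place this discussion eventually led: modern Python refuses out-of-range values outright instead of warning. A small check, using the standard-size "=I" code so the width is fixed at exactly 4 bytes:

```python
import struct

# Out-of-range values are rejected with struct.error, not wrapped.
try:
    struct.pack("=I", 2**32 + 2343)
except struct.error as exc:
    print(exc)

# In-range values pack to exactly 4 bytes with the standard-size '=I'.
print(len(struct.pack("=I", 2343)))  # 4
```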
[MvL, to Andreas Flöter]
This strictly doesn't belong to python-dev: this is the list where
you say "I want to help", not so much "I need your help".
LOL! How true.
If you want to resolve this yourself, we can guide you through
that. I would start running the binary in a debugger to find
out
[Martin Blais]
I'm still looking for a benchmark that is not amazingly uninformative
and crappy. I've been looking around all day, I even looked under the
bed, I cannot find it. I've also been looking around all day as well,
even looked for it shooting out of the Iceland geysirs, of all
[Fredrik Lundh]
would "abc".find("", 100) == 3 be okay? or should we switch to treating the
optional start and end positions as return value boundaries (used to filter
the
result) rather than slice directives (used to process the source string
before
the operation)? it's all trivial to
[Bob Ippolito]
What should it be called instead of wrapping?
I don't know -- I don't know what it's trying to _say_ that isn't
already said by saying that the input is out of bounds for the format
code.
When it says it's wrapping, it means that it's doing x &= (2 ** (8 * n)) - 1
to force
a
[Bob Ippolito]
It seems that we should convert the crc32 functions in binascii,
zlib, etc. to deal with unsigned integers. Currently it seems that 32-
bit and 64-bit platforms are going to have different results for
these functions.
binascii.crc32 very deliberately intends to return the same
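The portable idiom that settled this in practice is to mask the result to 32 bits, which forces the same unsigned value on every platform (and both modules agree once masked); a small sketch:

```python
import binascii
import zlib

data = b"hello, world"
crc = binascii.crc32(data) & 0xFFFFFFFF      # mask to unsigned 32-bit
assert crc == zlib.crc32(data) & 0xFFFFFFFF  # both modules agree
print(hex(crc))
```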
[Neal Norwitz]
* ints: Include/intobject.h:long ob_ival;
[Thomas Wouters]
I considered asking about this before, as it would give '64-bit power' to
Win64 integers. It's a rather big change, though (lots of code assumes
PyInts fit in signed longs, which would be untrue then.)
I expect
[Neal Norwitz]
* hash values
Include/abstract.h: long PyObject_Hash(PyObject *o); // also in object.h
Include/object.h:typedef long (*hashfunc)(PyObject *);
We should leave these alone for now. There's no real connection
between the width of a hash value and the number of elements in a
[Thomas Wouters]
...
Perhaps more people could chime in? Am I being too anal about backward
compatibility here?
Yes and no ;-) Backward compatibility _is_ important, but there seems
no way to know in this case whether struct's range-checking sloppiness
was accidental or deliberate. Having
[Guido]
...
It's really only a practical concern for 32-bit values on 32-bit
machines, where reasonable people can disagree over whether 0xFFFFFFFF
is -1 or 4294967295.
Then maybe we should only let that one slide <0.5 wink>.
...
[Tim]
So, in all, I'm 95% sure 2.4's behavior is buggy, but
[Guido]
I think we should do as Thomas proposes: plan to make it an error in
2.6 (or 2.7 if there's a big outcry, which I don't expect) and accept
it with a warning in 2.5.
[Tim]
That's what I arrived at, although 2.4.3's checking behavior is
actually so inconsistent that it needs some
[Bob Ippolito]
...
Actually, should this be a FutureWarning or a DeprecationWarning?
Since it was never documented, UndocumentedBugGoingAwayError ;-)
Short of that, yes, DeprecationWarning. FutureWarning is for changes
in non-exceptional behavior (e.g., if we swapped the meanings of < and >
in
[Greg Ewing]
Although Tim pointed out that replace() only regards
n+1 empty strings as existing in a string of length
n. So for consistency, find() should only find them
in those places, too.
[Guido]
And "abc".count("") should return 4.
And it does, but too much context was missing in Greg's
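Current CPython agrees with that answer; a quick check of the empty-substring behavior being debated:

```python
s = "abc"
print(s.count(""))      # 4: one empty substring at each of positions 0..3
print(s.find("", 3))    # 3: the last position where one "exists"
print(s.find("", 100))  # -1: start is past the end of the string
```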
[Armin Rigo]
...
...
Am I allowed to be grumpy here, and repeat that speed should not be
used to justify bugs?
As a matter of fact, you are. OTOH, nobody at the sprint made that
argument, so nobody actually feels shame on that count :-)
I apologize for the insufficiently reviewed
[... a huge number of reference leaks reported ...]
FYI, I reduced the relatively simple test_bisect's leaks to this
self-contained program:
libreftest = """
No actual doctests here.
"""

import doctest
import gc

def main():
    from sys import gettotalrefcount as trc
    for i in range(10):
http://wiki.python.org/moin/NeedForSpeed/Successes
http://wiki.python.org/moin/NeedForSpeed/Failures
http://wiki.python.org/moin/NeedForSpeed/Deferred
And
http://wiki.python.org/moin/ListOfPerformanceRelatedPatches
All of these are linked to from the top page:
[Tim]
PyLong_FromString() only sees the starting
address, and -- as it always does -- parses until it hits a character
that doesn't make sense for the input base.
[Greg Ewing]
This is the bug, then. long() shouldn't be using
PyLong_FromString() to convert its argument, but
something that
In various places we store triples of exception info, like a
PyFrameObject's f_exc_type, f_exc_value, and f_exc_traceback PyObject*
members.
No invariants are documented, and that's a shame. Patch 1145039 aims
to speed ceval a bit by relying on a weak guessed invariant, but I'd
like to make the
[Guido]
+1, if you can also prove that the traceback will never be null. I
failed at that myself last time I tried, but I didn't try very long or
hard.
Thanks! I'm digging.
Stuck right now on this miserable problem that's apparently been here
forever: I changed PyErr_SetObject to start like
[Fredrik]
>>> -1 * (1, 2, 3)
()
>>> -(1, 2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: bad operand type for unary -
We Really Need To Fix This!
What's broken? It's generally true that
n*s == s*n == empty_container_of_type_type(s)
whenever s is a
[Raymond Hettinger]
...
Also, I'm not clear on the rationale for transforming negative
repetition counts to zero instead of raising an exception.
There are natural use cases. Here's one: you have a string and want
to right-justify it to 80 columns with blanks if it's shorter than 80.
s =
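A sketch of the use case Tim describes (pad80 is a hypothetical helper name, not from the thread): padding relies on a negative repeat count yielding the empty string when s is already long enough.

```python
def pad80(s):
    # If len(s) >= 80 the repeat count is <= 0 and " " * count is "",
    # so long strings pass through unchanged.
    return s + " " * (80 - len(s))

print(len(pad80("short")))             # 80
print(pad80("x" * 100) == "x" * 100)   # True
```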
[Bob Ippolito]
...
Unfortunately, this change to the struct module slightly alters the
documented API for the following format codes: I, L, q, Q. Currently
it is documented that those format codes will always return longs,
regardless of their value.
I view that more as having documented the
[/F]
so, which one is correct ?
Python 2.4.3
>>> "".replace("", "a")
''
>>> u"".replace(u"", u"a")
u'a'
[Greg Ewing]
Probably there shouldn't be any "correct" in this case,
i.e. the result of replacing an empty string should be
undefined (because any string contains infinitely many
empty substrings).
[Phillip J. Eby]
It's not clear to me whether this means that Ian can just relicense his
code for me to slap into wsgiref and thence into Python by virtue of my own
PSF contribution form and the compatible license, or whether it means Ian
has to sign a form too.
It's clearly best if Ian signs
[Facundo Batista]
I'd start to see this not before two weeks (I have a conference, and
need to finish my papers).
Tim, we both know that I'm not, under any point of view, a numeric
expert. So, I'd ask you a favor.
Could you please send here some examples, for a given precision, of
perilous
[Brett Cannon]
Can someone install the attached SSH key (it's for my work machine)? The
fingerprint is::
cd:69:15:52:b2:e5:dc:2e:73:f1:62:1a:12:49:2b:a1
[EMAIL PROTECTED]
I tried. Scream at someone else if it didn't work ;-)
Also, how hard is it to have a specific key uninstalled?
[Guido]
...
In 2.6, I'd be okay with standardizing int on 64 bits everywhere (I
don't think bothering with 128 bits on 64-bit platforms is worth it).
In 2.5, I think we should leave this alone.
Nobody panic. This wasn't on the table for 2.5, and as Martin points
out it needs more
[elventear]
I am the in the need to do some numerical calculations that involve
real numbers that are larger than what the native float can handle.
I've tried to use Decimal, but I've found one main obstacle that I
don't know how to sort. I need to do exponentiation with real
exponents, but