A.M. Kuchling wrote:
time. But I would also hope that it would be smart enough to know that it
doesn't need to look past the 2nd character in 'not the xyz' when it is
searching for 'not there' (due to the lengths of the sequences).
Assuming stringobject.c:string_contains is the right
Mike Brown wrote:
any special reason why in is faster if the substring is found, but
a lot slower if it's not in there?
Just guessing here, but in general I would think that it would stop searching
as soon as it found it, whereas until then, it keeps looking, which takes more
time.
Guido van Rossum wrote:
Which is exactly how s.find() wins this race. (I guess it loses when
it's found by having to do the find lookup.) Maybe string_contains
should just call string_find_internal()?
I somehow suspected that in did some extra work in case the find
failed; guess I should have
memcmp still compiles to REP CMPB on many x86 compilers, and the setup
overhead for memcmp sucks on modern x86 hardware
make that "compiles to REPE CMPSB" and "the setup overhead for
REPE CMPSB"
/F
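the asymmetry being discussed is easy to reproduce with timeit; a rough
sketch (the sample string and repeat counts are mine, not from the thread).
a hit can return as soon as the substring turns up, while a miss has to
scan the whole string before giving up:

```python
import timeit

# Illustrative measurement, not the original poster's benchmark.
haystack = "not the xyz" * 100

# Substring present near the front: the scan can stop early.
hit = timeit.timeit("'not the' in s",
                    setup="s = %r" % haystack, number=100000)

# Substring absent: the scan must walk the entire string.
miss = timeit.timeit("'not there' in s",
                     setup="s = %r" % haystack, number=100000)

print("hit: %.4fs  miss: %.4fs" % (hit, miss))
```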
___
Python-Dev mailing list
Python-Dev@python.org
Scott David Daniels wrote:
Looking for not there in not the xyz*100 using Boyer-Moore should do
about 300 probes once the table is set (the underscores below):
not the xyznot the xyznot the xyz...
not ther_
not the__
not ther_
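the skip-table behaviour Scott describes can be sketched with a plain
Boyer-Moore-Horspool search. this is an illustration of the technique only,
not CPython's actual string search:

```python
def horspool_find(text, pattern):
    """Boyer-Moore-Horspool search (a sketch of the skip-table idea,
    not CPython's implementation)."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Bad-character table: how far the window may shift when the
    # character at the end of the window is c.
    skip = {c: m - k - 1 for k, c in enumerate(pattern[:-1])}
    i = m - 1                      # index of the window's last character
    while i < n:
        j, k = m - 1, i
        while j >= 0 and text[k] == pattern[j]:
            j -= 1
            k -= 1
        if j < 0:
            return k + 1           # full match; k + 1 is the start index
        i += skip.get(text[i], m)  # characters not in the table skip m
    return -1

text = "not the xyz" * 100
print(horspool_find(text, "not there"))  # -1: never present
print(horspool_find(text, "the xyz"))    # 4: found without probing every byte
```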
James Y Knight wrote:
However, last time this topic came up, this Tim Peters guy argued against it.
;)
Quoting http://mail.python.org/pipermail/python-dev/2004-November/050049.html:
Python doesn't promise to return a positive integer for id(), although
it may have been nicer if it did.
does anyone remember if there were any big changes in pymalloc between
the 2.1 series (where it was introduced) and 2.3 (where it was enabled by
default)?
or in other words, is the 2.1.3 pymalloc stable enough for production use?
(we're having serious memory fragmentation problems on a 2.1.3
@@ -399,9 +393,8 @@
         del self[name] # Won't fail if it doesn't exist
         self.dict[name.lower()] = value
         text = name + ": " + value
-        lines = text.split("\n")
-        for line in lines:
-            self.headers.append(line + "\n")
+
Bob Ippolito wrote:
Wouldn't it be nicer to have a facility that let you send messages between
processes and manage
concurrency properly instead? You'll need most of this anyway to do
multithreading sanely, and
the benefit to the multiple process model is that you can scale to multiple
Guido van Rossum wrote:
I'd like to see iterators become as easy to work with as lists are. At the
moment, anything that returns an iterator forces you to use the relatively
cumbersome itertools.islice mechanism, rather than Python's native slice
syntax.
Sorry. Still -1.
can we perhaps
Guido van Rossum wrote:
As a trivial example, here's how to skip the head of a zero-numbered list:

    for i, item in enumerate("ABCDEF")[1:]:
        print i, item
Is this idea a non-starter, or should I spend my holiday on Wednesday
finishing
it off and writing the documentation and tests
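for reference, the spelling that works today goes through itertools.islice;
the proposed slice syntax would replace this:

```python
from itertools import islice

# Today's equivalent of the proposed enumerate("ABCDEF")[1:].
for i, item in islice(enumerate("ABCDEF"), 1, None):
    print(i, item)
```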
Just van Rossum wrote:
I don't think that in general you want to fold multiple empty lines into
one. This would be my preferred regex:

    s = re.sub(r"\r\n?", "\n", s)
Catches both DOS and old-style Mac line endings. Alternatively, you can
use s.splitlines():
    s = "\n".join(s.splitlines()) + "\n"
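both approaches agree on a small mixed sample (the sample string is mine):

```python
import re

s = "one\r\ntwo\rthree\n"   # DOS, old-style Mac, and Unix endings mixed

via_re = re.sub(r"\r\n?", "\n", s)
via_splitlines = "\n".join(s.splitlines()) + "\n"

assert via_re == via_splitlines == "one\ntwo\nthree\n"
```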
Stuart Bishop wrote:
Do people consider this a bug that should be fixed in Python 2.4.1 and
Python 2.3.6 (if it ever exists), or is the responsibility for doing this
transformation on the application that embeds Python?
the text you quoted is pretty clear on this:
It is envisioned
Stuart Bishop wrote:
I don't think it is possible for plpythonu to fix this by simply translating
the line endings, as
this would require significant knowledge of Python syntax to do correctly
(triple quoted strings
and character escaping I think).
of course it's possible: that's what
Alex Martelli wrote:
Problem: to write unit tests showing that the current copy.py misbehaves with
a classic extension
type, I need a classic extension type which defines __copy__ and __deepcopy__
just like /F's
cElementTree does. So, I made one: a small trycopy.c and accompanying
back in Python 2.1 (and before), an object could define how copy.copy should
work simply by defining a __copy__ method. here's the relevant portion:
...
    try:
        copierfunction = _copy_dispatch[type(x)]
    except KeyError:
        try:
            copier = x.__copy__
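a minimal sketch of that protocol in action (the class name is mine;
copy.copy still honors a __copy__ method found on the instance):

```python
import copy

class Point:
    """Illustrative class defining the __copy__ hook."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __copy__(self):
        # copy.copy(p) ends up dispatching to this method.
        return Point(self.x, self.y)

p = Point(1, 2)
q = copy.copy(p)
assert (q.x, q.y) == (1, 2) and q is not p
```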
Guido van Rossum wrote:
The only thing this intends to break /.../
it breaks classic C types:
>>> import cElementTree
>>> x = cElementTree.Element("tag")
>>> x
<Element 'tag' at 00B4BA30>
>>> x.__copy__
<built-in method __copy__ of Element object at 0x00B4BA30>
>>> x.__copy__()
<Element 'tag' at 00B4BA68>
import
hi andre,
I have a problem reinstalling Python 2.3.4: when I execute ./configure,
this message appears in config.log:
configure:1710: gcc conftest.cc >&5
gcc: installation problem, cannot exec `cc1plus': No such file or directory
configure:1713: $? = 1
My gnu/linux is
Armin Rigo wrote:
Some code in the 'py' lib used to use marshal to send simple objects between
the main process and a subprocess. We ran into trouble when we extended the
idea to a subprocess that would actually run via ssh on a remote machine, and
the remote machine's Python version didn't
Multiple assignment is slower than individual assignment. For
example "x,y=a,b" is slower than "x=a; y=b". However, multiple
assignment is faster for variable swaps. For example, "x,y=y,x" is
faster than "t=x; x=y; y=t".
marginally faster in 2.4, a lot slower in earlier versions. maybe you
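the claim is easy to measure for yourself; a rough timeit sketch (counts
are arbitrary, and as noted the ranking shifts between Python versions):

```python
import timeit

# Tuple assignment vs. two separate assignments.
multi  = timeit.timeit("x, y = a, b", setup="a, b = 1, 2", number=100000)
single = timeit.timeit("x = a; y = b", setup="a, b = 1, 2", number=100000)

# Tuple swap vs. swap through a temporary.
swap_t = timeit.timeit("x, y = y, x", setup="x, y = 1, 2", number=100000)
swap_s = timeit.timeit("t = x; x = y; y = t", setup="x, y = 1, 2",
                       number=100000)

print("tuple assign: %.4f  separate: %.4f" % (multi, single))
print("tuple swap:   %.4f  temp var: %.4f" % (swap_t, swap_s))
```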
One thing that bugs me: the article says 3 or 4 times that Python is
slow, each time with a refutation ("but it's so flexible", "but it's
fast enough") but still, they sure seem to harp on the point.
fwiw, IDG's Computer Sweden, Sweden's leading IT newspaper, has a
surprisingly big Python article
Bjorn Tillenius wrote:
There are some issues regarding the use of unicode in doctests. Consider
the following three tests.
>>> foo = u'föö'
>>> foo
u'f\xf6\xf6'

>>> foo
u'f\u00f6\u00f6'

>>> foo
u'föö'
To me, those are identical.
really? if a function is expected to print