Re: [Python-Dev] 2.3.5 schedule, and something I'd like to get in

2005-01-06 Thread Bob Ippolito
On Jan 5, 2005, at 18:49, Martin v. Löwis wrote:
Jack Jansen wrote:
The "new" solution is basically to go back to the Unix way of building
an extension: link it against nothing and sort things out at runtime.
Not my personal preference, but at least we know that loading an
extension into one Python won't bring in a fresh copy of a different
interpreter or anything horrible like that.
This sounds good, except that it only works on OS X 10.3, right?
What about older versions?
Older versions do not support this feature and have to deal with the 
way things are as-is.  Mac OS X 10.2 is the only supported version that 
suffers this consequence; I don't think anyone has supported Python on 
Mac OS X 10.1 in quite some time.

-bob
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] an idea for improving struct.unpack api

2005-01-06 Thread Alex Martelli
On 2005 Jan 06, at 06:27, Ilya Sandler wrote:
   ...
We could have an optional offset argument for
unpack(format, buffer, offset=None)
I do agree on one concept here: when a function wants a string argument 
S, and the value for that string argument S is likely to come from some 
other bigger string Z as a subset Z[O:O+L], being able to optionally 
specify Z, O and L (or the endpoint, O+L), rather than having to do the 
slicing, can be a simplification and a substantial speedup.

When I had this kind of problem in the past I approached it with the 
buffer built-in.  Say I've slurped in a whole not-too-huge binary file 
into `data', and now need to unpack several pieces of it from different 
offsets; rather than:
somestuff = struct.unpack(fmt, data[offs:offs+struct.calcsize(fmt)])
I can use:
somestuff = struct.unpack(fmt, buffer(data, offs, 
struct.calcsize(fmt)))
as a kind of "virtual slicing".  Besides the vague-to-me "impending 
deprecation" state of the buffer builtin, there is some advantage, but 
it's a bit modest.  If I could pass data and offs directly to 
struct.unpack and thus avoid churning of one-use readonly buffer 
objects I'd probably be happier.
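A minimal runnable sketch of the two idioms in modern-Python spelling (the data and formats below are invented for illustration; note that struct later gained unpack_from, which takes a buffer and offset directly and so provides exactly the behaviour wished for here):

```python
import struct

# Pack a header int followed by two shorts, plus trailing bytes.
data = struct.pack('<ihh', 7, 1, 2) + b'trailing'
fmt = '<hh'
offs = struct.calcsize('<i')  # skip the leading int

# Explicit slicing, as in the example above:
a = struct.unpack(fmt, data[offs:offs + struct.calcsize(fmt)])

# Offset-based "virtual slicing", with no intermediate object:
b = struct.unpack_from(fmt, data, offs)

assert a == b == (1, 2)
```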

As for the sub-concept that "passing offset implies the length is 
calcsize(fmt)", I find that slightly more controversial.  It's convenient, 
but somewhat ambiguous; in other cases (e.g. string methods) passing a 
start/offset and no end/length means to go to the end.  Maybe something 
more explicit, such as a length= parameter with a default of None 
(meaning "go to the end") but which can be explicitly passed as -1 to 
mean "use calcsize internally", might go down better.

As for the next part:
the offset argument is an object which contains a single integer field
which gets incremented inside unpack() to point to the next byte.
...I find this just too "magical".  It's only useful when you're 
specifically unpacking data bytes that are compactly back to back (no 
"filler" e.g. for alignment purposes) and pays some conceptual price -- 
introducing a new specialized type to play the role of "mutable int" 
and having an argument mutated, which is not usual in Python's library.

so with a new API the above code could be written as
 offset=struct.Offset(0)
 hdr=unpack("", offset)
 for i in range(hdr[0]):
item=unpack( "", rec, offset)
When an offset argument is provided, unpack() should allow some bytes
to be left unpacked at the end of the buffer.

Does this suggestion make sense? Any better ideas?
All in all, I suspect that something like...:
# out of the record-by-record loop:
hdrsize = struct.calcsize(hdr_fmt)
itemsize = struct.calcsize(item_fmt)
reclen = length_of_each_record

# loop record by record
while True:
    rec = binfile.read(reclen)
    if not rec:
        break
    hdr = struct.unpack(hdr_fmt, rec, 0, hdrsize)
    for offs in itertools.islice(xrange(hdrsize, reclen, itemsize), hdr[0]):
        item = struct.unpack(item_fmt, rec, offs, itemsize)
        # process item

might be a better compromise.  More verbose, because more explicit, of 
course.  And if you do this kind of thing often, easy to encapsulate in 
a generator with 4 parameters -- the two formats (header and item), the 
record length, and the binfile -- just yield the hdr first, then each 
struct.unpack result from the inner loop.
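A runnable rendering of that generator, with two caveats: it substitutes the standard struct.unpack_from for the proposed 4-argument unpack, and the record layout in the usage line is invented for illustration.

```python
import io
import itertools
import struct

def iter_records(binfile, hdr_fmt, item_fmt, reclen):
    # Yield the header tuple, then each item tuple, per fixed-size record.
    hdrsize = struct.calcsize(hdr_fmt)
    itemsize = struct.calcsize(item_fmt)
    while True:
        rec = binfile.read(reclen)
        if not rec:
            break
        hdr = struct.unpack_from(hdr_fmt, rec, 0)
        yield hdr
        # hdr[0] holds the item count, as in the example above
        for offs in itertools.islice(range(hdrsize, reclen, itemsize), hdr[0]):
            yield struct.unpack_from(item_fmt, rec, offs)

# One record: a 2-byte count header followed by two 4-byte items.
rec = struct.pack('<h', 2) + struct.pack('<i', 10) + struct.pack('<i', 20)
out = list(iter_records(io.BytesIO(rec), '<h', '<i', len(rec)))
assert out == [(2,), (10,), (20,)]
```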

Having the offset and length parameters to struct.unpack might still be 
a performance gain worth pursuing (of course, we'd need some 
performance measurements from real-life use cases) even though from the 
point of view of code simplicity, in this example, there appears to be 
little or no gain wrt slicing rec[offs:offs+itemsize] or using 
buffer(rec, offs, itemsize).

Alex


Re: [Python-Dev] an idea for improving struct.unpack api

2005-01-06 Thread Anthony Baxter
My take on this:

struct.pack/struct.unpack is already one of my least-favourite parts 
of the stdlib. Of the modules I use regularly, I pretty much only ever
have to go back and re-read the struct (and re) documentation because
they just won't fit in my brain. Adding further complexity to them 
seems like a net loss to me. 

I'd _love_ to find the time to write a sane replacement for struct - as
well as the current use case, I'd also like it to handle things like
attribute-length-value 3-tuples nicely (where you get a fixed field 
which identifies the attribute, a fixed field which specifies the value
length, and a value of 'length' bytes). Almost all sane network protocols
(i.e. those written before the plague of pointy brackets) use this in
some way.

I'd much rather specify the format as something like a tuple of values -
(INT, UINT, INT, STRING) (where INT &c are objects defined in the
struct module). This also then allows users to specify their own formats
if they have a particular need for something.
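A toy sketch of that tuple-of-format-objects idea (all names here, such as INT, UINT and the Field class, are hypothetical rather than an existing API); each object carries its struct code, so a user-defined format only needs a code attribute:

```python
import struct

class Field:
    """A named wrapper around a single struct format code."""
    def __init__(self, code):
        self.code = code

INT, UINT, SHORT = Field('i'), Field('I'), Field('h')

def pack(fields, *values, byte_order='<'):
    # Build the struct format string from the tuple of field objects.
    return struct.pack(byte_order + ''.join(f.code for f in fields), *values)

def unpack(fields, data, byte_order='<'):
    return struct.unpack(byte_order + ''.join(f.code for f in fields), data)

data = pack((INT, UINT, SHORT), -1, 2, 3)
assert unpack((INT, UINT, SHORT), data) == (-1, 2, 3)
```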

Anthony
-- 
Anthony Baxter <[EMAIL PROTECTED]>
It's never too late to have a happy childhood.


Re: [Python-Dev] an idea for improving struct.unpack api

2005-01-06 Thread Paul Moore
On Thu, 6 Jan 2005 21:28:26 +1100, Anthony Baxter
<[EMAIL PROTECTED]> wrote:
> My take on this:
> 
> struct.pack/struct.unpack is already one of my least-favourite parts
> of the stdlib. Of the modules I use regularly, I pretty much only ever
> have to go back and re-read the struct (and re) documentation because
> they just won't fit in my brain. Adding additional complexity to them
> seems like a net loss to me.

Have you looked at Thomas Heller's ctypes? Ignoring the FFI stuff, it
has a fairly comprehensive interface for defining and using C
structure types. A simple example:

>>> class POINT(Structure):
...     _fields_ = [('x', c_int), ('y', c_int)]
...
>>> p = POINT(1,2)
>>> p.x, p.y
(1, 2)
>>> str(buffer(p))
'\x01\x00\x00\x00\x02\x00\x00\x00'

To convert *from* a byte string is messier, but not too bad:

>>> s = str(buffer(p))
>>> s
'\x01\x00\x00\x00\x02\x00\x00\x00'
>>> p2 = POINT()
>>> ctypes.memmove(p2, s, ctypes.sizeof(POINT))
14688904
>>> p2.x, p2.y
(1, 2)

It might even be possible to get Thomas to add a small helper
classmethod to ctypes types, something like

POINT.unpack(str, offset=0, length=None)

which does the equivalent of

def unpack(cls, str, offset=0, length=None):
    if length is None:
        length = sizeof(cls)
    b = buffer(str, offset, length)
    new = cls()
    ctypes.memmove(new, b, length)
    return new
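For reference, a runnable modern-Python rendering of the round trip (bytes() replaces the old str(buffer(...)) idiom); ctypes did eventually grow a classmethod very close to the proposed helper, from_buffer_copy:

```python
import ctypes

class POINT(ctypes.Structure):
    _fields_ = [('x', ctypes.c_int), ('y', ctypes.c_int)]

p = POINT(1, 2)
s = bytes(p)                    # raw in-memory bytes of the structure
assert len(s) == ctypes.sizeof(POINT)

# memmove back into a fresh instance, as in the example above:
p2 = POINT()
ctypes.memmove(ctypes.byref(p2), s, ctypes.sizeof(POINT))
assert (p2.x, p2.y) == (1, 2)

# The classmethod ctypes later provided, doing the same copy in one step:
p3 = POINT.from_buffer_copy(s)
assert (p3.x, p3.y) == (1, 2)
```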

> I'd _love_ to find the time to write a sane replacement for struct - as
> well as the current use case, I'd also like it to handle things like
> attribute-length-value 3-tuples nicely (where you get a fixed field
> which identifies the attribute, a fixed field which specifies the value
> length, and a value of 'length' bytes). Almost all sane network protocols
> (i.e. those written before the plague of pointy brackets) use this in
> some way.

I'm not sure ctypes handles that, mainly because I don't think C does
(without the usual trick of defining the last field as fixed length).

Paul.


Re: [Python-Dev] an idea for improving struct.unpack api

2005-01-06 Thread Thomas Heller
Paul Moore <[EMAIL PROTECTED]> writes:

> On Thu, 6 Jan 2005 21:28:26 +1100, Anthony Baxter
> <[EMAIL PROTECTED]> wrote:
>> My take on this:
>> 
>> struct.pack/struct.unpack is already one of my least-favourite parts
>> of the stdlib. Of the modules I use regularly, I pretty much only ever
>> have to go back and re-read the struct (and re) documentation because
>> they just won't fit in my brain. Adding additional complexity to them
>> seems like a net loss to me.
>
> Have you looked at Thomas Heller's ctypes? Ignoring the FFI stuff, it
> has a fairly comprehensive interface for defining and using C
> structure types. A simple example:
>
> >>> class POINT(Structure):
> ...     _fields_ = [('x', c_int), ('y', c_int)]
> ...
> >>> p = POINT(1,2)
> >>> p.x, p.y
> (1, 2)
> >>> str(buffer(p))
> '\x01\x00\x00\x00\x02\x00\x00\x00'
>
> To convert *from* a byte string is messier, but not too bad:
[...]

For reading structures from files, the undocumented (*) readinto
method is very nice. An example:

class IMAGE_DOS_HEADER(Structure):
    _fields_ = [...]  # field list elided in the archive; includes e_lfanew

class IMAGE_NT_HEADERS(Structure):
    _fields_ = [...]

class PEReader(object):
    def read_image(self, pathname):
        # the MSDOS header
        image = open(pathname, "rb")
        self.dos_header = IMAGE_DOS_HEADER()
        image.readinto(self.dos_header)

        # The PE header
        image.seek(self.dos_header.e_lfanew)
        self.nt_headers = IMAGE_NT_HEADERS()
        image.readinto(self.nt_headers)


> It might even be possible to get Thomas to add a small helper
> classmethod to ctypes types, something like
>
> POINT.unpack(str, offset=0, length=None)

Maybe, but I would prefer the unbeloved buffer object (*) as argument,
because it has builtin offset and length.

> which does the equivalent of
>
> def unpack(cls, str, offset=0, length=None):
>     if length is None:
>         length = sizeof(cls)
>     b = buffer(str, offset, length)
>     new = cls()
>     ctypes.memmove(new, b, length)
>     return new
>
>> I'd _love_ to find the time to write a sane replacement for struct - as
>> well as the current use case, I'd also like it to handle things like
>> attribute-length-value 3-tuples nicely (where you get a fixed field
>> which identifies the attribute, a fixed field which specifies the value
>> length, and a value of 'length' bytes). Almost all sane network protocols
>> (i.e. those written before the plague of pointy brackets) use this in
>> some way.
>
> I'm not sure ctypes handles that, mainly because I don't think C does
> (without the usual trick of defining the last field as fixed length)

Correct.

(*) Which brings me to the questions I have had in my mind for quite some
time: Why is readinto undocumented, and what about the status of the
buffer object: do the recent fixes to the buffer object change its
status?

Thomas



[Python-Dev] Subscribing to PEP updates

2005-01-06 Thread Nick Coghlan
Someone asked on python-list about getting notifications of changes to PEPs.
As a low-effort solution, would it be possible to add a Sourceforge mailing list 
hook just for checkins to the nondist/peps directory?

Call it python-pep-updates or some such beast. If I remember correctly how 
checkin notifications work, the updates would even come with automatic diffs :)

Cheers,
Nick.
--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
http://boredomandlaziness.skystorm.net


Re: [Python-Dev] 2.3.5 schedule, and something I'd like to get in

2005-01-06 Thread Jack Jansen
On 6 Jan 2005, at 00:49, Martin v. Löwis wrote:
The "new" solution is basically to go back to the Unix way of building
an extension: link it against nothing and sort things out at runtime.
Not my personal preference, but at least we know that loading an
extension into one Python won't bring in a fresh copy of a different
interpreter or anything horrible like that.
This sounds good, except that it only works on OS X 10.3, right?
What about older versions?
10.3 or later. For older OSX releases (either because you build Python 
on 10.2 or earlier, or because you've set MACOSX_DEPLOYMENT_TARGET to a 
value of 10.2 or less) we use the old behaviour of linking with 
"-framework Python".
--
Jack Jansen, <[EMAIL PROTECTED]>, http://www.cwi.nl/~jack
If I can't dance I don't want to be part of your revolution -- Emma 
Goldman



Re: [Python-Dev] an idea for improving struct.unpack api

2005-01-06 Thread Michael Hudson
Ilya Sandler <[EMAIL PROTECTED]> writes:

> A problem:
>
> The current struct.unpack api works well for unpacking C-structures where
> everything is usually unpacked at once, but it
> becomes  inconvenient when unpacking binary files where things
> often have to be unpacked field by field. Then one has to keep track
> of offsets, slice the strings, call struct.calcsize(), etc...

IMO (and E), struct.unpack is the primitive atop which something more
sensible is built.  I've certainly tried to build that more sensible
thing at least once, but never got to the point of believing that what
I had would be applicable to the general case... maybe it's time to
write such a thing for the standard library.

Cheers,
mwh

-- 
  ARTHUR:  Ford, you're turning into a penguin, stop it.
-- The Hitch-Hikers Guide to the Galaxy, Episode 2


[Python-Dev] Re: Subscribing to PEP updates

2005-01-06 Thread David Goodger
[Nick Coghlan]
Someone asked on python-list about getting notifications of changes to
PEP's.
As a low-effort solution, would it be possible to add a Sourceforge
mailing list hook just for checkins to the nondist/peps directory?
-0
Probably possible, but not no-effort, so even if it gets a favorable
reaction someone needs to do some work.  Why not just subscribe to
python-checkins and filter out everything *but* nondist/peps?  As PEP
editor, that's what I do (although I filter manually/visually, since
I'm also interested in other checkins).
--
David Goodger 




Re: [Python-Dev] an idea for improving struct.unpack api

2005-01-06 Thread Gustavo J. A. M. Carneiro
On Thu, 2005-01-06 at 13:17 +, Michael Hudson wrote:
> Ilya Sandler <[EMAIL PROTECTED]> writes:
> 
> > A problem:
> >
> > The current struct.unpack api works well for unpacking C-structures where
> > everything is usually unpacked at once, but it
> > becomes  inconvenient when unpacking binary files where things
> > often have to be unpacked field by field. Then one has to keep track
> > of offsets, slice the strings, call struct.calcsize(), etc...
> 
> IMO (and E), struct.unpack is the primitive atop which something more
> sensible is built.  I've certainly tried to build that more sensible
> thing at least once, but never got to the point of believing that what
> I had would be applicable to the general case... maybe it's time to
> write such a thing for the standard library.

  I've been using this simple wrapper:

def stream_unpack(stream, format):
    return struct.unpack(format, stream.read(struct.calcsize(format)))

  It works with file-like objects, such as file, StringIO,
socket.makefile(), etc.  Working with streams is useful because
sometimes you don't know in advance how much you need to read to decode
a message.
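For illustration, the wrapper with a small usage example on an in-memory stream (io.BytesIO stands in for a real file; the packed values are invented):

```python
import io
import struct

def stream_unpack(stream, format):
    # Read exactly calcsize(format) bytes from the stream and unpack them.
    return struct.unpack(format, stream.read(struct.calcsize(format)))

buf = io.BytesIO(struct.pack('<ih', 42, 7))
assert stream_unpack(buf, '<i') == (42,)   # first field
assert stream_unpack(buf, '<h') == (7,)    # stream position has advanced
```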

  Regards.

> 
> Cheers,
> mwh
> 
-- 
Gustavo J. A. M. Carneiro
<[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
The universe is always one step beyond logic.




Re: [Python-Dev] csv module TODO list

2005-01-06 Thread "Martin v. Löwis"
Andrew McNamara wrote:
Marc-Andre Lemburg mentioned that he has encountered UTF-16 encoded csv
files, so a reasonable starting point would be the ability to read and
parse, as well as the ability to generate, one of these.
I see. That would be reasonable, indeed. Notice that this is not so much
a "Unicode issue", but more an "encoding" issue. If you solve the
"arbitrary encodings" problem, you solve UTF-16 as a side effect.
The reader interface currently returns a row at a time, consuming as many
lines from the supplied iterable (with the most common iterable being
a file). This suggests to me that we will need an optional "encoding"
argument to the reader constructor, and that the reader will need to
decode the source lines.
Ok. In this context, I see two possible implementation strategies:
1. Implement the csv module two times: once for bytes, and once for
   Unicode characters. It is likely that the source code would be
   the same for each case; you just need to make sure the "Dialect
   and Formatting Parameters" change their width accordingly.
   If you use the SRE approach, you would do
   #define CSV_ITEM_T char
   #define CSV_NAME_PREFIX byte_
   #include "csvimpl.c"
   #define CSV_ITEM_T Py_Unicode
   #define CSV_NAME_PREFIX unicode_
   #include "csvimpl.c"
2. Use just the existing _csv module, and represent non-byte encodings
   as UTF-8. This will work as long as the delimiters and other markup
   characters have always a single byte in UTF-8, which is the case
   for "':\, as well as for \r and \n. Then, when processing using
   an explicit encoding, first convert the input into Unicode objects.
   Then encode the Unicode objects into UTF-8, and pass it to _csv.
   For the results you get back, convert each element back from UTF-8
   to a Unicode object.
This could be implemented as

def reader(f, encoding=None):
    if encoding is None:
        return _csv.reader(f)
    enc, dec, Reader, Writer = codecs.lookup(encoding)
    utf8_enc, utf8_dec, utf8_r, utf8_w = codecs.lookup("UTF-8")
    # Make a recoder which can only read
    utf8_stream = codecs.StreamRecoder(f, utf8_enc, None, Reader, None)
    csv_reader = _csv.reader(utf8_stream)
    # For performance reasons, map_result could be implemented in C
    def map_result(t):
        result = [None]*len(t)
        for i, val in enumerate(t):
            result[i] = utf8_dec(val)[0]
        return tuple(result)
    return itertools.imap(map_result, csv_reader)
# This code is untested
This approach has the disadvantage of performing three recodings:
from input charset to Unicode, from Unicode to UTF-8, from UTF-8
to Unicode. One could:
- skip the initial recoding if the encoding is already known
  to be _csv-safe (i.e. if it is a pure ASCII superset).
  This would be valid for ASCII, iso-8859-n, UTF-8, ...
- offer the user to keep the results in the input encoding,
  instead of always returning Unicode objects.
Apart from this disadvantage, I think this gives people what they want:
they can specify the encoding of the input, and they get the results not
only csv-separated, but also unicode-decoded. This approach is the same
as the one used for Python source code encodings: the source is first
recoded into UTF-8, then parsed, then recoded back.
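The property that strategy 2 relies on can be checked directly: every CSV markup character encodes to a single byte in UTF-8, and every byte of a multi-byte UTF-8 sequence has the high bit set, so a byte-oriented parser can never mistake part of an encoded character for a delimiter. A quick sketch (the sample characters are arbitrary):

```python
# CSV markup characters: quote, apostrophe, colon, backslash, comma,
# and the line-break characters.
for ch in '"\':\\,\r\n':
    assert len(ch.encode('utf-8')) == 1

# Bytes of multi-byte UTF-8 sequences never fall in the ASCII range:
for ch in '\u00e9\u2026\u6c49':   # e-acute, ellipsis, a CJK character
    encoded = ch.encode('utf-8')
    assert len(encoded) > 1
    assert all(b >= 0x80 for b in encoded)
```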
That said, I'm hardly a unicode expert, so I
may be overlooking something (could a utf-16 encoded character span a
line break, for example).
This cannot happen: \r, in UTF-16, is also 2 bytes (0D 00, if UTF-16LE).
There is the issue that Unicode has additional line break characters,
but that is probably irrelevant here.
Regards,
Martin


Re: [Python-Dev] csv module TODO list

2005-01-06 Thread Anders J. Munch
Andrew McNamara wrote:
> 
> I'm not altogether sure there. The parsing state machine is all
> written in C, and deals with signed chars - I expect we'll need two
> versions of that (or one version that's compiled twice using
> pre-processor macros). Quite a large job. Suggestions gratefully
> received.

How about using UTF-8 internally?  Change nothing in _csv.c, but in
csv.py encode/decode any unicode strings into UTF-8 on the way to/from
_csv.  File-like objects passed in by the user can be wrapped in
proxies that take care of encoding and decoding user strings, as well
as trans-coding between UTF-8 and the user's chosen file encoding.

All that coding work may slow things down, but your original fast _csv
module will still be there when you need it.

- Anders


RE: [Python-Dev] Re: Subscribing to PEP updates

2005-01-06 Thread Chermside, Michael
> Why not just subscribe to
> python-checkins and filter out everything *but* nondist/peps?

But there are lots of people who might be interested in
following PEP updates but not other checkins. Pretty
much anyone who considers themselves a "user" of Python
rather than a developer. Perhaps they don't even know C. That's a
lot to filter through for such people. (After all, I
sure HOPE that only a small fraction of checkins are for
PEPs rather than code.)

I'm +0 on it... but I'll mention that if such a list were
created I'd subscribe. So maybe that's +0.2 instead.

-- Michael Chermside





[Python-Dev] Re: super() harmful?

2005-01-06 Thread Guido van Rossum
> Please notice that I'm talking about concrete, real issues, not just a
> "super is bad!" rant.

Then why is the title "Python's Super Considered Harmful" ???

Here's my final offer.  Change the title to something like "Multiple
Inheritance Pitfalls in Python" and nobody will get hurt.

> They are not inherent in cooperative
> multiple inheritance, but occur mostly because of its late addition to python,

Would you rather not have seen it (== cooperative inheritance) added at all?

> and the cumbersome way in which you have to invoke super.

Given Python's dynamic nature I couldn't think of a way to make it
less cumbersome. I see you tried (see below) and couldn't either. At
this point I tend to say "put up or shut up."

> I wrote up the page as part of an investigation into converting Twisted
> to use super. I thought it would be a good idea to do the conversion,
> but others told me it would be a bad idea for backwards compatibility
> reasons. I did not believe, at first, and conducted experiments. In the
> end, I concluded that it is not possible, because of the issues with
> mixing the new and old paradigm.

So it has nothing to do with the new paradigm, just with backwards
compatibility. I appreciate those issues (more than you'll ever know)
but I don't see why you should try to discourage others from using the
new paradigm, which is what your article appears to do.

> Leaving behind the backwards compatibility issues...
> 
> In order to make super really nice, it should be easier to use right.
> Again, the two major issues that cause problems are: 1) having to
> declare every method with *args, **kwargs, and having to pass those and
> all the arguments you take explicitly to super,

That's only an issue with __init__ or with code written without
cooperative MI in mind. When using cooperative MI, you shouldn't
redefine method signatures, and all is well.

> and 2) that
> traditionally __init__ is called with positional arguments.

Cooperative MI doesn't have a really good solution for __init__.
Defining and calling __init__ only with keyword arguments is a good
solution. But griping about "traditionally" is a backwards
compatibility issue, which you said you were leaving behind.
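A sketch of that keyword-argument discipline in modern-Python spelling (the class names and attributes are invented): each __init__ consumes its own keywords and forwards the rest, so arguments flow through the MRO without every class having to know about them.

```python
class A:
    def __init__(self, *, a, **kw):
        self.a = a
        super().__init__(**kw)   # pass everything else along the MRO

class B:
    def __init__(self, *, b, **kw):
        self.b = b
        super().__init__(**kw)

class C(A, B):
    pass

# B receives b even though A never mentions it:
c = C(a=1, b=2)
assert (c.a, c.b) == (1, 2)
```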

> To fix #1, it would be really nice if you could write code something
> like the following snippet. Notice especially here that the 'bar'
> argument gets passed through C.__init__ and A.__init__, into
> D.__init__, without the previous two having to do anything about it.
> However, if you ask me to detail how this could *possibly* *ever* work
> in python, I have no idea. Probably the answer is that it can't.

Exactly. What is your next_method statement supposed to do?

No need to reply except when you've changed the article. I'm tired of
the allegations.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] buffer objects [was: an idea for improving struct.unpack api]

2005-01-06 Thread James Y Knight
On Jan 6, 2005, at 7:22 AM, Thomas Heller wrote:
(*) Which brings me to the questions I have in my mind for quite some
time: Why is readinto undocumented, and what about the status of the
buffer object: do the recent fixes to the buffer object change it's
status?
I, for one, would be very unhappy if the byte buffer object were to go 
away. It's quite useful.

I didn't even realize readinto existed. It'd be great to add more of 
them. os.readinto for reading from fds and socket.socket.recvinto for 
reading from sockets. Is there any reason the writable buffer interface 
isn't exposed to python-land?

James


Re: [Python-Dev] Re: super() harmful?

2005-01-06 Thread Phillip J. Eby
At 02:46 AM 1/6/05 -0500, James Y Knight wrote:
To fix #1, it would be really nice if you could write code something like 
the following snippet. Notice especially here that the 'bar' argument gets 
passed through C.__init__ and A.__init__, into D.__init__, without the 
previous two having to do anything about it. However, if you ask me to 
detail how this could *possibly* *ever* work in python, I have no idea. 
Probably the answer is that it can't.

class A(object):
    def __init__(self):
        print "A"
        next_method

class B(object):
    def __init__(self):
        print "B"
        next_method
Not efficiently, no, but it's *possible*.  Just write a 'next_method()' 
routine that walks the frame stack and self's MRO, looking for a 
match.  You know the method name from f_code.co_name, and you can check 
each class' __dict__ until you find a function or classmethod object whose 
code is f_code.  If not, move up to the next frame and try again.  Once 
you know the class that the function comes from, you can figure out the 
"next" method, and pull its args from the calling frame's args, walking 
backward to other calls on the same object, until you find all the args you 
need.  Oh, and don't forget to make sure that you're inspecting frames that 
have the same 'self' object.

Of course, the result would be a hideous evil ugly hack that should never 
see the light of day, but you could *do* it, if you *really really* wanted 
to.  And if you wrote it in C, it might be only 50 or 100 times slower than 
super().  :)
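A toy, deliberately simplified rendering of that hack (it handles only no-argument methods, skipping the argument reconstruction described above, and is certainly not for production use):

```python
import sys

def next_method(self):
    """Find which class defined the calling method by matching code
    objects against self's MRO, then call the next implementation."""
    frame = sys._getframe(1)
    code = frame.f_code
    mro = type(self).__mro__
    for i, cls in enumerate(mro):
        func = cls.__dict__.get(code.co_name)
        if func is not None and getattr(func, '__code__', None) is code:
            # Found the defining class; call the next one that implements it.
            for nxt in mro[i + 1:]:
                impl = nxt.__dict__.get(code.co_name)
                if impl is not None:
                    return impl(self)
            return None
    raise RuntimeError('calling method not found in any base class')

calls = []

class A:
    def __init__(self):
        calls.append('A')
        next_method(self)

class B:
    def __init__(self):
        calls.append('B')
        next_method(self)

class C(A, B):
    def __init__(self):
        calls.append('C')
        next_method(self)

C()
assert calls == ['C', 'A', 'B']
```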



RE: [Python-Dev] Re: Subscribing to PEP updates

2005-01-06 Thread Barry Warsaw
On Thu, 2005-01-06 at 11:33, Chermside, Michael wrote:
> > Why not just subscribe to
> > python-checkins and filter out everything *but* nondist/peps?
> 
> But there are lots of people who might be interested in
> following PEP updates but not other checkins. Pretty
> much anyone who considers themselves a "user" of Python
> not a developer. Perhaps they don't even know C. That's a
> lot to filter through for such people. (After all, I
> sure HOPE that only a small fraction of checkins are for
> PEPs not code.)
> 
> I'm +0 on it... but I'll mention that if such a list were
> created I'd subscribe. So maybe that's +0.2 instead.

As an experiment, I just added a PEP topic to the python-checkins
mailing list.  You could subscribe to this list and just select the PEP
topic (which matches the regex "PEP" in the Subject header or first few
lines of the body).

Give it a shot and let's see if that does the trick.
-Barry





[Python-Dev] Re: super() harmful?

2005-01-06 Thread Terry Reedy

"James Y Knight" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> Please notice that I'm talking about concrete, real issues, not just a 
> "super is bad!" rant.

Umm, James, come on.  Let's be really real and concrete ;-).

Your title "Python's Super Considered Harmful" is an obvious reference to 
and takeoff on Dijkstra's influential polemic "Goto Considered Harmful".

To me, the obvious message therefore is that super(), like goto, is an 
ill-conceived monstrosity that warps peoples' minds and should be banished. 
I can also see a slight dig at Guido for introducing such a thing decades 
after Dijkstra taught us to know better.

If that is your summary message for me, fine.  If not, try something else. 
The title of a piece is part of its message -- especially when it has an 
intelligible meaning.  For people who read the title in, for instance, a 
clp post (as I did), but don't follow the link and read what is behind the 
title (which I did do), the title *is* the message.

Terry J. Reedy





Re: [Python-Dev] Re: super() harmful?

2005-01-06 Thread Bill Janssen
> Then why is the title "Python's Super Considered Harmful" ???
> 
> Here's my final offer.  Change the title to something like "Multiple
> Inheritance Pitfalls in Python" and nobody will get hurt.

Or better yet, considering the recent thread on Python marketing,
"Multiple Inheritance Mastery in Python" :-).

Bill


Re: [Python-Dev] Subscribing to PEP updates

2005-01-06 Thread Brett C.
Nick Coghlan wrote:
Someone asked on python-list about getting notifications of changes to 
PEP's.

As a low-effort solution, would it be possible to add a Sourceforge 
mailing list hook just for checkins to the nondist/peps directory?

Call it python-pep-updates or some such beast. If I remember how checkin 
notifications work correctly, the updates would even come with automatic 
diffs :)

Probably not frequent or comprehensive enough, but I try to always have at 
least a single news item that clumps all PEP updates that python-dev gets 
notified about.

-Brett


Re: [Python-Dev] an idea for improving struct.unpack api

2005-01-06 Thread Bob Ippolito
On Jan 6, 2005, at 8:17, Michael Hudson wrote:
Ilya Sandler <[EMAIL PROTECTED]> writes:
A problem:
The current struct.unpack api works well for unpacking C-structures 
where
everything is usually unpacked at once, but it
becomes inconvenient when unpacking binary files, where things
often have to be unpacked field by field. Then one has to keep track
of offsets, slice the strings, call struct.calcsize(), etc...
IMO (and E), struct.unpack is the primitive atop which something more
sensible is built.  I've certainly tried to build that more sensible
thing at least once, but haven't ever got to the point of believing what
I had would be applicable to the general case... maybe it's time to
write such a thing for the standard library.
This is my ctypes-like attempt at a high-level interface for struct.  
It works well for me in macholib:  
http://svn.red-bean.com/bob/py2app/trunk/src/macholib/ptypes.py
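For the simpler offset-tracking problem Ilya describes, a minimal sketch of such a higher-level wrapper might look like the following (the `Unpacker` name and API here are hypothetical illustrations, not macholib's interface):

```python
import struct


class Unpacker:
    """Unpack fields one at a time, keeping track of the current offset."""

    def __init__(self, data, offset=0):
        self.data = data
        self.offset = offset

    def unpack(self, fmt):
        # Slice out exactly the bytes this format needs, then advance.
        size = struct.calcsize(fmt)
        fields = struct.unpack(fmt, self.data[self.offset:self.offset + size])
        self.offset += size
        return fields
```

Each call consumes only as many bytes as the format requires, so a binary file slurped into memory can be decoded field by field without manual offset arithmetic.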

-bob


Re: [Python-Dev] Re: super() harmful?

2005-01-06 Thread Tim Peters
[Guido]
>> Then why is the title "Python's Super Considered Harmful" ???
>>
>> Here's my final offer.  Change the title to something like "Multiple
>> Inheritance Pitfalls in Python" and nobody will get hurt.

[Bill Janssen]
> Or better yet, considering the recent thread on Python marketing,
> "Multiple Inheritance Mastery in Python" :-).

I'm sorry, but that's not good marketing -- it contains big words, and
putting the brand name last is ineffective.  How about

Python's Super() is Super -- Over 1528.7% Faster than C!

BTW, it's important that fractional percentages end with an odd digit.
 Research shows that if the last digit is even, 34.1% of consumers
tend to suspect the number was made up.


Re: [Python-Dev] proto-pep: How to change CPython's bytecode

2005-01-06 Thread Brett C.
OK, latest update with all suggested revisions (mentions that this is for CPython, 
and adds a section on known previous bytecode work).

If no one has any revisions I will submit to David for official PEP acceptance 
this weekend.

--
PEP: XXX
Title: How to change CPython's bytecode
Version: $Revision: 1.4 $
Last-Modified: $Date: 2003/09/22 04:51:50 $
Author: Brett Cannon <[EMAIL PROTECTED]>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: XX-XXX-
Post-History: XX-XXX-
Abstract
========

Python source code is compiled down to something called bytecode.  This
bytecode must implement enough semantics to perform the actions required by the
Language Reference [#lang-ref]_.  As such, knowing how to properly add, remove,
or change the bytecode is important when changing the abilities of the Python
language.
This PEP covers how to accomplish this in the CPython implementation of the
language (referred to as simply "Python" for the rest of this PEP).
Rationale
=========
While changing Python's bytecode is not a frequent occurrence, it still happens.
Having the required steps documented in a single location should make
experimentation with the bytecode easier since it is not necessarily obvious
what the steps are to change the bytecode.
This PEP, paired with PEP 306 [#PEP-306]_, should provide enough basic
guidelines for handling any changes performed to the Python language itself in
terms of syntactic changes that introduce new semantics.
Checklist
=========
This is a rough checklist of what files need to change and how they are
involved with the bytecode.  All paths are given from the viewpoint of
``/cvsroot/python/dist/src`` (from CVS).  This list should not be considered
exhaustive, nor to cover all possible situations.
- ``Include/opcode.h``
This include file lists all known opcodes and associates each opcode
name with a unique number.  When adding a new opcode it is important to
take note of the ``HAVE_ARGUMENT`` value.  This ``#define`` marks the
point at which opcodes begin to take an argument: all opcodes with a
value greater than or equal to ``HAVE_ARGUMENT`` take one.
- ``Lib/opcode.py``
Lists all of the opcodes and their associated value.  Used by the dis
module [#dis]_ to map bytecode values to their names.
- ``Python/ceval.c``
Contains the main interpreter loop.  Code to handle the evaluation of a
new opcode goes here.
- ``Python/compile.c``
To make sure an opcode is actually used, this file must be altered.
The emitting of all bytecode occurs here.
- ``Lib/compiler/pyassem.py``, ``Lib/compiler/pycodegen.py``
The 'compiler' package [#compiler]_ needs to be altered to also reflect
any changes to the bytecode.
- ``Doc/lib/libdis.tex``
The documentation [#opcode-list]_ for the dis module contains a complete
list of all the opcodes.
- ``Python/import.c``
Defines the magic word (named ``MAGIC``) used in .pyc files to detect if
the bytecode used matches the one used by the version of Python running.
This number needs to be changed to make sure that the running
interpreter does not try to execute bytecode that it does not know
about.
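As a cross-check on the files above, the dis module shows which opcodes the compiler actually emits. A quick sketch (opcode names and values vary between Python versions, so treat the exact output as illustrative):

```python
import dis

def add_one(x):
    return x + 1

# Human-readable disassembly of the compiled bytecode.
dis.dis(add_one)

# The same information, programmatically (dis.get_instructions is
# available since Python 3.4).
names = [ins.opname for ins in dis.get_instructions(add_one)]
print(names)
```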
Suggestions for bytecode development
====================================
A few things can be done to make sure that development goes smoothly when
experimenting with Python's bytecode.  One is to delete all .py(c|o) files
after each semantic change to ``Python/compile.c``.  That way all files will
pick up any bytecode changes.
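Deleting the stale compiled files can be done in one sweep from the top of the source tree; a sketch:

```shell
# Remove all stale .pyc/.pyo files so that every module is recompiled
# with the new bytecode on the next import.
find . -name '*.py[co]' -exec rm -f {} \;
```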
Make sure to run the entire testing suite [#test-suite]_.  Since the
``regrtest.py`` driver recompiles all source code before a test is run, it acts
as a good test to make sure that no existing semantics are broken.
Running parrotbench [#parrotbench]_ is also a good way to make sure existing
semantics are not broken; this benchmark is practically a compliance test.
Previous experiments
====================
Skip Montanaro presented a paper at a Python workshop on a peephole optimizer
[#skip-peephole]_.
Michael Hudson has a non-active SourceForge project named Bytecodehacks
[#Bytecodehacks]_ that provides functionality for playing with bytecode
directly.
References
==========
.. [#lang-ref] Python Language Reference, van Rossum & Drake
   (http://docs.python.org/ref/ref.html)
.. [#PEP-306] PEP 306, How to Change Python's Grammar, Hudson
   (http://www.python.org/peps/pep-0306.html)
.. [#dis] dis Module
   (http://docs.python.org/lib/module-dis.html)
.. [#test-suite] 'test' package
   (http://docs.python.org/lib/module-test.html)
.. [#parrotbench] Parrotbench
   (ftp://ftp.python.org/pub/python/parrotbench/parrotbench.tgz,
   http://mail.python.org/pipermail/python-dev/2003-December/041527.html)
.. [#opcode-list] Python Byte Code Instructions
   (http://docs.python.org/lib/bytecodes.html)
.. [#skip-peephole]
http://www.foretec.com/python/workshops/1998-11/proceed

Re: [Python-Dev] 2.3.5 schedule, and something I'd like to get in

2005-01-06 Thread Ronald Oussoren
On 6-jan-05, at 14:04, Jack Jansen wrote:
On 6 Jan 2005, at 00:49, Martin v. Löwis wrote:
The "new" solution is basically to go back to the Unix way of 
building  an extension: link it against nothing and sort things out 
at runtime.  Not my personal preference, but at least we know that 
loading an  extension into one Python won't bring in a fresh copy of 
a different  interpreter or anything horrible like that.
This sounds good, except that it only works on OS X 10.3, right?
What about older versions?
10.3 or later. For older OSX releases (either because you build Python 
on 10.2 or earlier, or because you've set MACOSX_DEPLOYMENT_TARGET to 
a value of 10.2 or less) we use the old behaviour of linking with 
"-framework Python".
Wouldn't it be better to link with the actual dylib inside the 
framework on 10.2? Otherwise you can no longer build 2.3 extensions 
after you've installed 2.4.

Ronald


Re: [Python-Dev] 2.3.5 schedule, and something I'd like to get in

2005-01-06 Thread "Martin v. Löwis"
Ronald Oussoren wrote:
Wouldn't it be better to link with the actual dylib inside the framework 
on 10.2? Otherwise you can no longer build 2.3 extensions after you've 
installed 2.4.
That's what I thought, too.
Regards,
Martin


Re: [Python-Dev] 2.3.5 schedule, and something I'd like to get in

2005-01-06 Thread Bob Ippolito
On Jan 6, 2005, at 14:59, Ronald Oussoren wrote:
On 6-jan-05, at 14:04, Jack Jansen wrote:
On 6 Jan 2005, at 00:49, Martin v. Löwis wrote:
The "new" solution is basically to go back to the Unix way of 
building  an extension: link it against nothing and sort things out 
at runtime.  Not my personal preference, but at least we know that 
loading an  extension into one Python won't bring in a fresh copy 
of a different  interpreter or anything horrible like that.
This sounds good, except that it only works on OS X 10.3, right?
What about older versions?
10.3 or later. For older OSX releases (either because you build 
Python on 10.2 or earlier, or because you've set 
MACOSX_DEPLOYMENT_TARGET to a value of 10.2 or less) we use the old 
behaviour of linking with "-framework Python".
Wouldn't it be better to link with the actual dylib inside the 
framework on 10.2? Otherwise you can no longer build 2.3 extensions 
after you've installed 2.4.
It would certainly be better to do this for 10.2.
-bob


[Python-Dev] Changing the default value of stat_float_times

2005-01-06 Thread "Martin v. Löwis"
When support for floating-point stat times was added in 2.3,
it was the plan that this should eventually become the default.
Does anybody object if I change the default now, for Python 2.5?
Applications which then break can globally change it back, with
os.stat_float_times(False)
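For the record, this change did land: modern Python reports float timestamps unconditionally (os.stat_float_times itself was deprecated and eventually removed), with nanosecond-precision integers available separately. A quick check:

```python
import os

st = os.stat(os.getcwd())
# st_mtime is a float in modern Python; the integer nanosecond
# counterpart lives in st_mtime_ns (available since Python 3.3).
print(type(st.st_mtime), type(st.st_mtime_ns))
```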
Regards,
Martin


Re: [Python-Dev] Re: super() harmful?

2005-01-06 Thread Alex Martelli
On 2005 Jan 06, at 20:16, Terry Reedy wrote:
"James Y Knight" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Please notice that I'm talking about concrete, real issues, not just a
"super is bad!" rant.
Umm, James, come on.  Let's be really real and concrete ;-).
Your title "Python's Super Considered Harmful" is an obvious reference 
to
and takeoff on Dijkstra's influential polemic "Goto Considered 
Harmful".
...or any other of the 345,000 google hits on "considered harmful"...?-)

Alex


[Python-Dev] Re: an idea for improving struct.unpack api

2005-01-06 Thread Martin Bless
On Wed, 5 Jan 2005 21:27:16 -0800 (PST), Ilya Sandler
<[EMAIL PROTECTED]> wrote:

>The current struct.unpack api works well for unpacking C-structures where
>everything is usually unpacked at once, but it
>becomes  inconvenient when unpacking binary files where things
>often have to be unpacked field by field.

It may be helpful to remember Sam Rushings NPSTRUCT extension which
accompanied the Calldll module of that time (2001). Still available
from

http://www.nightmare.com/~rushing/dynwin/

mb - Martin



RE: [Python-Dev] Re: super() harmful?

2005-01-06 Thread Delaney, Timothy C (Timothy)
Guido van Rossum wrote:

>> and the cumbersome way in which you have to invoke super.
> 
> Given Python's dynamic nature I couldn't think of a way to make it
> less cumbersome. I see you tried (see below) and couldn't either. At
> this point I tend to say "put up or shut up."

Well, there's my autosuper recipe you've seen before:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/286195

which does basically what Philip describes ...

>>> import autosuper
>>>
>>> class A (autosuper.autosuper):
...     def test (self, a):
...         print 'A.test: %s' % (a,)
...
>>> class B (A):
...     def test (self, a):
...         print 'B.test: %s' % (a,)
...         self.super(a + 1)
...
>>> class C (A):
...     def test (self, a):
...         print 'C.test: %s' % (a,)
...         self.super.test(a + 1)
...
>>> class D (B, C):
...     def test (self, a):
...         print 'D.test: %s' % (a,)
...         self.super(a + 1)
...
>>> D().test(1)
D.test: 1
B.test: 2
C.test: 3
A.test: 4

It uses sys._getframe() of course ...

Tim Delaney


[Python-Dev] Re: csv module TODO list

2005-01-06 Thread Skip Montanaro

Magnus> Quite a while ago I posted some material to the csv-list about
Magnus> problems using the csv module on Unix-style colon-separated
Magnus> files -- it just doesn't deal properly with backslash escaping
Magnus> and is quite useless for this kind of file. I seem to recall the
Magnus> general view was that it wasn't intended for this kind of thing
Magnus> -- only the sort of csv that Microsoft Excel outputs/inputs, 

Yes, that's my recollection as well.  It's possible that we can extend the
interpretation of the escape char.

Magnus> I'll be happy to re-send or summarize the relevant emails, if
Magnus> needed.

Yes, that would be helpful.  Can you send me an example (three or four
lines) of the sort of file it won't grok?

Skip


[Python-Dev] Re: [Csv] csv module TODO list

2005-01-06 Thread Skip Montanaro

>> * is CSV going to be maintained outside the python tree?
>> If not, remove the 2.2 compatibility macros for: PyDoc_STR,
>> PyDoc_STRVAR, PyMODINIT_FUNC, etc.

Andrew> Does anyone thing we should continue to maintain this 2.2
Andrew> compatibility?

With the release of 2.4, 2.2 has officially dropped off the radar screen
(there is now zero probability of a 2.2.n+1 release, though it was
vanishingly small before).  I'd say toss it.  Do just that in a single
checkin, so someone who's interested can do a simple cvs diff to yield
an initial patch file for external maintenance of that feature.

>> * inline the following functions since they are used only in one
>> place get_string, set_string, get_nullchar_as_None,
>> set_nullchar_as_None, join_reset (maybe)

Andrew> It was done that way as I felt we would be adding more getters
Andrew> and setters to the dialect object in future.

The only new dialect attribute I envision is an encoding attribute.

>> * is it necessary to have Dialect_methods, can you use 0 for tp_methods?

Andrew> I was assuming I would need to add methods at some point (in
Andrew> fact, I did have methods, but removed them).

Dialect objects are really just data containers, right?  I don't see that
they would need any methods.

>> * remove commented out code (PyMem_DEL) on line 261
>> Have you used valgrind on the test to find memory overwrites/leaks?

Andrew> No, valgrind wasn't used.

I have it here at work.  I'll try to find a few minutes to run the csv tests
under valgrind's control.

Skip


[Python-Dev] Re: Re: super() harmful?

2005-01-06 Thread Terry Reedy

"Alex Martelli" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
>
> On 2005 Jan 06, at 20:16, Terry Reedy wrote:
>
>> [Knight's] title "Python's Super Considered Harmful" is an obvious 
>> reference to
>> and takeoff on Dijkstra's influential polemic "Go To Statement 
>> Considered Harmful". http://www.acm.org/classics/oct95/
[title corrected from original posting and link added]

> ...or any other of the 345,000 google hits on "considered harmful"...?-)

Restricting the search space to 'Titles of computer science articles' would 
reduce the number of hits considerably.  Many things have been considered 
harmful at some time in almost every field of human endeavor.  However, 
according to

Eric Meyer, "Considered Harmful" Essays Considered Harmful

even that restriction would lead to thousands of hits inspired directly or 
indirectly by Niklaus Wirth's title for Dijkstra's Letter to the Editor. 
Thanks for the link.

Terry J. Reedy





[Python-Dev] Re: [Csv] csv module TODO list

2005-01-06 Thread Andrew McNamara
>There's a bunch of jobs we (CSV module maintainers) have been putting
>off - attached is a list (in no particular order): 
[...]
>Also, review comments from Jeremy Hylton, 10 Apr 2003:
>
>I've been reviewing extension modules looking for C types that should
>participate in garbage collection.  I think the csv ReaderObj and
>WriterObj should participate.  The ReaderObj contains a reference to
>input_iter that could be an arbitrary Python object.  The iterator
>object could well participate in a cycle that refers to the ReaderObj.
>The WriterObj has a reference to a writeline callable, which could well
>be a method of an object that also points to the WriterObj.

I finally got around to looking at this, only to realise Jeremy did the
work back in Apr 2003 (thanks). One question, however - the GC doco in
the Python/C API seems to suggest to me that PyObject_GC_Track should be
called on the newly minted object prior to returning from the initialiser
(and correspondingly PyObject_GC_UnTrack should be called prior to
dismantling). This isn't being done in the module as it stands. Is the
module wrong, or is my understanding of the reference manual incorrect?
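The cycle Jeremy describes is easy to provoke; with the reader participating in GC, a cycle through the input iterator is collectable. A sketch against the modern csv module (the `Holder`/`make_cycle` names are illustrative):

```python
import csv
import gc
import weakref


class Holder:
    pass


def make_cycle():
    h = Holder()

    def rows():
        # The generator's closure keeps a reference back to h ...
        yield 'a,b\r\n'
        _ = h

    # ... and h keeps a reference to the reader, which holds the
    # generator as its input iterator: a reference cycle.
    h.reader = csv.reader(rows())
    return weakref.ref(h)

ref = make_cycle()
gc.collect()
print(ref() is None)  # True only if the reader participates in GC
```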

-- 
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/


[Python-Dev] Minor change to behaviour of csv module

2005-01-06 Thread Andrew McNamara
I'm considering a change to the csv module that could potentially break
some obscure uses of the module (but CSV files usually quote, rather
than escape, so the most common uses aren't affected).

Currently, with a non-default escapechar='\\', input like:

field one,field \
two,field three

Returns:

["field one", "field \\\ntwo", "field three"]

In the 2.5 series, I propose changing this to return:

["field one", "field \ntwo", "field three"]

Is this reasonable? Is the old behaviour desirable in any way (we could
add a switch to enable the new behaviour, but I feel that would only
allow the confusion to continue)?
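The proposed behaviour is what the csv module ships today; a sketch reproducing the example above with the modern reader:

```python
import csv
import io

# An escaped (backslash-continued) newline inside an unquoted field.
data = 'field one,field \\\ntwo,field three\r\n'

# With escapechar set, the backslash is consumed and the escaped
# newline is kept as part of the field.
rows = list(csv.reader(io.StringIO(data), escapechar='\\'))
print(rows)
```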

BTW, some of my other changes have changed the exceptions raised when
bad arguments were passed to the reader and writer factory functions - 
previously, the exceptions were semi-random, including TypeError,
AttributeError and csv.Error - they should now almost always be TypeError
(like most other argument passing errors). I can't see this being a
problem, but I'm prepared to listen to arguments.

-- 
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/