[Python-Dev] Community buildbots (was Re: User's complaints)

2006-07-13 Thread glyph
On Wed, 12 Jul 2006 10:30:17 +1000, Michael Ellerman [EMAIL PROTECTED] wrote:

Well here's one I stumbled across the other day. I don't know if it's
legit, but it's still bad PR:

http://www.gbch.net/gjb/blog/software/discuss/python-sucks.html

Having been exhorted (or maybe I mean excoriated) by your friendly release 
manager earlier this week to post my comments and criticisms about Python here
rather than vent in random IRC chatter, I feel morally compelled to comment.

I see some responses to that post which indicate that the specific bug will be
fixed, and that's good, but there is definitely a pattern he's talking about
here, not just one issue.  I think there is a general pattern of small,
difficult to detect breakages in Python.  Twisted (and the various other
Divmod projects that I maintain) are thoroughly unit-tested, but there are
still significant issues in each Python release that require changes.

Unlike the jerk who posted that "python sucks" rant, I'm not leaving Python
because one function changed in a major release; I do expect to have to
maintain my projects myself, and major releases are major for a reason.  I
only wish that upgrading all my other dependencies were as easy as upgrading
Python!  I do see that the developers are working with some caution to avoid
breaking code, and attempting to consider the cost of each change.

However, although I've seen lots of discussions of what average code might
break when exposed to new versions of Python, these discussions tend to be
entirely hypothetical.  When considering major changes, do the core Python
developers typically run the test suites of major Python applications such as
Zope, Plone, TurboGears, and of course Twisted?  Not that breakages would be
considered unacceptable -- some gain is surely worth the cost -- but to
establish some empirical level of burden on the community?

I would like to propose, although I certainly don't have time to implement,
a program by which Python-using projects could contribute buildslaves which
would run their projects' tests with the latest Python trunk.  This would
provide two useful incentives: Python code would gain a reputation as
generally well-tested (since there is a direct incentive to write tests for
your project: get notified when core python changes might break it), and the
core developers would have instant feedback when a small change breaks more
code than it was expected to.
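
To make the idea concrete, here is a rough sketch of what each contributed
buildslave would effectively run; the checkout paths, project name, and test
command are placeholders, not a real buildbot configuration:

#!/usr/bin/env python
# Sketch only: build the latest CPython trunk, then run one project's
# test suite against it, exiting nonzero so the build step shows up red.
import subprocess
import sys

def run(cmd, cwd=None):
    print ">>>", " ".join(cmd)
    return subprocess.call(cmd, cwd=cwd)

def main():
    # 1. update and build Python trunk (placeholder checkout directory)
    if run(["svn", "up"], cwd="python-trunk"):
        sys.exit(1)
    if run(["./configure"], cwd="python-trunk") or run(["make"], cwd="python-trunk"):
        sys.exit(1)
    # 2. the interesting part: the project's own tests, run with trunk Python
    sys.exit(run(["../python-trunk/python", "run_project_tests.py"],
                 cwd="my-project"))

if __name__ == "__main__":
    main()

The value is not in these dozen lines but in running them continuously,
against trunk, for many projects at once.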

I can see that certain Python developers expect that some of this work is the
responsibility of the user community, and to some extent that's true, but at
least half of the work needs to be done _before_ the changes are made.  If
some Python change breaks half of Twisted, I would like to know about it in
time to complain about the implementation, rather than flailing around once
the Python feature-freeze has set in and hoping that it's nothing too serious.
For example, did anyone here know that the new-style exceptions stuff in 2.5
caused hundreds of unit-test failures in Twisted?  I am glad the change was
made, and one of our users did catch it, so the process isn't fatally broken,
but it is still worrying.

Another problem is simply that the Python developers don't have the same
very public voice that other language developers do.  It doesn't necessarily
have to be a blog, but python-dev is fast-paced and intimidating, and a
vehicle for discussion among those in the know, rather than dissemination to
the community at large.  It's a constant source of anxiety to me that I might
miss some key feature addition to Python which breaks or distorts some key bit
of Twisted functionality (as the new-style exceptions, or recent ImportWarning
almost did) because I don't have enough time to follow all the threads here.
I really appreciate the work that Steve Bethard et al. are doing on the
python-dev summaries, but they're still pretty dry and low-level.

While the new python.org is very nice, I do note that there's no "blogs" entry
on the front page, something which has become a fixture on almost every other 
website I visit regularly.  The news page is not very personal, mainly a 
listing of releases and events.  There's no forum for getting the community 
_excited_ about new features (especially new features which are explicitly 
enabled by potential breakages), and selling them on the cool uses.  Who 
knows, maybe I'll even start using decorator syntax all over the place if I 
see them presented somewhere by someone who is very excited about the feature 
and thinks it's worthwhile, rather than as a defense on a mailing list 
against a criticism, or a simple announcement of the feature's existence.

I've seen the other side of this problem as well, so I can appreciate that it
is quite difficult to get this kind of thing right: lots of applications using
Twisted break when we change broken or deprecated APIs.  Twisted is lucky
though; it has numerous subprojects, and I maintain a half-dozen unrelated
projects in a different 

Re: [Python-Dev] Community buildbots

2006-07-13 Thread glyph
On Thu, 13 Jul 2006 19:19:08 +0100, Michael Hudson [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:

 For example, did anyone here know that the new-style exceptions stuff in 2.5
 caused hundreds of unit-test failures in Twisted?  I am glad the change was
 made, and one of our users did catch it, so the process isn't fatally broken,
 but it is still worrying.

When implementing this stuff, I could have (... snip ...)

To be clear, I agree with the decision you made in this particular case.  I
just would have appreciated the opportunity to participate in the
discussion before the betas were out and the featureset frozen.  (Of course I
*can* always do that, and some other Twisted devs watch python-dev a bit more
closely than I do, but the point is that the amount of effort required to do
this is prohibitive for the average Python hacker, whereas the time to set up an
individual buildbot might not be.)

(Aside: IMHO, the sooner we can drop old-style classes entirely, the better.
That is one bumpy Python upgrade process that I will be _very_ happy to do.
There's no way to have documentation that expresses the requirement that an 
implementation of an interface be new-style or old-style without reference to 
numerous historical accidents, which are bewildering and upsetting to people
reading documentation for the first time.)


Re: [Python-Dev] Community buildbots (was Re: User's complaints)

2006-07-13 Thread glyph


On Thu, 13 Jul 2006 11:29:16 -0700, Aahz [EMAIL PROTECTED] wrote:

There's been some recent discussion in the PSF wondering where it would
make sense to throw some money to remove grit in the wheels; do you think
this is a case where that would help?

Most likely yes.  It's not a huge undertaking, and there are a lot of people out
there in the community with the knowledge of Buildbot to make this happen.


Re: [Python-Dev] Community buildbots (was Re: User's complaints)

2006-07-14 Thread glyph
On Fri, 14 Jul 2006 06:46:55 -0500, [EMAIL PROTECTED] wrote:

Neal> How often is the python build broken or otherwise unusable?

Not very often.

I have to agree.  The effort I'm talking about is not in fixing large numbers
of problems, but simply gearing up to properly test to see if there *are*
problems.  Keep in mind though: just because the problems are small or easy
to fix doesn't mean they're not severe.  One tiny bug can prevent a program
from even starting up.

Admittedly, I'm not as sophisticated a user as Fredrik or Glyph, but I
suspect that my usage of the language isn't all that different from most
Python developers out there.

A huge percentage of Python developers are working with Zope, which means that
although *their* code might not be terribly sophisticated, it is
nevertheless sitting on top of a rather large and intricate pile of implicit
dependencies on interpreter behavior.

Neal> Is part of your point that these developers only care about
Neal> something called release and they won't start testing before
Neal> then?  If that's the case why don't we start making
Neal> (semi-)automated alpha releases every month?

How would that be any easier than a user setting up a read-only repository
and svn-up-ing it once a month then using that as the default interpreter on
that person's development machine?  I maintain interpreters for 2.3, 2.4 and
bleeding edge at the moment.  If I need to it's fairly trivial (a symlink
change) to fall back to the latest stable release.

Glyph, would that sort of scheme work for you?

No.

I really think you're underestimating the sort of effort required to upgrade
Python.

First of all, I do have a job :) and it'd be very hard to make the case to an
investor that it was worth tracking down every potential bug I had to
determine whether it was a problem because I was working with an unreleased
version of Python.  This is the same reason I don't use beta versions of the
kernel or libc for development.

For that matter, I've had to avoid posting to this mailing list because even
*this* is stretching my already overburdened schedule :).

Secondly, I have to work with a few extension modules.  To name a few:
  * ctypes
  * PyPAM
  * pysqlite2
  * zope.interface (C optimizations)
  * xapian
  * pylucene
  * dspam
  * PyOpenSSL
  * PIL
  * PyCrypto
  * pyexpat
  * pygtk
  * pyvte

Recompiling all of these is a project that takes a day.  PyLucene, in fact,
I've never managed to build myself; I can only run it because there happen
to be Debian packages which work (with some fiddling) on Ubuntu.  There's no
interactive warning during 'svn up' that it's time to recompile everything,
either, so I don't even know when the ABIs are going to have changed.

Even if everything works perfectly, and were perfectly automated, the compile 
itself would take a few hours.

I also test my work on Windows on occasion, and recompiling these things _there_
is the work of a week and a half, not to mention it requires that I be sitting
at the one machine where I have my Microsoft™ Developer™ Tools™ installed.

I made the buildbot recommendation specifically because it would centralize
the not inconsiderable effort of integrating these numerous dependencies
(presuming that I could submit a Divmod buildbot as well as a Twisted one).


Re: [Python-Dev] Community buildbots (was Re: User's complaints)

2006-07-14 Thread glyph
On Thu, 13 Jul 2006 23:39:06 -0700, Neal Norwitz [EMAIL PROTECTED] wrote:
On 7/13/06, Fredrik Lundh [EMAIL PROTECTED] wrote:

 a longer beta period gives *external* developers more time to catch up,
 and results in less work for the end users.

This is the part I don't get.  For the external developers, if they
care about compatibility, why aren't they testing periodically,
regardless of alpha/beta releases?  How often is the python build
broken or otherwise unusable?

How often do you test new builds of Python against the most recent alpha of,
e.g. the Linux kernel?  This isn't just a hypothetical question: Twisted has
broken because of changes to Linux as often as it has broken due to changes in
Python :).  In Linux's case we're all lucky because *any* regressions with
existing software are considered bugs, whereas in Python's case, some breakages
are considered acceptable, since it's more feasible to have multiple builds of
Python installed than multiple kernels for different applications.


Re: [Python-Dev] Community buildbots

2006-07-15 Thread glyph


On Sat, 15 Jul 2006 00:13:35 -0400, Terry Reedy [EMAIL PROTECTED] wrote:

Is the following something like what you are suggesting?

Something like it, but...

A Python Application Testing (PAT) machine is set up with buildbot and any
needed custom scripts.  Sometime after that and after 2.5 is released, when
you have a version of, for instance, Twisted that passes its automated test
suite when run on 2.5, you send it (or a URL) and an email address to PAT.
Other developers do the same.  Periodically (once a week?), when PAT is
    [glyph: make that once per checkin to Python trunk]
free and a new green development version of either the 2.5.x or 2.6
branches is available, PAT runs the test suites against that version.  An
email is sent for any that fail, perhaps accompanied by the concatenation
of the relevant checkin message.  Some possible options are to select just
one of the branches for testing, to have more than one stable version being
tested, and to receive pass emails.

Sending email also isn't really necessary; I would just like a web page I can
look at (and draw the attention of the python core developers to).


Re: [Python-Dev] Community buildbots

2006-07-15 Thread glyph
On Sat, 15 Jul 2006 14:35:08 +1000, Nick Coghlan [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
A __future__ import would allow these behaviors to be upgraded module-by- 
module.

No it wouldn't.

Yes it would! :)

__future__ works solely on the semantics of different pieces of syntax, 
because any syntax changes are purely local to the current module.
...
Changing all the literals in a module to be unicode instances instead of str 
instances is merely scratching the surface of the problem - such a module 
would still cause severe problems for any non-Unicode aware applications 
that expected it to return strings.

A module with the given __future__ import could be written to expect that
literals are unicode instances instead of str, and encode them appropriately
when passing to modules that expect str.  This doesn't solve the problem, but
unlike -U, you can make fixes which will work persistently without having to
treat the type of every string literal as unknown.

The obvious way to write code that works under -U and still works in normal
Python is to .encode('charmap') every value intended to be an octet, and put
'u' in front of every string intended to be unicode.  That would seem to
defeat the purpose of changing the default literal type.
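
Concretely, the kind of module I have in mind would look something like the
sketch below; the unicode_literals __future__ name is hypothetical here, a
stand-in for whatever the per-module switch would actually be called:

# Hypothetical: assumes a __future__ switch that makes string literals
# unicode in this module only (spelled "unicode_literals" for the sketch).
from __future__ import unicode_literals

GREETING = "hello, world"        # a unicode instance under this import

def greeting_bytes(encoding="utf-8"):
    # encode explicitly at the boundary when handing data to code
    # that still expects str/bytes
    return GREETING.encode(encoding)

The point is that the encode() calls live at the module's boundaries, chosen
once, instead of the type of every literal being unknown as it is under -U.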



Re: [Python-Dev] Community buildbots

2006-07-15 Thread glyph
On Sat, 15 Jul 2006 10:43:22 +0200, "Martin v. Löwis" [EMAIL PROTECTED] 
wrote:

People can use [-U] to improve the Unicode support in the Python standard
library. When they find that something doesn't work, they can study the
problem, and ponder possible solutions. Then, they can contribute
patches. -U has worked fine for me in the past, I contributed various
patches to make it work better. It hasn't failed for me at all.

I guess it makes more sense as a development tool for working on zero-dependency
code like the standard library.  Still, -Q has both a __future__ import and
a command-line option, why not -U?


Re: [Python-Dev] Dynamic module namespaces

2006-07-17 Thread glyph
On Mon, 17 Jul 2006 10:29:22 -0300, Johan Dahlin [EMAIL PROTECTED] wrote:

I consider __getattribute__ a hack, being able to override __dict__ is less
hackish, IMHO.

Why do you feel one is more hackish than the other?  In my experience the
opposite is true: certain C APIs expect __dict__ to be a real dictionary,
and if you monkey with it they won't call the overridden functions you expect,
whereas things accessing attributes will generally call through all the
appropriate Python-level APIs.

This makes sense to me for efficiency reasons and for clarity as well; if you're
trawling around in a module's __dict__ then you'd better be ready for what
you're going to get - *especially* if the module is actually a package.  Even in
normal Python code, a package can have attribute names which would be bound if
the corresponding submodules were imported, but aren't yet.
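
To illustrate the attribute-access route (a hypothetical sketch, not code from
this thread, and using __getattr__ as the lighter-weight relative of
__getattribute__): a module can be replaced in sys.modules by an instance of a
ModuleType subclass, so that dynamic names go through ordinary attribute
lookup instead of through a faked __dict__.

import sys
import types

class DynamicModule(types.ModuleType):
    # Sketch: compute attributes on demand via attribute access,
    # instead of pre-populating (or monkeying with) the module's __dict__.

    def __getattr__(self, name):
        # Only called when normal lookup fails, so existing attributes
        # keep their ordinary behavior and speed.
        if name.startswith('_'):
            raise AttributeError(name)
        value = "lazily computed value for %r" % (name,)
        setattr(self, name, value)   # cache for next time
        return value

# Typical (hypothetical) usage: replace this module's entry in sys.modules.
# sys.modules[__name__] = DynamicModule(__name__)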


Re: [Python-Dev] Problem with super() usage

2006-07-18 Thread glyph


On Tue, 18 Jul 2006 09:24:57 -0400, Jean-Paul Calderone [EMAIL PROTECTED] 
wrote:
On Tue, 18 Jul 2006 09:10:11 -0400, Scott Dial [EMAIL PROTECTED] wrote:
Greg Ewing wrote:
 Guido van Rossum wrote:
 In the world where cooperative multiple inheritance
 originated (C++), this would be a static error.

 I wasn't aware that C++ had anything resembling super().
 Is it a recent addition to the language?

C++ has no concept of MRO, so super() would be completely ambiguous.

I think this was Greg's point.  Talking about C++ and super() is
nonsensical.

C++ originally specified multiple inheritance, but it wasn't cooperative in
the sense that super is.  In Lisp, though, where cooperative method dispatch
originated, call-next-method does basically the same thing in the case where
there's no next method: it calls no-next-method which signals a generic
error.

http://www.lisp.org/HyperSpec/Body/locfun_call-next-method.html

However, you can write methods for no-next-method, so you can override that
behavior as appropriate.  In Python you might achieve a similar effect using a
hack like the one Greg suggested, but in a slightly more systematic way; using
Python's regular inheritance-based method ordering, of course, not bothering 
with multimethods.  Stand-alone it looks like an awful hack, but with a bit
of scaffolding I think it looks nice enough; at least, it looks like the Lisp
solution, which while potentially ugly, is complete :).

This is just implemented as a function for brevity; you could obviously use a
proxy object with all the features of 'super', including optional self,
method getters, etc.  For cooperative classes you could implement noNextMethod
to always succeed, or to return an appropriate null value based on a type map
keyed by the method's indicated return type ('' for str, 0 for int, None for
object, etc.).

# ---cut here---

def callNextMethod(cls, self, methodName, *a, **k):
    sup = super(cls, self)
    method = getattr(sup, methodName, None)
    if method is not None:
        return method(*a, **k)
    else:
        return self.noNextMethod(methodName, *a, **k)

class NextMethodHelper(object):
    def noNextMethod(self, methodName, *a, **k):
        return getattr(self, "noNext_" + methodName)(*a, **k)

class A(object):
    def m(self):
        print "A.m"

class B(NextMethodHelper):
    def m(self):
        print "B.m"
        return callNextMethod(B, self, "m")

    def noNext_m(self):
        # it's ok not to have an 'm'!
        print "No next M, but that's OK!"
        return None

class C(B, A):
    def m(self):
        print "C.m"
        return callNextMethod(C, self, "m")


# >>> c = C()
# >>> c.m()
# C.m
# B.m
# A.m
# >>> b = B()
# >>> b.m()
# B.m
# No next M, but that's OK!


Re: [Python-Dev] Undocumented PEP 302 protocol change by need-for-speed sprint

2006-07-20 Thread glyph
On Thu, 20 Jul 2006 14:57:07 -0400, Phillip J. Eby [EMAIL PROTECTED] wrote:
While investigating the need to apply http://python.org/sf/1525766 I found
that there was a modification to pkgutil during the need-for-speed sprint
that affects the PEP 302 protocol in a backwards incompatible way.

It just so happens that the bug was probably reported because I'm working on
some controversial new functionality in Twisted - controversial because it
replicates the pkgutil functionality that the bug is about.  This new
functionality does make some use of PEP 302 :).

See http://twistedmatrix.com/trac/ticket/1940

Specifically, PEP 302 documents that path_importer_cache always contains
either importer objects or None.  Any code written to obtain importer
objects is therefore now broken, because import.c is slapping False in for
non-existent filesystem paths.

Oddly, for once I'm going to say I don't care about this change.  The code
I've written so far doesn't depend on this, and I was pretty careful to be
conservative about depending too much on the stuff described in PEP 302.  It
documents several features which don't exist (get_data, and methods in the 
imp module which don't exist in Python 2.3 or 2.4, where it was 
nominally accepted).
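
(In the meantime, code that wants to tolerate both the documented and the de
facto cache contents can do so with a tiny defensive helper -- a sketch, not
anything from the PEP or the stdlib:)

import sys

def cached_importer(path_entry):
    # PEP 302 says sys.path_importer_cache holds importer objects or None;
    # import.c may now store False for nonexistent filesystem paths, so
    # treat every false-ish entry as "no importer here".
    importer = sys.path_importer_cache.get(path_entry)
    if not importer:
        return None
    return importer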

There are several options as to how to proceed:

2. Document the breakage, update PEP 302, and make everybody update their code

Personally I'd prefer it if PEP 302 were updated for a variety of reasons.
It's very hard to use as a reference for writing actual code because so many
features are optional or open issues, and there's no description in the 
PEP of what their status is.

Better yet, this breakage (and other things) should be documented in the
Python reference, and the PEP should link to the documentation for different
versions, which can each describe the PEP's implementation status.  The
"importing modules" section of the library reference seems like a natural
place to put it.



Re: [Python-Dev] uuid tests failing on Windows

2006-08-17 Thread glyph
On Thu, 17 Aug 2006 22:06:24 +0200, "Martin v. Löwis" [EMAIL PROTECTED] 
wrote:
Georg Brandl schrieb:
 Can somebody please fix that? If not, should we remove the uuid module
 as being immature?

 Patch #1541863 supposedly solves this.

Ah, good. I think it should go in.

Uh, I may be misunderstanding here, but that patch looks like it changes that 
part of the test for test_uuid4 to stop calling uuid4 and call uuid1 instead?


Re: [Python-Dev] uuid tests failing on Windows

2006-08-17 Thread glyph
On Thu, 17 Aug 2006 23:58:27 +0200, "Martin v. Löwis" [EMAIL PROTECTED] 
wrote:

You misunderstand indeed: the chunk reads (...)

it currently calls uuid1, and will call uuid4 when patched.
test_uuid4 should have never failed, except that it uses uuid1
as the uniqueness test.

Whooops.  I must have hit the reverse diff button in Emacs before reading it.

Thanks for the correction.


Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread glyph
On Fri, 22 Sep 2006 18:43:42 +0100, Michael Foord [EMAIL PROTECTED] wrote:

I have a suggestion for a new Python built in function: 'flatten'.

This seems superficially like a good idea, but I think adding it to Python 
anywhere would do a lot more harm than good.  I can see that consensus is 
already strongly against a builtin, but I think it would be bad to add to 
itertools too.

Flattening always *seems* to be a trivial and obvious operation.  "I just need 
something that takes a group of deeply structured data and turns it into a 
group of shallowly structured data."  Everyone that has this requirement 
assumes that their list of implicit requirements for flattening is the 
obviously correct one.

This wouldn't be a problem except that everyone has a different idea of those 
requirements :).

Here are a few issues.

What do you do when you encounter a dict?  You can treat it as its keys(), its 
values(), or its items().

What do you do when you encounter an iterable object?

What order do you flatten set()s in?  (and, ha ha, do you Set the same?)

How are user-defined flattening behaviors registered?  Is it a new special 
method, a registration API?

How do you pass information about the flattening in progress to the 
user-defined behaviors?

If you do something special to iterables, do you special-case strings?  Why or 
why not?

What do you do if you encounter a function?  This is kind of a trick question, 
since Nevow's flattener *calls* functions as it encounters them, then treats 
the *result* of calling them as further input.

If you don't think that functions are special, what about *generator* 
functions?  How do you tell the difference?  What about functions that return 
generators but aren't themselves generators?  What about functions that return 
non-generator iterators?  What about pre-generated generator objects (if you 
don't want to treat iterables as special, are generators special?).

Do you produce the output as a structured list or an iterator that works 
incrementally?

Also, at least Nevow uses "flatten" to mean "serialize to bytes", not "produce 
a flat list", and I imagine at least a few other web frameworks do as well.  
That starts to get into encoding issues.
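
To make the point concrete, here is one possible set of answers baked into a
dozen lines -- a sketch of *a* policy (recurse into iterables, treat strings
as atoms, Python 2 spelling), emphatically not *the* correct one:

def flatten(obj):
    # policy decision #1: strings are atoms, not iterables of characters
    if isinstance(obj, basestring):
        yield obj
        return
    # policy decision #2: anything iterable gets recursed into, which means
    # dicts flatten to their keys and generators get consumed
    try:
        it = iter(obj)
    except TypeError:
        yield obj
        return
    for item in it:
        for sub in flatten(item):
            yield sub

# policy decision #3: the result is an incremental iterator, not a list
# list(flatten([1, [2, (3, "four")], {"five": 6}])) -> [1, 2, 3, 'four', 'five']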

If you make a decision one way or another on any of these questions of policy, 
you are going to make flatten() useless to a significant portion of its 
potential userbase.  The only difference between having it in the standard 
library and not is that if it's there, they'll spend an hour being confused by 
the weird way that it's dealing with "insert your favorite data type here" 
rather than just doing the obvious thing, and they'll take a minute to write 
the 10-line function that they need.  Without the standard library, they'll 
skip to step 2 and save a lot of time.

I would love to see a unified API that figured out all of these problems, and 
put them together into a (non-stdlib) library that anyone interested could use 
for a few years to work the kinks out.  Although it might be nice to have a 
simple flatten interface, I don't think that it would ever be simple enough 
to stick into a builtin; it would just be the default instance of the 
IncrementalDestructuringProcess class with the most popular (as determined by 
polling users of the library after a year or so) 
IncrementalDestructuringTypePolicy.


Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread glyph
On Fri, 22 Sep 2006 20:55:18 +0100, Michael Foord [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
On Fri, 22 Sep 2006 18:43:42 +0100, Michael Foord 
[EMAIL PROTECTED] wrote:

This wouldn't be a problem except that everyone has a different idea of 
those requirements :).

You didn't really address this, and it was my main point.  In fact, you more or 
less made my point for me.  You just assume that the type of application you 
have in mind right now is the only one that wants to use a flatten function, 
and dismiss out of hand any uses that I might have in mind.

If you consume iterables, and only special case strings - then none of the 
issues you raise above seem to be a problem.

You have just made two major policy decisions about the flattener without 
presenting a specific use case or set of use cases it is meant to be restricted 
to.

For example, you suggest special casing strings.  Why?  Your guideline 
otherwise is to follow what the iter() or list() functions do.  What about 
user-defined classes which subclass str and implement __iter__?

Sets and dictionaries are both iterable.

If it's not iterable it's an element.

I'd prefer to see this as a built-in, lots of people seem to want it. IMHO

Can you give specific examples?  The only significant use of a flattener I'm 
intimately familiar with (Nevow) works absolutely nothing like what you 
described.

Having it in itertools is a good compromise.

No need to compromise with me.  I am not in a position to reject your change.  
No particular reason for me to make any concessions either: I'm simply trying 
to communicate the fact that I think this is a terrible idea, not come to an 
agreement with you about how progress might be made.  Absolutely no changes on 
this front are A-OK by me :).

You have made a case for the fact that, perhaps, you should have a utility 
library which all your projects could use, for consistency and to 
avoid repeating yourself, since you have a clearly defined need for what a 
flattener should do.  I haven't read anything that indicates there's a good 
reason for this function to be in the standard library.  What are the use cases?

It's definitely better for the core language to define lots of basic types so 
that you can say something in a library like "returns a dict mapping strings to 
ints" without having a huge argument about what "dict" and "string" and "int" 
mean.  What's the benefit to having everyone flatten things the same way, 
though?  Flattening really isn't that common of an operation, and in the cases 
where it's needed, a unified approach would only help if you had two 
flattenable data-structures from different libraries which needed to be 
combined.  I can't say I've ever seen a case where that would happen, let alone 
for it to be common enough that there should be something in the core language 
to support it.

What do you do if you encounter a function?  This is kind of a trick 
question, since Nevow's flattener *calls* functions as it encounters 
them, then treats the *result* of calling them as further input.

Sounds like not what anyone would normally expect.

Of course not.  My point is that there is nothing that anyone would normally 
expect from a flattener except a few basic common features.  Bob's use-case is 
completely different from yours, for example: he's talking about flattening to 
support high-performance I/O.

What does the list constructor do with these?  Do the same.

>>> list('hello')
['h', 'e', 'l', 'l', 'o']

What more can I say?

Do you produce the output as a structured list or an iterator that works 
incrementally?

Either would be fine. I had in mind a list, but converting an iterator into 
a list is trivial.

There are applications where this makes a big difference.  Bob, for example, 
suggested that this should only work on structures that support the 
PySequence_Fast operations.

Also, at least Nevow uses flatten to mean serialize to bytes, not 
produce a flat list, and I imagine at least a few other web frameworks do 
as well.  That starts to get into encoding issues.

Not a use of the term I've come across. On the other hand I've heard of 
flatten in the context of nested data-structures many times.

Nevertheless the only respondent even mildly in favor of your proposal so far 
also mentions flattening sequences of bytes, although not quite as directly.

I think that you're over complicating it and that the term flatten is really 
fairly straightforward. Especially if it's clearly documented in terms of 
consuming iterables.

And I think that you're over-simplifying.  If you can demonstrate that there is 
really a broad consensus that this sort of thing is useful in a wide variety of 
applications, then sure, I wouldn't complain too much.  But I've spent a LOT of 
time thinking about what flattening is, and several applications that I've 
worked on have very different ideas about how it should work, and I see very 
little benefit to unifying them.  That's just the 

Re: [Python-Dev] PEP 355 status

2006-09-29 Thread glyph

On Fri, 29 Sep 2006 12:38:22 -0700, Guido van Rossum [EMAIL PROTECTED] wrote:
I would recommend not using it. IMO it's an amalgam of unrelated
functionality (much like the Java equivalent BTW) and the existing os
and os.path modules work just fine. Those who disagree with me haven't
done a very good job of convincing me, so I expect this PEP to remain
in limbo indefinitely, until it is eventually withdrawn or rejected.

Personally I don't like the path module in question either, and I think that 
PEP 355 presents an exceptionally weak case, but I do believe that there are 
several serious use-cases for object oriented filesystem access.  Twisted has 
a module for doing this:

http://twistedmatrix.com/trac/browser/trunk/twisted/python/filepath.py

I hope to one day propose this module as a replacement, or update, for PEP 355, 
but I have neither the time nor the motivation to do it currently.  I wouldn't 
propose it now; it is, for example, mostly undocumented, missing some useful 
functionality, and has some weird warts (for example, the name of the 
path-as-string attribute is "path").

However, since it's come up I thought I'd share a few of the use-cases for the 
general feature, and the things that Twisted has done with it.

1: Testing.  If you want to provide filesystem stubs to test code which 
interacts with the filesystem, it is fragile and extremely complex to 
temporarily replace the 'os' module; you have to provide a replacement which 
knows about all the hairy string manipulations one can perform on paths, and 
you'll almost always forget some weird platform feature.  If instead you have an 
object with a narrow interface to duck-type -- for example, a "walk" method 
which returns similar objects, or an "open" method which returns a file-like 
object -- mocking the appropriate parts of it in a test is a lot easier.  The 
proposed PEP 355 module can be used for this, but its interface is pretty wide 
and implicit (and portions of it are platform-specific), and because it is also 
a string you may still have to deal with platform-specific features in tests 
(or even mixed os.path manipulations, on the same object).

This is especially helpful when writing tests for error conditions that are 
difficult to reproduce on an actual filesystem, such as a network filesystem 
becoming unavailable.
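
(As an illustration only -- these method names are hypothetical, not FilePath's
actual API -- a test double for such a narrow interface is a few lines,
including one that simulates the filesystem going away:)

from StringIO import StringIO

class FakePath(object):
    # A test double for a narrow path interface: child(), open(), walk().
    def __init__(self, name, children=None, contents="", broken=False):
        self.name = name
        self._children = children or {}
        self._contents = contents
        self._broken = broken

    def child(self, name):
        return self._children[name]

    def open(self):
        if self._broken:
            # simulate e.g. a network filesystem becoming unavailable
            raise IOError("I/O error reading %s" % (self.name,))
        return StringIO(self._contents)

    def walk(self):
        yield self
        for child in self._children.values():
            for sub in child.walk():
                yield sub

# In a test, the code under test gets FakePath("conf", broken=True) and must
# handle the IOError, without ever touching a real filesystem.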

2: Fast failure, or for lack of a better phrase, type correctness.  PEP 355 
gets close to this idea when it talks about datetimes and sockets not being 
strings.  In many cases, code that manipulates filesystems is passing around 
'str' or 'unicode' objects, and may be accidentally passed the contents of a 
file rather than its name, leading to a bizarre failure further down the line.  
FilePath fails immediately with an unsupported operand types TypeError in 
that case.  It also provides nice, immediate feedback at the prompt that the 
object you're dealing with is supposed to be a filesystem path, with no 
confusion as to whether it represents a relative or absolute path, or a path 
relative to a particular directory.  Again, the PEP 355 module's subclassing of 
strings creates problems, because you don't get an immediate and obvious 
exception if you try to interpolate it with a non-path-name string; it silently 
succeeds.

3: Safety.  Almost every web server ever written (yes, including twisted.web) 
has been bitten by the /../../../ bug at least once.  The default child(name) 
method of Twisted's file path class will only let you go down (to go up you 
have to call the parent() method), and will trap obscure platform features like 
the NUL and CON files on Windows so that you can't trick a program into 
manipulating something that isn't actually a file.  You can take strings you've 
read from an untrusted source and pass them to FilePath.child and get something 
relatively safe out.  PEP 355 doesn't mention this at all.
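
(The following is a sketch of the *kind* of checks involved, not Twisted's
actual implementation; the device-name list is deliberately incomplete:)

import os

class InsecurePath(Exception):
    pass

_WINDOWS_DEVICE_NAMES = ("con", "prn", "aux", "nul")   # partial list

def child(parent, segment):
    # refuse anything that could climb out of 'parent' or hit a device file
    if os.sep in segment or (os.altsep and os.altsep in segment):
        raise InsecurePath(segment)
    if segment in (os.curdir, os.pardir) or not segment:
        raise InsecurePath(segment)
    if segment.split(".")[0].lower() in _WINDOWS_DEVICE_NAMES:
        raise InsecurePath(segment)
    return os.path.join(parent, segment)

# child("/var/www", "index.html") -> "/var/www/index.html"
# child("/var/www", "../etc/passwd") raises InsecurePath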

4: last, but certainly not least: filesystem polymorphism.  For an example of 
what I mean, take a look at this in-development module:

http://twistedmatrix.com/trac/browser/trunk/twisted/python/zippath.py

It's currently far too informal, and incomplete, and there's no specified 
interface.  However, this module shows that by being objects and not 
module-methods, FilePath objects can also provide a sort of virtual filesystem 
for Python programs.  With FilePath plus ZipPath, you can write Python programs 
which can operate on a filesystem directory or a directory within a Zip 
archive, depending on what object they are passed.

On a more subjective note, I've been gradually moving over personal utility 
scripts from os.path manipulations to twisted.python.filepath for years.  I 
can't say that this will be everyone's experience, but in the same way that 
Python scripts avoid the class of errors present in most shell scripts 
(quoting), t.p.f scripts avoid the class of errors present in most Python 
scripts (off-by-one errors when looking at separators or extensions).

I hope that 

Re: [Python-Dev] PEP 355 status

2006-10-01 Thread glyph


On Sun, 01 Oct 2006 13:56:53 +1000, Nick Coghlan [EMAIL PROTECTED] wrote:

Things the PEP 355 path object lumps together:
   - string manipulation operations
   - abstract path manipulation operations (work for non-existent filesystems)
   - read-only traversal of a concrete filesystem (dir, stat, glob, etc)
   - addition & removal of files/directories/links within a concrete filesystem

Dumping all of these into a single class is certainly practical from a utility
point of view, but it's about as far away from beautiful as you can get, which
creates problems from a learnability point of view, and from a
capability-based security point of view. PEP 355 itself splits the methods up
into 11 distinct categories when listing the interface.

At the very least, I would want to split the interface into separate abstract
and concrete interfaces. The abstract object wouldn't care whether or not the
path actually existed on the current filesystem (and hence could be relied on
to never raise IOError), whereas the concrete object would include the many
operations that might need to touch the real IO device. (the PEP has already
made a step in the right direction here by removing the methods that accessed
a file's contents, leaving that job to the file object where it belongs).

There's a case to be made for the abstract object inheriting from str or
unicode for compatibility with existing code,

I think that compatibility can be achieved by having a pathname string 
attribute or similar to convert to a string when appropriate.  It's not like 
datetime inherits from str to facilitate formatting or anything like that.

but an alternative would be to
enhance the standard library to better support the use of non-basestring
objects to describe filesystem paths. A PEP should at least look into what
would have to change at the Python API level and the C API level to go that
route rather than the inheritance route.

In C, this is going to be really difficult.  Existing C APIs want to use C 
functions to deal with pathnames, and many libraries are not going to support 
arbitrary VFS I/O operations.  For some libraries, like GNOME or KDE, you'd 
have to use the appropriate VFS object for their platform.

For the concrete interface, the behaviour is very dependent on whether the
path refers to a file, directory or symlink on the current filesystem. For an
OO filesystem interface, does it really make sense to leave them all lumped
into the one class with a bunch of isdir() and islink() style methods? Or does
it make more sense to have a method on the abstract object that will return
the appropriate kind of filesystem info object? 

I don't think returning different types of objects makes sense.  This sort of 
typing is inherently prone to race conditions.  If you get a DirectoryPath 
object in Python, and then the underlying filesystem changes so that the name 
that used to be a directory is now a file (or a device, or UNIX socket, or 
whatever), how do you change the underlying type?

If the latter, then how would
you deal with the issue of state coherency (i.e. it was a file when you last
touched it on the filesystem, but someone else has since changed it to a
link)? (that last question actually lends strong support to the idea of a
*single* concrete interface that dynamically responds to changes in the
underlying filesystem).

In non-filesystem cases, for example the zip path case, there are inherent 
failure modes that you can't really do anything about (what if the zip file is 
removed while you're in the middle of manipulating it?) but there are actual 
applications which depend on the precise atomic semantics and error conditions 
associated with moving, renaming, and deleting directories and files, at least 
on POSIX systems.

The way Twisted does this is that FilePath objects explicitly cache the results 
of stat and then have an explicit restat method for resynchronizing with the 
current state of the filesystem.  None of their methods for *manipulating* the 
filesystem look at this state, since it is almost guaranteed to be out of date 
:).
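
(A minimal sketch of that caching pattern -- not FilePath itself, just the
shape of it:)

import os

class StatCachingPath(object):
    # Cache the result of os.stat() until the caller explicitly asks for it
    # to be refreshed; manipulation methods never consult the cached value.
    def __init__(self, path):
        self.path = path
        self._statinfo = None

    def restat(self):
        try:
            self._statinfo = os.stat(self.path)
        except OSError:
            self._statinfo = None
        return self._statinfo

    def getsize(self):
        if self._statinfo is None:
            self.restat()
        if self._statinfo is None:
            raise OSError("no such file: %s" % (self.path,))
        return self._statinfo.st_size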

Another key difference between the two is that the abstract objects would be
hashable and serialisable, as their state is immutable and independent of the
filesystem. For the concrete objects, the only immutable part of their state
is the path name - the rest would reflect the state of the filesystem at the
current point in time.

It doesn't really make sense to separate these to me; whenever you're 
serializing or hashing that information, the mutable parts should just be 
discarded.



Re: [Python-Dev] Path object design

2006-11-01 Thread glyph
On 03:14 am, [EMAIL PROTECTED] wrote:

One thing is sure -- we urgently need something better than os.path.
It functions well but it makes hard-to-read and unpythonic code.

I'm not so sure.  The need is not any more "urgent" today than it was 5 years
ago, when os.path was equally "unpythonic" and unreadable.  The problem is real
but there is absolutely no reason to hurry to a premature solution.

I've already recommended Twisted's twisted.python.filepath module as a possible
basis for the implementation of this feature.  I'm sorry I don't have the time
to pursue that.  I'm also sad that nobody else seems to have noticed.  Twisted's
implementation has an advantage that these new proposals don't seem to have, an
advantage I would really like to see in whatever gets seriously considered for
adoption:

*It is already used in a large body of real, working code, and therefore its
limitations are known.*

If I'm wrong about this, and I can't claim to really know about the relative
levels of usage of all of these various projects when they're not mentioned,
please cite actual experiences using them vs. using os.path.

Proposals for extending the language are contentious, and it is very difficult
to do experimentation with non-trivial projects, because nobody wants to do
that and then end up with a bunch of code written in a language that is no
longer supported when the experiment fails.  I understand, therefore, that
language-change proposals are going to be very contentious no matter what.

However, there is no reason that library changes need to follow this same path.
It is perfectly feasible to write a library, develop some substantial
applications with it, tweak it based on that experience, and *THEN* propose it
for inclusion in the standard library.  Users of the library can happily
continue using the library, whether it is accepted or not, and users of the
language and standard library get a new feature for free.  For example, I plan
to continue using FilePath regardless of the outcome of this discussion,
although perhaps some conversion methods or adapters will be in order if a new
path object makes it into the standard library.

I specifically say "library" and not "recipe".  This is not a useful exercise
if every user of the library has a subtly incompatible and manually tweaked
version for their particular application.

Path representation is a bike shed.  Nobody would have proposed writing an
entirely new embedded database engine for Python: Python 2.5 simply included
SQLite because its utility was already proven.

I also believe it is important to get this issue right.  It might be a bike
shed, but it's a *very important* bike shed.  Google for "web server url
filesystem path vulnerability" and you'll see what I mean.  Getting it wrong
(or passing strings around everywhere) means potential security gotchas lurking
around every corner.  Even Twisted, with no C code at all, got its only known
arbitrary-code-execution vulnerability from a path manipulation bug.  That was
even after we'd switched to an OO path-manipulation layer specifically to avoid
bugs like this!

I am not addressing this message to the py3k list because its general message
of extreme conservatism on new features is more applicable to python-dev.
However, py3k designers might also take note: if py3k is going to do something
in this area and drop support for the "legacy" os.path, it would be good to
choose something that is known to work and have few gotchas, rather than just
choosing the devil we don't know over the devil we do.  The weaknesses of
os.path are at least well-understood.


Re: [Python-Dev] Path object design

2006-11-01 Thread glyph
On 10:06 am, [EMAIL PROTECTED] wrote:

What a successor to os.path needs is not security, it's a better (more
pythonic, if you like) interface to the old functionality.

Why?

I assert that it needs a better[1] interface because the current interface can
lead to a variety of bugs through idiomatic, apparently correct usage.  All the
more because many of those bugs are related to critical errors such as security
and data integrity.

If I felt the current interface did a good job at doing the right thing in the
right situation, but was cumbersome to use, I would strenuously object to _any_
work taking place to change it.  This is a hard API to get right.

[1]: I am rather explicitly avoiding the word "pythonic" here.  It seems to
have grown into a shibboleth (and its counterpart, "unpythonic", into an
expletive).  I have the impression it used to mean something a bit more
specific, maybe adherence to Tim Peters' "Zen" (although that was certainly
vague enough by itself and not always as self-evidently true as some seem to
believe).  More and more, now, though, I hear it used to mean 'stuff should be
more betterer!' and then everyone nods sagely because we know that no filthy
*java* programmer wants things to be more betterer; *we* know *they* want
everything to be horrible.  Words like this are a pet peeve of mine though, so
perhaps I am overstating the case.  Anyway, moving on... as long as I brought
up the Zen, perhaps a particular couplet is appropriate here:

Now is better than never.
Although never is often better than *right* now.

Rushing to a solution to a non-problem, e.g. the "pythonicness" of the
interface, could exacerbate a very real problem, e.g. the security and
data-integrity implications of idiomatic usage.  Granted, it would be hard to
do worse than os.path, but it is by no means impossible (just look at any C
program!), and I can think of a couple of kinds of API which would initially
appear more convenient but actually prove more problematic over time.

That brings me back to my original point: the underlying issue here is too
important a problem to get wrong *again* on the basis of a superficial "need"
for an API that is "better" in some unspecified way.  os.path is at least
possible to get right if you know what you're doing, which is no mean feat;
there are many path-manipulation libraries in many languages which cannot make
that claim (especially portably).  Its replacement might not be.  Getting this
wrong outside the standard library might create problems for some people, but
making it worse _in_ the standard library could create a total disaster for
everyone.

I do believe that this wouldn't get past the dev team (least of all the release
manager), but it would waste a lot less of everyone's time if we focused the
inevitable continuing bike-shed discussion along the lines of discussing the
known merits of widely deployed alternative path libraries, or at least an
approach to *get* that data on some new code if there is consensus that
existing alternatives are in some way inadequate.

If for some reason it _is_ deemed necessary to go with an untried approach, I
can appreciate the benefits that /F has proposed of trying to base the new
interface entirely and explicitly off the old one.  At least that way it will
still definitely be possible to get right.  There are problems with that too,
but they are less severe.


Re: [Python-Dev] Path object design

2006-11-01 Thread glyph
On 06:14 pm, [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:

I assert that it needs a better[1] interface because the current interface
can lead to a variety of bugs through idiomatic, apparently correct usage.
All the more because many of those bugs are related to critical errors such
as security and data integrity.

instead of referring to some esoteric knowledge about file systems that us
non-twisted-using mere mortals may not be evolved enough to understand,

On the contrary, twisted users understand even less, because (A) we've been
demonstrated to get it wrong on numerous occasions in highly public and
embarrassing ways and (B) we already have this class that does it all for us
and we can't remember how it works :-).

maybe you could just make a list of common bugs that may arise due to
idiomatic use of the existing primitives?

Here are some common gotchas that I can think of off the top of my head.  Not
all of these are resolved by Twisted's path class.

Path manipulation:

* This is confusing as heck:

  >>> os.path.join("hello", "/world")
  '/world'
  >>> os.path.join("hello", "slash/world")
  'hello/slash/world'
  >>> os.path.join("hello", "slash//world")
  'hello/slash//world'

  Trying to formulate a general rule for what the arguments to os.path.join
  are supposed to be is really hard.  I can't really figure out what it would
  be like on a non-POSIX/non-win32 platform.

* it seems like slashes should be more aggressively converted to backslashes
  on windows, because it's near impossible to do anything with os.sep in the
  current situation.

* "C:blah" does not mean what you think it means on Windows.  Regardless of
  what you think it means, it is not that.  I thought I understood it once as
  the current process having a current directory on every mapped drive, but
  then I had to learn about UNC paths of network mapped drives and it stopped
  making sense again.

* There are special files on windows such as "CON" and "NUL" which exist in
  _every_ directory.  Twisted does get around this, by looking at the result
  of abspath:

  >>> os.path.abspath("c:/foo/bar/nul")
  'nul'

* Sometimes a path isn't a path; the zip "paths" in sys.path are a good
  example.  This is why I'm a big fan of including a polymorphic interface of
  some kind: this information is *already* being persisted in an ad-hoc and
  broken way now, so it needs to be represented; it would be good if it were
  actually represented properly.  URL manipulation-as-path-manipulation is
  another; the recent perforce use-case mentioned here is a special case of
  that, I think.

* paths can have spaces in them and there's no convenient, correct way to
  quote them if you want to pass them to some gross function like os.system -
  and a lot of the code that manipulates paths is shell-script-replacement
  crud which wants to call gross functions like os.system.  Maybe this isn't
  really the path manipulation code's fault, but it's where people start
  looking when they want properly quoted path arguments.

* you have to care about unicode sometimes.  rarely enough that none of your
  tests will ever account for it, but often enough that _some_ users will
  notice breakage if your code is ever widely distributed.  this is an even
  more obscure example, but pygtk always reports pathnames in utf8-encoded
  *byte* strings, regardless of your filesystem encoding.  If you forget to
  decode/encode it, hilarity ensues.  There's no consistent error reporting
  (as far as I can tell, I have encountered this rarely) and no real way to
  detect this until you have an actual insanely-configured system with an
  insanely-named file on it to test with.  (Polymorphic interfaces might help
  a *bit* here.  At worst, they would at least make it possible to develop a
  canonical "insanely encoded filesystem" test-case backend.  At best, you'd
  absolutely have to work in terms of unicode all the time, and no implicit
  encoding issues would leak through to application code.)  Twisted's thing
  doesn't deal with this at all, and it really should.

* also *sort* of an encoding issue, although basically only for webservers or
  other network-accessible paths: thanks to some of these earlier issues as
  well as %2e%2e, there are effectively multiple ways to spell "..".  Checking
  for all of them is impossible, you need to use the os.path APIs to determine
  if the paths you've got really relate in the ways you think they do.

* os.pathsep can be, and actually sometimes is, embedded in a path.  (again,
  more of a general path problem, not really python's fault)

* relative path manipulation is difficult.  ever tried to write the function
  to iterate two separate trees of files in parallel?  shutil re-implements
  this twice completely differently via recursion, and it's harder to do with
  a generator (which is what you really want).  you can't really split on
  os.sep and have it be correct due to the aforementioned windows-path issue,
  but that's what everybody does anyway.

* os.path.split doesn't work anything like str.split.

FS manipulation:

* although individual operations are atomic, shutil.copytree and 

Re: [Python-Dev] Path object design

2006-11-01 Thread glyph
On 08:14 pm, [EMAIL PROTECTED] wrote:Argh, it's difficult to respond to one topic that's now spiraling intotwo conversations on two lists.[EMAIL PROTECTED] wrote:(...) people have had to spend five years putting hard-to-reados.path functions in the code, or reinventing the wheel with their ownlibraries that they're not sure they can trust. I started to usepath.py last year when it looked like it was emerging as the basis ofa new standard, but yanked it out again when it was clear the APIwould be different by the time it's accepted. I've gone back toos.path for now until something stable emerges but I really wish Ididn't have to.You *don't* have to. This is a weird attitude I've encountered over and over again in the Python community, although sometimes it masquerades as resistance to Twisted or Zope or whatever. It's OK to use libraries. It's OK even to use libraries that Guido doesn't like! I'm pretty sure the first person to tell you that would be Guido himself. (Well, second, since I just told you.) If you like path.py and it solves your problems, use path.py. You don't have to cram it into the standard library to do that. It won't be any harder to migrate from an old path object to a new path object than from os.path to a new path object, and in fact it would likely be considerably easier. *It is already used in a large body of real, working code, and therefore its limitations are known.*This is an important consideration.However, to me a clean API is moreimportant.It's not that I don't think a "clean" API is important. It's that I think that "clean" is a subjective assessment that is hard to back up, and it helps to have some data saying "we think this is clean because there are very few bugs in this 100,000 line program written using it". Any code that is really easy to use right will tend to have *some* aesthetic appeal.I took a quick look at filepath. It looks similar in concept to PEP355. Four concerns:  - unfamiliar method names (createDirectory vs mkdir, child vs join)Fair enough, but "child" really means child, not join. It is explicitly for joining one additional segment, with no slashes in it.  - basename/dirname/parent are methods rather than properties:leads to () overproliferation in user code.The () is there because every invocation returns a _new_ object. I think that this is correct behavior but I also would prefer that it remain explicit.  - the "secure" features may not be necessary. If they are, thisshould be a separate discussion, and perhaps implemented as asubclass.The main "secure" feature is "child" and it is, in my opinion, the best part about the whole class. Some of the other stuff (rummaging around for siblings with extensions, for example) is probably extraneous. child, however, lets you take a string from arbitrary user input and map it into a path segment, both securely and quietly. Here's a good example (and this actually happened, this is how I know about that crazy windows 'special files' thing I wrote in my other recent message): you have a decision-making program that makes two files to store information about a process: "pro" and "con". It turns out that "con" is shorthand for "fall in a well and die" in win32-ese. A "secure" path manipulation library would alert you to this problem with a traceback rather than having it inexplicably freeze. Obscure, sure, but less obscure would be getting deterministic errors from a user entering slashes into a text field that shouldn't accept them.  

 - stylistic objection to verbose camelCase names like createDirectory

There is no accounting for taste, I suppose.  Obviously if it violates the stdlib's naming conventions it would have to be adjusted.  Path representation is a bike shed.  Nobody would have proposed writing an entirely new embedded database engine for Python: python 2.5 simply included SQLite because its utility was already proven.

There's a quantum level of difference between path/file manipulation -- which has long been considered a requirement for any full-featured programming language -- and a database engine which is much more complex.

"quantum" means "the smallest possible amount", although I don't think you're using it like that, so I think I agree with you.  No, it's not as hard as writing a database engine.  Nevertheless it is a non-trivial problem, one worthy of having its own library and clearly capable of generating a fair amount of its own discussion.

Fredrik has convinced me that it's more urgent to OOize the pathname conversions than the filesystem operations.

I agree in the relative values.  I am still unconvinced that either is "urgent" in the sense that it needs to be in the standard library.

Where have all the proponents of non-OO or limited-OO strategies been?

This continuum doesn't make any sense to me.  Where would you place Twisted's solution on it?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 

Re: [Python-Dev] Path object design

2006-11-01 Thread glyph
On 01:46 am, [EMAIL PROTECTED] wrote:

On 11/1/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

This is ironic coming from one of Python's celebrity geniuses. "We made this class but we don't know how it works." Actually, it's downright alarming coming from someone who knows Twisted inside and out yet still can't make sense of path platform oddities.

Man, it is going to be hard being ironically self-deprecating if people keep going around calling me a "celebrity genius".  My ego doesn't need any help, you know? :)

In some sense I was being serious; part of the point of abstraction is embedding some of your knowledge in your code so you don't have to keep it around in your brain all the time.  I'm sure that my analysis of path-based problems wasn't exhaustive because I don't really use os.path for path manipulation.  I use static.File and it _works_; I only remember these os.path flaws from the process of writing it, not daily use.

 * This is confusing as heck:
   os.path.join("hello", "/world")
  '/world'

That's in the documentation. I'm not sure it's "wrong". What should it do in this situation? Pretend the slash isn't there?

You can document anything.  That doesn't really make it a good idea.  The point I was trying to make wasn't really that os.path is *wrong*.  Far from it, in fact, it defines some useful operations and they are basically always correct.  I didn't even say "wrong", I said "confusing".  FilePath is implemented strictly in terms of os.path because it _does_ do the right thing with its inputs.  The question is, how hard is it to remember what its inputs should be?

   os.path.join("hello", "slash/world")
  'hello/slash/world'

That has always been a loophole in the function, and many programs depend on it.

If you ever think I'm suggesting breaking something in Python, you're misinterpreting me ;).  I am as cagey as they come about this.  No matter what else happens, the behavior of os.path should not really change.

The user didn't call normpath, so should we normalize it anyway? That's really the main point here.

What is a path that hasn't been "normalized"?  Is it a path at all, or is it some random garbage with slashes (or maybe other things) in it?  os.path performs correct path algebra on correct inputs, and it's correct (as far as one can be correct) on inputs that have weird junk in them.

In the strings-and-functions model of paths, this all makes perfect sense, and there's no particular sensibility associated with defining ideas like "equivalency" for paths, unless that's yet another function you pass some strings to.  I definitely prefer this:

  path1 == path2

to this:

  os.path.abspath(pathstr1) == os.path.abspath(pathstr2)

though.  You'll notice I used abspath instead of normpath.  As a side note, I've found interpreting relative paths as always relative to the current directory is a bad idea.  You can see this when you have a daemon that daemonizes and then opens files: the user thinks they're specifying relative paths from wherever they were when they ran the program; the program thinks they're relative paths from /var/run/whatever.  Relative paths, if they should exist at all, should have to be explicitly linked as relative to something *else* (e.g. made absolute) before they can be used.  I think that sequences of strings might be sufficient though.

Good point, but exactly what functionality do you want to see for zipfiles and URLs? Just pathname manipulation? Or the ability to see whether a file exists and extract it, copy it, etc?

The latter.
See http://twistedmatrix.com/trac/browser/trunk/twisted/python/zippath.py

This is still _really_ raw functionality though.  I can't claim that it has the same "it's been used in real code" endorsement as the rest of the FilePath stuff I've been talking about.  I've never even tried to hook this up to a Twisted webserver, and I've only used it in one environment.

 * you have to care about unicode sometimes.

This is a Python-wide problem.

I completely agree, and this isn't the thread to try to solve it.  The absence of a path object, however, and the path module's reliance on strings, exacerbates the problem.  The fact that FilePath doesn't deal with this either, however, is a fairly good indication that the problem is deeper than that.

 * the documentation really can't emphasize enough how bad using 'os.path.exists/isfile/isdir', and then assuming the file continues to exist when it is a contended resource, is. It can be handy, but it is _always_ a race condition.

What else can you do? It's either os.path.exists()/os.remove() or "do it anyway and catch the exception". And sometimes you have to check the filetype in order to determine *what* to do.

You have to catch the exception anyway in many cases.  I probably shouldn't have mentioned it though; it's starting to get a bit far afield of even this ridiculously far-ranging discussion.  A more accurate criticism might be that "the absence of a file locking system in the stdlib means that there are lots outside it, and many are broken".  Different issue.
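
For what it's worth, the catch-the-exception version is short; a minimal sketch (remove_if_present is a made-up name):

    import errno, os

    def remove_if_present(path):
        # "Do it anyway and catch the exception": there is no window
        # between a check and the removal for another process to race.
        try:
            os.remove(path)
        except OSError, e:
            if e.errno != errno.ENOENT:
                raise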

Re: [Python-Dev] Path object design

2006-11-02 Thread glyph
On 01:04 am, [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:

If you're serious about writing platform-agnostic pathname code, you don't put slashes in the arguments at all. Instead you do

  os.path.join("hello", "slash", "world")

Many of the other things you mention are also a result of not treating pathnames as properly opaque objects.

Of course nobody who cares about these issues is going to put constant forward slashes into pathnames.  The point is not that you'll forget you're supposed to be dealing with pathnames; the point is that you're going to get input from some source that you've got very little control over, and *especially* if that source is untrusted (although sometimes just due to mistakes) there are all kinds of ways it can trip you up.  Did you accidentally pass it through something that doubles or undoubles all backslashes, etc.?  Sometimes these will result in harmless errors anyway; sometimes it's a critical error that will end up trying to delete /usr instead of /home/user/installer-build/ROOT/usr.  If you have the path library catching these problems for you then a far greater percentage fall into the former category.

If you're saying that the fact they're strings makes it easy to forget that you're supposed to be treating them opaquely,

That's exactly what I'm saying.

 * although individual operations are atomic, shutil.copytree and friends aren't. I've often seen python programs confused by partially-copied trees of files.

I can't see how this can be even remotely regarded as a pathname issue, or even a filesystem interface issue. It's no different to any other situation where a piece of code can fall over and leave a partial result behind.

It is a bit of a stretch, I'll admit, but I included it because it is a weakness of the path library that it is difficult to do the kind of parallel iteration required to implement tree-copying yourself.  If that were trivial, then you could write your own file-copying loop and cope with errors yourself.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-11-28 Thread glyph
On 11:45 pm, [EMAIL PROTECTED] wrote:

I keep thinking I'd like to treat the OS as just another application,
so that there's nothing special about it and the same infrastructure
could be used for other applications with lots of entry level scripts.

I agree.  The motivation here is that the OS application keeps itself 
separate so that incorrect changes to configuration or installation of 
incompatible versions of dependencies don't break it.  There are other 
applications which also don't want to break.

This is a general problem with Python, one that should be solved with a 
comprehensive parallel installation or linker which explicitly describes 
dependencies and allows for different versions of packages.  I definitely don't 
think that this sort of problem should be solved during the *standardization* 
process - that should just describe the existing conventions for packaging 
Python stuff, and the OS can insulate itself in terms of that.  Definitely it 
shouldn't be changed as part of standardization unless the distributors are 
asking for it loudly.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-11-29 Thread glyph
On 09:34 am, [EMAIL PROTECTED] wrote:

There's another standard place that is searched on MacOS: a per-user
package directory ~/Library/Python/2.5/site-packages (the name site-
packages is a misnomer, really). Standardising something here is
less important than for vendor-packages (as the effect can easily be
gotten by adding things to PYTHONPATH) but it has one advantage:
distutils and such could be taught about it and provide an option to
install either systemwide or for the current user only.

Yes, let's do that, please.  I've long been annoyed that site.py sets up a 
local user installation directory, a very useful feature, but _only_ on OS X.  
I've long since promoted my personal hack to add a local user installation 
directory into a public project -- divmod's Combinator -- but it would 
definitely be preferable for Python to do something sane by default (and have 
setuptools et. al. support it).

I'd suggest using ~/.local/lib/pythonX.X/site-packages for the official 
UNIX installation location, since it's what we're already using, and ~/.local 
seems like a convention being slowly adopted by GNOME and the like.  I don't 
know the cultural equivalent in Windows - %USERPROFILE%\Application 
Data\PythonXX maybe?

It would be nice if site.py would do this in the same place as it sets up the 
darwin-specific path, and to set that path as a module global, so packaging 
tools could use site.userinstdir or something.  Right now, if it's present, 
it's just some random entry on sys.path.
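
Roughly, I'm imagining something like the following in site.py -- a sketch only, not actual site.py code, with userinstdir being the hypothetical module global mentioned above:

    import os, sys

    # Sketch of a per-user install directory hook for site.py.
    userinstdir = os.path.expanduser(
        '~/.local/lib/python%d.%d/site-packages' % sys.version_info[:2])
    if os.path.isdir(userinstdir):
        sys.path.append(userinstdir)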
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-11-29 Thread glyph
On 29 Nov, 11:49 pm, [EMAIL PROTECTED] wrote:

On Nov 29, 2006, at 5:18 AM, [EMAIL PROTECTED] wrote:

 I'd suggest using ~/.local/lib/pythonX.X/site-packages for the
 official UNIX installation location, ...

+1 from me also for the concept.  I'm not sure I like ~/.local though
-- it seems counter to the app-specific dot-file approach old
schoolers like me are used to.  OTOH, if that's a convention being
promoted by GNOME and other frameworks, then I don't have too much
objection.

Thanks.  I just had a look at the code in Combinator which sets this up and it 
turns out it's horribly inconsistent and buggy.  It doesn't really work on any 
platform other than Linux.  I'll try to clean it up in the next few days so it 
can serve as an example.

GNOME et. al. aren't promoting the concept too hard.  It's just the first 
convention I came across.  (Pardon the lack of references here, but it's very 
hard to google for ~/.local - I just know that I was looking for a convention 
when I wrote combinator, and this is the one I found.)

The major advantage ~/.local has for *nix systems is the ability to have a 
parallel *bin* directory, which provides the user one location to set their 
$PATH to, so that installed scripts work as expected, rather than having to 
edit a bunch of .foorc files to add to your environment with each additional 
package.  After all, what's the point of a per-user install if the software 
isn't actually installed in any meaningful way, and you have to manually edit 
your shell startup scripts, log out and log in again anyway?  Another nice 
feature there is that it uses a pre-existing layout convention (bin lib share 
etc ...) rather than attempting to build a new one, so the only thing that has 
to change about the package installation is the root.

Finally, I know there are quite a few Python developers out there already using 
Combinator, so at least there it's an established convention :).

I also think that setuptools has the potential to be a big
improvement here because it's much easier to install and use egg
files than it is to get distutils to DTRT with setup.py.  (I still
detest the command name 'easy_install' but hey that's still fixable
right? :).  What might be nice would be to build a little more
infrastructure into Python to support eggs, by say adding a default
PEP 302 style importer that knows how to search for eggs in
'nests' (a directory containing a bunch of eggs).

One of the things that combinator hacks is where distutils thinks it should 
install to - when *I* type "python setup.py install", nothing tries to insert 
itself into system directories (those are for Ubuntu, not me) - ~/.local is the 
*default* install location.  I haven't managed to make this feature work with 
eggs yet, but I haven't done a lot of work with setuptools.
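
As for the "nests" idea quoted above: for pure-Python (zip-safe) eggs you don't strictly need a new importer at all, since eggs are zip files and the existing zipimport machinery handles anything on sys.path.  A rough sketch (add_nest and nest_dir are made-up names):

    import glob, os, sys

    def add_nest(nest_dir):
        # Make every egg in nest_dir importable by putting it on sys.path;
        # zipimport does the actual importing.  Pure-Python eggs only.
        for egg in glob.glob(os.path.join(nest_dir, '*.egg')):
            if egg not in sys.path:
                sys.path.append(egg)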

On the easy_install naming front, how about layegg?

What if then that importer were general enough (...)

These all sound like interesting ideas, but they're starting to get pretty far 
afield - I wish I had more time to share ideas about packaging, but I know too 
well that I'm not going to be able to back them up with any implementation 
effort.

I'd really like Python to use the ~/.local/bin / ~/.local/lib convention for 
installing packages, though.___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-11-29 Thread glyph
On 12:34 am, [EMAIL PROTECTED] wrote:

The whole concept of hidden files seems ill-
considered to me, anyway. It's too easy to forget
that they're there. Putting infrequently-referenced
stuff in a non-hidden location such as ~/local
seems just as good and less magical to me.

Something like ~/.local is an implementation detail, not something that 
should be exposed to non-savvy users.  It's easy enough for an expert to show 
it if they want to - ln -s .local local - but impossible for someone more 
naive to hide if they don't understand what it is or what it's for.  (And if 
they try, by clicking a checkbox in Nautilus or somesuch, *all* their installed 
software breaks.)  This approach doesn't really work unless you have good 
support from the OS, so it can warn you you're about to do something crazy.

UI designers tend to get adamant about this sort of thing, but I'll admit they 
go both ways, some saying that everything should be exposed to the user, some 
saying that all details should be hidden by default.  Still, in the more recent 
UNIX desktops, the "let's hide the things that the user shouldn't see and just 
work really hard to make them work right all the time" camp seems to be winning.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-11-29 Thread glyph
On 04:11 am, [EMAIL PROTECTED] wrote:
On Wednesday 29 November 2006 22:20, [EMAIL PROTECTED] wrote:
  GNOME et. al. aren't promoting the concept too hard.  It's just the first
  convention I came across.  (Pardon the lack of references here, but it's
  very hard to google for ~/.local - I just know that I was looking for a
  convention when I wrote combinator, and this is the one I found.)

~/.local/ is described in the XDG Base Directory Specification:

http://standards.freedesktop.org/basedir-spec/latest/

Thanks for digging that up!  Not a whole lot of meat there, but at least it 
gives me some env vars to set / check...
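
(Concretely, the relevant check amounts to something like this -- per that spec, XDG_DATA_HOME falls back to ~/.local/share when it isn't set:)

    import os

    # XDG Base Directory: use $XDG_DATA_HOME if set and non-empty,
    # otherwise fall back to ~/.local/share as the spec prescribes.
    data_home = os.environ.get('XDG_DATA_HOME') or \
                os.path.expanduser('~/.local/share')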

  On the easy_install naming front, how about layegg?

Actually, why not just egg?

That works for me.  I assumed there was some other reason the obvious answer 
hadn't been chosen :).___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-11-29 Thread glyph
On 04:36 am, [EMAIL PROTECTED] wrote:

easy_install uses the standard distutils configuration system, which means 
that you can do e.g.

Hmm.  I thought I knew quite a lot about distutils, but this particular nugget 
had evaded me.  Thanks!  I see that it's mentioned in the documentation, but I 
never thought to look in that section.  I have an aversion to .ini files; I 
tend to assume there's always an equivalent Python expression, and it's better. 
 Is there an equivalent Python API in this case?
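
(For anyone else who missed that section as I did: the per-user file in question is ~/.pydistutils.cfg, and the kind of entry being alluded to would look something like the following -- an illustrative example, not the one elided from the quote above:)

    [install]
    prefix = ~/.local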

I don't know if this is a personal quirk of mine, or a reinforcement of Talin's 
point about the audience for documentation documentation.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and the Linux Standard Base (LSB)

2006-11-30 Thread glyph
On 05:37 pm, [EMAIL PROTECTED] wrote:
Perhaps pyinstall?

Keep in mind that Python packages will still generally be *system*-installed 
with other tools, like dpkg (or apt) and rpm, on systems which have them.  The 
name of the packaging system we're talking about is called either "eggs" or 
"setuptools" depending on the context.  "pyinstall" invites confusion with the 
Python installer, which is a different program, used to install Python itself 
on Windows.

It's just a brand.  If users can understand that "Excel" means "Spreadsheet", 
"Outlook" means "E-Mail", and "GIMP" means "Image Editor", then I think we 
should give them some credit on being able to figure out what the installer 
program is called.

(I don't really care that much in this particular case, but this was one of my 
pet peeves with GNOME a while back.  There was a brief change to the names of 
everything in the menus to remove all brand-names: Firefox became "Web 
Browser", Evolution became "E-Mail", Rhythmbox became "Music Player".  I 
remember looking at my applications menu and wondering which of the 3 music 
players that I had installed the menu would run.  Thankfully this nonsense 
stopped and they compromised on names like "Firefox Web Browser" and "GIMP 
Image Editor".)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] MSI being downloaded 10x more than all other files?!

2006-12-11 Thread glyph
On 11:17 pm, [EMAIL PROTECTED] wrote:
On 12/11/06, Jim Jewett [EMAIL PROTECTED] wrote:
 On 12/8/06, Guido van Rossum [EMAIL PROTECTED] wrote:
  /ftp/python/2.5/python-2.5.msi is by far the top download -- 271,971
  hits, more than 5x the next one, /ftp/python/2.5/Python-2.5.tgz
  (47,898 hits). Are these numbers real?

 Why wouldn't it be?

Just because in the past the ratio of downloads for a particular
version was always about 70% Windows vs. 30% source. Now it seems
closer to 90/10.

Personally speaking, since switching to Ubuntu, I've been so happy with the 
speed of releases and the quality of packaged Python that I haven't downloaded 
a source release from python.org in over a year.  If I need packages, they're 
already installed.  If I need source from a release, I 'apt-get source' to 
conveniently install it from a (very fast) ubuntu mirror.  When I need 
something outside the Ubuntu release structure, it's typically an SVN trunk 
checkout, not a release tarball.

I don't know what Ubuntu's impact in the general user community has been, but 
it seems that the vast majority of python developers I interact with on a 
regular basis have switched.  I wouldn't be surprised if this were a major part 
of the impact.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-10 Thread glyph
On 07:42 pm, [EMAIL PROTECTED] wrote:
On 1/10/07, Raymond Hettinger [EMAIL PROTECTED] wrote:

Anthony Baxter
  Comments? What else should get warnings?

It is my strong preference that we not go down this path.
Instead, the 2.6 vs 3.0 difference analysis should go in an
external lint utility.

Having Python 2.6 optionally warn for
3.0-compatibility is a lot easier for the average developer than having a
separate tool or a separately compiled Python.

I could not possibly agree more.

Given the highly dynamic nature of Python, such a tool will *at best* catch 
only the most egregious uses of deprecated features.  Backticks are easy enough 
to find, but the *only* way that I can reasonably imagine migrating a body of 
code like Twisted (or any non-trivial Python library) would be having a way to 
examine the warning output of its test suite.
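
(Concretely, what I'd want to do in a test runner is something like this -- a sketch only:)

    import warnings

    # Turn deprecation warnings into errors while the suite runs, so the
    # output shows exactly which code paths a warning-emitting 2.6 flags.
    warnings.simplefilter('error', DeprecationWarning)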

I am super +1 on the warnings for 2.6, as well as forward-compatibility at 
some point in the 2.x series for new syntax.  Without the ability to bridge 
2.x-3.0 during some interim period, I can say for sure that Twisted _will not_ 
migrate to 3.0, ever.  We are really a small project and just don't have the 
manpower to maintain two overlapping but mutually incompatible codebases.

I've been assuming for some time that the only hope for Py3k compatibility 
within Twisted would be using PyPy as a translation layer.  With the addition 
of runtime compatibility warnings, it might be feasible that we could run on 
the bare metal (ha ha) of Python3's VM.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-11 Thread glyph
On 10 Jan, 11:10 pm, [EMAIL PROTECTED] wrote:
On 10/01/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
I've been assuming for some time that the only hope for Py3k compatibility
within Twisted would be using PyPy as a translation layer.

Does this ring as many warning bells for me as it does for others? I
know very little about the current state of PyPy, but I read your
comment as implying that you expect Twisted to be unavailable (at some
level or other) for users of Py3K. Is that right?

This entirely depends on the migration path available.  My understanding of the 
situation is that Py3K is a different, incompatible language.  In that sense, 
Twisted will only be as unavailable to users of Py3K as it is unavailable to 
users of Ruby, Perl, or any other dynamic-but-not-Python language.

If the 2.x series grows deprecation warnings and gradually has all of py3k's 
features backported, such that there is a smooth transitional period of a few 
years where we can support one or two versions of 2.x and a version of 3.x, 
then eventually perhaps it will be worthwhile.

Alternately, if they really aren't compatible *but* there is such a compelling 
featureset drawing other developers that there's really a critical mass of 
projects that migrate to 3.x and stop maintaining their 2.x counterparts, it 
might be worthwhile to jump the chasm to follow them.  At a minimum, for it to 
be worthwhile for Twisted to attempt this, the following packages would all 
have to be moved to 3.x:

 - pyopenssl
 - pygtk
 - pysqlite
 - PyObjC
 - win32all
 - zope interface (and therefore, probably, all of Zope)
 - pycrypto
 - pypam
 - pygame
 - pylucene

Those projects that do move I'm sure will take the opportunity to 
incompatibly break all of their APIs as well, since no existing code will 
actually use them, exacerbating the problems involved in porting.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-12 Thread glyph
On 11 Jan, 08:22 pm, [EMAIL PROTECTED] wrote:
On 1/11/07, James Y Knight [EMAIL PROTECTED] wrote:
 If the goal is really to have Py 3.0 be released later this year,

There will certainly be demand for an asynchronous server in 3.0,

To flip the question around: there might be a demand for Twisted in 3.0, but 
will there be a demand for 3.0 in Twisted?  It might just be easier for 
everyone concerned to just continue maintaining 2.x forever.  I have yet to see 
a reason why, other than continued maintenance, 3.0 would be a preferable 
development platform.

So the two projects will operate independently, and the 3.0 one may be
smaller and less ambitious than Twisted.  But if the need is there it
will be written.

It is quite likely that someone else will write some completely different code 
for python 3.0 that calls select().  I hadn't considered that the goal of 3.0 
was to *discover* these people by alienating existing Python developers - 
that's crafty!  If so, though, you'll have to figure out a way to stop Anthony 
from providing all this compatibility stuff.  He might make it too attractive 
for us to continue development on future versions :).

How did Perl 4 and Perl 5 handle the situation?  I basically waited
2-3 years after Perl 5 came out, then started programming the new way.
 If it mattered (it didn't), I would have tied my applications
specifically to Perl 4.

I handled the Perl 4 to 5 transition by dropping Perl and moving to Python, 
because if I was going to port all my code to another language I wanted to at 
least port to a better language.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-12 Thread glyph
On 01:12 am, [EMAIL PROTECTED] wrote:
On Friday 12 January 2007 06:09, James Y Knight wrote:
 On Jan 10, 2007, at 6:46 PM, Benji York wrote:
  Paul Moore wrote:
  How many other projects/packages anticipate *not* migrating to
  Py3K, I wonder?
 
  I certainly can't speak for the project as a whole, but I
  anticipate a fair bit of work to port Zope 3 (100+ KLOC) to
  Python 3.0.

 I (another Twisted developer, among other hats I wear) am also
 very worried about the Python 3.0 transition.

I'm plan to try and make the transition as painless as possible. A
goal I have is to try and make it possible to write code that works
in both 2.6 and 3.0. Obviously 2.6 will be backwards-compatible
with previous versions, but I'd like to see it get either a command
line option or a from __future__ statement to enable compatibility
with 3.0-isms, like the dict.items change.

This is all very encouraging.

Please stick with a __future__ statement if at all possible though.  The 
biggest challenge in migration is to reduce the impact so that it can be done 
partially in a real system.

If you have a module X that wants to be 3.0-safe but imports a module Y from a 
different developer that is not, command line options for compatibility might 
not help at all.  A __future__ statement in X but not in Y, though, would allow 
for a smooth transition.

I can see how that would be tricky for things like dictionary methods, but it 
does seem doable.  For example, have a has_key descriptor which can raise an 
AttributeError if the globals() of the calling stack frame has been marked in 
some way.

 Basically: my plea is: please don't remove the old way of doing
 things in Py 3.0 at the same time as you add the new way of doing
 things.

If there was a way to make 2.6 as compatible as possible with 3.0,
would this make life less painful? Obviously there'd have to be
breakages in a backwards direction, but I'd hope it would make it
easier to go forward. Some things should also be OK to backport to
2.6 - for instance, I can't see an obvious reason why 2.6 can't
support "except FooError as oopsie" as an alternate spelling of
"except FooError, oopsie".

You happen to have hit my favorite hot-button 3.0 issue right there :).  I 
don't care about backticks but sometimes I _do_ have to catch exceptions.

Similarly, where the stdlib has been shuffled around, there should
be shims in 2.6 that allow people to work with the new names.

This part I wouldn't even mind having to write myself.  It would certainly be 
good to have somewhere more official though.

I don't think waiting for 2.7 to make the compatibility work is a
workable approach - we're something like 2.5-3 years away from a
2.7 release, and (optimistically) 12-18 months from a 3.0 final.
That leaves a window of 1.5-2 years where we are missing an
important tool for people.

It would be nice if the versioning would actually happen in order.

 [1] Unless of course there's a perfect automated conversion
 script that can generate the 3.X compatible source code from the
 2.X compatible source code.

I doubt that the 2to3 script will be perfect, but hopefully it can
get most things. I can't see it easily fixing up things like

check = mydict.has_key
...
if check(foo):

Currently a common idiom, by the way, used throughout the lower levels of 
Twisted.
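
(For what it's worth, the closest forward-compatible spelling of that idiom I can think of -- a suggestion on my part, not something 2to3 would produce -- is the same fragment reworked to bind __contains__ instead:)

    check = mydict.__contains__
    ...
    if check(foo):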

This is why I also want to add Py3kDeprecationWarning to 2.6.

On that note, I'd like to see changes like the mass-change to the
stdlib in the 3.0 branch that changed "raise A, B" into "raise A(B)"
applied to the trunk. This makes it much easier to apply patches to
both the 3.0 branch and the trunk. Similar changes should be
applied to remove, for instance, use of <> and dict.has_key from
the stdlib. Simply put, I'd like the stdlib between 2 and 3 to be
as similar as possible.

It would be nice if the stdlib could be used as a case study - if the 3 stdlib 
tests can pass on some version of 2 (or vice versa) that should be a minimum 
bar for application portability.

Maybe a better way to handle the similarity is that the reorganized stdlib 
should simply be available as a separate piece of code on 2.x?  That would 
allow 2.x to use any new features added and distinguish between code which had 
been moved to use new stdlib APIs and that which hadn't.


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-12 Thread glyph
On 07:56 am, [EMAIL PROTECTED] wrote:

Additionally, without a 2.x-3.x upgrade path 3.x is essentially a
new language, having to build a new userbase from scratch. Worse
yet, 2.x will suffer as people have the perception "Python 2?
That's a dead/abandoned language."

It's worse than that.  This perception has _already_ been created.  I already 
have heard a few folks looking for new languages to learn choose Ruby over 
Python and give Py3K as a reason.  "Isn't Python going to be totally different 
in a few years anyway?  I'll just wait until then; seems like a waste of time 
to learn it now."

Given Ruby's own checkered history of compatibility, I don't think this is an 
_accurate_ perception, but it is the nature of perception to be inaccurate.

If the plan is to provide a smooth transition, it would help a lot to have this 
plan of forward and backward compatibility documented somewhere very public.  
It's hard to find information on Py3K right now, even if you know your way 
around the universe of PEPs.

FWIW, I also agree with James that Python 3 shouldn't even be released until 
the 2.x series has reached parity with its feature set.  However, if there's 
continuity in the version numbers instead of the release dates, I can at least 
explain to Twisted users that we will _pretend_ they are released in the order 
of their versions.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-12 Thread glyph
On 10:12 am, [EMAIL PROTECTED] wrote:

For practical reasons (we have enough work to be getting on with) PyPy
is more-or-less ignoring Python 2.5 at the moment.  After funding and
so on, when there's less pressure, maybe it will seem worth it.  Not
soon though.

I think I know what you mean from previous conversations, but the context of 
the question makes the answer ambiguous.  Are you ignoring 2.5 in favor of 2.4, 
or 3.0?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-12 Thread glyph
On 11:22 am, [EMAIL PROTECTED] wrote:

 FWIW, I also agree with James that Python 3 shouldn't even be
 released until the 2.x series has reached parity with its feature
 set.  However, if there's continuity in the version numbers
 instead of the release dates, I can at least explain to Twisted
 users that we will _pretend_ they are released in the order of
 their versions.

I'm not sure what parity with it's feature set means.

I don't either!  Parity with it is feature set?  I can't even parse that!  ;-)

By parity with *its* feature set, though, I meant what you said here:

I do hope that it's _possible_ to work in a version of the language that works 
in both 2.6+ and 3.0+, even if under the hood there are differences.

In order to do this, everything that has been changed in 3.0 has to have some 
mechanism for working both ways in some 2.x release.  I phrased this as "its 
feature set" because I am not aware of any new functionality in 3.0 that 
simply isn't available in 2.x - everything I've seen are cleanups, which expose 
basically the same features as 2.5.

If there are some real new features, then I am in error.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] The bytes type

2007-01-12 Thread glyph
On 06:49 pm, [EMAIL PROTECTED] wrote:

I think we should draw a line in the sand and resolve not to garbage-up Py2.6.
The whole Py3.0 project is about eliminating cruft and being free of the
bonds of backwards compatibility.  Adding non-essential cruft to Py2.6
goes against that philosophy.

Emotionally charged words like "cruft" and "garbage" are obscuring the issue.

Let's replace them with equivalents charged in the opposite direction:

I think we should draw a line in the sand and resolve not to compatibility-up 
Py2.6.  The whole Py3.0 project is about eliminating useful libraries and being 
free of the bonds of working software.  Adding non-essential 
forward-compatibility to Py2.6 goes against that philosophy.

The benefit (to me, and to many others) of 3.x over 2.x is the promise of more 
future maintenance, not the lack of cruft.  In fact, if I made a list of my 
current top ten problems with Python, cruft wouldn't even make it in.  There 
is lots of useful software that will not work in the 3.0 series, and without 
forward compatibility there is no way to get there from here.

As Guido said, if 3.0 is going to break compatibility, that burdens the 2.x 
series with the need to provide transitional functionality.  The upgrade path 
needs to be available in one version or the other, or 2.x needs to be 
maintained forever.  You can't have it both ways.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-12 Thread glyph
On 09:04 pm, [EMAIL PROTECTED] wrote:

I'm wondering if we might be going the wrong way about warning about
compatibility between 2.x and 3.x. Perhaps it might be better if the 3.0
alpha had a 2.x compatibility mode command-line flag, which is removed late
in the beta cycle.

Please, no.

I don't think command-line flags are going to help much, because the problem is 
not the complexity of downloading and compiling py3k, it is the complexity of 
porting software that has modules from a variety of different sources.

More importantly however, I'm not even going to be looking into porting 
anything to py3k until a year or so after its release - and that's only if 2.6 
has given me some tools to deal with the transition.  The reason I am posting 
to this thread is that I had *written off py3k entirely* before Anthony started 
making noises about forward compatibility.  The support cycle of Ubuntu Dapper 
makes it likely that Twisted will be supporting Python 2.4 until at least 2011. 
 I assume 2.5 will last a year after that, so assuming 2.6 has perfect forward 
compatibility, we'll be looking at a py3k port in early 2013.  Features 
available in the beta releases aren't going to help me there.

Plus, this defeats the whole purpose of py3k.  Why break compatibility if 
you're going to include a bunch of gross compatibility code?  Compatibility has 
already been broken in the py3k branch, there's no point in putting it back 
there.  Adding it to 2.x might make sense though.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] file(file)

2007-01-12 Thread glyph
On 12:37 am, [EMAIL PROTECTED] wrote:

For security reasons I might be asking for file's constructor to be
removed from the type for Python source code at some point (it can be
relocated to an extension module if desired).  By forcing people to go
through open() to create a file object you can more easily control
read/write access to the file system (assuming the proper importation
of extension modules has been blocked).  Not removing the constructor
allows any code that has been explicitly given a file object but not
open() to just get the class and call the constructor to open a new
file.

This is a general problem with type access.  Secure versions of any type should 
not allow access to the type period.  It is hardly unique to files, and is not 
limited to constructors either.  How do you, e.g., allow a restricted piece of 
code write access to only a specified area of the filesystem?

More importantly, given the random behavior that open() will be growing 
(opening sockets?  dynamic dispatch on URL scheme???) file() will likely remain 
a popular way to be sure you are accessing the filesystem.



___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] file(file)

2007-01-12 Thread glyph



On 02:42 am, [EMAIL PROTECTED] wrote:

Wrapper around open() that does proper checking of its arguments.  I
will be discussing my security stuff at PyCon if you are attending and
are interested.

I am both, so I guess I'll see you there :).
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-13 Thread glyph
On 08:19 am, [EMAIL PROTECTED] wrote:
Georg Brandl schrieb:
 If Python 3.0 was simply a release which removed deprecated features,
 there would clearly be no issue. I would update my code in advance of
 the 3.0 release to not use any of those features being removed, and
 I'm all set. But that's not what I'm hearing. Python 3 is both adding
 new ways to do things, and removing the older way, in the same
 version, with no overlap. This makes me very anxious.

 It has always been planned that in those cases that allow it, the new way to 
 do
 it will be introduced in a 2.x release too, and the old way removed only in 
 3.x.

What does that mean for the example James gave: if dict.items is going
to be an iterator in 3.0, what 2.x version can make it return an
iterator, when it currently returns a list?

There simply can't be a 2.x version that *introduces* the new way, as it
is not merely a new API, but a changed API.

The API isn't changed.  It's just dependent on its execution context in a 
confusing way.  That difference right now is being reflected as 2.x VM vs. 
3.0 VM but there are other ways to reflect it more explicitly.  It would 
certainly be possible to have:

   from __future__ import items_is_iter

be the same as:

   __py3k_compat_items_is_iter__ = True

and have the 2.x series' items() method check the globals() of the calling 
scope to identify the return value of items() in that particular context.
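
In pure Python, the hack I have in mind looks roughly like this -- an illustrative sketch only, with FutureDict as a made-up stand-in; a real implementation would live in C and avoid the cost of frame peeking:

    import sys

    class FutureDict(dict):
        def items(self):
            # Peek at the *calling* module's globals for the marker that
            # the __future__ import would have set.
            caller = sys._getframe(1).f_globals
            if caller.get('__py3k_compat_items_is_iter__'):
                return iter(dict.items(self))   # 3.0-style: an iterator
            return dict.items(self)             # classic 2.x: a list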

If the actual data structures of module dictionaries and stack objects are too 
expensive there are other, similar things that could be done at the C level.  
This implementation strategy is just the obvious thing that occurred to me 
after maybe 2 minutes of consideration.  I'm sure someone more familiar with 
the internals of Python could come up with something better.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Unipath package

2007-01-28 Thread glyph

On 28 Jan, 06:30 pm, [EMAIL PROTECTED] wrote:
The discussion has been hampered by the lack of released code.  Only
Orendorff's class has been widely used in production systems.  The
others have either never been used or only by their authors; they
haven't made it to the Cheeseshop.  Unipath is merely to say Here's
another way to do it; see if it works for you.  Certainly the Path
methods need more testing and use in the real world before they'd be
ready for the stdlib.  The FSPath methods are more experimental so I'd
say they need a year of use before they can be considered sufficiently
stable.

Mike is mistaken here; Twisted has a module, twisted.python.filepath, which has 
been used extensively in production as well as by other projects.  You can see 
such a project here (just the first hit on google code search):


http://www.google.com/codesearch?hl=enq=+twisted+python+filepath+show:3PfULFjjkV4:Vyh6PbYbXnU:7_Cvpg7zWo4sa=Ncd=30ct=rccs_p=https://taupro.com/pubsvn/Projects/Xanalogica/ShardsKeeper/trunkcs_f=shardskeeper.py#a0

and the source to the module itself here:

http://twistedmatrix.com/trac/browser/trunk/twisted/python/filepath.py

Twisted's filepath implementation also provides a zipfile implementation of the 
same interface, so that, for example, you can copy a directory in the 
filesystem and put it into a zipfile with the same function.  We plan to 
eventually also include tarfile and in-memory implementations.

Of course, I am FilePath's author, so I have a certain bias; but I think it 
would be well suited to the standard library, and I would be interested to hear 
any feedback on it, especially that which would make it unsuitable for 
inclusion.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Trial balloon: microthreads library in stdlib

2007-02-13 Thread glyph
On 12 Feb, 05:11 pm, [EMAIL PROTECTED] wrote:
On 2/12/07, Tristan Seligmann [EMAIL PROTECTED] wrote:
 * Richard Tew [EMAIL PROTECTED] [2007-02-12 13:46:43 +]:
  Perhaps there is a better way.  And I of course have no concept of
  how this might be done on other platforms.

 Building on an existing framework that does this seems better than
 reinventing the wheel, for something of this magnitude.

This to me seems to be the low level support you would build
something like Twisted on top of.  Pushing Twisted so that others
can get it seems a little over the top.

It sounds like you don't really understand what Twisted is, what it does, or 
the difficulty involved in building that low level so that it's usable by 
something like Twisted.

Tristan is correct: this should be a patch against Twisted, or perhaps as a 
separate library that could implement a reactor.

I have no problem with other, competing event-driven mechanisms being 
developed; in fact, I think it's great that the style of programming is getting 
some attention.  But this is not a robust and straightforward wrapping of some 
lower level.  This is an entirely new, experimental project, and the place to 
start developing it is _NOT_ in the Python core.  Another post in this thread 
outlined that the first thing you should do is develop something and get people 
in the community to use it.  Please do that, start its own mailing list, and 
stop discussing it here.

On a personal note, I believe that those who do not understand twisted are 
doomed to repeat it.  I predict that the ultimate outcome of this project will 
be that all concerned will realize that, actually, Twisted already does almost 
everything that it proposes, and the microthreading features being discussed 
here are a trivial hack somewhere in its mainloop machinery, not an entirely 
new subsystem that it should be implemented in terms of.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-13 Thread glyph
On 02:20 am, [EMAIL PROTECTED] wrote:

If Twisted is designed so that it absolutely *has* to
use its own special event mechanism, and everything else
needs to be modified to suit its requirements, then it's
part of the problem, not part of the solution.

I've often heard this complaint, usually of the form "that's twisted-specific." 
 The thing is, Twisted isn't specific.  Its event mechanism isn't special.  
In fact it's hard to imagine how it might be made less special than it 
currently is.

Whatever your event dispatch mechanism, *some* code has to be responsible for 
calling the OS API of select(), WaitForMultipleEvents(), g_main_loop_run(), or 
whatever.  Twisted actually imposes very few requirements for code to 
participate in this, and was designed from day one specifically to be a 
generalized mainloop mechanism which would not limit you to one underlying 
multiplexing implementation, event-dispatch mechanism, or operating system if 
you used its API.

There have even been a few hacks to integrate Twisted with the asyncore 
mainloop, but almost everyone who knows both asyncore and Twisted has rapidly 
decided to just port all of their code to Twisted rather than maintain that 
bridge.

In fact, Twisted includes fairly robust support for threads, so if you really 
need it you can mix and match event-driven and blocking styles.  Again, most 
people who try this find that it is just nicer to write straight to the Twisted 
API, but for those that really need it, such as Zope and other WSGI-contained 
applications, it is available.

Aside from the perennial issue of restartable reactors (which could be resolved 
as part of a stdlib push), Twisted's event loop imposes very few constraints on 
your code.  It provides a lot of tools, sure, but few of them are required.  
You don't even *need* to use Deferreds.  Now, Twisted, overall, can be 
daunting.  It has a lot of conventions, a lot of different interfaces to 
memorize and deal with, but if you are using the main loop you don't have to 
necessarily care about our SSH, ECMA 48, NNTP, OSCAR or WebDAV implementation.  
Those are all built at a higher level.

It may seem like I'm belaboring the point here, but every time a thread comes 
up on this list mentioning the possibility of a standard event mechanism, 
people who do not know it very well or haven't used it start in with implied 
FUD that Twisted is a big, complicated, inseparable hairy mess that has no 
place in the everyday Python programmer's life.  It's tangled-up Deep Magic, 
only for Wizards Of The Highest Order.  Alternatively, more specific to this 
method, it's highly specific and intricate and very tightly coupled.

Nothing could be further from the truth.  It is *strictly* layered to prevent 
pollution of the lower levels by higher level code, and all the dependencies 
are one-way.  Maybe our introductory documentation isn't very good.  Maybe 
event-driven programming is confusing for those expecting something else.  
Maybe the cute names of some of the modules are offputting.  Still, all that 
aside, if you are looking for an event-driven networking engine, Twisted is 
about as straightforward as you can get without slavishly coding to one 
specific platform API.

When you boil it down, Twisted's event loop is just a notification for "a 
connection was made", "some data was received on a connection", "a connection 
was closed", and a few APIs to listen or initiate different kinds of 
connections, start timed calls, and communicate with threads.  All of the 
platform details of how data is delivered to the connections are abstracted 
away.  How do you propose we would make a less specific event mechanism?
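
To make that concrete, the whole surface for a toy echo server looks something like this -- a minimal sketch, not production code (port 8000 is an arbitrary example):

    from twisted.internet import protocol, reactor

    class Echo(protocol.Protocol):
        def connectionMade(self):
            # "a connection was made"
            pass
        def dataReceived(self, data):
            # "some data was received on a connection" -- echo it back
            self.transport.write(data)
        def connectionLost(self, reason):
            # "a connection was closed"
            pass

    factory = protocol.ServerFactory()
    factory.protocol = Echo
    reactor.listenTCP(8000, factory)  # one of the "listen" APIs
    reactor.run()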

I strongly believe that there is room for a portion of the Twisted reactor API 
to be standardized in the stdlib so that people can write simple event-driven 
code out of the box with Python, but still have the different plug-in 
implementations live in Twisted itself.  The main thing blocking this on my end 
(why I am not writing PEPs, advocating for it more actively, etc) is that it is 
an extremely low priority, and other, higher level pieces of Twisted have more 
pressing issues (such as the current confusion in the web universe).  Put 
simply, although it might be nice, nobody really *needs* it in the stdlib, so 
they're not going to spend the effort to get it there.

If someone out there really had a need for an event mechanism in the standard 
library, though, I encourage them to look long and hard at how the existing 
interfaces in Twisted could be promoted to the standard library and continue to 
be maintained compatibly in both places.

At the very least, standardizing on something very much like IProtocol would go 
a long way towards making it possible to write async clients and servers that 
could run out of the box in the stdlib as well as with Twisted, even if the 
specific hookup mechanism (listenTCP, listenSSL, et. al.) were incompatible - 
although a signature compatible 

Re: [Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-14 Thread glyph
On 01:04 pm, [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED]:
 I really, really wish that every feature proposal for Python had to meet
 some burden of proof [...].  I suspect this would kill 90% of hey
 wouldn't this syntax be neat proposals on day zero [...]

This is what I understood the initial posting to python-ideas to be
about.  If the feedback from there had been that's a stupid idea, then
that would have been the end of it.  I think it's a good thing that
there's the python-ideas mechanism for speculative suggestions.

Discussion, with any audience, no matter how discerning, isn't really going to 
be a useful filter unless the standards of that audience are clear.

What I was trying to say there is that the proposal of new ideas should not 
begin with Hey, I think this might be 'good' - that's too ill defined.  It 
should be, I noticed (myself/my users/my students/other open source projects) 
writing lots of code that looks like *this*, and I think it would be a real 
savings if it could look like *that*, instead.  Some back-of-the-envelope math 
might be nice, in terms of savings of lines of code, or an explanation of the 
new functionality that might be enabled by a feature.

More than the way proposals should work, I'm suggesting that the standards of 
the community in _evaluating_ to the proposals should be clearer.  The 
cost-benefit analysis should be done in terms of programmer time, or ease of 
learning, or  integration possibilities, or something.  It doesn't 
*necessarily* have to be in terms of lines of code saved, but especially for 
syntax proposals, that is probably a necessary start.  It's fine to have some 
aesthetic considerations, but there's a lot more to Python than aesthetics, and 
right now it seems like the majority of posts coming into threads like this one 
are I don't like the proposed ':==|', it's ugly.  How about '?!?+' instead?

The short version of this is that any feature should describe what task it 
makes easier, for whom, and by how much.  This description should be done up 
front, so that the discussion can center around whether that analysis is 
correct and whether it is worth the ever-present tradeoff against upgrade 
headaches.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-14 Thread glyph
On 02:22 pm, [EMAIL PROTECTED] wrote:
On Wed, Feb 14, 2007, [EMAIL PROTECTED] wrote:

 As I said, I don't have time to write the PEPs myself, but I might fix
 some specific bugs if there were a clear set of issues preventing this
 from moving forward.  Better integration with the standard library
 would definitely be a big win for both Twisted and Python.

Here's where I'm coming from:

My first experience with Twisted was excellent: I needed a web server in
fifteen minutes to do my PyCon presentation, and it Just Worked.

My second experience with Twisted?  Well, I didn't really have one.

Thanks for the feedback, Aahz... I'm a little confused, though.  Is this just a 
personal anecdote, or were you trying to make a larger point about Twisted's 
suitability for the stdlib?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Trial balloon: microthreads library in stdlib

2007-02-14 Thread glyph
On 04:49 pm, [EMAIL PROTECTED] wrote:
On 2/13/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Tristan is correct: this should be a patch against Twisted, or perhaps as a
separate library that could implement a reactor.

I think there is some confusion here.

Quite.

I am not writing a competing event driven mechanism.  What I was doing
was feeling out whether there was any interest in better support for
asynchronous calls.

Perhaps you could describe what you were proposing a bit better, because to 
both Tristan and myself, it sounded (and frankly, still sounds) like you were 
describing something that would directly replace Twisted's mainloop core API.  
asynchronous calls is a very vague term with at least three completely 
separate definitions.

Yes I have looked at Twisted.  It was the first place I looked, several
months ago, and what drew me to it was the IOCP reactor.  However
as I also explained in my reply to Tristan it is not suitable for my
needs.

As far as I can tell, you still haven't even clearly expressed what your needs 
are, let alone whether or not Twisted is suitable.  In the reply you're citing, 
you said that this sounded like something low level that twisted would be 
written on top of - but the this you were talking about, based on your 
previous messages, sounded like monkeypatching the socket and asyncore modules 
to provide asynchronous file I/O based on the platform-specific IOCP API for 
Windows.

Twisted can already provide that functionality in a _much_ cleaner fashion.  
Although you might need to edit some code or monkeypatch some objects 
specifically to get asynchronous file I/O, at least Twisted is *designed* for 
such applications.

To quote you from the original message that Tristan replied to:

 I can't go into low level details because I do not have the necessary 
 experience or knowledge to know which approach would be best. ... 
 I of course have no concept of how this might be done on other platforms.

As someone who _does_ have the necessary experience and knowledge, I can tell 
you that Twisted *IS* the one stop shop for events that you're describing.  I 
do know how one would do such a thing on other platforms and it is not a simple 
task - so much so that async file I/O is still not available in Twisted today.

It is a large dependency and it is a secondary framework.

Has it occurred to you that it is a large dependency not because we like 
making bloated and redundant code, but because it is doing something that is 
actually complex and involved?

And I did not feel the need to verify the implication that it wasn't
ready because this matched my own recollections.

It meaning... Twisted?  Twisted's IOCP support?  Ready for... what?  
IOCPReactor is definitely not up to the standard of much of the rest of the code 
in Twisted, but it's clearly ready for _something_ since BitTorrent uses it.

But I hope you realise that asserting that things should be within
Twisted without giving any reason,

I have suggested that work proceed in Twisted rather than in Python because 
adding async file I/O to Twisted would be much easier than adding an entirely 
new event-loop core to Python, and then adding async file I/O to *that*.

I thought that I provided several reasons before as well, but let me state them 
as clearly as I can here.  Twisted is a large and mature framework with several 
years of development and an active user community.  The pluggable event loop it 
exposes is solving a hard problem, and its implementation encodes a lot of 
knowledge about how such a thing might be done.  It's also tested on a lot of 
different platforms.

Writing all this from scratch - even a small subset of it - is a lot of work, 
and is unlikely to result in something robust enough for use by all Python 
programs without quite a lot of effort.  It would be a lot easier to get the 
Twisted team involved in standardizing event-driven communications in Python.  
Every other effort I'm aware of is either much smaller, entirely proprietary, 
or both.  Again, I would love to be corrected here, and shown some other large 
event-driven framework in Python that I was heretofore unaware of.  
Standardization is much easier to achieve when you have multiple interested 
parties with different interests to accommodate.  As Yitzhak Rabin used to say, 
you don't engage in API standardization with your friends, you engage in API 
standardization with your enemies - or... something like that.

especially when the first person
saying it just stated that the matching work in Twisted wasn't even
ready, feels like Twisted is trying to push itself forward by bullying
people to work within it whenever something can be asserted as
laying within whatever domain it is which Twisted covers.

You say that you weren't proposing an alternate implementation of an event loop 
core, so I may be reacting to something you didn't say at all.  However, again, 
at least Tristan thought the same thing, so I'm not the only one.

Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-14 Thread glyph
On 08:52 pm, [EMAIL PROTECTED] wrote:

When I last looked at twisted (that is several years ago), there were
several reactors - win32reactor, wxreactor, maybe even more.

Yes.  That's intentional.  Each of those systems offers its own event loop, and 
each reactor implements the basic operations in terms of those loops.

They all have the same API.  Application code does 'from twisted.internet 
import reactor; reactor.listenTCP', 'reactor.callLater', etc.  Only the very 
top-most level decides which reactor the application will use.
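
To make that concrete, here is roughly what a small program looks like (the 
Echo protocol below is a made-up stand-in for illustration, not something I am 
claiming ships with Twisted):

    from twisted.internet import reactor, protocol

    class Echo(protocol.Protocol):
        # Hypothetical example protocol: write back whatever we receive.
        def dataReceived(self, data):
            self.transport.write(data)

    factory = protocol.ServerFactory()
    factory.protocol = Echo

    # These calls look the same no matter which reactor is installed.
    reactor.listenTCP(8000, factory)      # IReactorTCP
    reactor.callLater(30, reactor.stop)   # IReactorTime
    reactor.run()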

And they
didn't even work too well.  The problems I remember were that the win32reactor
was limited to only a handful of handles, the wxreactor didn't process
events when a wx modal dialog box was displayed, and so on.  Has this changed?

win32eventreactor is limited to 64 handles because WaitForMultipleObjects is 
limited to 64 handles.  wxreactor's event loop is ghastly and actually did 
pause when a modal dialog box was displayed (that has since been fixed).  
Process support on win32 now works in the default select reactor as well as the 
gtk reactor, so win32reactor is mostly a curiosity at this point (useful mainly 
if you want to implement your own GDI-based GUI, as PyUI did at one point), and 
its limitations are not as serious for Twisted as a whole.

In other words, Twisted exposes the platform limitations in its 
platform-specific event loop implementations, and only works around them where 
it's possible to do so without masking platform functionality.

For servers, the epoll, poll, and select reactors work just fine.  The select 
reactor does have a maximum of FD_SETSIZE simultaneous sockets as well, but it 
is very easy to switch reactors if you need something more scalable.

For clients, the best GUI toolkit for Twisted applications at this point is 
GTK, but WX, QT and Cocoa can all be made to work with a minimum of hassle.


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-14 Thread glyph

On 12:31 am, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
 On 08:52 pm, [EMAIL PROTECTED] wrote:

  When I last looked at twisted (that is several years ago), there were
  several reactors - win32reactor, wxreactor, maybe even more.

 Only the very top-most level decides which reactor the application will use.

This is a worry, because it implies that there has to
*be* a top level that knows what kind of reactor the
whole application will use, and all parts of the
application need to know that they will be getting
their reactor from that top level.

The default reactor is the most portable one, 'select', and if no other reactor 
 is installed, that's the one that will be used.

That may not be the case. For example, you want to
incorporate some piece of event-driven code written
by someone else into your gtk application. But it
wasn't written with gtk in mind, so it doesn't know
to use a gtkreactor, or how to get a reactor that it
can use from your application.

from twisted.internet import reactor is the way you get at the reactor, 
regardless of which one is currently installed.

There can only be one reactor active at any given time, because at the very 
bottom of the event-handling machinery _some_ platform multiplexing API must be 
called, and that is mostly what the reactor represents.

The GTK reactor does not have its own API.  It simply allows you to use GTK 
APIs as well, by back-ending to the glib mainloop.  That is, in fact, the whole 
point of having a reactor API in the first place.
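
As a sketch of how that works in practice (assuming Twisted's gtk2reactor, 
which is what ships today; library_code is just a placeholder name):

    # Top-most level of a GTK application: pick the reactor *before*
    # anything imports twisted.internet.reactor.
    from twisted.internet import gtk2reactor
    gtk2reactor.install()

    from twisted.internet import reactor

    # Library code elsewhere neither knows nor cares which reactor is
    # installed; it just imports the reactor and schedules callbacks.
    def library_code():
        reactor.callLater(1.0, reactor.stop)

    library_code()
    reactor.run()   # this is now driven by the glib mainloop

The body of library_code() would run unchanged under the default select 
reactor, which is exactly the point.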

This is not my idea of what another poster called a
one-stop shop -- a common API that different pieces
of code can call independently without having to know
about each other.

To my mind, there shouldn't be a reactor object
exposed to the application at all. There should just
be functions for setting up callbacks.

That's what the Twisted reactor object *is*, exactly.  Functions (well, 
methods) for setting up callbacks.

The choice of
implementation should be made somewhere deep inside
the library, based on what platform is being used.

The deep inside the library decision is actually a policy decision made by a 
server's administrator, or dependent upon the GUI library being used if you 
need to interact with a GUI event loop.  Perhaps the default selection could be 
better, but installing a reactor is literally one line of code, or a single 
command-line option to the twistd daemon.  See:

http://twistedmatrix.com/projects/core/documentation/howto/choosing-reactor.html

It is completely transparent to the application, _unless_ the application wants 
to make use of platform-specific features.

See the following for more information:

http://www.cs.wustl.edu/~schmidt/PDF/reactor-siemens.pdf

although technically Twisted's reactor is more like the slightly higher level 
POSA proactor pattern; asyncore is more like a true reactor in the sense 
discussed in that paper.

Twisted exposes various APIs for setting up callbacks exactly as you describe:

http://twistedmatrix.com/documents/current/api/twisted.internet.interfaces.IReactorTCP.html
http://twistedmatrix.com/documents/current/api/twisted.internet.interfaces.IReactorTime.html

So at this point I'm skeptical that the Twisted
API for these things should be adopted as-is

Given that your supporting arguments were almost exactly the opposite of the 
truth in every case, I think this conclusion should be re-examined :).  If 
you're interested in how a normal Twisted application looks, see this tutorial:

http://twistedmatrix.com/projects/core/documentation/howto/servers.html


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-16 Thread glyph

On 16 Feb, 06:30 pm, [EMAIL PROTECTED] wrote:

I suggest it is possible to implement a PerfectReactor.

I don't think this constitutes a sufficient existence proof.  Perhaps you could 
write a prototype?  There are a bunch of existing reactors you could either 
import or copy/paste from to bootstrap it, so you wouldn't necessarily need to 
redo many of the platform-specific hacks Twisted has already implemented for 
those loops.

Personally I am highly skeptical that this is possible, and I didn't find any 
of your arguments convincing, but I would be ecstatic if it worked; it isn't 
*fun* maintaining 12 or so subtly incompatible implementations of the same 
interface :).

I also think this discussion would be more appropriate to continue on the 
Twisted list, as we have veered pretty far afield of stdlib enhancements and 
are now talking about multiplexing method implementations, but I'll follow it 
wherever it continues.


Re: [Python-Dev] with_traceback

2007-02-27 Thread glyph
On Tue, 27 Feb 2007 13:37:21 +1300, Greg Ewing [EMAIL PROTECTED] wrote:

I don't like that answer. I can think of legitimate
reasons for wanting to pre-create exceptions, e.g. if
I'm intending to raise and catch a particular exception
frequently and I don't want the overhead of creating
a new instance each time.

This seems like kind of a strange micro-optimization to have an impact on a 
language change discussion.  Wouldn't it be better just to optimize instance 
creation overhead?  Or modify __new__ on your particular heavily-optimized 
exception to have a free-list, so it can be both correct (you can still mutate 
exceptions) and efficient (you'll only get a new exception object if you really 
need it).
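
A strawman sketch of what I mean by that free-list, with names I just made up:

    class BusyError(Exception):
        # Keep a pool of previously-raised instances so that raising this
        # exception frequently does not allocate a new object every time.
        _free = []

        def __new__(cls, *args):
            if cls._free:
                return cls._free.pop()   # reuse; __init__ resets args
            return Exception.__new__(cls, *args)

        def recycle(self):
            # The catching code hands the instance back when it is done.
            type(self)._free.append(self)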

For me, this is casting serious doubt on the whole
idea of attaching the traceback to the exception.

I'm sorry if this has been proposed already in this discussion (I searched 
around but did not find it), but has anyone thought of adding methods to 
Exception to handle these edge cases and *not* attempting to interact with the 
'raise' keyword at all?  This is a strawman, but:

    except Foo as foo:
        if foo.some_property:
            foo.reraise(modify_stack=False)

This would make the internal implementation details less important, since the 
'raise' keyword machinery will have to respect some internal state of the 
exception object in either case, but the precise thing being raised need not be 
the result of the method, nor the exception itself.


Re: [Python-Dev] Python-3000 upgrade path

2007-02-27 Thread glyph
On Sun, 25 Feb 2007 13:12:51 -0800, Thomas Wouters [EMAIL PROTECTED] wrote:
I'm sending this to python-dev instead of python-3000 for two reasons: It's
not about features to be added to Python 3.0, and it's not really about 3.0
at all -- it's about 2.6 and later. It's about how we get Python 2.x to 3.0,
and how much of 3.0 we put into 2.6 and later.
 ...
I also don't mean doubts about how we're going to keep the
performance impact minimal or how we're going to handle dict views and what
not. I mean doubts about this procedure of having transitional releases
between 2.5 and 3.0, and the *community* impact of 2.6+ and 3.0 this way.

I'm pretty tired at the moment, fried from PyCon, and have a lot of work to 
catch up on, so I'll have to spend a while collecting my thoughts to respond in 
more depth.  However, especially since I've been a vocal proponent of a 
strategy like this for a while, I would like to say that I'm basically happy 
with this.  Not only that, I'm significantly relieved after PyCon about the 
difficulty of the migration, and confident that, if my current understanding is 
correct, Twisted _will_ support 3.0 and beyond.

Effectively, what this plan means is that the most *basic* aspects of 
transitioning to Python 3.0 should work like the transition to any other new 
Python release.  More changes may be necessary, but it should be possible to 
add a few restrictions to your codebase, and continue supporting older 
releases with little trouble.  Transitioning to 2.5 was a fairly rocky upgrade 
for Twisted as well, though, and we made it through that one reasonably well.

The one concern I have is that there are things the source translator won't do 
correctly.  That's fine, but I think there are two important technical features 
it really needs to make the social aspects of this upgrade work well.

2to3 should take great care to _tell_ you when it fails.  One concern I have is 
that the source translation may subtly alter the *semantics* of unit test code, 
so that the tests are no longer effective and do not provide adequate coverage. 
 I think this is an extremely unlikely edge case, but a very dangerous one in 
the event that it does happen.  I don't know of any bugs in 2to3 that will do 
this at the moment, but then it's hardly the final release.

Also, the limitations of 2to3 should be very well documented, so that when it 
does tell you how it has failed, it's clear to the developer what changes to 
make to the *2.6 source* to cause it to translate correctly, not how to fix up 
the translated 3.0 code.

Thanks *very* much to all the python core developers and community members who 
have been working to make this happen, especially to Neal, Thomas, and Guido 
for listening intently to all of our concerns at PyCon.


Re: [Python-Dev] with_traceback

2007-02-28 Thread glyph

On Wed, 28 Feb 2007 18:10:21 -0800, Guido van Rossum [EMAIL PROTECTED] wrote:
I am beginning to think that there are serious problems with attaching
the traceback to the exception; I really don't like the answer that
pre-creating an exception is unpythonic in Py3k.

In Twisted, to deal with asynchronous exceptions, we needed an object to 
specifically represent a raised exception, i.e. an Exception instance with 
its attached traceback and methods to manipulate it.

You can find its API here:

http://twistedmatrix.com/documents/current/api/twisted.python.failure.Failure.html

Perhaps the use-cases for attaching the traceback object to the exception would 
be better satisfied by simply having sys.exc_info() return an object with 
methods like Failure?  Reading the motivation section of PEP 344, it 
describes  passing these three things in parallel as tedious and 
error-prone.  Having one object one could call methods on instead of a 3-tuple 
which needed to be selectively passed on would simplify things.

For example, chaining could be accomplished by doing something like this:

sys.current_exc_thingy().chain()

I can't think of a good name for the new object type, since traceback, 
error, exception and stack all already mean things in Python.
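
Whatever the name, the wrapper itself is not much code; here is a strawman 
sketch of the sort of object I mean, built on what sys.exc_info() already 
gives us (RaisedException is just a placeholder name):

    import sys, traceback

    class RaisedException(object):
        # Bundle the (type, value, traceback) 3-tuple into one object with
        # methods, roughly the way twisted.python.failure.Failure does.
        def __init__(self):
            self.type, self.value, self.tb = sys.exc_info()

        def printTraceback(self, file=None):
            traceback.print_exception(self.type, self.value, self.tb,
                                      None, file)

        def reraise(self):
            # 2.x three-argument raise; preserves the original traceback.
            raise self.type, self.value, self.tb

    try:
        1 / 0
    except ZeroDivisionError:
        err = RaisedException()

    err.printTraceback()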


Re: [Python-Dev] Encouraging developers

2007-03-05 Thread glyph

A few meta-points:

On 07:30 pm, [EMAIL PROTECTED] wrote:
2. Publically identify the core developers and their areas of expertise and
   responsibility (ie. which parts of the source tree they own).


I'd like to stress that this is an important point; although we all know 
that Guido's the eventual decision maker, there are people whose 
opinions need to be influenced around particular areas of the code and 
whose opinions carry particular weight.  *I* have trouble figuring out 
who these people are, and I think I have more than a casual outsider's 
understanding of the Python development process.

3. Provide a forum (a python-patch mailing list) where patches can be
proposed, reviewed and revised informally but quickly.


This reminds me of a post I really wanted to make after PyCon but 
rapidly became too sick to.


The patches list really ought to be _this_ list.  The fact that it isn't 
is indicative of a failure of the community.  A good deal of the 
discussion here in recent months has either been highly speculative, or 
only tangentially related to Python's development, which is ostensibly 
its topic.  We shouldn't really be talking about PR or deployment or any 
issues which aren't bug reports or patches here.


I've certainly contributed somewhat to this problem myself, and I've 
made a resolution to stick to development issues here.


This post itself is obviously in a grey area near the edge of that, but 
I do feel that, given the rather diverse population of readers here, we 
should collectively make the purpose of this forum explicit so that the 
python developers can use it to develop Python.


One worrying trend I noticed at PyCon is that it seems that quite a lot 
of communication between core developers these days happens over private 
email.  Core developers use private email to deal with pressing issues 
because python-dev has become crowded.  This makes it difficult to see 
high-priority issues, as well as fostering an environment where every 
minor detail might get responded to with a cascade of me too posts or 
bike-shed discussions.  The core guys have a lot of stuff to get done, 
and if there isn't an environment where they can do that in public, 
they're going to get it done in private.


Taken together, all this has the overall effect of making the 
development process a lot harder to follow, which worsens, for example, 
issue #2 which I responded to above.  It also creates a number of false 
impressions about what sort of development is really going on, since 
many posters here are not, in fact, working on core Python at all, just 
speculating about it.


A few others have already pointed out the python-ideas list:

   http://mail.python.org/mailman/listinfo/python-ideas

where the more speculative ideas should be discussed before being 
brought here as a patch or PEP.  Of course, for more general discussion 
there's always good old python-list.


As far as bike-shed discussions, we can all do our part by just 
considering what threads we all really have something useful to 
contribute to.  Let's all try to put the python dev back in python-dev!


Re: [Python-Dev] splitext('.cshrc')

2007-03-06 Thread glyph

On 10:18 pm, [EMAIL PROTECTED] wrote:

Phillip J. Eby schrieb:

At 10:01 PM 3/6/2007 +0100, Martin v. Löwis wrote:

It's unfortunate, of course, that people apparently relied on
this behavior


I was going to say it's the *documented* behavior, but I see that the
documentation is actually such that it could be interpreted either 
way.


However, since it's not documented more specifically, it seems 
perfectly
reasonable to rely on the implementation's behavior to resolve the 
ambiguity.


Despite the generally quite good documentation, I've learned quite a lot 
about how the standard library works by messing around at the 
interactive interpreter.  This is one cost of the incompatibility - 
having to re-train developers who thought they knew how something 
worked, or are continuing to write new software while experimenting with 
older VMs.

Sure, it is an incompatible change, no doubt. However, incompatible
changes are ok for feature releases (not so for bugfix releases).


Incompatible changes may be *acceptable* for feature releases, but that 
doesn't mean they are desirable.  The cost of incompatibility should be 
considered for every change.  This cost is particularly bad when the 
incompatibility is of the silent breakage variety - the change being 
discussed here would not be the sort of thing that one could, say, 
produce a warning about or gently deprecate; anything relying on the old 
behavior would suddenly be incorrect, and any software wishing to 
straddle the 2.5-2.6 boundary would be better off just implementing 
their own splitext() than relying on the stdlib.

So this being an incompatible change alone is not a reason to reject
the patch. Significant breakage in many applications might be, but
I don't expect that for this change (which is really tiny).


Software is a chaotic system.  The size of the change is unrelated to 
how badly it might break things.


More to the point, we know the cost, what's the benefit?  Is there any 
sort of bug that it is likely to prevent in *new* code?  It clearly 
isn't going to help any old code.  It seems there are people who see it 
both ways, and I haven't seen anything compelling to indicate that 
either behavior is particularly less surprising in the edge case.


In cases like this, historical context should be considered, even for a 
major overhaul like 3.0.  Of course, if the newly proposed semantics 
provided a solution to a real problem or common error, compatibility 
might be determined to be a less important issue.


The use-cases being discussed here would be better served by having new 
APIs that do particular things and don't change existing semantics, 
though.  For example, a guess_mime_type(path) function which could 
examine a path and figure out its mime type based on its extension 
(among other things).
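
Such a function is a few lines on top of what the stdlib already has; a rough 
sketch (guess_mime_type is my name for it, not an existing API):

    import mimetypes

    def guess_mime_type(path):
        # Map a path to a MIME type based on its extension, falling back
        # to a generic default when the extension isn't recognized.
        mime_type, encoding = mimetypes.guess_type(path)
        return mime_type or 'application/octet-stream'

A dotfile like '.cshrc' has no registered extension, so it would simply fall 
through to the default, and nobody has to argue about what splitext does with 
it.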


Re: [Python-Dev] Python-3000 upgrade path

2007-03-06 Thread glyph

On 10:22 pm, [EMAIL PROTECTED] wrote:

On 2/27/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
2to3 should take great care to _tell_ you when it fails.  One concern I 
have is that the source translation may subtly alter the *semantics* 
of unit test code, so that the tests are no longer effective and do 
not provide adequate coverage.



Actually, this did happen with the iterkeys - keys translation. A few
unittests were reduced to pure nonsense by this. Most of those failed,
but it was definitely a hassle finding out what went wrong (in part
because I was too greedy and checked in a bunch of converted code
without any review at all). Lesson learned I would say. Fortunately,
this is a case where 2.6's Py3k warning mode should compensate.


Was this due to a bug that can be fixed, or an inherent problem?

I could imagine, thanks to Python's dynamism, there would be edge cases 
that are impossible to detect completely reliably.  If that's the case 
it would be good to at least have pedantic warnings from 2to3 alerting 
the user to places where translation could possibly be doing something 
semantically dodgy.


Ideally before jumping the chasm with Twisted I'd like to make sure that 
all of that sort of warning was gone, in _addition_ to a clean test run.
Also, the limitations of 2to3 should be very well documented, so that 
when it does tell you how it has failed, it's clear to the developer 
what changes to make to the *2.6 source* to cause it to translate 
correctly, not how to fix up the translated 3.0 code.


I'm hoping Collin will continue his excellent work on 2to3. Hopefully
he'll get help from others in writing docs aimed at teaching the
c.l.py crowd how to use it and what to expect.


I'm sure he'll hear from me if anything goes wrong with it :).
Thanks *very* much to all the python core developers and community 
members who have been working to make this happen, especially to Neal, 
Thomas, and Guido for listening intently to all of our concerns at 
PyCon.


You're welcome. I believe I even got a beer out of it. ;-)


Well deserved!


[Python-Dev] Policy Decisions, Judgment Calls, and Backwards Compatibility (was Re: splitext('.cshrc'))

2007-03-08 Thread glyph

[EMAIL PROTECTED] wrote:

That assumes there is a need for the old functionality. I really don't
see it (pje claimed he needed it once, but I remain unconvinced, not
having seen an actual fragment where the old behavior is helpful).


This passage is symptomatic of the thing that really bothers me here.  I 
have rarely used splitext before (since what I really want is 
determineMIMEType(), and that is easier to implement with one's own 
string-munging) and certainly won't use it again after following this 
discussion.


The real issue I have here is one of process.  Why is it that PJE (or 
any python user who wishes their code to keep working against new 
versions of Python) must frequent python-dev and convince you (or 
whatever Python developer might be committing a patch) of each use-case 
for old functionality?  I would think that the goal here would be to 
keep all the old Python code running which it is reasonably possible to, 
regardless of whether the motivating use-cases are understood or not. 
It is the nature of infrastructure code to be used in bizarre and 
surprising ways.


My understanding of the current backwards-compatibility policy for 
Python, the one that Twisted has been trying to emulate strictly, is 
that, for each potentially incompatible change, there will be:


* at least one release with a pending deprecation warning and new,
  better API
* at least one release with a deprecation warning
* some number of releases later, the deprecated functionality is removed
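
In code, the transition for a single function would look roughly like this 
(a sketch with made-up names, not a proposal for any specific API):

    import warnings

    def old_api(path):
        # Release N: still works, but nudges early adopters toward new_api().
        warnings.warn("old_api() is deprecated; use new_api() instead",
                      PendingDeprecationWarning, stacklevel=2)
        return new_api(path)

    def new_api(path):
        # The new, better behavior lives here.
        return path   # placeholder for this sketch

    # Release N+1: the warning above becomes a DeprecationWarning, visible
    # by default.  Some releases later, old_api() is removed entirely.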

I was under the impression that this was documented in a PEP somewhere, 
but scanning the ones about backwards compatibility doesn't yield 
anything.  I can't even figure out why I had this impression.  *Is* 
there actually such a policy?


If there isn't, there really should be.  Deciding each one of these 
things on a case-by-case basis leaves a lot of wiggle room for a lot of 
old code to break in each release.  For example, if you put me in charge 
of Python networking libraries and I simply used my judgment about what 
usages make sense and which don't, you might find that all the 
synchronous socket APIs were mysteriously gone within a few releases... 
;-)


Perhaps policy isn't the right way to solve the problem, but neither is 
asking every python application developer to meticulously follow and 
participate in every discussion on python-dev which *might* affect their 
code.


As recently as last week, Guido commented that people build mental 
models of performance which have no relation to reality and we must rely 
on empirical data to see how things *really* perform.  This is a common 
theme both here and anywhere good software development practices are 
discussed.


These broken mental models are not limited to performance.  In 
particular, people who develop software have inaccurate ideas about how 
it's really used.  I am avoiding being specific to Python here because 
I've had a similarly broken idea of how people use Twisted, heard 
extremely strange ideas from kernel developers about the way programs in 
userland behave, and don't get me started on how C compiler writers 
think people write C code.


If Python is not going to have an extremely conservative (and 
comprehensive, and strictly enforced) backwards-compatibility policy, we 
can't rely on those mental models as a way of determining what changes 
to allow.  One way to deal with the question of how do people really 
use python in the wild is to popularize the community buildbots and 
make it easy to submit projects so that at least we have a picture of 
what Python developers are really doing.


Buildbot has a build this branch feature which could be used to settle 
these discussions more rapidly, except for the fact that the community 
builders are currently in pretty sad shape:


   http://www.python.org/dev/buildbot/community/all/

By my count, currently only 9 out of 22 builders are passing.  The 
severity of these failures varies (many of the builders are simply 
offline, not failing) but they should all be passing.  If they were, 
rather than debating use-cases, we could at *least* have someone check 
this patch into a branch, and then build that branch across all these 
projects to see if any of them failed.


Unfortunately open source code is quite often better tested and 
understood than the wider body of in-house and ad-hoc Python code out 
there, so it will be an easier target for changes like this than 
reality.  I don't really have a problem with that though, because it 
creates a strong incentive for Python projects to both (A) be open 
source and (B) write comprehensive test suites, both of which are useful 
goals for many other reasons.


In the past I've begged off of actually writing PEPs because I don't 
have the time, but if there is interest in codifying this I think I 
don't have the time *not* to write it.  I'd prefer to document the 
pending/deprecate/remove policy first, but I can write up something more 
complicated about 

Re: [Python-Dev] splitext('.cshrc')

2007-03-08 Thread glyph


On 8 Mar, 06:02 pm, [EMAIL PROTECTED] wrote:

On Thu, Mar 08, 2007 at 06:54:30PM +0100, Martin v. Löwis wrote:

 back_name = splitext(name[0]) + '.bak'


back_name = splitext(name)[0] + '.bak'


This is really totally secondary to the point I actually care about, but 
seeing this antipattern bandied about as a reason to make the change is 
starting to bother me.


There's no way to fix this idiom.  Forget the corner case; what if you 
have a foobar.py and a foobar.txt in the same directory?  This is not at 
all uncommon, and your backup function here will clobber those files 
in any case.


Furthermore, although the module documentation is vague, the docstring 
for splitext specifically says either part may be empty and extension 
is everything from the last dot to the end.  Again, this is a case of 
needing a function designed to perform a specific task, and instead 
relying on half-broken idioms which involve other functions. 
make_backup_filename(x) might *use* splitext, but it is not what 
splitext is *for*.  A correct implementation which did use splitext 
would look like this:


    def make_backup_filename(filename):
        base, extension = splitext(filename)
        return base + '.bak' + extension

although personally I would probably prefer this:

    def make_backup_filename(filename):
        return filename + '.bak'

If the behavior of the old code is going to be buggy in any case, it 
might as well be buggy and consistent.  Consider a program that 
periodically makes and then cleans up backup files, and uses the 
correct splitext-using function above.  Perhaps .cshrc.bak makes more 
sense than .bak.cshrc to some people, but that means that on a system 
where python is upgraded, .bak.cshrc will be left around forever.  Even 
on a program whose functionality is improved by this change, the 
incompatibility between versions might create problems.


Re: [Python-Dev] Backports of standard library modules

2007-03-11 Thread glyph

On 11 Mar, 10:32 pm, [EMAIL PROTECTED] wrote:

If this seems useful to others,  I could try to start a PEP on how the
process would work (but since I'm fairly new, it would be great if
someone could help out a bit by validating my verbiage against some of
the process requirements).


Isn't this PEP 297?

This does raise an interesting question, though, since I'm about to get 
into PEP authorship myself.  Have I missed an official way to propose 
alternatives or resurrect a languishing PEP?  I'd like very much to 
propose to obsolete PEP 355 with twisted's FilePath object that I've 
previously discussed, but ... does that mean adding text to 355? 
writing a new PEP and referencing it?


Also, the language/library evolution PEP I would like to write is 
basically an expanded version of PEP 5, but that is apparently moribund 
(and not followed, according to MvL).


Above all, how can I help to motivate timely yea-or-nay decisions from 
the BDFL or his chosen consultants?  PEP 1 seems to defer all of these 
questions to emailing the PEP editor; is that really the best way to go?


Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-14 Thread glyph

On 14 Mar, 05:08 pm, [EMAIL PROTECTED] wrote:

In addition to the prior behavior being explicitly documented in the
function docstrings, r54204 shows that it was also *tested* as behaving
that way by test_macpath, test_ntpath, and test_posixpath.  When combined
with the explicit docstrings, this must be considered incontrovertible
proof that the previous behavior was either explicitly intended, or at the
very least a known, expected, and *accepted* behavior.


I (obviously, I think) agree with all of that.

This backwards-incompatible change is therefore contrary to policy and
should be reverted, pending a proper transition plan for the change (such
as introduction of an alternative API and deprecation of the existing one.)


I hope that this is true, but *is* this policy documented as required 
somewhere yet?  I think it should be reverted regardless, and such a 
policy instated if one does not exist, but it is my understanding at 
this point that it is an informal consensus rather than a policy.


Re: [Python-Dev] Status of thread cancellation

2007-03-15 Thread glyph

On 04:24 pm, [EMAIL PROTECTED] wrote:

Jean-Paul Calderone schrieb:

I inferred from Martin's proposal that he
expected the thread to be able to catch the exception.  Perhaps he can
elaborate on what cleanup actions the dying thread will be allowed to
perform.


Perhaps he can.  Hopefully, he can specifically address these points:

   1. A thread can throw a ThreadDeath exception almost anywhere. All
      synchronized methods and blocks would have to be studied in great
      detail, with this in mind.

   2. A thread can throw a second ThreadDeath exception while cleaning up
      from the first (in the catch or finally clause). Cleanup would have
      to be repeated till it succeeded. The code to ensure this would be
      quite complex.


Clearly, a thread needs to have its finally blocks performed in response
to a cancellation request. These issues are real, however, they apply
to any asynchronous exception, not just to thread cancellation.


To be sure, the problem does apply to all asynchronous exceptions. 
That's why it is generally understood that a program which has received 
an asynchronous exception cannot continue.
In Python, we already have an asynchronous exception: 
KeyboardInterrupt.

This suffers from the same problems: a KeyboardInterrupt also can occur
at any point, interrupting code in the middle of its finally-blocks.
The other exception that is nearly-asynchronous is OutOfMemoryError,
which can occur at nearly any point (but of course, never occurs in
practice).


KeyboardInterrupt and MemoryError share a common feature which forced 
thread termination does not: nobody reasonably expects the program to 
keep running after they have been raised.  Indeed, programs are written 
with the expectation that MemoryError will never occur, and if it does, 
the user is not surprised to find them in an inconsistent state.  In any 
situation where a MemoryError may reasonably be expected - that is to 
say, a specific, large allocation of a single block of memory - it can 
be trapped as if it were not asynchronous.


Long-running Python programs which expect to need to do serious clean-up 
in the face of interrupts, in fact, block KeyboardInterrupt by 
registering their own interrupt handlers (Zope, Twisted).


Developers who think they want thread cancellation inevitably believe 
they can, if they are sufficiently careful, implement operating-system-like 
scheduling features by starting arbitrary user code and then 
providing terminate, pause, and resume commands.  That was the 
original intent of these (now removed) Java APIs, and that is why they 
were removed: you can't do this.  It's impossible.


Asynchronous exceptions are better than immediate termination because 
they allow for code which is allocating scarce or fragile resources to 
have a probabilistically better chance of cleaning up.  However, nobody 
writes code like this:


    def addSomeStuff(self, volume, mass):
        self.volume += volume
        try:
            self.mass += mass
        except AsynchronousInterrupt:
            while 1:
                try:
                    self.volume -= volume
                    break
                except AsynchronousInterrupt:
                    pass

and nobody is going to start just because the language provides thread 
termination.  Async-Exception-Safe Python code is, and will be, as rare 
as POSIX Async-Safe functions, which means at best you will be able to 
call a thread cancellation API and have it _appear_ to work in some 
circumstances.  In any system which uses Python code not explicitly 
designed to support asynchronous exceptions (let's say, the standard 
library) it will be completely impossible to write correct code.


I'm not a big fan of shared-state-threading, but it does allow for a 
particular programming model.  Threading provides you some guarantees. 
You can't poke around on the heap, but you know that your stack, and 
your program counter, are inviolate.  You can reason about, if not quite 
test, the impact of sharing a piece of state on the heap; its 
destructive methods need to be synchronized along with the read methods 
that interact with it.


Asynchronous exceptions destroy all hope of sanity.  Your program might 
suddenly perform a nonlocal exit _anywhere_ except, maybe, inside a 
finally block.   This literally encourages some people who program in 
environments where asynchronous exceptions are possible (.NET, in 
particular) to put huge chunks of application code inside finally 
blocks.  They generally look like this:


   try {}
   finally {
   // entire application here
   }

because that is really the only way you can hope to write code that will 
function robustly in such an environment.

So yes, it would be good if Python's exception handling supported
asynchronous exceptions in a sensible way. I have to research somewhat
more, but I think the standard solution to the problem in operating
system (i.e. disabling interrupts at certain points, explicitly
due to code or implicitly as a result of 

Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-15 Thread glyph

On 05:51 pm, [EMAIL PROTECTED] wrote:

At 07:45 AM 3/15/2007 +0100, Martin v. Löwis wrote:

I apparently took the same position that you now take back then,
whereas I'm now leaning towards (or going beyond) the position
Tim had back then, who wrote BTW, if it *weren't* for the code breakage,
I'd be in favor of doing this.


If it weren't for the code breakage, I'd be in favor too.  That's not the
point.

The point is that how can Python be stable as a language if precedents can
be reversed without a migration plan, just because somebody changes their
mind?  In another five years, will you change your mind again, and decide
to put this back the way it was?


Hear, hear.  Python is _not_ stable as a language.  I have Java programs 
that I wrote almost ten years ago which still run perfectly on the 
latest runtime.  There is python software I wrote two years ago which 
doesn't work right on 2.5, and some of the Python stuff contemporary 
with that Java code won't even import.
Speaking as a business person, that seems to me... unwise.  When I 
found
out that this change had been checked in despite all the opposition, my 
gut
reaction was, I guess I can't rely on Python any more, despite 10 
years

of working with it, developing open source software with it, and
contributing to its development.  Because from a *business* 
perspective,

this sort of flip-flopping means that moving from one minor Python
version to another is potentially *very* costly.


And indeed it is.  Python's advantages in terms of rapidity of 
development have, thus far, made up the difference for me, but it is 
threatening to become a close thing.  This is a severe problem and 
something needs to be done about it.

But as you are so fond of pointing out, there is no many people.  There
are only individual people.  That a majority want it one way, means that
there is a minority who want it another.  If next year, it becomes more
popular to have it the other way, will we switch again?  If a majority of
people want braces and required type declarations, will we add them?


And, in fact, there is not even a majority.  There is a *perception* of 
a majority.  There isn't even a *perception* of a majority of Python 
users, but a perception of a majority of python-dev readers, who are 
almost by definition less risk-averse when it comes to language change 
than anyone else!


If we actually care about majorities, let's set up a voting application 
and allow Python users to vote on each and every feature, and publicize 
it each time such a debate comes up.  Here, I'll get it started:


http://jyte.com/cl/python-should-have-a-strict-backward-compatibility-policy-to-guide-its-development


According to that highly scientific study, at this point in time, 
Nobody disagrees :).  (One in favor, zero against.)


Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-15 Thread glyph

On 08:21 pm, [EMAIL PROTECTED] wrote:

Mike Krell schrieb:

FWIW, I agree completely with PJE's and glyph's remarks with respect
to expectations of stability, especially in a minor release.


Not sure what you mean by minor release. The change isn't proposed
for the next bug fix release (2.5.1), but for the next major release
(2.6). See PEP 6.


Common parlance for the parts of a version number is:
   major.minor.micro
See:
http://twistedmatrix.com/documents/current/api/twisted.python.versions.Version.html#__init__

Changing this terminology about Python releases to be more consistent 
with other projects would be a subtle but good shift towards a 
generally better attitude about the expectations for minor releases.


Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-15 Thread glyph


On 08:43 pm, [EMAIL PROTECTED] wrote:

On 3/15/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

On 05:51 pm, [EMAIL PROTECTED] wrote:
At 07:45 AM 3/15/2007 +0100, Martin v. Löwis wrote:
I apparently took the same position that you now take back then,
whereas I'm now leaning towards (or going beyond) the position
Tim had back then, who wrote BTW, if it *weren't* for the code breakage,
I'd be in favor of doing this.

If it weren't for the code breakage, I'd be in favor too.  That's not the
point.

The point is that how can Python be stable as a language if precedents can
be reversed without a migration plan, just because somebody changes their
mind?  In another five years, will you change your mind again, and decide
to put this back the way it was?

Hear, hear.  Python is _not_ stable as a language.  I have Java programs
that I wrote almost ten years ago which still run perfectly on the latest
runtime.  There is python software I wrote two years ago which doesn't work
right on 2.5, and some of the Python stuff contemporary with that Java code
won't even import.


I think the problem has less to do with bug fixing than with lack of
any clear specifications or documentation about what developers can
depend on.  You could probably make a case that any change that
doesn't fix a crash bug is likely to cause some particular program to
behave differently.


Absolutely.  One of the reasons I'm working on documenting this process 
is that, of course, *everything* can't be made compatible.  The mere act 
of adding a function or module adds a detectable change in behavior that 
some program *might* insanely depend on.


A clear understanding of what is meant by backwards compatibility is 
equally important to developers trying to future-proof their code as it 
is to those trying to make sure they don't break code which has been 
future-proofed.  This is a form of social contract, and both sides need 
to know about it.

Take bug 1504333 which led to a change in sgmllib behavior for angle
brackets in quoted attribute values.  Did the sgmllib documentation
explain that the fixed behavior was incorrect?  Might a programmer
working with sgmllib have written code that depended on this bug?  Do
you object to this bug fix?


I don't know enough about the specification to say for sure, but I 
suspect that it is a legitimate bug fix, because sgmllib is implementing 
an externally-determined spec.  In cases where the spec is flagrantly 
violated, it seems like it should be fixed to adhere to it.

For many of these bugs, some people will have written code against the
documentation and some people against the implementation or behavior.
(In this case, the documentation is vague or conflicting.)  I don't
think I know how to balance the importance of these two classes of
users.  Some code is going to break the first time they run into the
under-specific edge case, some code is going to break when the
specification and implementation are clarified.  You have to weigh
which you think is more likely and which will benefit users the most.


If the documentation is vague and conflicting, then it seems likely that 
a parsing option could be added.  I am not advocating perfect, 100% 
backwards compatibility, merely some standards for what happens when a 
(potentially) incompatible change is made.  For example, you could add a 
flag to the parser which tweaks the treatment of quoted angle brackets, 
and warn if the argument is not passed that the default will change in 
the future (or, better yet, that the argument will be required in the 
future).  Or, you could provide a separate name for invoking the 
different behavior.

I think everyone wants to do the right thing by Python's users, but
it's not clear what that right thing is.


I really think that starting with the golden rule would be a good 
idea.  Would Python core developers mind if something analogous in the C 
runtime changed?  How would they cope with it?  What kind of feedback 
would you expect the C compiler or runtime to provide in such a case? 
Python should do unto others, etc.

Could you point out a few such programs that people on python-dev can
look at?  I think it would be useful to gather some data about the
kind of migration pains real users are having.  I believe Martin and
others are trying to do the right thing.  Real data is more likely to
convince them than passionate arguments on python-dev.


(I assume you're responding to my other comment about my programs not 
running, even though that's not what you quoted.)


I don't think these programs would contribute much to the discussion. 
I've probably got them archived somewhere, but they were broken circa 
2.3 and I don't think I've run them since.  I doubt they would make any 
sense to anyone here, and we would all get into a heated debate as to 
whether my usage of Python was valid or not (hint: it was *REALLY* 
gross).


In fact, let's back up a step.  These programs were never released as 

Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-15 Thread glyph

On 09:17 pm, [EMAIL PROTECTED] wrote:

But the key point I want to get across is people should not being
getting mad at Martin.  The people who are getting all bent out of
shape over this should be upset at python-dev as a whole for not
having a clear policy on this sort of thing.  Martin just happened to
be the guy who made a change that sparked this and he is explaining
his thinking behind it (which also happens to mirror my thinking on
this whole situation).  It could have easily been someone else.


On part of this point, I have to agree.  Nullum crimen, nulla poena sine 
praevia lege poenali - no crime and no punishment without a previous penal law.


However, the decision was a bad one regardless of the existing policy, 
and sets a bad precedent while we are discussing this policy.  I could 
be wrong, but I think it would be reasonable to assume that if Martin 
strongly supports such a change, Martin would oppose a policy which 
would strictly forbid such changes, and it is just such a policy that 
Python needs.

Bottom line, let's work together as a group to come up with a policy
in a civil, positive manner (in a new thread!) and let the result of
that decision guide what is done with this fix.  Yelling at poor
Martin about one patch when we could be spending this effort on trying
to discuss what kind of policy we want is not getting us anywhere.


I *am* working on that on the side and I hope to have something coherent 
and whole to present here, in that different thread, very soon.  The 
point, for me, of participating in *this* thread is (A) to continue to 
keep the issue high-visibility, because in my opinion, compatibility 
policy is _THE_ issue that python-dev needs to deal with now, (B) to 
deal with the aforementioned strongly implied opposition to such a 
policy, and (C) last but not least, to actually get the patch reverted, 
since, while it is not the larger problem, it is, itself, a problem that 
needs to be fixed.


Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-16 Thread glyph


On 15 Mar, 11:34 pm, [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] schrieb:
However, the decision was a bad one regardless of the existing policy, 
and sets a bad precedent while we are discussing this policy.  I could 
be wrong, but I think it would be reasonable to assume that if Martin 
strongly supports such a change, Martin would oppose a policy which 
would strictly forbid such changes, and it is just such a policy that 
Python needs.


I still can't guess what policy you have in mind, so I can't object to
it :-) I may accept a policy that rejects this change, but allows
another change to fix the problem. I would oppose a policy that causes
this bug to be unfixable forever.


Well, there's *also* the fact that I strongly disagree that this is a 
bug, but I don't know that I could codify that in a policy.  Hence the 
parallel discussion.


However, I do apologize for obliquely referring to this thing I'm 
working on without showing a work in progress.  It's just that different 
parts of the policy will rely on each other, and I don't want to get 
bogged down talking about individual details which will be dealt with in 
the final rev.  That, and I am trying to integrate feedback from the 
ongoing discussion...


Re: [Python-Dev] Status of thread cancellation

2007-03-16 Thread glyph

On 12:06 am, [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
Can you suggest any use-cases for thread termination which will *not* 
result in a completely broken and unpredictable heap after the thread 
has died?


Suppose you have a GUI and you want to launch a
long-running computation without blocking the
user interface. You don't know how long it will
take, so you want the user to be able to cancel
it if he gets bored.


That's a perfectly reasonable use-case which doesn't require this 
feature at all ;).

Interaction with the rest of the program is
extremely limited -- some data is passed in,
it churns away, and some data is returned. It
doesn't matter what happens to its internal
state if it gets interrupted, as it's all going
to be thrown away.


If that's true, then the state-sharing features of threads aren't 
required, which is the right way to design concurrent software anyway.

In that situation, it doesn't seem unreasonable
to me to want to be able to just kill the thread.
I don't see how it could do any more harm than
using KeyboardInterrupt to kill a program,
because that's all it is -- a subprogram running
inside your main program.


The key distinction between threads and processes is the sharing of 
internal program state.

How would you handle this situation?


Spawn a process, deliver the result via an event.

If you're allergic to event-driven programming, then you can spawn a 
process *in* a thread, and block in the thread on reading from the 
process's output, then kill the *process* and have that terminate the 
output, which terminates the read().  This is a lot like having a queue 
that you can put a stop object into, except the file interface 
provided by OSes is kind of crude.  Still no need to kill the thread.
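
A rough sketch of that workaround, using nothing but the stdlib (crunch.py is 
a stand-in for the long-running computation):

    import subprocess, threading

    def start_cancellable(args):
        # Run the long computation as a separate process; collect its
        # output from a worker thread so the GUI stays responsive.
        proc = subprocess.Popen(args, stdout=subprocess.PIPE)
        results = []

        def reader():
            # Blocks until the process exits *or* is killed; either way
            # the read returns and the thread finishes on its own.
            results.append(proc.stdout.read())

        thread = threading.Thread(target=reader)
        thread.start()
        return proc, thread, results

    # To cancel, kill the *process*, never the thread (on POSIX):
    #     proc, thread, results = start_cancellable(['python', 'crunch.py'])
    #     os.kill(proc.pid, signal.SIGTERM)    # Popen.kill() arrives in 2.6
    #     thread.join()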


At the end of the day though, you're writing a GUI in this use-case and 
so you typically *must* be cognizant of event-driven issues anyway. 
Many GUIs (even in the thread-happy world of Windows) aren't thread-safe 
except for a few specific data-exchange methods, which behave more or 
less like a queue.


One of the 35 different existing ways in which one can spawn a process 
from Python, I hope, will be sufficient for this case :).


Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-18 Thread glyph

On 18 Mar, 07:53 pm, [EMAIL PROTECTED] wrote:

My second observation is that there seems to have been a lack of
people skills all around. That is perhaps to be expected in a community of
geeks, but given the length of the previous thread on this topic
(before Martin checked it in) I think all the participants might have
done wiser by taking each others' feelings into account rather than
attempting to relentlessly argue the technical point at hand.


Let me take the opportunity to make this clear, then.  I have the utmost 
respect for Martin and his contributions to Python.  I have been 
following commits for quite a while and I know that Martin, in 
particular, is often the one who deals with the crap work of reviewing 
huge piles of orphaned patches and fixing piles of minor issues, and he 
is therefore responsible for much of the general upward trend in 
Python's overall quality in the last few releases.  I appreciate that 
very much.

My first observation is that this is a tempest in a teacup.


On the one hand I agree.  This particular change is trivial, most likely 
doesn't affect me, and I accept that, in practice, it probably won't 
even break too many programs (and may even fix a few).


On the other, I think it is important to quell the tempest before it 
exits the teacup.  Previously in this discussion, both I and PJE have 
repeatedly declared that this _particular_ change is not really what's 
at issue here, merely the pattern of reasoning by which such a change 
comes to be considered acceptable.  At some point a large number of 
small breakages 
are actually worse than one big one, and it's hard to point to the exact 
place where that happens.


On the gripping hand, I am almost glad that it was a relatively minor 
change that triggered this avalanche of posts.  Even with such a small 
change, the change itself threatens to obscure a larger issue, and if 
the change itself were any bigger, it would eclipse the other discussion 
completely.

My third observation is that a policy that would have disallowed or
allowed (depending on your POV) this particular change is not an
option. A policy isn't going to solve all disagreements, there will
always be debate possible about the interpretations of the rules.
What's needed is a way to handle such debate in a way that produces an
outcome without wearing everyone out.


The allow vs. disallow issue is not *really* what the policy should be 
addressing.  A major problem with this thread is the differing 
definitions that some people have, beginning with "extension", but I 
can't see that a policy will fix *that*.  Words like "bug", "fix", 
"compatible", and so on, all have obvious general meanings but much 
more nuanced and specific meanings in particular contexts.  A policy 
should outline specifics of what, for example, is to be considered an 
"incompatible" change, and what must be done in that case.  A policy 
could not outright forbid changes of a certain type, since that is 
pretty much asking that it be broken any time a sufficiently important 
change is requested and the core team likes it.


Maybe "policy" isn't even the right word here, since the part of it that 
would facilitate discussions like this one would be more "lexicon" than 
"policy".

It's important that the participants in the debate respect each other
-- before, during and after the debate.


Agreed.

Any lack of people skills notwithstanding, I hope I haven't said 
anything that implied (or stated, of course) that I did not *respect* 
the other participants of the discussion.  If I have, I retract it. 
Strong disagreement is different than disrespect.

If you want, I can make a decision. But I will only do that if I hear
from both sides of the debate that they are willing to accept my
choice even if it favors the other party. Can I hear agreement to
that? In particular: Phillip and Glyph, if I decide that Martin's
change is OK for 2.6, will you accept it and stop debating it and get
on with your lives? And Martin, if I decide that the change should be
rolled back, will you be okay with that?


I will certainly accept the decision.  I don't *like* generating trouble 
on this mailing list, believe me.  Once a BDFL pronouncement is made, 
further discussion is just generating trouble.


That isn't the same as *agreeing* with the decision, of course :-).

The important thing for me is not reaching a decision on this particular 
issue (or even a particular decision on this particular issue).  It is 
that we achieve some kind of consensus around how backward compatibility 
is supposed to work in the large rather than in a particular instance. 
For those of you who don't think this issue is important in and of 
itself, consider the secondary consequence of this ruckus happening 
every time someone commits a potentially-incompatible change.


I would not mind if, for example, this patch were grandfathered in under 
the lack of any clear backwards compatibility policy, as long as similar 
future changes were subject to it.

Re: [Python-Dev] Proposal to revert r54204 (splitext change)

2007-03-19 Thread glyph

On 19 Mar, 02:46 pm, [EMAIL PROTECTED] wrote:

As you see I'm trying to discourage you from working on such a
document; but I won't stop you and if there's general demand for it
and agreement I'll gladly review it when it's ready. (It's a bit
annoying to have to read long posts alluding to a document under
development without being able to know what will be in it.)


Quite so.  Again, I apologize for that.  I won't say anything further 
until I have something ready to post for review, and at that point I 
hope the motivation section will make it clear why I think this is so 
important.  I estimate I'll have something this weekend.


Re: [Python-Dev] deprecate commands.getstatus()

2007-03-22 Thread glyph

On 22 Mar, 08:38 pm, [EMAIL PROTECTED] wrote:

And do we even need os.fork(), os.exec*(), os.spawn*()?


Maybe not os.spawn*, but Twisted's spawnProcess is implemented (on UNIX) 
in terms of fork/exec and I believe it should stay that way.  The 
subprocess module isn't really usable for asynchronous subprocess 
communication.
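
For illustration only (this is not Twisted's actual spawnProcess, just a 
sketch of the kind of thing fork/exec makes easy on UNIX): hand the event 
loop a non-blocking pipe to the child and let select()/poll() drive the 
reads.

    import fcntl, os

    def spawn_with_nonblocking_pipe(argv):
        # Sketch: fork/exec a child and return a non-blocking fd for its
        # stdout, suitable for registering with select()/poll().
        read_fd, write_fd = os.pipe()
        pid = os.fork()
        if pid == 0:
            # Child process: wire the pipe up as stdout, then exec.
            os.dup2(write_fd, 1)
            os.close(read_fd)
            os.close(write_fd)
            os.execvp(argv[0], argv)
        # Parent process: close the write end and make reads non-blocking.
        os.close(write_fd)
        flags = fcntl.fcntl(read_fd, fcntl.F_GETFL)
        fcntl.fcntl(read_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
        return pid, read_fd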


Re: [Python-Dev] deprecate commands.getstatus()

2007-03-22 Thread glyph


On 22 Mar, 09:37 pm, [EMAIL PROTECTED] wrote:

Sure. os.fork() and the os.exec*() family can stay. But os.spawn*(),
that abomination invented by Microsoft?


Right, I personally would not miss it. It's also not a system call,
but a library function on both Windows and Unix (the equivalent
of exposing fork would be to expose CreateProcessEx - something
that I think Python should do out of the box, and not just when
PythonWin is installed - but you can now get it through ctypes).



I also hear no opposition
against killing os.system() and os.popen().


Both are library functions; I can implement them in Python on top
of what is there (plus popen is based on stdio, which we declared
evil). So yes, they can go.


In the long term (read: 3k) I think I agree completely.

It seems that this is a clear-cut case of TATMWTDI (there are too many 
ways to do it) and the subprocess module should satisfy all of these 
use-cases.


I also like Martin's earlier suggestion of calling the remaining OS 
process-manipulation functions posix.fork, etc.  I think it would be a 
lot clearer to read and maintain the implementation of subprocess (and 
asynchronous equivalents, like Twisted's process support) if the 
platform back-ends were explicitly using APIs in platform-specific 
modules.  The current Twisted implementation misleadingly looks like the 
UNIX implementation is cross-platform because it uses functions in the 
os module, whereas the Windows implementation uses win32all.


Re: [Python-Dev] Tweaking the stdlib test infrastructure

2007-04-24 Thread glyph

On 12:39 am, [EMAIL PROTECTED] wrote:

Fast and simple: I want all stdlib test cases to stop subclassing
unittest.TestCase and start subclassing test_support.TestCase.



So: any objections to making this change?


Not an objection so much as a question - if these feature additions are 
generally interesting (and the ones you mentioned sounded like they are) 
why not simply add them to unittest.TestCase itself?  After all, that is 
in the stdlib itself already.


Re: [Python-Dev] Tweaking the stdlib test infrastructure

2007-04-24 Thread glyph

On 04:36 pm, [EMAIL PROTECTED] wrote:

On 4/24/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

On 12:39 am, [EMAIL PROTECTED] wrote:
Fast and simple: I want all stdlib test cases to stop subclassing
unittest.TestCase and start subclassing test_support.TestCase.

So: any objections to making this change?

Not an objection so much as a question - if these feature additions are
generally interesting (and the ones you mentioned sounded like they are)
why not simply add them to unittest.TestCase itself?  After all, that is
in the stdlib itself already.


Because something like per-test refleak checking is completely useless
when testing pure-python code.


Not everybody outside the stdlib is using the python unittest module to 
test pure-python code.

More generally, because not everything
that will be useful for the stdlib's test suite will be useful to
every single third-party test suite.


I don't think that every single third-party test suite is using every 
single feature of the existing unittest module either, though.

It's also an issue of support: I
want the freedom to add functionality to the stdlib's test suite
without worrying about the impact on third-party unittest code.


This makes sense to me, and I won't belabor the point.  I just wanted to 
give a gentle reminder that if this functionality is useful to the 
standard library, it's likely that there are other libraries it would be 
useful to.  Not putting it into the supported library right away makes 
sense, but if this functionality does prove useful, please consider 
making it standard.


Re: [Python-Dev] whitespace normalization

2007-04-25 Thread glyph

On 08:15 pm, [EMAIL PROTECTED] wrote:

;; untabify and clean up lines with just whitespace
(defun baw-whitespace-normalization ()


Looks a lot like (whitespace-cleanup), if you've got the right 
configuration.  A function I've used many times :).


Re: [Python-Dev] Add a -z interpreter flag to execute a zip file

2007-07-12 Thread glyph
On 08:41 am, [EMAIL PROTECTED] wrote:
On 7/12/07, Thomas Wouters [EMAIL PROTECTED] wrote:

I disagree with both statements. The baggage is much less than 
zipimport
itself, which has proven to be quite useful. Nevertheless, zipimport 
built
into the interpreter was by no means necessary; current users of it 
could
have readily implemented it themselves, with no changes to Python.

I wonder, is it even necessary to say anything, after:
+1.

?

But, since I so often object to new features, and there is a heavy 
Google bias in the existing survey sample, I would like to say that I 
had a problem several months ago in a _radically_ different environment 
(Twisted running on an embedded system, Zipfile of PYCs used to shave 
off as much disk space and startup time as possible) where having the 
subtleties of a -z flag figured out already would have saved me a 
_ton_ of work.  I was already aware of the shell-header trick, but 
discovering all the environment-setup details was tedious and 
distracting enough to make me give up and switch to writing a bunch of 
hard-to-test /bin/sh code.

It wasn't a bad project by any means, and Python worked out even better 
than expected (we weren't even sure if it would be able to load into the 
memory available, but it turns out that being able to do everything in a 
single process helped a lot) but a -z option would have been that much 
more impressive :).

In fact, I distinctly remember thinking "You know, if Python had an 
equivalent to Java's '-jar' option, this would be a whole lot easier."

(Even better, on this _particular_ project, would have been a generic 
"run this thing-which-looks-like-a-sys.path-entry" standard format, 
which could have been switched for different deployments to a directory, 
a zipfile, or the result of freezing.  Perhaps that's starting to get 
too obscure, though.)


Re: [Python-Dev] frozenset C API?

2007-09-06 Thread glyph
On 05:03 pm, [EMAIL PROTECTED] wrote:
I really don't understand why you would not expose all data in the
certificate.

You mean, providing the entire certificate as a blob? That is planned
(although perhaps not implemented).

Or do you mean expose all data in a structured manner. BECAUSE
IT'S NOT POSSIBLE. Sorry for shouting, but people don't ever get the
notion of extension.

"structure" is a relative term.  A typical way to deal with extensions 
unknown to the implementation is to provide ways to deal with the 
*extension-specific* parts of the data in question, cf. 
http://java.sun.com/j2se/1.4.2/docs/api/java/security/cert/X509Extension.html

Exposing the entire certificate object as a blob so that some *other* 
library could parse it *again* seems like just giving up.

However, as to the specific issue of subjectAltName which Chris first 
mentioned: if HTTPS isn't an important specification to take into 
account while designing an SSL layer for Python, then I can't imagine 
what is.  subjectAltName should be directly supported regardless of how 
it deals with unknown extensions.
It seems totally obvious. The data is there for a reason.
I want the subjectAltName. Probably other people want other stuff. Why
cripple it? Please include it all.

That's not possible. You can get the whole thing as a blob, and then
you have to decode it yourself if something you want is not decoded.

Something very much like that is certainly possible, and has been done 
in numerous other places (including the Java implementation linked 
above).  Providing a semantically rich interface to every possible X509 
extension is of course ridiculous, but I don't think that's what anyone 
is actually proposing here.


Re: [Python-Dev] frozenset C API?

2007-09-06 Thread glyph
On 05:15 pm, [EMAIL PROTECTED] wrote:
RFC 2818

If a subjectAltName extension of type dNSName is present, that MUST
be used as the identity. Otherwise, the (most specific) Common Name
field in the Subject field of the certificate MUST be used. Although
the use of the Common Name is existing practice, it is deprecated and
Certification Authorities are encouraged to use the dNSName instead.


Yes, subjectAltName is a big one.  But I think it may be the only
extension I'll expose.  The issue is that I don't see a generic way
of mapping extension X into Python data structure Y; each one needs to
be handled specially.  If you can see a way around this, please speak
up!

Well, I can't speak for Chris, but that will certainly make *me* happier 
:).
I intend to include it all, by giving you a way to pull the full DER
form of the certificate into Python.  But a number of fields in the
certificate have nothing to do with authorization, like the signature,
which has already been used for validation.  So I don't intend to try
to convert them into Python-friendly forms.  Applications which want to
use that information already need to have a more powerful library, like
M2Crypto or PyOpenSSL, available; they can simply work with the DER 
form
of the certificate.

When you say "the full DER form", are you simply referring to the full 
blob, or a broken-down representation by key and by extension?

This begs the question: M2Crypto and PyOpenSSL already do what you're 
proposing to do, as far as I can tell, and are, as you say, more 
powerful.  There are issues with each (and issues with the GNU TLS 
bindings too, which I notice you didn't mention...)

Speaking of issues, PyOpenSSL, for example, does not expose 
subjectAltName :).

This has been a long thread, so I may have missed posts where this was 
already discussed, but even if I'm repeating this, I think it deserves 
to be beaten to death.  *Why* are you trying to bring the number of 
(potentially buggy, incomplete) Python SSL bindings to 4, rather than 
adopting one of the existing ones and implementing a simple wrapper on 
top of it?  PyOpenSSL, in particular, is both a popular de-facto 
standard *and* almost completely unmaintained; python's standard library 
could absorb/improve it with little fuss.


[Python-Dev] Design and direction of the SSL module (was Re: frozenset C API?)

2007-09-09 Thread glyph
Sorry for the late response.  As always, I have a lot of other stuff 
going on at the moment, but I'm very interested in this subject.

On 6 Sep, 06:15 pm, [EMAIL PROTECTED] wrote:
PyOpenSSL, in particular, is both a popular de-facto
standard *and* almost completely unmaintained; python's standard 
library
could absorb/improve it with little fuss.

Good idea, go for it!  A full wrapper for OpenSSL is beyond the scope
of my ambition; I'm simply trying to add a simple fix to what's
already in the standard library.

I guess I'd like to know two things.  One, what *is* the scope of your 
ambition?  I feel silly for asking, because I am pretty sure that 
somewhere in the beginning of this thread I missed either a proposal, a 
PEP reference, or a ticket number, but I've poked around a little and I 
can't seem to find it.  Can you provide a reference, or describe what it 
is you're trying to do?

Two, what's the scope of the plans for the SSL module in general for 
Python?  I think I misinterpreted several things that you said as "the 
plan" rather than your own personal requirements: but if in reality I 
can "go for it", I'd really like to help make the stdlib SSL module a 
really good, full-featured OpenSSL implementation for Python so we 
can have it elsewhere.  (If I recall correctly you mentioned you'd like 
to use it with earlier Python versions as well...?)

Many of the things that you recommend using another SSL library for, 
like pulling out arbitrary extensions, are incredibly unwieldy or flat- 
out broken in these libraries.  It's not that I mind going to a 
different source for this functionality; it's that in many cases, there 
*isn't* another source :).  I think I might have said this already, but 
subjectAltName, for example, isn't exposed in any way by PyOpenSSL.

I didn't particularly want to start my own brand-new SSL wrapper 
project, and contributing to the actively-maintained stdlib 
implementation is a lot more appealing than forking the moribund 
PyOpenSSL.

However, even with lots of help on the maintenance, converting the 
current SSL module into a complete SSL library is a lot of work.  Here 
are the questions that I'd like answers to before starting to think 
seriously about it:

* Is this idea even congruent with the overall goals of other 
developers interested in SSL for Python?  If not, I'm obviously barking 
up the wrong tree.
* Would it be possible to distribute as a separate library?  (I think 
I remember Bill saying something about that already...)
* When would such work have to be completed by to fit into the 2.6 
release?  (I just want a rough estimate, here.)
* Should someone - and I guess by someone I mean me - write up a PEP 
describing this?

My own design for an SSL wrapper - although this is simply a Python layer 
around PyOpenSSL - is here:

http://twistedmatrix.com/trac/browser/trunk/twisted/internet/_sslverify.py

This isn't really complete - in particular, the documentation is 
lacking, and it can't implement the stuff PyOpenSSL is missing - but I 
definitely like the idea of having objects for DNs, certificates, CRs, 
keys, key pairs, and the ubiquitous certificate-plus-matching-private- 
key-in-one-file that you need to run an HTTPS server :).  If I am going 
to write a PEP, it will look a lot like that file.

_sslverify was originally designed for a system that does lots of 
automatic signing, so I am particularly interested in it being easy to 
implement a method like  PrivateCertificate.signCertificateRequest - 
it's always such a pain to get all the calls for signing a CR in any 
given library *just so*.
This begs the question: M2Crypto and PyOpenSSL already do what you're
proposing to do, as far as I can tell, and are, as you say, more
powerful.

To clarify my point here, when I say that they already do what you're 
doing, what I mean is, they already wrap SSL, and you are trying to wrap 
SSL :).
I'm trying to give the application the ability to do some level of
authorization without requiring either of those packages.

I'd say "why wouldn't you want to require either of those packages?" but 
actually, I know why you wouldn't want to, and it's that they're bad. 
So, given that we don't want to require them, wouldn't it be nice if we 
didn't need to require them at all? :).
Like being
able to tell who's on the other side of the connection :-).  Right
now, I think the right fields to expose are

I don't quite understand what you mean by "right fields".  "Right fields" 
for what use case?  This definitely isn't right for what I want to use 
SSL for.
  subject (I see little point to exposing issuer),

This is a good example of what I mean.  For HTTPS, the relationship 
between the subject and the issuer is moot, but in my own projects, the 
relationship is very interesting.  Specifically, properties of the 
issuer define what properties the subject may have, in the verification 
scheme for Vertex ( http://divmod.org/trac/wiki/DivmodVertex ).  (On the 

Re: [Python-Dev] Declaring setters with getters

2007-10-31 Thread glyph
As long as we're all tossing out ideas here, my 2¢.  I vastly prefer 
this:


On 02:43 am, [EMAIL PROTECTED] wrote:

On 10/31/07, Fred Drake [EMAIL PROTECTED] wrote:



   @property.set
   def attribute(self, value):
       self._ignored = value


to this:

 @property.set(attribute)
 def attribute(self, value):
     self._ignored = value


since I don't see any additional expressive value in the latter, and it 
provides an opportunity to make a mistake.  The decorator syntax's main 
value, to me, is eliminating the redundancy in:


   def foo():
       ...
   foo = bar(foo)

eliminating the possibility of misspelling foo one of those three 
times, and removing a lot of finger typing.


The original proposal here re-introduces half of this redundancy.  I 
think I can see why Guido did it that way - it makes the implementation 
a bit more obvious - but decorators are already sufficiently magic 
that I wouldn't mind a bit more to provide more convenience, in what is 
apparently just a convenience mechanism.


And, since everyone else is sharing their personal current way of 
idiomatically declaring dynamic properties, here's mine; abusing the 
class statement instead of decorators:


    from epsilon.descriptor import attribute

    class Stuff(object):
        class foo(attribute):
            """you can put a docstring in the obvious place"""
            def set(self, value):
                print 'set foo!'
                self._foo = value + 4
            def get(self):
                return self._foo + 3

    s = Stuff()
    s.foo = 0
    print 's.foo:', s.foo
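
For the curious, the general trick can be sketched like this (an 
illustration only; the real epsilon.descriptor code differs):

    # Illustration only; not the actual epsilon.descriptor implementation.
    class _MetaAttribute(type):
        def __new__(meta, name, bases, dct):
            if dct.pop('__abstract__', False):
                # Creating the 'attribute' base itself: behave normally.
                return type.__new__(meta, name, bases, dct)
            # Creating a subclass: turn its body into a property object, so
            # the name bound by the class statement is really a descriptor.
            return property(dct.get('get'), dct.get('set'),
                            dct.get('delete'), dct.get('__doc__'))

    class attribute(object):
        __abstract__ = True
        __metaclass__ = _MetaAttribute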

I'd be glad of a standard, accepted way to do this though, since it's 
really just a spelling issue and it would be nice to reduce the learning 
curve between all the different libraries which define dynamic 
attributes.


Re: [Python-Dev] Declaring setters with getters

2007-11-01 Thread glyph

On 02:01 pm, [EMAIL PROTECTED] wrote:

On 10/31/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

As long as we're all tossing out ideas here, my 2¢.  I vastly prefer
this:
@property.set
to this:
  @property.set(attribute)



I don't approve of it. It has always been and will always
continue to be my position that these are semantically unkosher,
because it means that you can't wrap them in convenience functions or
invoke them in different contexts, and that means that the semantics
are hard to explain.


Point taken.

If you really want another argument, repeating the property name
actually does have an additional use case: you can have a read-only
property with a corresponding read-write property whose name differs.


I don't actually have this use-case, but it does make the actual 
semantics of the provided argument a bit clearer to me.  It's not an 
implementation detail of fusing the properties together, it's just 
saying which property to get the read accessor from.


This is a minor nit: as with all decorators that take an argument, it 
seems like it sets up a hard-to-debug error condition if you were to 
accidentally forget it:


   @property
   def foo(): ...
   @property.set
   def foo(): ...

would leave you with 'foo' pointing at something that wasn't a 
descriptor at all.  Is there a way to make that more debuggable?


Re: [Python-Dev] Signals+Threads (PyGTK waking up 10x/sec).

2007-12-08 Thread glyph
On 05:20 pm, [EMAIL PROTECTED] wrote:
The best solution I can think of is to add a new API that takes a
signal and a file descriptor and registers a C-level handler for that
signal which writes a byte to the file descriptor. You can then create
a pipe, connect the signal handler to the write end, and add the read
end to your list of file descriptors passed to select() or poll(). The
handler must be written in C in order to avoid the race condition
referred to by Glyph (signals arriving after the signal check in the
VM main loop but before the select()/poll() system call is entered
will not be noticed until the select()/poll() call completes).

This paragraph jogged my memory.  I remember this exact solution being 
discussed now, a year ago when I was last talking about these issues.

There's another benefit to implementing a write-a-byte C signal handler. 
Without this feature, it wouldn't make sense to have passed the 
SA_RESTART flag to sigaction, because any GUIs written in Python could 
have spent an indefinite amount of time waiting to deliver their signal 
to Python code.  So, if you had to handle SIGCHLD in Python, for 
example, calls like file().write() would suddenly start raising a new 
exception (EINTR).  With it, you could avoid a whole class of subtle 
error-handling code in Twisted programs.
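
For concreteness, a pure-Python sketch of the pipe trick; a Python-level 
handler like this still has exactly the race described above (which is why 
the proposal is for a C-level handler), but it shows the shape of the 
thing:

    import os, select, signal

    read_fd, write_fd = os.pipe()

    def wake_up(signum, frame):
        os.write(write_fd, 'x')        # wake whatever is blocked in select()

    signal.signal(signal.SIGCHLD, wake_up)

    # In the event loop, the read end is just one more file descriptor:
    readable, _, _ = select.select([read_fd], [], [])  # plus the GUI/socket fds
    if read_fd in readable:
        os.read(read_fd, 4096)         # drain the pipe
        # ...now reap children / run the Python-level signal logic...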


Re: [Python-Dev] Signals+Threads (PyGTK waking up 10x/sec).

2007-12-09 Thread glyph

On 12:21 am, [EMAIL PROTECTED] wrote:
Anyway, I would still like to discuss this on #python-dev Monday.
Adam, in what time zone are you? (I'm PST.) Who else is interested?

I'm also interested.  I'm EST, but my schedule is very flexible (I'm on 
IRC pretty much all day for work anyway).  Just let me know when it is.


Re: [Python-Dev] Return type of round, floor, and ceil in 2.6

2008-01-04 Thread glyph
On 4 Jan, 10:45 pm, [EMAIL PROTECTED] wrote:
[GvR to Tim]
Do you have an opinion as to whether we should
adopt round-to-even at all (as a default)?

For the sake of other implementations (Jython, etc) and for ease of 
reproducing the results with other tools (Excel, etc), the simplest 
choice is int(x+0.5).  That works everywhere, it is easy to explain, it 
runs fast, and it is not hard to get right.

I agree for the default.  Except for the part where Excel, Jython, and 
Python's current round actually use sign(x) * int(abs(x)+0.5), so maybe 
it's not *completely* easy to get right ;-).
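
To make the difference concrete (the helper names are mine, purely for 
illustration):

    def round_half_up(x):
        return int(x + 0.5)                   # the simple int(x+0.5) rule

    def round_half_away_from_zero(x):
        sign = 1 if x >= 0 else -1
        return sign * int(abs(x) + 0.5)       # what 2.x round() effectively does

    print round_half_up(-2.5)                 # -2
    print round_half_away_from_zero(-2.5)     # -3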

Having other rounding methods *available*, though, would be neat.  The 
only application I've ever worked on where I cared about the difference, 
the user had to select it (since accounting requirements differ by 
jurisdiction and, apparently, by bank preference).  Having a standard 
way to express this (especially if it worked across different numeric 
types, but perhaps I digress) would be pleasant.  Implementing 
stochastic rounding and banker's rounding oneself, while not exactly 
hard, is a drag.


Re: [Python-Dev] Return type of round, floor, and ceil in 2.6

2008-01-05 Thread glyph

On 04:54 pm, [EMAIL PROTECTED] wrote:
On Jan 4, 2008 10:16 PM,  [EMAIL PROTECTED] wrote:

Having other rounding methods *available*, though, would be neat.  The
only application I've ever worked on where I cared about the 
difference,
the user had to select it (since accounting requirements differ by
jurisdiction and, apparently, by bank preference).  Having a standard
way to express this (especially if it worked across different numeric
types, but perhaps I digress) would be pleasant.  Implementing
stochastic rounding and banker's rounding oneself, while not exactly
hard, is a drag.

The decimal module already supports rounding modes in its context. For
other types, perhaps converting to decimal might be good enough?

Yes, that's the right thing to do.  I had missed it.  After all it is 
decimal rounding I want, and any financial applications I'm going to 
write these days are using decimals already for all the usual reasons.

At first I didn't realize why I'd missed this feature.  While the 
rounding *modes* are well documented, after 20 minutes of 
reading documentation I still haven't found a method or function that 
simply rounds a decimal to a given significant digit.  Is there one, 
should there be one, or is the user simply meant to use Context.quantize 
with appropriate arguments?
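
For concreteness, the kind of call I mean; this is just a sketch, and 
whether quantize() is really the intended spelling is exactly my question:

    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

    d = Decimal('2.665')
    # quantize() rounds to the exponent of its argument, two places here.
    print d.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN)   # 2.66
    print d.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)     # 2.67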


Re: [Python-Dev] PEP: Lazy module imports and post import hook

2008-01-08 Thread glyph
On 01:09 am, [EMAIL PROTECTED] wrote:
Neal Becker wrote:
Christian Heimes wrote:

  I've attached the first public draft of my first PEP. A working 
patch
  against the py3k branch is available at 
http://bugs.python.org/issue1576
 
  Christian

Note also that mercurial has demandimport
http://www.selenic.com/mercurial/wiki/

And that bzr has lazy_import (inspired by mercurial's demandimport):

and very recently, I implemented similar functionality myself (though it 
isn't in use in Twisted yet):

http://codebrowse.launchpad.net/~glyph/+junk/pyexport/files

Something that I notice about every other implementation of this 
functionality is that they're all in Python.  But the proposed 
implementation here is written in C, and therefore more prone to 
crashing bugs.  Looking at the roundup log, I can see that several 
refcounting bugs have already been found and fixed.  Perhaps the 
post-import hooks, being a modification to the import mechanism itself, 
need to be in C, but given that lazy imports *have* been implemented before 
without using C code (and without post import hooks), I can't see why 
they need to be in the same PEP.
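
To illustrate that the lazy-import half really doesn't need any C, here is 
a tiny pure-Python proxy; this is a sketch of the general technique, not 
the interface of demandimport, lazy_import, or the PEP:

    class LazyModule(object):
        """Defer the real import until an attribute is first requested."""
        def __init__(self, name):
            self.__dict__['_name'] = name
            self.__dict__['_module'] = None

        def __getattr__(self, attr):
            if self.__dict__['_module'] is None:
                self.__dict__['_module'] = __import__(self._name)
            return getattr(self.__dict__['_module'], attr)

    email = LazyModule('email')         # nothing actually imported yet
    print email.message_from_string     # first use triggers the real import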

Also, as is my custom, I'd like to suggest that, rather than designing 
something new, one of the third-party implementations be adopted (or at 
least its interface) since these have been demonstrated to work in real 
application code already.  Happily, I can escape the charge of bias this 
time since I don't think my own implementation should be taken seriously 
in this case :).


Re: [Python-Dev] PEP: per user site-packages directory

2008-01-14 Thread glyph
On 12:08 pm, [EMAIL PROTECTED] wrote:
So if I'm using the --user option, where would scripts be installed?
Would this be:

Windows: %APPDATA%/Python/Python26/bin
Mac: ~/Library/Python/2.6/bin
Unix: ~/.local/lib/python2.6/bin

I'd like to be able to switch between several versions of my user
installation simply by changing a link. (On the Mac I'm doing this by
relinking ~/Library/Python to different directories.)

I think the relevant link to change here would be ~/.local.

I have personally been using the ~/.local convention for a while, and I 
believe ~/.local/bin is where scripts should go.  Python is not the only 
thing that can be locally installed, and the fact that it's 
~/.local/lib/python2.6/site-packages  suggests that ~/.local has the 
same layout as /usr (or /usr/local, for those who use that convention).


Re: [Python-Dev] Monkeypatching idioms -- elegant or ugly?

2008-01-15 Thread glyph
On 03:37 pm, [EMAIL PROTECTED] wrote:
I think it's useful to share these recipes, if only to to establish
whether they have been discovered before, or to decide whether they
are worthy of a place in the standard library. I didn't find any
relevant hits on the ASPN Python cookbook.

from somewhere import someclass

class newclass(someclass):
    __metaclass__ = monkeypatch_class
    def method1(...): ...
    def method2(...): ...
    ...

I've expressed this one before as class someclass(reopen(someclass)):, 
but have thankfully never needed to actually use that in a real program. 
It's been a helpful tool in explaining to overzealous Ruby-ists that 
reopenable classes are not as unique as they think.
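
(For reference, the metaclass in the quoted recipe is roughly this sort of 
thing; a sketch, not necessarily the exact code from the original post:)

    def monkeypatch_class(name, bases, namespace):
        # Rather than creating a new class, copy the new methods onto the
        # single base class and return it, so newclass *is* someclass.
        assert len(bases) == 1, "Exactly one base class required"
        base = bases[0]
        for attr, value in namespace.items():
            if attr != '__metaclass__':
                setattr(base, attr, value)
        return base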

My feeling on monkeypatching is that it *should* feel a little gross 
when you have to do it, so the code I've written that does 
monkeypatching for real is generally a bit ugly.


Re: [Python-Dev] PEP 370, open questions

2008-01-17 Thread glyph

On 07:55 am, [EMAIL PROTECTED] wrote:
A CC of the mail goes to the authors of setuptools, virtual python,
working env and virtual env. What's your opinion on the PEP? Do you 
have
some input and advice for me?

I wrote a similar tool called Combinator 
(http://divmod.org/trac/wiki/DivmodCombinator) which Divmod uses quite 
intensively.  I think a significant portion of the Twisted community 
also uses it for development.  Luckily you don't need to CC me specially 
as I'm here already ;-).  One of the features it provides is a user 
site-packages directory much in the same style that the PEP proposes, so 
I'll comment on my experiences maintaining it.
The PEP 370 (http://www.python.org/dev/peps/pep-0370) per user site
packages directory has several open questions:

* Are the directories for Windows, Mac and Unix fine?
* Mac: Should framework and non-framework builds of Python use
  the same directories?

One of the problems we've encountered over and over again with 
Combinator is that MacOS has a dual personality.  Its personality is not 
just an issue of framework vs. non-framework build, but a fundamental 
question of "is this platform a UNIX or is it not a UNIX".

An example: Let's say you have a GNU autotools project in C, which we'll 
call Xxx, and a set of Python bindings, PyXxx.  Combinator deals with 
this by using ~/.local, and providing scripts to set up PATH and 
LD_LIBRARY_PATH to point to ~/.local/bin and ~/.local/lib, respectively. 
I'm not suggesting that Python come with such a tool (although it might 
be handy, at least until this convention catches on with distributors), 
but it should at least make it clear how one would achieve the following 
straightforwardly:

  cd xxx-0.2.4
  ./configure --prefix ~/.local
  make
  make install
  cd ../pyxxx-0.0.1
  python setup.py install --prefix ~/.local

Using Combinator, the user is now fully set up to import xxx.  But 
only if they're using Linux, because I made the same mistake (which we 
will probably be correcting at some point soon) of using Python's 
*existing* per-user site-packages directory of ~/Library/Python on the 
mac, and not adding ~/.local.

On the Mac, our user now has a problem: given that ~/Library/Python/ 
doesn't follow the /usr or /usr/local style filesystem layout, 
autotools-based projects will not build their libraries in the right 
places using a single command-line option.  (I guess maybe you could do 
something with --bindir and --sbindir and --exec-prefix or whatever, I 
don't know.  I tend to get bored and wander off when I have to pass more 
than 3 options to a configure script.)

A user familiar with the Mac alone might know what to do at this point 
(to be honest, I don't!), but I do know that people familiar with both 
platforms are confused by this apparently gratuitous inconsistency. 
They follow familiar procedure on a Linux box, it works great, then they 
do the same thing on a Mac (with the same shell, an apparently similar 
version of Python) and it doesn't.  Keep in mind that ./configure 
--prefix is familiar procedure from a lot more places than Combinator. 
For example, on shared hosting where I didn't have root, I've used this 
same trick without Combinator, building Python itself with --prefix 
~/.local and editing .bashrc to modify the appropriate env vars.

My suggestion would be to have *both* ~/.local *and* ~/Library/Python be 
honored on the Mac, because there really isn't much harm in doing so. 
Perhaps it would make sense for non-framework builds to not honor 
~/Library/Python, but I am pretty certain, based on experience fielding 
requests from confused users, that framework builds should still honor 
~/.local/lib/python.../site-packages.

Windows has this problem less because everything has to be done so 
completely differently.

In any event, the really important thing from my perspective is that the 
PEP should explain how this very common use-case for per-user 
installation of libraries can be accomplished on each of the big three 
platforms.  This explanation should be put somewhere that users can 
easily find it when they are building libraries.

I don't know what the right way to do this on Windows is though, so I 
can't offer much help there.  Something to do with MinGW and intense 
suffering, I would guess.
* The patch also adds a usecustomize hook to site. Is it useful and
  should it stay?

Should this be usercustomize?  I thought it was a typo but I see the 
same typo in the PEP.  I have often wished for something like this for 
debugging, and it might have other uses, but there should be a caution 
against abuse :).


Re: [Python-Dev] PEP 370, open questions

2008-01-17 Thread glyph

On 12:02 pm, [EMAIL PROTECTED] wrote:

On 17 Jan, 2008, at 9:40, [EMAIL PROTECTED] wrote:

On 07:55 am, [EMAIL PROTECTED] wrote:


The framework build of Python definitly targets the upper layer of the 
OSX stack, not just the Unix core. This sometimes causes confusion, 
but mostly from developers wandering over from a Linux system that 
complain that OSX isn't Linux.


The framework build targets both layers, as I understand it - and it 
sounds like you're saying it too, since it's not just the UNIX core. 
Solaris isn't Linux either, by the way.  These conventions hold across 
far more than one operating system :).
Note that even Darwin is not Linux, there are several differences that 
cause problems for naive porters. Two of those: Darwin uses different 
binary formats for shared libraries and plugins;  the darwin linker 
hardcodes the full path to shared libraries into executables (without 
a runtime search path).


Distutils should take care of this distinction in Python's case, no? 
Xxx's autotools generate a shared library, PyXxx's setup.py generates a 
plugin (or dylib, is that still what they're called these days?).
An example: Let's say you have a GNU autotools project in C, which we'll
call Xxx, and a set of Python bindings, PyXxx.  Combinator deals with
this by using ~/.local, and providing scripts to set up PATH and
LD_LIBRARY_PATH to point to ~/.local/bin and ~/.local/lib, respectively.


~/Library/Combinator would be a better installation root on OSX, that 
location fits better into guidelines from Apple and also avoids 
completely hiding the Combinator data from the user.


I disagree, but Combinator's a complex beast and has a lot of other 
moving parts which need to live in specific places.  Its main purpose is 
to manage your import path to easily switch between different 
development branches of multiple projects, and so most of its data is 
already in locations that the user has specified.


A key thing about ~/.local in this case is that it isn't specific to 
Combinator.  It's any per-user installed dependency libraries for 
development purposes, not necessarily on Combinator-managed projects, 
and not even necessarily Python projects.
This is probably off-topic for python-dev, but how is combinator 
different from zc.buildout and virtualenv?


We are definitely veering in that direction, but it probably bears a 
brief description just so folks here can see how it does and doesn't 
apply to the PEP.  zc.buildout and virtualenv are primarily 
heterogeneous deployment tools, with development being just a different 
type of deployment.  They're ways of installing Python packages into an 
isolated, configured environment.


Combinator is primarily a development tool.  Although I've used it as a 
deployment tool (as you can use zc.buildout as a development tool) 
that's not really its focus.  Combinator has no installation step for 
most projects.  ~/.local is a special case, reserved for common 
unchanging dependencies that require building; most code managed by 
Combinator lives in ~/Projects/YourProject/trunk or 
~/Projects/YourProject/branches.  (I *used* to be a Mac guy.  Can you 
tell? :-))


The idea with zc.buildout is you are installing application A which 
requires library X version Q, and installing application B which 
requires library X version M; you want to keep those separated.  The 
idea with combinator is that you are *developing* application A, and you 
want to make sure that it continues working with both version Q and M, 
so you can easily do


   chbranch X releases/Q  # the most common combinator command
   trial a.test
   chbranch X releases/M
   trial a.test

It also has specific integration with subversion for creating and 
merging branches.  chbranch will try to check out releases/Q if there 
isn't already such a directory present, based on svn metadata.  When you 
create a branch to start working on a feature, your environment is 
automatically reconfigured to import code from that branch.  When you 
merge a branch to trunk, it is adjusted to load code from the merged 
trunk again.  Hopefully some day soon it will have integration with 
bazaar too.
Why? Just because users can't remember on which platform they are 
developing ;-)? That said, there's a libpython.a file in the framework 
build for basically that reason: enough users complained about there not 
being a python library they could link to that it was easier to add a 
symlink than trying to educate everyone.


The system installation of Python on the mac has a nod in this direction 
as well.  /usr/bin/python is also present, as is /usr/lib/pythonX.Y, as 
symlinks between the two locations.

You could teach Combinator about running configure scripts ;-).


Better yet, perhaps somebody should teach configure about MacOS, and 
about per-user installation ;).  But the real question here is not what 
Combinator should do, but what Python should do.
This is not a gratuitous inconsistency.  The Mac 

Re: [Python-Dev] PEP 370, open questions

2008-01-17 Thread glyph
On 12:19 pm, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:

MacOS isn't the only platform that has this problem.  I use cygwin 
under
Windows, and I wish Python (whether or not a cygwin build) would also
use ~/.local.

I would like to agree.  MacOS has an advantage here though.  Windows, 
even without cygwin, but doubly so once cygwin gets involved, has a 
schizophrenic idea of what ~ means.  (I believe other messages in this 
thread have made reference to issues with expanduser.)  You definitely 
are going to have a hard time getting non-cygwin and cygwin python to 
agree on the location of your home directory.  I rather brutally 
modified my last Windows installation to get those two values to line 
up, and it required surgery for each user created, and caused a number 
of interesting issues with various tools writing fake cygwin UNIX 
paths and others thinking they were Windows drive-letter paths...
Windows has this problem less because everything has to be done so
completely differently.

True, except that cygwin tries hard to make it look like Unix.  I'd
rather Python under Windows really look like Unix, but I'm probably in 
a
minority.  And yes, I do share computers (physically and through
Terminal Server), so per-user local libraries would be nice.

If you can figure out how to make this work without raising any of the 
bogeymen I just made reference to, I would be strongly in favor.


Re: [Python-Dev] PEP 370, open questions

2008-01-17 Thread glyph
On 12:26 pm, [EMAIL PROTECTED] wrote:
On Thu, 17 Jan 2008 13:09:34 +0100, Christian Heimes [EMAIL PROTECTED] 
wrote:

The uid and gid tests aren't really required. They just provide an 
extra
safety net if a user forgets to add the -s flag to a suid app.

It's not much of a safety net if PYTHONPATH still allows injection of
arbitrary code.  It's just needless additional complexity for no 
benefit.

By confusing users' expectations, it may actually be *worse* to add this 
safety net than to do nothing.  It should be obvious right now that 
tightly controlling the environment is a requirement of any suid Python 
code.  However, talking about different behavior in the case of 
differing euid and uid might confuse some developers and/or 
administrators into thinking that Python was doing all it needed to. 
There's also the confusion that the value of $HOME is actually the 
relevant thing for controlling user-installed imports, not the (E)UID.

I think it would be good to have a look at the security implications of 
this and other environment-dependent execution, including $PYTHONPATH 
and $PYTHONSTARTUP, in a separate PEP.  It might be good to change the 
way some of these things work, but in either case it would be good to 
have an unambiguous declaration of the *expected* security properties 
and potential attack vectors against the Python interpreter, for both 
developers and system administrators.


Re: [Python-Dev] What to do for bytes in 2.6?

2008-01-17 Thread glyph
On 04:43 am, [EMAIL PROTECTED] wrote:
Just being able to (voluntarily! on a
per-module basis!) use a different type name and literal style for
data could help forward-looking programmers get started on making the
distinction clear, thus getting ready for 3.0 without making the jump
just yet (or maintaining a 2.6 and a 3.0 version of the same package
easily, using 2to3 to automatically generate the 3.0 version from the
2.6 code base).

Yes!  Yes!  Yes!  A thousand times yes!  :-D

This is *the* crucial feature which will make porting large libraries 
like Twisted to 3.0 even possible.  Thank you, Guido.

To the critics of this idea: libraries which process text, if they are 
meant to be correct, will need to deal explicitly with the issue of what 
data-types they believe to be text, what methods they will call on them, 
and how they deal with them.  You cannot get away from this.  It is not 
an issue reserved for the pure future of 3.0; if your code doesn't 
handle these types correctly now, it has bugs in it *now*.  (In fact I 
am fixing some code with just such a bug in it right now :).)

It is definitely possible to make your library code do the right thing 
for different data types, continue to support str literals in 2.6, and 
eventually require text / unicode input (after an appropriate 
deprecation period, of course).  And it will be a lot easier if the 
translations imposed by 2to3 are as minimal as possible.
Note that I believe that the -3 flag should not change semantics -- it
should only add warnings. Semantic changes must either be backwards
compatible or be requested explicitly with a __forward__ import (which
2to3 can remove).

This also sounds great.


Re: [Python-Dev] What to do for bytes in 2.6?

2008-01-19 Thread glyph

On 19 Jan, 07:32 pm, [EMAIL PROTECTED] wrote:
There is no way to know whether that return value means text or data
(plenty of apps legitimately read text straight off a socket in 2.x),

IMHO, this is a stretch of the word "legitimately" ;-).  If you're 
reading from a socket, what you're getting are bytes, whether they're 
represented by str() or bytes(); correct code in 2.x must currently do a 
.decode('ascii') or .decode('charmap') to legitimately identify the 
result as text of some kind.

Now, ad-hoc code with a fast and loose definition of text can still 
read arrays of bytes off a socket without specifying an encoding and get 
away with it, but that's because Python's unicode implementation has 
thus far been very forgiving, not because the data is cleanly text yet. 
Why can't we get that warning in -3 mode just the same from something 
read from a socket and a b literal?  I've written lots of code that 
aggressively rejects str() instances as text, as well as unicode 
instances as bytes, and that's in code that still supports 2.3 ;).
Really, the pure aliasing solution is just about optimal in terms of
bang per buck. :-)

Not that I'm particularly opposed to the aliasing solution, either.  It 
would still allow writing code that was perfectly useful in 2.6 as well 
as 3.0, and it would avoid disturbing code that did checks of type(). 
It would just remove an opportunity to get one potentially helpful 
warning.


Re: [Python-Dev] What to do for bytes in 2.6?

2008-01-19 Thread glyph

On 04:26 am, [EMAIL PROTECTED] wrote:

On Jan 19, 2008 5:54 PM,  [EMAIL PROTECTED] wrote:

On 19 Jan, 07:32 pm, [EMAIL PROTECTED] wrote:


Starting with the most relevant bit before getting off into digressions 
that may not interest most people:

Why can't we get that warning in -3 mode just the same from something
read from a socket and a b literal?



If you really want this, please think through all the consequences,
and report back here. While I have a hunch that it'll end up giving
too many false positives and at the same time too many false
negatives, perhaps I haven't thought it through enough. But if you
really think this'll be important for you, I hope you'll be willing to
do at least some of the thinking.


While I stand by my statement that unicode is the Right Way to do text 
in python, this particular feature isn't really that important, and I 
can see there are cases where it might cause problems or make life more 
difficult.  I suspect that I won't really know whether I want the 
warning anyway before I've actually tried to port any nuanced, real 
text-processing code to 3.0, and it looks like it's going to be a little 
while before that happens.  I suspect that if I do want the warning, it 
would be a feature for 2.7, not 2.6, so I don't want to waste a lot of 
everyone's time advocating for it.


Now for a nearly irrelevant digression (please feel free to stop reading 
here):

Now, ad-hoc code with a fast and loose definition of text can still
read arrays of bytes off a socket without specifying an encoding and 
get

away with it, but that's because Python's unicode implementation has
thus far been very forgiving, not because the data is cleanly text 
yet.


I would say that depends on the application, and on arrangements that
client and server may have made off-line about the encoding.


I can see your point.  I think it probably holds better on files and 
streams than on sockets, though - please forgive me if I don't think 
that server applications which require environment-dependent out-of-band 
arrangements about locale are correct :).

In 2.x, text can legitimately be represented as str -- there's even
the locale module to further specify how it is to be interpreted as
characters.


I'm aware that this specific example is kind of a ridiculous stretch, 
but it's the first one that came to mind.  Consider 
len(u'é'.encode('utf-8').rjust(5).decode('utf-8')).  Of course 
unicode.rjust() won't do the right thing in the case of surrogate pairs, 
not to mention RTL text, but it still handles a lot more cases than 
str.rjust(), since code points behave a lot more like characters than 
code units do.

Sure, this doesn't work for full unicode, and it doesn't work for all
protocols used with sockets, but claiming that only fast and loose
code ever uses str to represent text is quite far from reality -- this
would be saying that the locale module is only for quick and dirty
code, which just ain't so.


It would definitely be overreaching to say all code that uses str is 
quick and dirty.  But I do think that it fits into one of two 
categories: quick and dirty, or legacy.  locale is an example of a 
legacy case for which there is no replacement (that I'm aware of).  Even 
if I were writing a totally unicode-clean application, as far as I'm 
aware, there's no common replacement for e.g. locale.currency().


Still, locale is limiting.  It's ... uncomfortable to call 
locale.currency() in a multi-user server process.  It would be nice if 
there were a replacement that completely separated encoding issues from 
localization issues.
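
(For anyone who hasn't run into this: a hedged Python 2 sketch of why 
locale-based formatting is awkward in a multi-user server.  The locale names 
below are assumptions about what the host system has installed.)

    import locale

    def format_price_for(user_locale, amount):
        # setlocale() mutates process-global state, so two requests that
        # want different locales will stomp on each other's formatting.
        locale.setlocale(locale.LC_ALL, user_locale)
        return locale.currency(amount)

    print format_price_for('en_US.UTF-8', 1234.5)   # e.g. '$1234.50'
    print format_price_for('de_DE.UTF-8', 1234.5)   # output depends on system locale data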

I believe that a constraint should be that by default (without -3 or a
__future__ import) str and bytes should be the same thing. Or, another
way of looking at this, reads from binary files and reads from sockets
(and other similar things, like ctypes and mmap and the struct module,
for example) should return str instances, not instances of a str
subclass by default -- IMO returning a subclass is bound to break too
much code. (Remember that there is still *lots* of code out there that
uses type(x) is types.StringType rather than isinstance(x, str),
and while I'd be happy to warn about that in -3 mode if we could, I
think it's unacceptable to break that in the default environment --
let it break in 3.0 instead.)
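
As an illustration of the breakage being described here, a small Python 2 
sketch with a hypothetical str subclass standing in for a bytes-ish return 
type:

    import types

    class bytestr(str):
        """Hypothetical subclass of str, standing in for a bytes-like type."""

    data = bytestr('\x00\x01 raw data')

    print type(data) is types.StringType   # False: identity-based checks break
    print isinstance(data, str)            # True: the subclass-aware check still passes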


I agree.  But, it's precisely because this is so subtle that it would be 
nice to have tools which would report warnings to help fix it. 
*Certainly* by default, everywhere that's str in 2.5 should be str 
in 2.6.  Probably even in -3 mode, if the goal there is warnings only. 
However, the feature still strikes me as potentially useful while 
porting.  If I were going to advocate for it, though, it would be as a 
separate option, e.g. --separate-bytes-type.  I say this as separate 
from just trying to run the code on 3.0 to see what happens because it 
seems like the most subtle and difficult aspect of the port to get 
right; it would be nice to be able to 

Re: [Python-Dev] Any Emacs tips for core developers?

2008-02-04 Thread glyph
To say I use emacs would be an understatement.  I *live* in emacs.

On 04:32 pm, [EMAIL PROTECTED] wrote:
I recently upgraded to the emacs 22.1/python.el which I tried *really*
hard to use, but eventually ended up installing python-mode again.
There are a number of problems in the emacs lisp that I was able to
get around, but eventually the bugginess overcame my will:
*R, RE, and RET (i.e. the keystroke shift-r) were bound to commands in
the major mode (meaning you couldn't type an R without triggering
python-send-string). You can comment out this line in python.el to get
around this:

Personally, I have been using GNU Emacs's new python mode since I 
discovered it, and I've never encountered any of the bugs you just 
described.  (Perhaps you are describing bugs that arise from trying to 
use it with XEmacs?)  I have, however, found that it is *less* buggy in 
certain circumstances; it seems to indent parentheses correctly in more 
circumstances, and it isn't confused by triple-quoted strings.  It also 
has functioning support for which-func-mode, which python-mode.el doesn't 
seem to have (a hack which displays the current scope on the modeline, which 
is very helpful for long classes: I can just glance down and see 
FooBarBaz.bozBuz() rather than needing to hit C-M-r ^class).

As always, YMMV.

Also, I use twisted-dev.el for all of my Python development.  I don't 
think I'll ever be able to go back to F9 doing anything but running 
tests for the current buffer.  Apparently there's a ctypes-dev based 
on those hacks in the main Python repository which basically does the 
same thing.  (I'd also strongly recommend binding F5 to 'next-error'. 
It makes hopping around in the error stack nice and easy.)

Finally, for you Ubuntu developers, I'm also using the pre-release 
XFT GNU emacs, which is very pretty.  So far, despite stern and dire 
warnings, it has had no stability issues:

http://www.emacswiki.org/cgi-bin/wiki/XftGnuEmacs

Look for the PPA deb lines there, and you get a nicely prepackaged, 
policy-compliant version of emacs with no need to build anything 
yourself.

(I've also got a personal collection of hacks that, if anyone likes 
TextMate-style snippets, I'll email you.  It does stuff like turning 
 into \n(indent)\n\n and class  into class (cursor 
here):\n\n(indent)\n\n(indent)\n. I haven't cleaned it up for a 
public release since a lot of people seem to think that automatically 
inserting text is pretty obnoxious and I just don't have the energy for 
that debate.)


Re: [Python-Dev] yield * (Re: Missing operator.call)

2009-02-07 Thread glyph


On 01:00 am, greg.ew...@canterbury.ac.nz wrote:

Guido van Rossum wrote:

We already have yield expressions and they mean something else...


They don't have a * in them, though, and I don't
think the existing meaning of yield as an expression
would carry over into the yield * variant, so there
shouldn't be any conflict.

But if you think there will be a conflict, or that the
similarity would be too confusing, maybe the new
construct should be called something else. I'm
open to suggestions.


I'm *already* regretting poking my head into this particular bike shed, 
but...


has anyone considered the syntax 'yield from iterable'?  i.e.

    def foo():
        yield 1
        yield 2

    def bar():
        yield from foo()
        yield from foo()

    list(bar()) -> [1, 2, 1, 2]

I suggest this because (1) it's already what I say when I see the 'for' 
construct, i.e. bar then *yield*s all results *from* foo, and (2) no 
new keywords are required.
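
For reference, the for-loop spelling that 'yield from' would abbreviate 
(a plain loop like this doesn't forward send() or throw(), which the full 
delegation semantics would):

    def foo():
        yield 1
        yield 2

    def bar():
        for x in foo():
            yield x
        for x in foo():
            yield x

    print list(bar())   # [1, 2, 1, 2]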



Re: [Python-Dev] Choosing a best practice solution for Python/extension modules

2009-02-21 Thread glyph

On 07:07 pm, br...@python.org wrote:
On Sat, Feb 21, 2009 at 09:17, Jean-Paul Calderone exar...@divmod.com 
wrote:


But there is another issue with this: the pure Python code will never call
the extension code because the globals will be bound to _pypickle and not
_pickle. So if you have something like::

    # _pypickle
    def A(): return _B()
    def _B(): return -13

    # _pickle
    def _B(): return 42

    # pickle
    from _pypickle import *
    try: from _pickle import *
    except ImportError: pass



This is really the same as any other high-level/low-level
library split.  It doesn't matter that in this case, one
low-level implementation is provided as an extension module.
Importing the low-level APIs from another module and then
using them to implement high-level APIs is a pretty common,
simple, well-understood technique which is quite applicable
here.
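
(A hedged sketch of that pattern, reusing the hypothetical _pickle/_pypickle 
names from the example above.  The point is that the high-level module binds 
to a module object instead of copying names in with import *:)

    # pickle.py -- the high-level module
    try:
        import _pickle as _impl      # prefer the extension module if it built
    except ImportError:
        import _pypickle as _impl    # otherwise fall back to pure Python

    def A():
        # Calls go through the module object, so A() always uses whichever
        # _B() the chosen low-level implementation provides.
        return _impl._B()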


But that doesn't provide a clear way, short of screwing with sys.modules,
to get at just the pure Python implementation for testing when the
extensions are also present. The key point in trying to figure this out is
to facilitate testing since the standard library already uses the import *
trick in a couple of places.


You don't have to screw with sys.modules.  The way I would deal with 
testing this particular interaction would be a setUp that replaces 
pickle._A with _pypickle._A, and a tearDown that restores the original 
one.
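
(A minimal unittest sketch of that swap, reusing the hypothetical 
pickle._A / _pypickle._A names from this thread:)

    import unittest
    import pickle
    import _pypickle   # hypothetical pure-Python implementation module

    class PurePicklePathTests(unittest.TestCase):
        def setUp(self):
            # Point the name the code under test looks up at the pure-Python
            # implementation.
            self._saved = pickle._A
            pickle._A = _pypickle._A

        def tearDown(self):
            # Restore the original binding so other tests are unaffected.
            pickle._A = self._saved

        def test_pure_python_helper_is_used(self):
            self.assertEqual(pickle._A(), _pypickle._A())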


Twisted's TestCase has specific support for this.  You would spell it 
like this:

    import _pypickle
    # ...
    testCase.patch(pickle, '_A', _pypickle._A)

You can read more about this method here:

http://python.net/crew/mwh/apidocs/twisted.trial.unittest.TestCase.html#patch


Re: [Python-Dev] asyncore fixes in Python 2.6 broke Zope's version of medusa

2009-03-03 Thread glyph


On 08:46 pm, gu...@python.org wrote:

This seems to be the crux of the problem with asyncore, ever since it
was added to the stdlib -- there's no real API, so every change
potentially breaks something. I wish we could start over with a proper
design under a new name.


Might I suggest 'reactor'... or possibly 'twisted', as that new name? 
;-)


(Sorry, I was trying to avoid this thread, but that was an opening I 
could drive a truck through).


In all seriousness, I seem to recall that Thomas Wouters was interested 
in integrating some portion of Twisted core into the standard 
library as of last PyCon.  I mention him specifically by name in the 
hopes that it will jog his memory.


At the very least, this might serve as a basis for an abstract API for 
asyncore:


http://twistedmatrix.com/documents/8.2.0/api/twisted.internet.interfaces.IProtocol.html
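
(For readers who don't want to click through: a rough sketch, not the real 
Twisted source, of the shape of that interface written as a plain class, to 
suggest what an abstract replacement API for asyncore handlers might look 
like:)

    class Protocol(object):
        def makeConnection(self, transport):
            # The reactor hands the protocol its transport; remember it and
            # notify the subclass that the connection now exists.
            self.transport = transport
            self.connectionMade()

        def connectionMade(self):
            """Hook: the connection has been established."""

        def dataReceived(self, data):
            """Hook: some bytes arrived from the peer."""

        def connectionLost(self, reason):
            """Hook: the connection has gone away, with a reason."""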

