Re: [Python-Dev] Simple Switch statement

2006-06-24 Thread Phillip J. Eby
At 03:49 PM 6/24/2006 -0700, Raymond Hettinger wrote:
Case values must be ints, strings, or tuples of ints or strings.

-1.  There is no reason to restrict the types in this fashion.  Even if you 
were trying to ensure marshallability, you could still include unicode and 
longs.  However, there isn't any need for marshallability here, and I would 
like to be able to use switches on types, enumerations, and the like.



Re: [Python-Dev] Simple Switch statement

2006-06-24 Thread Phillip J. Eby
At 05:30 PM 6/24/2006 -0700, Raymond Hettinger wrote:
[Phillip Eby]
  I would like to be able to use switches on types, enumerations, and the 
 like.

Be careful about wanting everything and getting nothing.
My proposal is the simplest thing that gets the job done for key use cases 
found
in real code.

It's ignoring at least symbolic constants and types -- which are certainly 
key use cases found in real code.

Besides which, this is Python.  We don't select a bunch of built-in types 
and say these are the only types that work.  Instead, we have protocols 
(like __hash__ and __eq__) that any object may implement.

If you don't want expressions to be implicitly lifted to function 
definition time, you'd probably be better off arguing to require the use of 
explicit 'static' for non-literal case expressions.

(Your reverse mapping, by the way, is a non-starter -- it makes the code 
considerably more verbose and less obvious than a switch statement, even if 
every 'case' has to be decorated with 'static'.)



Re: [Python-Dev] Switch statement

2006-06-23 Thread Phillip J. Eby
At 07:51 PM 6/23/2006 +0200, M.-A. Lemburg wrote:
Furthermore, the compiler could do other optimizations on the
const declared names, such as optimizing away global lookups
and turning them into code object constants lookups.

Technically, they'd have to become LOAD_DEREF on cells set up by the module 
level code and attached to function objects.  'marshal' won't be able to 
save function references or other such objects to a .pyc file.

It's interesting that this line of thinking does get us closer to the 
long-desired builtins optimization.  I'm envisioning:

 static __builtin__.*

or something like that.  Hm.  Maybe:

 from __builtin__ import static *

:)

In practice, however, this doesn't work for * imports unless it causes all 
global-scope names with no statically detectable assignments to become 
static.  That could be a problem for modules that generate symbols 
dynamically, like 'opcode' in the stdlib.

OTOH, maybe we could just have a LOAD_STATIC opcode that works like 
LOAD_DEREF but falls back to using globals if the cell is empty.

Interestingly, a side effect of making names static is that they also 
become private and untouchable from outside the module.

Hm.  Did I miss something, or did we just solve builtin lookup 
optimization?  The only problem I see is that currently you can stick a new 
version of 'len()' into a module from outside it, shadowing the 
builtin.  Under this scheme (of making all read-only names in a module 
become closure variables), such an assignment would change the globals, but 
have no effect on the module's behavior, which would be tied to the static 
definitions created at import time.









Re: [Python-Dev] Switch statement

2006-06-22 Thread Phillip J. Eby
At 01:08 PM 6/22/2006 +0200, M.-A. Lemburg wrote:
Phillip J. Eby wrote:
  Maybe the real answer is to have a const declaration, not necessarily 
 the
  way that Fredrik suggested, but a way to pre-declare constants e.g.:
 
   const FOO = 27
 
  And then require case expressions to be either literals or constants.  The
  constants need not be computable at compile time, just runtime.  If a
  constant is defined using a foldable expression (e.g. FOO = 27 + 43), then
  the compiler can always optimize it down to a code level
  constant.  Otherwise, it can just put constants into cells that the
  functions use as part of their closure.  (For that matter, the switch
  statement jump tables, if any, can be put in a cell too.)
 
  I don't like first use because it seems to invite tricks.
 
  Okay, then I think we need a way to declare a global as being 
 constant.  It
  seems like all the big problems with switch/case basically amount to us
  trying to wiggle around the need to explicitly declare constants.

I don't think that this would help us much:

If you want the compiler to see that a name binds to a constant,
it would need to have access to the actual value at compile time
(e.g. code object definition time).

No, it wouldn't.  This hypothetical const would be a *statement*, 
executed like any other statement.  It binds a name to a value -- and 
produces an error if the value changes.  The compiler doesn't need to know 
what it evaluates to at runtime; that's what LOAD_NAME or LOAD_DEREF are 
for.  ;)


However, it is common practice to put constants which you'd use
in e.g. parsers into a separate module and you certainly don't
want to have the compiler import the module and apply attribute
lookups.

Not necessary, but I see it does produce a different problem.


This means that you'd have to declare a symbol constant in the
scope where you want to use it as such. Which would result in
long sections of e.g.

const case1
const case2
...
const caseN

Actually, under my proposal it'd be:

const FOO = somemodule.FOO
const BAR = somemodule.BAR

etc.  Which is probably actually worse.  But I see your point.


In the end, making this implicit in the case part of the switch
statement would save us a lot of typing.

However, there's another catch: if we do allow arbitrary expressions
in the case parts we still need to evaluate them at some point:

a. If we do so at compile time, the results may be a lot different
than at execution time (e.g. say you use time.time() in one of the
case value expressions).

We can't do that at compile time.

b. If we evaluate them at code object execution time (e.g. module
import), then we'd run into similar problems, but at least
the compiler wouldn't have to have access to the used symbols.

c. If we evaluate at first-use time, results of the evaluation
become unpredictable and you'd also lose a lot of the
speedup since building the hash table would consume cycles
that you'd rather spend on doing other things.

Assuming that a sequential search takes N/2 equality tests on average, 
you'll come out ahead by the third switch execution, assuming that the 
time to add a dictionary entry or do a hash lookup is roughly equal to an 
if/else test.  The first execution would put N entries in the dictionary, 
and do 1 lookup.  The second execution does 1 lookup, so we're now at N+2 
operations, vs. N operations on average for sequential search.  At the third 
execution, we're at N+3 vs. 1.5N, so for more than 6 entries we're already 
ahead.
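
Spelled out as a quick back-of-the-envelope calculation (a sketch of the 
arithmetic above, not a real benchmark):

    # k executions of an N-way sequential if/elif chain vs. a dict-based switch
    def costs(N, k):
        sequential = k * N / 2.0    # ~N/2 comparisons per execution
        dict_based = N + k          # N inserts on first use, then 1 lookup each time
        return sequential, dict_based

    print costs(10, 3)   # (15.0, 13): with 10 cases the dict wins by the 3rd run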


d. Ideally, you'd want to create the hash table at compile time
and this is only possible using literals or by telling the
compiler to regard a specific set of globals as constant, e.g.
by passing a dictionary (mapping globals to values) to compile().

I still think that it suffices to assume that an expression produced using 
only symbols that aren't rebound is sufficiently static for use in a case 
expression.  If a symbol is bound by a single import statement (or other 
definition), or isn't bound at all (e.g. it's a builtin), it's easy enough 
to assume that it's going to remain the same.

Combine that compile-time restriction with a first-use build of the 
dictionary, and I think you have the best that we can hope to do in 
balancing implementation simplicity with usefulness and 
non-confusingness.  If it's not good enough, it's not good enough, but I 
don't think there's anything we've thought of so far that comes out with a 
better set of tradeoffs.



Re: [Python-Dev] Switch statement

2006-06-22 Thread Phillip J. Eby
At 09:37 AM 6/22/2006 -0700, Guido van Rossum wrote:
On 6/22/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
  This hypothetical const would be a *statement*,
  executed like any other statement.  It binds a name to a value -- and
  produces an error if the value changes.  The compiler doesn't need to know
  what it evaluates to at runtime; that's what LOAD_NAME or LOAD_DEREF are
  for.  ;)

Please think this through more. How do you implement the "produces an
error if the value changes" part? Is the "const" property you're
thinking of part of the name or of the object it refers to?

The only way I can see it work is if const-ness is a compile-time
property of names, just like global. But that requires too much
repetition when a constant is imported.

Right; MAL pointed that out in the message I was replying to, and I 
conceded his point.  Of course, if you consider constness to be an implicit 
property of imported names that aren't rebound, the repetition problem goes 
away.

And if you then require all case expressions to be either literals or 
constant names, we can also duck the "when does the expression get 
evaluated?" question.  The obvious answer is that it's evaluated wherever 
you bound the name, and the compiler can either optimize the switch 
statement (or not), depending on where the assignment took place.  A switch 
that's in a loop or a function call can only be optimized if all its 
constants are declared outside the loop or function body; otherwise it 
degrades to an if/elif chain.

There's actually an in-between possibility, too: you could generate if's 
for constants declared in the loop or function body, and use a dictionary 
for any literals or constants declared outside the loop or function 
body.  The only problem that raises is the possibility of an inner 
constant being equal to an outer constant, creating an ambiguity.  But 
we could just say that nearer constants take precedence over later ones, or 
force you to introduce the cases such that inner constants appear first.

(This approach doesn't really need an explicit const foo=bar declaration, 
though; it just restricts cases to using names that are bound only once in 
the code of the scope they're obtained from.)



Re: [Python-Dev] Switch statement

2006-06-22 Thread Phillip J. Eby
I think one of the problems I sometimes have in communicating with you is 
that I think out stuff from top to bottom of an email, and sometimes 
discard working assumptions once they're no longer needed.  We then end up 
having arguments over ideas I already discarded, because you find the 
problems with them faster than I do, and you assume that those problems 
carry through to the end of my message.  :)  So, I'm partially reversing 
the order of my reply, so you can see what I'm actually proposing, before 
the minutiae of responding the objections you raised to stuff I threw out 
either in my previous message or the message before that.   Hopefully this 
will help.


At 10:44 AM 6/22/2006 -0700, Guido van Rossum wrote:
Please, think through the notion of const declarations more before
posting again. Without const declarations none of this can work

Actually, the const declaration part isn't necessary and I already 
discarded the idea in my  previous reply to you, noting that the 
combination of these facets can be made to work without any explicit const 
declarations:

1. case (literal|NAME) is the syntax for equality testing -- you can't 
use an arbitrary expression, not even a dotted name.

2. NAME, if used, must be bound at most once in its defining scope

3. Dictionary optimization can occur only for literals and names not bound 
in the local scope, others must use if-then.

This doesn't require explicit const declarations at all.  It does, 
however, prohibit using "import A" and then switching on a bunch of A.foo 
values.  You have to "from A import foo, bar, baz" instead.
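
For example, under those three rules (hypothetical switch syntax; the 
from-imported names stand in for symbolic constants):

    from A import foo, bar      # names bound exactly once in this scope

    def handle(x):
        switch x:               # hypothetical syntax
            case foo: ...       # allowed: plain NAME, not rebound locally
            case "spam": ...    # allowed: literal
            case A.baz: ...     # disallowed: dotted expression, not a NAME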

If you like this, then you may not need to read the rest of this message, 
because most of your remaining comments and questions were based on an 
assumption that const declarations were necessary.


On 6/22/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
  At 09:37 AM 6/22/2006 -0700, Guido van Rossum wrote:
  On 6/22/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
This hypothetical const would be a *statement*,
executed like any other statement.  It binds a name to a value -- and
produces an error if the value changes.  The compiler doesn't need 
 to know
what it evaluates to at runtime; that's what LOAD_NAME or 
 LOAD_DEREF are
for.  ;)
  
  Please think this through more. How do you implement the produces an
  error if the value changes part? Is the const property you're
  thinking of part of the name or of the object it refers to?
  
  The only way I can see it work is if const-ness is a compile-time
  property of names, just like global. But that requires too much
  repetition when a constant is imported.
 
  Right; MAL pointed that out in the message I was replying to, and I
  conceded his point.  Of course, if you consider constness to be an implicit
  property of imported names that aren't rebound, the repetition problem goes
  away.

Um, technically names are never imported, only objects. Suppose module
A defines const X = 1, and module B imports A. How does the compiler
know that A.X is a constant?

It doesn't.  You have to "from A import X".  At that point, you have a name 
that is bound by an import that can be considered constant as long as the 
name isn't rebound later.


  And if you then require all case expressions to be either literals or
  constant names, we can also duck the when does the expression get
  evaluated? question.  The obvious answer is that it's evaluated wherever
  you bound the name, and the compiler can either optimize the switch
  statement (or not), depending on where the assignment took place.

I don't understand what you're proposing. In particular I don't
understand what you mean by "wherever you bound the name".

So (evading the import problem for a moment) suppose we have

const T = int(time.time())

def foo(x):
    switch x:
        case T: print "Yes"
        else: print "No"

Do you consider that an optimizable switch or not?

Yes.  What I'm trying to do is separate "when the dictionary is 
constructed" from "when the expression is evaluated".  If we restrict the 
names used to names that have at most one binding in their defining scope, 
then we can simply add the dictionary entries whenever the *name is 
bound*.  Ergo, the evaluation time is apparent from a simple reading of the 
source - we are never moving the evaluation, only determining how early we 
can add information to the switching dictionary.

Thus, the answer to "when is the expression evaluated?" is "when it's 
executed, as seen in the source code".  There is thus no magic of either 
first-use or function-definition time involved.  What you see is exactly what 
you get.


  A switch
  that's in a loop or a function call can only be optimized if all its
  constants are declared outside the loop or function body; otherwise it
  degrades to an if/elif chain.

What do you mean by a switch in a function call? Syntactically that
makes no sense. Do you mean in a function definition?

Yes, sorry.  I probably copied the slip from your previous post.

Re: [Python-Dev] Switch statement

2006-06-22 Thread Phillip J. Eby
At 12:24 PM 6/22/2006 -0700, Guido van Rossum wrote:
OK, I think I see how this works. You pre-compute the expression at
def-time, squirrel it away in a hidden field on the function object,
and assign it to a local each time the statement is executed.

More precisely, I'd say that the computation is moved to function 
definition time and becomes an anonymous free variable.  The body of the 
static expression becomes a LOAD_DEREF of the free variable, rather than 
computation of the expression.

The debug trace will show the function definition going to the lines that 
contain the static expressions, but that's understandable.
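
A rough present-day analogue of that lifting (ordinary Python, just to show 
where the evaluation happens; the names are made up):

    import re

    def _make_handler():
        _static = re.DOTALL                # evaluated once, at definition time
        def handler(flags):
            return flags == _static        # body does a LOAD_DEREF, not a re-evaluation
        return handler

    handler = _make_handler()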

I think I like it.  I was confused by what Fredrik meant by "const", but 
your renaming it to "static" makes more sense to me; i.e. it belongs to the 
function, as opposed to each execution of the function.  (Whereas I was 
reading "const" as meaning "immutable" or "non-rebindable", which made no 
sense in the context.)


Unfortunately this would probably cause people to write

    switch x:
        case static re.DOTALL: ...
        case static re.IGNORECASE: ...

which is just more work to get the same effect as the
def-time-switch-freezing proposal.

Without the "static", the reordering of execution isn't obvious.  But 
perhaps that could be lived with, if the explanation was, well, "static" is 
implied by "case".


I'm also unclear on what you propose this would do *without* the
statics. Would it be a compile-time error? Compile the dict each time
the switch is executed? Degenerate to an if/elif chain? Then what if x
is unhashable? What if *some* cases are static and others aren't?

If we allow non-static cases, then they should become ifs that happen 
prior to a dictionary lookup on the remaining static/literal ones.  Or we 
could just say that each adjacent block of static cases is its own 
dictionary lookup, and the rest happen in definition order.  (i.e., you 
replace contiguous static/literal runs with dictionary lookups, and 
everything else is if-elif.)
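
In other words, a mixed switch would expand to something like this (an 
emulation sketch in current Python; the helper names and cases are invented):

    _static_run = {1: "one", "spam": "spam case"}   # contiguous static/literal cases,
                                                    # built once into a single dict

    def dispatch(x, y):
        hit = _static_run.get(x)
        if hit is not None:
            return hit              # one dictionary lookup covers the static run
        elif x == y:                # non-static case: falls back to an if/elif test
            return "matches y"
        else:
            return "default"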



Re: [Python-Dev] Switch statement

2006-06-22 Thread Phillip J. Eby
At 12:54 PM 6/22/2006 -0700, Guido van Rossum wrote:
Summarizing our disagreement, I think you feel that
freeze-on-first-use is most easily explained and understood while I
feel that freeze-at-def-time is more robust. I'm not sure how to get
past this point except by stating that you haven't convinced me... I
think it's time to sit back and wait for someone else to weigh in with
a new argument.

Which I think you and Fredrik have found, if "case" implies "static".  It 
also looks attractive as an addition in its own right, independent of switch.

In any case, my point wasn't to convince you but to make you aware of 
certain costs and benefits that I wasn't sure you'd perceived.  It's clear 
from your response that you *have* perceived them now, so I'm quite 
satisfied by that outcome -- i.e., my goal wasn't to convince you to 
adopt a particular proposal, but rather to make sure you understood and 
considered the ramifications of the ones under discussion.

That being said, there isn't anything to get past; from my POV, the 
discussion is already a success.  :)



Re: [Python-Dev] Switch statement

2006-06-21 Thread Phillip J. Eby
At 03:38 AM 6/21/2006 -0500, Ka-Ping Yee wrote:
On Wed, 21 Jun 2006, Phillip J. Eby wrote:
  Well, EIBTI and all that:
 
   switch x:
   case == 1: foo(x)
   case in S: bar(x)
 
  It even lines up nicely.  :)

Hmm, this is rather nice.  I can imagine possible use cases for

    switch x:
        case < 3: foo(x)
        case is y: spam(x)
        case == z: eggs(x)

An interesting use case for which this offers no corresponding
syntax is

 case instanceof ClassA: ham(x)

Actually, I was assuming that any other operator besides == and 'in' would 
be relegated to an if-elif chain in the default case, although it's almost 
possible to do that automatically, I suppose.



Re: [Python-Dev] Switch statement

2006-06-21 Thread Phillip J. Eby
At 09:16 AM 6/21/2006 -0700, Guido van Rossum wrote:
After thinking about it a bit I think that if it's not immediately
contained in a function, it should be implemented as alternative
syntax for an if/elif chain.

That worries me a little.  Suppose I write a one-off script like this:

for line in sys.stdin:
    words = line.split()
    if words:
        switch words[0]:
            case "foo": blah
            case words[-1]: print "mirror image!"

Then, if I later move the switch into a function, it's not going to mean 
the same thing any more.  If the values are frozen at first use or 
definition time (which are the same thing for module-level code), then I'll 
find the lurking bug sooner.

OTOH, breaking it sooner doesn't seem like such a great idea either; seems 
like a recipe for a newbie-FAQ, actually.  ISTM that the only sane way to 
deal with this would be to ban the switch statement at module level, which 
then seems to be an argument for not including the switch statement at all.  :(

I suppose the other possibility would be to require at compilation time 
that a case expression include only non-local variables.  That would mean 
that you couldn't use *any* variables in a case expression at module-level 
switch, but wording the error message for that to not be misleading might 
be tricky.

I suppose an error message for the above could simply point to the fact 
that 'words' is being rebound in the current scope, and thus can't be 
considered a constant.  This is only an error at the top-level if the 
switch appears in a loop, and the variable is rebound somewhere within that 
loop or is rebound more than once in the module as a whole (including 
'global' assignments in functions).



Re: [Python-Dev] Switch statement

2006-06-21 Thread Phillip J. Eby
At 06:41 PM 6/21/2006 +0200, Fredrik Lundh wrote:
Guido van Rossum wrote:

  (Note how I've switched to the switch-for-efficiency camp, since it
  seems better to have clear semantics and a clear reason for the syntax
  to be different from if/elif chains.)

if you're now in the efficiency camp, why not just solve this on the
code generator level ?  given

    var = some expression
    if var == constant:
        ...
    elif var == constant:
        ...

let the compiler use a dispatch table, if it can and wants to.

Two reasons:

1. Having special syntax is an assertion that 'var' will be usable as a 
dictionary key.  Without this assertion, the generated code would need to 
trap hashing failure.

2. Having special syntax is likewise an assertion that the 'constants' will 
remain constant, if they're symbolic constants like:

FOO = "foo"




Re: [Python-Dev] Switch statement

2006-06-21 Thread Phillip J. Eby
At 09:55 AM 6/21/2006 -0700, Guido van Rossum wrote:
BTW a switch in a class should be treated the same as a global switch.
But what about a switch in a class in a function?

Okay, now my head hurts.  :)

A switch in a class doesn't need to be treated the same as a global switch, 
because locals()!=globals() in that case.

I think the top-level is the only thing that really needs a special case 
vs. the general error if you use a local variable in the expression rule.

Actually, it might be simpler just to always reject local variables -- even 
at the top-level -- and be done with it.



Re: [Python-Dev] Switch statement

2006-06-21 Thread Phillip J. Eby
At 10:27 AM 6/21/2006 -0700, Guido van Rossum wrote:
On 6/21/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
  At 09:55 AM 6/21/2006 -0700, Guido van Rossum wrote:
  BTW a switch in a class should be treated the same as a global switch.
  But what about a switch in a class in a function?
 
  Okay, now my head hurts.  :)

Welcome to the club. There's a Monty Python sketch appropriate...

Aha!  So *that's* why Jim Fulton is always going W.  :)


  A switch in a class doesn't need to be treated the same as a global switch,
  because locals()!=globals() in that case.

But that's not the discerning rule in my mind; the rule is, how to
define at function definition time.

Wa!  (i.e., my head hurts again :)


  I think the top-level is the only thing that really needs a special case
  vs. the general error if you use a local variable in the expression rule.

To the contrary, at the top level my preferred semantics don't care
because they don't use a hash.

The strict rules about locals apply when it occurs inside a function,
since then we eval the case expressions at function definition time,
when the locals are undefined. This would normally be good enough, but
I worry (a bit) about this case:

    y = 12
    def foo(x, y):
        switch x:
            case y: print "something"

which to the untrained observer (I care about untrained readers much
more than about untrained writers!) looks like it would print
"something" if x equals y, the argument, while in fact it prints
"something" if x equals 12.

I was thinking this should be rejected due to a local being in the 'case' 
expression.


  Actually, it might be simpler just to always reject local variables -- even
  at the top-level -- and be done with it.

Can't because locals at the top-level are also globals.

But you could also just use literals, and the behavior would then be 
consistent.  But I'm neither so enamored of that solution nor so against 
if/elif behavior that I care to argue further.

One minor point, though: what happens if we generate an if/elif for the 
switch, and there's a repeated case value?  The advantage of still using 
the hash-based code at the top level is that you still get an error for 
duplicating keys.

Ugh.  It still seems like the simplest implementation is to say that the 
lookup table is built at first use and that the case expressions may not 
refer to variables that are known to be bound in the current scope, or 
rebound in the case of the top level.  So the 'case y' example would be a 
compile-time error, as would my silly words example.  But code that only 
used constants at the top level would work.



Re: [Python-Dev] Switch statement

2006-06-20 Thread Phillip J. Eby
At 12:26 PM 6/21/2006 +1200, Greg Ewing wrote:
Guido van Rossum wrote:

  But it would be easy enough to define a dict-filling function that
  updates only new values.

Or evaluate the case expressions in reverse order.

-1; stepping through the code in a debugger is going to be weird enough, 
what with the case statements being executed at function definition time, 
without the reverse order stuff.  I'd rather make it an error to list the 
same value more than once; we can just check if the key is present before 
defining that value.
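
The duplicate check itself is trivial (a sketch, assuming the jump table is 
an ordinary dict filled as the case values are evaluated; the exact error 
type is unspecified):

    def add_case(jump_table, value, target):
        # error out rather than silently letting a later case shadow an earlier one
        if value in jump_table:
            raise ValueError("duplicate case value: %r" % (value,))
        jump_table[value] = target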



Re: [Python-Dev] Switch statement

2006-06-20 Thread Phillip J. Eby
At 10:14 PM 6/20/2006 -0700, Guido van Rossum wrote:
Hm, so this still doesn't help if you write

   case S: ...

(where S is an immutable set or sequence) when you meant

   case in S: ...

so I'm not sure if it's worth the subtleties.

Well, EIBTI and all that:

 switch x:
 case == 1: foo(x)
 case in S: bar(x)

It even lines up nicely.  :)



Re: [Python-Dev] Switch statement

2006-06-19 Thread Phillip J. Eby
At 12:28 AM 6/19/2006 -0700, Josiah Carlson wrote:

Phillip J. Eby [EMAIL PROTECTED] wrote:
  At 06:56 PM 6/18/2006 -0700, Josiah Carlson wrote:
  The non-fast version couldn't actually work if it referenced any names,
  given current Python semantics for arbitrary name binding replacements.
 
  Actually, one could consider case expressions to be computed at function
  definition time, the way function defaults are.  That would solve the
  problem of symbolic constants, or indeed any sort of expressions.

Using if/elif/else optimization precludes any non-literal constants, so
we would necessarily have to go with a switch/case for this semantic. It
seems as though it would work well, and wouldn't be fraught with
any of the gotchas that catch users like:
 def fcn(..., dflt={}, dflt2=[]):
...


  An alternate possibility would be to have them computed at first use and
  cached thereafter.
 
  Either way would work, and both would allow multiple versions of the same
  switch statement to be spun off as closures without losing their 
 constant
  nature or expressiveness.  It's just a question of which one is easier to
  explain.  Python already has both types of one-time initialization:
  function defaults are computed at definition time, and modules are only
  loaded once, the first time you import them.

I would go with the former rather than the latter, if only for
flexibility.

There's no difference in flexibility.  In either case, the dictionary 
should be kept in a cell in the function's closure, not with the code 
object.  It would simply be a difference in *when* the values were 
computed, and by which code object.  To be done at function definition 
time, the enclosing code block would have to do it, which would be sort of 
weird from a compiler perspective, and there would be an additional problem 
with getting the line number tables correct.  But that's going to be tricky 
no matter which way it's done.



Re: [Python-Dev] Switch statement

2006-06-19 Thread Phillip J. Eby
At 12:10 AM 6/20/2006 +1000, Nick Coghlan wrote:
Caching on first use would be the easiest to explain I think. Something like:

    if jump_dict is NULL:
        jump_dict = {FIRST_CASE  : JUMP_TARGET_1,
                     SECOND_CASE : JUMP_TARGET_2,
                     THIRD_CASE  : JUMP_TARGET_3}
    jump_to_case(value, jump_dict)
    ELSE_CLAUSE
    jump_to_end()

Sadly, it's not *quite* that simple, due to the fact that co_lnotab requires 
line numbers to increase as bytecode offsets increase.  It would actually 
look more like:

  LOAD_DEREF jumpdictN
  JUMP_IF_FALSE  initfirstcase

do_switch:
  ...

initfirstcase:
  DUP_TOP
  # compute case value
  LOAD_CONST firstcaseoffset
  ROT_THREE
  STORE_SUBSCR
  JUMP_FORWARD initsecondcase

firstcaseoffset:
  first case goes here
  ...

initsecondcase:
  DUP_TOP
  # compute case value
  LOAD_CONST secondcaseoffset
  ROT_THREE
  STORE_SUBSCR
  JUMP_FORWARD initthirdcase

secondcaseoffset:
  second case goes here
  ...

...

initlastcase:
  DUP_TOP
  # compute case value
  LOAD_CONST lastcaseoffset
  ROT_THREE
  STORE_SUBSCR
  JUMP_ABSOLUTE doswitch

lastcaseoffset:
  last case goes here



The above shenanigans are necessary because the line numbers of the code 
for computing the case expressions have to be interleaved with the line 
numbers for the code for the case suites.

Of course, we could always change how co_lnotab works, which might be a 
good idea anyway.  As our compilation techniques become more sophisticated, 
it starts to get less and less likely that we will always want bytecode and 
source code to be in exactly the same sequence within a given code object.



Re: [Python-Dev] Switch statement

2006-06-19 Thread Phillip J. Eby
At 09:27 AM 6/19/2006 -0700, Raymond Hettinger wrote:
Guido van Rossum wrote:
Um, is this dogma? Wouldn't a switch statement also be a welcome
addition to the readability? I haven't had the time to follow this
thread (still catching up on my Google 50%) but I'm not sure I agree
with the idea that a switch should only exist for speedup.


A switch-statement offers only a modest readability improvement over 
if-elif chains.  If a proposal introduces a switch-statement but doesn't 
support fast dispatch, then it loses much of its appeal.

I would phrase that a lot differently.  A switch statement is *very* 
attractive for its readability.  The main problem is that if the most 
expressive way to do something in Python is also very slow -- i.e., people 
use it when they should be using a dictionary of functions -- then it adds 
to the "Python is slow" meme by attractive nuisance.  :)

Therefore, a switch statement should be made to perform at least as well as 
a dictionary of functions, or having it might actually be a bad thing.

In any case, we *can* make it perform as well as a dictionary of functions, 
and we can do it with or without another opcode.  What really needs to be 
decided on (i.e. by the BDFL) is the final syntax of the statement itself, 
and the semantics of evaluation time for the 'case' expressions, either at 
first execution of the switch statement, or at function definition time.

If explaining the evaluation time is too difficult, however, it might be an 
argument against the optimization.  But, I don't think that either 
first-use evaluation or definition-time evaluation are too hard to explain, 
since Python has both kinds of evaluation already.



Re: [Python-Dev] An obscene computed goto bytecode hack for switch :)

2006-06-18 Thread Phillip J. Eby
At 11:23 AM 6/18/2006 -0700, Guido van Rossum wrote:
I'm not in favor of abusing this to generate a computed goto, and I
don't see a need for that -- if we decide to add that (either as
syntax or as an automatic optimization) I see no problem adding a new
bytecode.

Me either -- I suggest simply adding a JUMP_TOP -- but I wanted to point 
out that people wouldn't need to add a new opcode in order to experiment 
with possible switch syntaxes.



Re: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch

2006-06-18 Thread Phillip J. Eby
At 11:18 AM 6/18/2006 -0700, Guido van Rossum wrote:
On 6/18/06, Nick Coghlan [EMAIL PROTECTED] wrote:
  The 'bug fix' solution would be:
 
 1. Change main.c and PySys_SetPath so that '' is NOT prepended to 
 sys.path
  when the -m switch is used
 2. Change runpy.run_module to add a __pkg_name__ attribute if the module
  being executed is inside a package
 3. Change import.c to check for __pkg_name__ if (and only if) 
 __name__ ==
  '__main__' and use __pkg_name__ if it is found.

That's pretty heavy-handed for a pretty esoteric use case. (Except #1,
which I think should be done regardless as otherwise we'd get a
messed-up sys.path.)

Since the -m module is being run as a script, shouldn't it put the module's 
directory as the first entry on sys.path?  I don't think we should change 
the fact that *some* directory is always inserted at the beginning of 
sys.path -- and all the precedents at the moment say script directory, if 
you consider -c and the interactive interpreter to be scripts in the 
current directory.  :)



Re: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch

2006-06-18 Thread Phillip J. Eby
At 02:03 PM 6/18/2006 -0700, Guido van Rossum wrote:
On 6/18/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
  At 11:18 AM 6/18/2006 -0700, Guido van Rossum wrote:
  On 6/18/06, Nick Coghlan [EMAIL PROTECTED] wrote:
The 'bug fix' solution would be:
   
   1. Change main.c and PySys_SetPath so that '' is NOT prepended to
   sys.path
when the -m switch is used
   2. Change runpy.run_module to add a __pkg_name__ attribute if 
 the module
being executed is inside a package
   3. Change import.c to check for __pkg_name__ if (and only if)
   __name__ ==
'__main__' and use __pkg_name__ if it is found.
  
  That's pretty heavy-handed for a pretty esoteric use case. (Except #1,
  which I think should be done regardless as otherwise we'd get a
  messed-up sys.path.)
 
  Since the -m module is being run as a script, shouldn't it put the module's
  directory as the first entry on sys.path?

Yes for a top-level module. No if it's executing a module inside a
package; it's really evil to have a package directory on sys.path.

  I don't think we should change
  the fact that *some* directory is always inserted at the beginning of
  sys.path -- and all the precedents at the moment say script directory, if
  you consider -c and the interactive interpreter to be scripts in the
  current directory.  :)

You have a point about sys.path[0] being special. It could be the
current directory instead of the package directory.

Mightn't that be a security risk, in that it introduces an import hole for 
secure scripts run with -m?  Not that I know of any such scripts existing 
as yet...

If it's not the package directory, perhaps it could be a copy of whatever 
sys.path entry the package was found under - that wouldn't do anything but 
make nearby imports faster.



Re: [Python-Dev] Switch statement

2006-06-18 Thread Phillip J. Eby
At 06:56 PM 6/18/2006 -0700, Josiah Carlson wrote:
The non-fast version couldn't actually work if it referenced any names,
given current Python semantics for arbitrary name binding replacements.

Actually, one could consider case expressions to be computed at function 
definition time, the way function defaults are.  That would solve the 
problem of symbolic constants, or indeed any sort of expressions.

An alternate possibility would be to have them computed at first use and 
cached thereafter.

Either way would work, and both would allow multiple versions of the same 
switch statement to be spun off as closures without losing their constant 
nature or expressiveness.  It's just a question of which one is easier to 
explain.  Python already has both types of one-time initialization: 
function defaults are computed at definition time, and modules are only 
loaded once, the first time you import them.
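
Both kinds of one-time initialization can be imitated in current Python, 
which gives a feel for the two options (a sketch; the names and values are 
made up):

    FOO, BAR = 1, 2

    # definition-time evaluation, like function defaults:
    def handle(x, _cases={FOO: "foo", BAR: "bar"}):    # dict built when 'def' runs
        return _cases.get(x, "default")

    # first-use evaluation, cached thereafter, like a module import:
    _cache = {}
    def handle2(x):
        if not _cache:
            _cache.update({FOO: "foo", BAR: "bar"})    # built on the first call only
        return _cache.get(x, "default")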



Re: [Python-Dev] An obscene computed goto bytecode hack for switch :)

2006-06-17 Thread Phillip J. Eby
At 01:18 PM 6/17/2006 +0200, Armin Rigo wrote:
Psyco cheats here and emulates a behavior where there is
always exactly one object instead (which can be a tuple), so if an
END_FINALLY sees values not put there in the official way it will just
crash.  PyPy works similarly but always expects three values.

(Hum, Psyco could easily be fixed to support your use case...  For PyPy
it would be harder without performance hit)

I suppose if the code knew it was running under PyPy or Psyco, the code 
could push three items or a tuple?  It's the knowing whether that's the 
case that would be difficult.  :)

I'm a bit surprised, though, since I thought PyPy was supposed to be an 
interpreter of CPython bytecode.  That is, that it runs unmodified Python 
bytecode.

Or are you guys just changing POP_BLOCK's semantics so it puts two extra 
None's on the stack when popping a SETUP_FINALLY block?  [Looks at the code]
Ah, yes.  But you're not emulating the control mechanism.  I see why you're 
saying it would be harder without a performance hit.  I could change the 
bytecode so it would work under PyPy as far as stack levels go, but I'd 
need to also be able to put wrapped unrollers on the stack (which seems 
impossible from within the interpreter), or else PyPy would have to check 
whether the unroller is an integer.

I guess it would probably make better sense to have a JUMP_TOP operation to 
implement the switch statement, and to use that under PyPy, keeping the 
hack only for implementing jump tables in older Python versions.

Anyway, if I do use this for older Python versions, would you accept a 
patch for Psyco to support it?  That would let us have JIT-compiled 
predicate dispatch for older Pythons, which sounds rather 
exciting.  :)  The current version of RuleDispatch is an interpreter that 
follows a tree data structure, but I am working on a new package, 
PEAK-Rules, that is planned to be able to translate dispatch trees directly 
into bytecode, thus removing one level of interpretation.

I do have some other questions, but I suppose this is getting off-topic for 
python-dev now, so I'll jump over to psyco-devel once I've gotten a bit 
further along.  Right now, I've only just got BytecodeAssembler up to 
building simple expression trees, and the computed goto demo.



Re: [Python-Dev] unicode imports

2006-06-16 Thread Phillip J. Eby
At 01:29 AM 6/17/2006 +1000, Nick Coghlan wrote:
Kristján V. Jónsson wrote:
  A cursory glance at import.c shows that the import mechanism is fairly
  complicated, and riddled with char *path thingies, and manual string
  arithmetic.  Do you have any suggestions on a clean way to unicodify the
  import mechanism?

Can you install a PEP 302 path hook and importer/loader that can handle path
entries that are Unicode strings? (I think this would end up being the
parallel implementation you were talking about, though)

If the code that traverses sys.path and sys.path_hooks is itself
unicode-unaware (I don't remember if it is or isn't), then you might be able
to trick it by poking a Unicode-savvy importer directly into the
path_importer_cache for affected Unicode paths.

Actually, you would want to put it in sys.path_hooks, and then instances 
would be placed in path_importer_cache automatically.  If you are adding it 
to the path_hooks after the fact, you should simply clear the 
path_importer_cache.  Simply poking stuff into the path_importer_cache is 
not a recommended approach.
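
A minimal sketch of that registration (only the PEP 302 hook plumbing is 
shown; the importer class and its name are hypothetical):

    import sys

    class UnicodePathHook:
        def __init__(self, path_entry):
            # path hooks must raise ImportError for entries they don't handle,
            # so the remaining hooks (and the default machinery) get a chance
            if not isinstance(path_entry, unicode):
                raise ImportError("not a unicode path entry")
            self.path_entry = path_entry
        # find_module()/load_module() per PEP 302 would go here

    sys.path_hooks.append(UnicodePathHook)
    sys.path_importer_cache.clear()   # drop cached results so the new hook is consulted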


One issue is that the package and file names still have to be valid Python
identifiers, which means ASCII. Unicode would be, at best, permitted only in
the path entries.

If I understand the problem correctly, the issue is that if you install 
Python itself to a Unicode directory, you'll be unable to import anything 
from the standard library.  This isn't about module names, it's about the 
places on the path where that stuff goes.

However, if the issue is that the program works, but it puts unicode 
entries on sys.path, I would suggest simply encoding them to strings using 
the platform-appropriate codec.
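
i.e. something along these lines (a Python 2 sketch):

    import sys

    _enc = sys.getfilesystemencoding() or "ascii"
    sys.path[:] = [p.encode(_enc) if isinstance(p, unicode) else p
                   for p in sys.path]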



[Python-Dev] An obscene computed goto bytecode hack for switch :)

2006-06-16 Thread Phillip J. Eby
For folks contemplating what opcodes might need to be added to implement a 
switch statement, it turns out that there is a clever way (i.e. a 
filthy hack) to implement computed jumps in Python bytecode, using 
WHY_CONTINUE and END_FINALLY.

I discovered this rather by accident, while working on my BytecodeAssembler 
package: I was adding validation code to minimize the likelihood of 
generating incorrect code for blocks and loops, and so I was reading 
ceval.c to make sure I knew how those bytecodes worked.

And at some point it dawned on me that an END_FINALLY opcode that sees 
WHY_CONTINUE on top of the stack *is actually a computed goto*!  It has to 
be inside a SETUP_LOOP/POP_BLOCK pair, but apart from that it's quite 
straightforward.

So, taking the following example code as a basis for the input:

    def foo(x):
        switch x:
            case 1: return 42
            case 2: return 'foo'
            else:   return 27

I created a proof-of-concept implementation that generated the following 
bytecode for the function:

  0    0 SETUP_LOOP              36 (to 39)
       3 LOAD_CONST               1 (...method get of dict...)
       6 LOAD_FAST                0 (x)
       9 CALL_FUNCTION            1

      12 JUMP_IF_FALSE           18 (to 33)
      15 LOAD_CONST               2 (...)
      18 END_FINALLY

      19 LOAD_CONST               3 (42)
      22 RETURN_VALUE
      23 JUMP_FORWARD            12 (to 38)

      26 LOAD_CONST               4 ('foo')
      29 RETURN_VALUE
      30 JUMP_FORWARD             5 (to 38)

 >>   33 POP_TOP
      34 LOAD_CONST               5 (27)
      37 RETURN_VALUE

 >>   38 POP_BLOCK

 >>   39 LOAD_CONST               0 (None)
      42 RETURN_VALUE

The code begins with a SETUP_LOOP, so that our pseudo-continues will 
work.  As a pleasant side-effect, any BREAK_LOOP operations in any of the 
suites will exit the entire switch block, jumping to offset 39 and the 
function exit.

At offset 3, I load the 'get' method of the switching dictionary as a 
constant -- this was simpler for my proof-of-concept, but a production 
version should probably load the dictionary and then get its 'get' method, 
because methods aren't marshallable and the above code therefore can't be 
saved in a .pyc file.  The remaining code up to offset 12 does a dictionary 
lookup, defaulting to None if the value of the switch expression isn't found.

At offset 12, I check if the jump target is false, and if so I assume it's 
None, and  jump ahead to the else suite.  If it's true, I load a constant 
value equal to the correct value of WHY_CONTINUE for the current Python 
version and fall through to the END_FINALLY.  So the END_FINALLY then pops 
WHY_CONTINUE and the jump target, jumping forward to the correct case branch.

The code that follows is then a series of case suites, each ending with a 
JUMP_FORWARD to the POP_BLOCK that ends the loop.  In this case, however, 
those jumps are never actually taken, but if execution fell out of any of 
the cases, they would proceed to the end this way.

Anyway, the above function actually *runs* in any version of Python back to 
2.3, as long as the LOAD_CONST at offset 15 uses the right value of 
WHY_CONTINUE for that Python version.  Older Python versions are of course 
not going to have a switch statement, but the reason I'm excited about 
this is that I've been wishing for some way to branch within a function in 
order to create fast jump tables for generic functions.  This is pretty 
much what the doctor ordered.

One thing I'm curious about, if there are any PyPy folks listening: will 
tricks like this drive PyPy or Psyco insane?  :)  It's more than idle 
curiosity, as one of my goals for my next generic function system is that 
it should generate bytecode that's usable by PyPy and Psyco for 
optimization or translation purposes.



Re: [Python-Dev] Switch statement

2006-06-15 Thread Phillip J. Eby
At 11:45 PM 6/15/2006 +0100, Nicko van Someren wrote:
On 15 Jun 2006, at 11:37, Nick Coghlan wrote:
  ...
  The lack of a switch statement doesn't really bother me personally,
  since I
  tend to just write my state machine type code so that it works off a
  dictionary that I define elsewhere,

Not trying to push more LISP into python or anything, but of course
we could converge your method and the switch statement elegantly if
only we could put whole suites into lamdbas rather than just single
expressions :-)

As has already been pointed out, this

1) adds function call overhead,
2) doesn't allow changes to variables in the containing function, and
3) even if we had a rebinding operator for free variables, we would have 
the overhead of creating closures.

The lambda syntax does nothing to fix any of these problems, and you can 
already use a mapping of closures if you are so inclined.  However, you'll 
probably find that the cost of creating the dictionary of closures exceeds 
the cost of a naive sequential search using if/elif.



[Python-Dev] Comparing closures and arguments (was Re: Scoping vs augmented assignment vs sets (Re: 'fast locals' in Python 2.5)

2006-06-14 Thread Phillip J. Eby
At 11:26 AM 6/14/2006 -0700, Josiah Carlson wrote:
Ok, so here's a bit of a benchmark for you.

    def helper(x,y):
        return y

    def fcn1(x):
        _helper = helper
        y = x+1
        for i in xrange(x):
            y = _helper(x,y)

    def fcn2(x):
        y = x+1
        def _helper(x):
            return y
        for i in xrange(x):
            y = _helper(x)


Can you guess which one is faster?  I guessed, but I was wrong ;).

>>> x = 1000000
>>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
(0.5326484985352, 0.5923515014648)

It turns out that passing two arguments to a helper function is actually
faster than passing one argument and pulling a second out of an
enclosing scope.

That claim isn't necessarily supported by your benchmark, which includes 
the time to *define* the nested function 10 times, but calls it only 45 
times!  Try comparing fcn1(1000) and fcn2(1000) - I suspect the results 
will be somewhat closer, but probably still in favor of fcn1.

However, I suspect that the remaining difference in the results would be 
due to the fact that the interpreter loop has a fast path function call 
implementation that doesn't work with closures IIRC.  Perhaps someone who's 
curious might try adjusting the fast path to support closures, and see if 
it can be made to speed them up without slowing down other fast path calls.
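
For example, timing just the calls (a sketch using timeit; it assumes fcn1 
and fcn2 from the quoted message are defined in the same __main__ module):

    import timeit

    t1 = timeit.Timer("fcn1(1000)", "from __main__ import fcn1")
    t2 = timeit.Timer("fcn2(1000)", "from __main__ import fcn2")
    print min(t1.repeat(3, 1000)), min(t2.repeat(3, 1000))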



Re: [Python-Dev] Comparing closures and arguments (was Re: Scoping vs augmented assignment vs sets (Re: 'fast locals' in Python 2.5)

2006-06-14 Thread Phillip J. Eby
At 01:00 PM 6/14/2006 -0700, Josiah Carlson wrote:
  That claim isn't necessarily supported by your benchmark, which includes
  the time to *define* the nested function 10 times, but calls it only 45
  times!  Try comparing fcn1(1000) and fcn2(1000) - I suspect the results
  will be somewhat closer, but probably still in favor of fcn1.

Please re-read the code and test as I have specified.  You seem to have
misunderstood something in there, as in the example I provide, _helper
is called 1,000,000 times (and is defined only once for each call of
fcn2) during each fcn1 or fcn2 call, and _helper is only defined once
inside of fcn2.

Oops.  I misread [fcn2(x) for i in xrange(10)] as [fcn2(x) for x in 
xrange(10)].  The latter is so much more common of a pattern that I guess 
I didn't even *look* at what the loop variable was.  Weird.  I guess the 
mind is a terrible thing.  :)



Re: [Python-Dev] Switch statement

2006-06-12 Thread Phillip J. Eby
At 12:44 AM 6/12/2006 +0200, Fredrik Lundh wrote:
the compiler can of course figure that out also for if/elif/else statements,
by inspecting the AST.  the only advantage for switch/case is
user syntax...

Not quite true - you'd have to restrict the switch expression in some way, 
so you don't have:

if x.y == 1:
   ...
elif x.y == 2:
   ...

where the compiler doesn't know if getattr(x,'y') is really supposed to 
happen more than once.  But I suppose you could class that as syntax.



[Python-Dev] Please stop changing wsgiref on the trunk

2006-06-12 Thread Phillip J. Eby
As requested in PEP 360, please inform me of any issues you find so they 
can be corrected in the standalone package and merged back to the trunk.

I just wasted time cutting an 0.1.1 release of the standalone wsgiref 
package only to find that it doesn't correspond to any particular point in 
the trunk, because people made changes without contacting me or the 
Web-SIG.  I then spent a bunch more time figuring out how to get the 
changes out and merge them back in to the standalone version such that the 
Python trunk has a specific version number of wsgiref.  Please don't do 
this again.

I appreciate the help finding bugs, but I'll probably still be maintaining 
the standalone version of wsgiref for a few years yet.



Re: [Python-Dev] UUID module

2006-06-12 Thread Phillip J. Eby
At 07:24 PM 6/11/2006 -0500, Ka-Ping Yee wrote:
Thomas Heller wrote:
  I don't know if this is the uuidgen you're talking about, but
  on linux there is libuuid:

Thanks!

Okay, that's in there now.  Have a look at http://zesty.ca/python/uuid.py .

Phillip J. Eby wrote:
  By the way, I'd love to see a uuid.uuid() constructor that simply calls the
  platform-specific default UUID constructor (CoCreateGuid or uuidgen(2)),

I've added code to make uuid1() use uuid_generate_time() if available
and uuid4() use uuid_generate_random() if available.  These functions
are provided on Mac OS X (in libc) and on Linux (in libuuid).  Does
that work for you?

Sure - but actually my main point was to have a uuid() call you could use 
to just get whatever the platform's preferred form of GUID is, without 
having to pick what *type* you want.

The idea being that there should be some call you can make that will always 
give you something reasonably unique, without being overspecified as to the 
type of uuid.  That way, people can be told to use uuid.uuid() to get 
unique IDs for use in their programs, without having to get into what types 
of UUIDs do what.

Perhaps that isn't feasible, or is a bad idea for some other reason, but my 
main point was to have a call that means "get me a good unique ID".  :)
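
In terms of the module under discussion, that convenience call would amount 
to little more than the following (hypothetical; there is no uuid.uuid() 
today, and the choice of default type is exactly the open question):

    import uuid

    def make_unique_id():
        # "get me a good unique ID" without choosing a UUID version explicitly;
        # uuid4 (random) is one plausible platform-independent default
        return uuid.uuid4()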



Re: [Python-Dev] FYI: wsgiref is now checked in

2006-06-12 Thread Phillip J. Eby
At 03:22 PM 6/10/2006 -0400, Tim Peters wrote:
This may be because compare_generic_iter() uses `assert` statements,
and those vanish under -O.  If so, a test shouldn't normally use
`assert`.  On rare occasions it's appropriate, like test_struct's:

     if x < 0:
         expected += 1L << self.bitsize
         assert expected > 0

That isn't testing any of struct's functionality, it's documenting and
verifying a fundamental _belief_ of the test author's:  the test
itself is buggy if that assert ever triggers.  Or, IOW, it's being
used for what an assert statement should be used for :-)

Thanks for the bug report; I've fixed these problems in the standalone 
version (0.1.2 on the cheeseshop) and in the Python 2.5 trunk.

Web-SIG folks take note: wsgiref.validate is based on paste.lint, so 
paste.lint has the same problem.  That is, errors won't be raised if the 
code is run with -O.

As a side effect of fixing the problems,  I found that some of the 
wsgiref.validate (aka paste.lint) asserts have improperly computed 
messages.  Instead of getting an explanation of the problem, you'll instead 
get a different error at the assert.  I fixed these in wsgiref.validate, 
but the underlying problems presumably still exist in paste.lint.
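
The shape of the fix, roughly (a sketch of the idea rather than the exact
checked-in code): the validation checks now go through a helper that raises
explicitly, so -O can't optimize them away:

    def assert_(cond, *args):
        # Unlike a bare 'assert' statement, this check survives python -O.
        if not cond:
            raise AssertionError(*args)

    # instead of:  assert value, "explanation of the problem"
    # write:       assert_(value, "explanation of the problem")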

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Please stop changing wsgiref on the trunk

2006-06-12 Thread Phillip J. Eby
At 09:04 AM 6/12/2006 -0700, Guido van Rossum wrote:
IOW I think PEP 360 is an unfortunate historic accident, and we would
be better off without it. I propose that we don't add to it going
forward, and that we try to get rid of it as we can.

4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want 
to get rid of it, *now* would be the time.

There is an approach that would address this issue and others relating to 
external packages, but it would require changes to how Python is 
built.  That is, I would propose a directory to contain 
externally-maintained packages, each with their own setup.py.  These 
packages could be built and installed with Python, but would then also be 
separately-distributable.

Alternately, such packages could be done using svn:externals tied to 
specific release versions of the external packages.

This idea would address the needs of external maintainers (having a single 
release history) while still allowing Python developers to modify the code 
(if the external package is in Python's SVN repository).

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] External Package Maintenance (was Re: Please stop changing wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
At 09:43 AM 6/12/2006 -0700, Guido van Rossum wrote:
On 6/12/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
At 09:04 AM 6/12/2006 -0700, Guido van Rossum wrote:
 IOW I think PEP 360 is an unfortunate historic accident, and we would
 be better off without it. I propose that we don't add to it going
 forward, and that we try to get rid of it as we can.

4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want
to get rid of it, *now* would be the time.

I'm all for it.

While I am an enthusiastic supporter of several of those additions, I
am *not* in favor of the special status granted to software
contributed by certain developers, since it is a burden for all other
developers.

While I won't claim to speak for the other authors, I would guess that they 
have the same reason for wanting that status as I do: to be able to 
maintain an external release for their existing users with older versions 
of Python, until Python-in-the-field catches up with Python-in-development.

Right now, the effective industry-deployed version of Python is 2.3 - maybe 
2.2 if you have a lot of infrastructure in Python, and 2.1 if you support Jython.


I also suspect that the external linking will continue to cause a
burden for Python developers -- upgrading to a newer version of the
external package would require making sure that no changes made by
Python developers in the previous release bundle are lost in the new
release bundle.

I'd be willing to live with e.g. moving wsgiref to an Externals/wsgiref 
subdirectory of the main Python tree, *without* svn:externals, and simply 
bumping its version number in that directory to issue snapshots.

This would be no different from the current situation (in terms of svn 
usage for core developers), except that I could go to one directory to get 
an svn log and review what other people did to the code and docs.  Right 
now, I've got to track at least three different directories to know what 
somebody did to wsgiref in the core.


I personally think that, going forward, external maintainers should
not be granted privileges such as are being granted by PEP 360, and an
inclusion of a package in the Python tree should be considered a
fork for all practical purposes. If an external developer is not
okay with such an arrangement, they shouldn't contribute.

This is going to make it tougher to get good contributions, where "good" 
means "has existing users and a maintainer committed to supporting them."


Perhaps issues like these should motivate us to consider a different
source control tool. There's a new crop of tools out that could solve
this by having multiple repositories that can be sync'ed with each
other. This sounds like an important move towards world peace!

First we'd need to make Python's build process support building external 
libraries in the first place.  If we did that, we could solve the problem 
in SVN right now, as long as maintainers were willing to move their 
project's main repository to Python's repository.

If I understand correctly, the main thing it would require is that Python's 
setup.py invoke all the Externals/*/setup.py files.
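
Purely as a sketch of the idea (the Externals/ layout doesn't exist yet, and
the single 'build' step shown is just an example):

    import glob, os, subprocess, sys

    # Drive each externally-maintained package's own setup.py as part of
    # building Python itself.
    for setup_py in glob.glob(os.path.join('Externals', '*', 'setup.py')):
        subprocess.check_call([sys.executable, 'setup.py', 'build'],
                              cwd=os.path.dirname(setup_py))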

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance (was Re: Please stop changing wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
At 10:42 AM 6/12/2006 -0700, Guido van Rossum wrote:
Sure, but this doesn't require the draconian I-and-I-only own the
code approach that you have.

If there were only one version and directory tree to maintain to do both 
the Python trunk and the external version, I wouldn't mind other people 
making changes.   It's the synchronization that's a PITA, especially 
because of the directory layout.

If we had Externals/ I would just issue snapshots from there.


And is that such a big deal? Now that wsgiref is being distributed
with Python 2.5, it shouldn't evolve at a much faster pace than Python
2.5, otherwise it would defeat the purpose of having it in 2.5. (And
isn't it just a reference implementation? Why would it evolve at all?)

This is backwards: I'm not the one who evolved it, other Python devs 
did!  :)  I want Python 2.5 to distribute some version of wsgiref that is 
precisely the same as *some* public wsgiref release, so that PEP 360 will 
have accurate info and so that people who want a particular wsgiref release 
can specify a sane version number, to avoid the kind of skew we used to 
have with micro-releases (e.g. 2.2.2).


If I understand correctly, the main thing it would require is that Python's
setup.py invoke all the Externals/*/setup.py files.

I guess that's one way of doing it. But perhaps Python's setup.py
should not bother at all, and this is up to the users.

However, if Python's setup.py did this, then external developers would get 
the benefit (and discipline) of the buildbots and testing.  That seems like 
a good thing to me.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping externally maintained packages (Was:Please stop changing wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
At 03:29 PM 6/12/2006 -0400, Tim Peters wrote:
That's all ordinary everyday maintenance, and, e.g., there is no
mechanism to exempt anything in a checkout tree from reindent.py or
PyChecker complaints.

In addition, not shown above is that I changed test_wsgiref.py to stop
a test failure under -O.  Given that we're close to the next Python
release, and test_wsgiref was the only -O test failure, I wasn't going
to let that stand.  I did wait ~30 hours between emailing about the
problem and fixing it, but I like to whittle down my endless todo list
too <0.4 wink>.

Your fix masked one of the *actual* problems, which was that 
wsgiref.validate (contributed by Ian Bicking) was also using asserts to 
check for validation failures.  This required a more extensive fix.  (See 
my reply to your problem report.)

Your post about the error was on Friday afternoon; I had a corrected 
version on Sunday evening, but I couldn't check it in because nobody told 
me about any of the ordinary everyday maintenance they were doing, and I 
had to figure out how to merge the now-divergent trees.

The whitespace changes I expected, since you previously told me about 
reindent.py.  The other changes I did not expect, since my original message 
about the checkin requested that people at least keep me informed of 
changes (as does PEP 360), so I thought that people would abide by that or 
at least notify me if they found it necessary to make a change to e.g. fix 
a test.  Your email about the test problem didn't say you were making any 
changes.

Regardless of everyday maintenance, my point was that I understood one 
procedure to be in effect, per PEP 360.  If nobody's expected to actually 
pay any attention to that procedure, there's no point in having the 
PEP.  Or if everyday maintenance is expected to be exempt, the PEP should 
reflect that.  Assuming that everybody knows which rules do and don't count 
is a non-starter on a project the size of Python.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance (was Re: Please stop changing wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
[posting back to python-dev in case others also perceived my original 
message as impolite]

At 01:25 PM 6/12/2006 -0700, Guido van Rossum wrote:
Oh, and the tone of your email was *not* polite. Messages starting
with "I wasted an hour of my time" are not polite pretty much by
definition.

Actually, I started out with "please" -- twice, after having previously 
asked "please" in advance.  I've also seen lots of messages on Python-Dev 
where Tim Peters wrote about having wasted time due to other folks not 
following established procedures, and I tried to emulate his tone.  I guess 
I didn't do a very good job, but not everybody is as funny as Tim is.  :)

Usually he manages to make it seem as though he would really be happy to 
give up his nights and weekends but that sadly, he just doesn't have any 
more time right at this particular moment.  A sort of it's not you, it's 
me thing.  I guess I just left out that particular bit of 
sleight-of-mouth.  :)

Anyway, will anyone who was offended by the original message please pretend 
that it was delightfully witty and written by Tim instead?  Thanks.  ;)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance

2006-06-12 Thread Phillip J. Eby
At 01:01 PM 6/12/2006 -0700, Guido van Rossum wrote:
I think I pretty much did already -- going forward, I'd like to see
that contributing something to the stdlib means that from then on
maintenance is done using the same policies and guidelines as the rest
of the stdlib (which are pretty conservative as far as new features
go), and not subject to the original contributor's veto or prior
permission. Rolling back changes without discussion is out of the
question.

I think there's some kind of misunderstanding here.  I didn't ask for veto 
or prior permission.  I just want to keep the external release in sync.

I also didn't roll anything back, at least not intentionally.  I was trying 
to merge the Python changes into the external release, and vice 
versa.  Two-way syncing is difficult and error-prone, especially when 
you're thinking you only need to do a one-way sync!  So if I managed to 
roll something back *un*intentionally in the process last night, I would 
hope someone would let me know.

That was my sole complaint: I requested a particular change process to 
ensure that syncing would be one-way, from wsgiref to Python.  If it has to 
be the other way, from Python to wsgiref, so be it.  However, my impression 
from PEP 360 was that the way I was asking for was the One Obvious Way of 
doing it.

This is not now, nor was it ever a control issue; I'd appreciate it if 
you'd stop implying that control has anything to do with it.  At most, it's 
a widespread ignorance and/or misunderstanding as to the optimum way of 
handling stdlib packages with external distribution.

It sounds like Barry has a potentially workable way of managing it that 
might reasonably be blessed as the One Obvious Way, and I'm certainly 
willing to try it.  I'd still rather have a Packages/ directory, but 
beggars can't be choosers.  However, if this is to be the One Obvious Way, 
it should be documented in a PEP as part of "how packages get in the 
stdlib and how they're maintained."

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance

2006-06-12 Thread Phillip J. Eby
At 03:42 PM 6/12/2006 -0400, Edward C. Jones wrote:
Guido van Rossum wrote:
   developers contributing code without wanting
   to give up control are the problem.

That hits the nail on the head.

Actually it's both irrelevant and insulting.

I just want changes made by the Python core developers to be reflected in 
the external releases.  I'd be more than happy to move the external release 
to the Python SVN server if that would make it happen.

If there was only one release point for the package, I would've had no 
problem with any of the changes made by Tim or AMK or anybody else.  The 
control argument is a total red herring.  If I had an issue with the 
actual changes themselves, I'd address it on checkins or dev, as is normal!

The nail here is simply that maintaining two versions of a package is 
awkward if changes are being made in both places.  I'd love to have only 
one place in which wsgiref is maintained, but Python's current directory 
layout doesn't allow me to put all of wsgiref in one place.

And if we hit *that* nail on the head (instead of hitting the external 
authors on theirs), it is a win for all the external contributors.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping externally maintained packages (Was:Please stop changing wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
At 12:28 AM 6/13/2006 +0200, Martin v. Löwis wrote:
If you remember that this is the procedure: sure. However, if the
maintainer of a package thinks (and says) "somebody edited my code,
this should not happen again", then I really think the code is better
not part of the Python distribution.

The "this should not happen again" in this case was the *merge problem*, 
not the *editing*.  There is a significant difference between the two.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance

2006-06-12 Thread Phillip J. Eby
At 12:56 AM 6/13/2006 +0200, Martin v. Löwis wrote:
Fredrik Lundh wrote:
  I just want changes made by the Python core developers to be reflected in
  the external releases.
 
  and presumably, the reason for that isn't that you care about your ego,
  but that you care about your users.

For that, yes. However, the reason to desire that no changes are made
to Python's wsgiref is just that he wants to reduce the amount of work
he has to do to keep the sources synchronized - which reduces his amount
of work, but unfortunately increases the amount of work to be done for
the other python-dev committers.

I see *now* why that would appear to be the case.  However, my previous 
assumption was that if somebody found a bug, they'd tell me about it and 
I'd do the work of fixing it, updating the tests, etc.  In other words, I 
was willing to do *all* the work, for changes that made sense to wsgiref.

What I didn't really get until now is that people might be making 
Python-wide changes that don't have anything to do with wsgiref per se, and 
that is the place where the increased work comes in.

This should definitely be explained to authors who are donating libraries 
to the stdlib, because from my perspective it seemed to me that I was 
graciously volunteering to be responsible for *all* the work related to 
wsgiref.

(And yes, I understand now why it doesn't actually work that way.)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance (was Re: Please stop changing wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
At 12:09 AM 6/13/2006 +0100, Steve Holden wrote:
Phillip J. Eby wrote:
  Anyway, will anyone who was offended by the original message please 
 pretend
  that it was delightfully witty and written by Tim instead?  Thanks.  ;)
 
I wonder what the hell's up with Tim. He's been really crabby lately ...

It's probably all that time he's been spending tracking down the wsgiref 
test failures.  ;-)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping externally maintained packages (Was:Please stop changing wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
At 01:36 AM 6/13/2006 +0200, Martin v. Löwis wrote:
 From that, I can only conclude that you requested that people should
not make changes again without contacting you or the Web-SIG.

Indeed I was -- back when I was under the mistaken impression that PEP 360 
actually meant what it appeared to say about other packages added in 
2.5.  In *that* universe, what I said made perfect sense.  :-)

And if we *were* in an alternate hypothetical universe where, say, instead 
of PEP 360 not really being followed, it was the unit testing policy, then 
we would all be yelling at Tim for being rude about people breaking the 
tests, when he should just expect that people will break tests, because 
after all, they have to change the *code*, and it's not reasonable to 
expect them to change the *tests* too.

So, to summarize, it's all actually Tim's fault, but only in a parallel 
universe where nobody believes in unit testing.  ;-)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance (was Re: Please stopchanging wsgiref on the trunk)

2006-06-12 Thread Phillip J. Eby
At 02:00 AM 6/13/2006 +0200, Giovanni Bajo wrote:
IMO, the better way is exactly this you depicted: move the official 
development
tree into this Externals/ dir *within* Python's repository. Off that, you can
have your own branch for experimental work, from which extract your own
releases, and merge changes back and forth much more simply (since if they
reside on the same repository, you can use svnmerge-like features to find out
modifications and whatnot).

Yes, that's certainly what seems ideal for me as an external developer.  I 
don't know if it addresses the core developers' concerns, though, since it 
would mean having Python code that lives outside of the Lib/ subtree, tests 
that live under other places than Lib/test, and documentation source that 
lives outside of Doc/.  But if those aren't showstoppers then it seems like 
a winner to do it for 2.6.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance

2006-06-12 Thread Phillip J. Eby
At 01:49 AM 6/13/2006 +0200, Martin v. Löwis wrote:
Phillip J. Eby wrote:
  This should definitely be explained to authors who are donating
  libraries to the stdlib, because from my perspective it seemed to me
  that I was graciously volunteering to be responsible for *all* the work
  related to wsgiref.

It's not only about python-wide changes. It is also for regular error
corrections: whenever I commit a bug fix that somebody contributed, I
now have to understand the code, and the bug, and the fix.

Again, my point was that I was volunteering to do all of those things for 
wsgiref.


Under PEP 360, I have to do all of these, *plus* checking PEP 360 to determine
whether I will step on somebody's toes. I also have to consult PEP 291,
of course, to find out whether the code has additional compatibility
requirements.

In the wsgiref case, you mustn't forget PEP 333 either, actually.  :)


So ideally, I would like to see the external maintainers state "we can
deal with occasional breakage arising from somebody forgetting the
procedures". This would scale, as it would put the responsibility
for the code on the shoulders of the maintainer. It appears that Thomas
Heller says this would work for him, and it worked for bsddb and
PyXML.

I've also already said I can use Barry's approach, making the Python SVN 
repository version the primary home of wsgiref and taking snapshots to make 
releases from.  I didn't realize that cross-directory linkages of that sort 
were allowed, or I'd have done it that way in the first place.  Certainly 
it would've been a more effective use of my time to do so.  :)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] UUID module

2006-06-10 Thread Phillip J. Eby
At 11:16 AM 6/10/2006 -0500, Ka-Ping Yee wrote:
On Sat, 10 Jun 2006, Thomas Heller wrote:
  [some nice ctypes code]

Done.  Works like a charm.  Thanks for providing the code!

On Sat, 10 Jun 2006, Phillip J. Eby wrote:
  Also, for Python 2.5, these imports could probably be replaced with a
  ctypes call, though I'm not experienced enough w/ctypes to figure out what
  the call should be.

Happily, we have *the* ctypes guru here, and he's solved the problem
for Windows at least.

  Similarly, for the _uuidgen module, you've not included the C source for
  that module or the setup.py incantations to build it.

Yes, the idea was that even though _uuidgen isn't included with core
Python, users would magically benefit if they installed it (or if they
happen to be using Python in a distribution that includes it);

_uuidgen is actually peak.util._uuidgen; as far as I know, that's the only 
place you can get it.


  it's
the same idea with the stuff that refers to Win32 extensions.  Is the
presence of _uuidgen sufficiently rare that i should leave out
uuidgen_getnode() for now, then?

Either that, or we could add the code in to build it.  PEAK's setup.py does 
some relatively simple platform checks to determine whether you're on a BSD 
that has it.

The other alternative is to ask the guru nicely if he'll provide another 
ctypes snippet to call the uuidgen(2) system call if present.  :)

By the way, I'd love to see a uuid.uuid() constructor that simply calls the 
platform-specific default UUID constructor (CoCreateGuid or uuidgen(2)), if 
available, before falling back to one of the Python implementations.  Most 
of my UUID-using application code doesn't really care what type of UUID it 
gets, and if the platform has an efficient mechanism, it'd be convenient to 
use it.


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Web-SIG] wsgiref doc draft; reviews/patches wanted

2006-06-09 Thread Phillip J. Eby
At 02:56 PM 6/7/2006 -0400, Joe Gregorio wrote:
Phillip,

1. It's not really clear from the abstract 'what' this library
provides. You might want
to consider moving the text from 1.1 up to the same level as the abstract.

Done.


2.  In section 1.1 you might want to consider dropping the sentence:
Only authors
 of web servers and programming frameworks need to know every detail...
 It doesn't offer any concrete information and just indirectly
  makes WSGI look complicated.

That bit was taken from AMK's draft; I'm going to trust his intuition here 
as to why he thought it was desirable to say this.


3. From the abstract:  Having a standard interface makes it easy to use a
   WSGI-supporting application with a number of different web servers.

  is a little awkward, how about:

 Having a standard interface makes it easy to use an application
 that supports WSGI with a number of different web servers.

Done.


4. I believe the order of submodules presented is important and think that
they should be listed with 'handlers' and 'simple_server' first:

I agree that the order is important, but I intentionally chose the current 
order to be a gentle slope of complexity, from the near-trivial functions 
on up to the server/handler framework last.  I'm not sure what ordering 
principle you're suggesting to use instead.


5. You might consider moving 'headers' into 'util'. Of course, you could
 go all the way in simplifying and move 'validate' in there too.

Not and maintain backward compatibility.  There is, after all, code in the 
field using these modules for about a year and a half now.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] FYI: wsgiref is now checked in

2006-06-09 Thread Phillip J. Eby
The checked-in code substantially matches the public 0.1 release of 
wsgiref.  There are some minor changes to the docs and the test module, but 
these have also been made in the SVN trunk of wsgiref's home repository, so 
that future releases don't diverge too much.  The plan is to continue to 
maintain the standalone version and update the stdlib from it as 
appropriate, although I don't know of anything that would be changing any 
time soon.

The checkin includes a wsgiref.egg-info file, so if you have a program that 
uses setuptools to depend on wsgiref being installed, setuptools under 
Python 2.5 should detect that the stdlib already includes wsgiref.

Thanks for all the feedback and assistance and code contributions.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] wsgiref doc draft; reviews/patches wanted

2006-06-07 Thread Phillip J. Eby
At 08:38 AM 6/7/2006 -0400, A.M. Kuchling wrote:
On Tue, Jun 06, 2006 at 06:49:45PM -0400, Phillip J. Eby wrote:
  Source: http://svn.eby-sarna.com/svnroot/wsgiref/docs

Minor correction: svn://svn.eby-sarna.com/svnroot/wsgiref/docs
(at least, http didn't work for me).

Oops...  I meant:

http://svn.eby-sarna.com/wsgiref/docs/

Kind of garbled up the svn: and http: URLs; the HTTP one is for ViewCVS.


The docs look good, and I think they'd be ready to go in.

--amk

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)

2006-06-06 Thread Phillip J. Eby
At 10:13 AM 6/6/2006 -0400, Jim Jewett wrote:
On 6/5/06, Phillip J. Eby [EMAIL PROTECTED] wrote:

I notice you've completely avoided the question of whether this should be
being done at all.

As far as I can tell, this PEP hasn't actually been discussed.  Please
don't waste time changing modules for which there is no consensus that this
*should* be done.

Under a specific PEP number, no.  The concept of adding logging to the
stdlib, yes, periodically.  The typical outcome is that some people
say "why bother, besides it would slow things down" and others say
"yes, please".

All the conversations I was able to find were limited to the topic of 
changing modules that *do logging*, not modules that have optional 
debugging output, nor adding debugging output to modules that do not have 
it now.  I'm +0 at best on changing modules that do logging now (not debug 
output or warnings, *logging*).  -1 on everything else.


You may be reading too much ambition into the proposal.

Huh?  The packages are all listed right there in the PEP.


For pkgutil in particular, the change is that instead of writing to
stderr (which can scroll off and get lost), it will write to the
errorlog.  In a truly default setup, that still ends up writing to
stderr.

If anything, that pkgutil code should be replaced with a call to 
warnings.warn() instead.
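
That is, something along these lines (a sketch; the helper name and message
are made up, not pkgutil's actual code):

    import warnings

    def _complain(filename, exc):
        # Report a bad .pkg entry through the warnings machinery instead of
        # writing directly to sys.stderr, so the policy lives in one place.
        warnings.warn("error reading %s: %s" % (filename, exc), ImportWarning)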


The difference is that if a sysadmin does want to track problems, the
change can now be made in one single place.

Um, what?  You mean, one place per Python application instance, I 
presume.  Assuming that the application allows you to configure the logging 
system, and doesn't come preconfigured to do something else.


   Today, turning on that
instrumentation would require separate changes to every relevant
module, and requires you to already know what/where they are.

And thus ensures that it won't be turned on by accident.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] wsgiref documentation

2006-06-05 Thread Phillip J. Eby
At 08:08 AM 6/5/2006 -0400, A.M. Kuchling wrote:
I had the start of an outline in sandbox/wsgiref-docs, but am not
working on them at the moment because no one is willing to say if the
list of documented classes is complete (or includes too much).

Huh?  This is the first I've heard of it.

I was already working on some documentation in my local tree, though, so 
I've now started merging your work into it and checked in a snapshot at:

 http://svn.eby-sarna.com/wsgiref/docs/

I'll merge more into it later.  If anyone has any part of the remaining 
stuff that they'd like to volunteer to document, please let me know so I 
don't duplicate your work.  Thanks.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)

2006-06-04 Thread Phillip J. Eby
At 09:27 PM 6/4/2006 -0400, Jim Jewett wrote:
Jackilyn is adding logging to several stdlib modules for the Google
Summer of Code (PEP 337), and asked me to review her first few
changes.

That PEP doesn't appear to have been approved, and I don't recall any 
discussion on Python-Dev.  I also couldn't find any in the archives, except 
for some brief discussion regarding a *small fraction* of the huge list of 
modules in PEP 337.

I personally don't see the value in adding this to anything but modules 
that already do some kind of logging.  And even some of the modules listed 
in the PEP that do some kind of output, I don't really see what the use 
case for using the logging module is.  (Why does timeit need a logger, for 
example?)


There were a few comments that I felt I should double-check with
Python-dev first, in case my own intuition is wrong.

For reference, she is adding the following prologue to several modules:

 import logging
 _log = logging.getLogger('py.NAME')

where NAME is the module name

If this *has* to be added to the modules that don't currently do any 
logging, can we please delay the import until it's actually needed?  i.e., 
until after some logging option is enabled?  I don't really like the 
logging module myself and would rather it were not imported as a side 
effect of merely using shlex or pkgutil!
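
I.e., something like this sketch, so that merely importing shlex or pkgutil
doesn't drag the logging package in with it (the module name and helper are
illustrative only):

    _log = None

    def _get_log():
        # Import logging only when a message is actually about to be emitted.
        global _log
        if _log is None:
            import logging
            _log = logging.getLogger('py.shlex')
        return _log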


(5)  Should she clean up other issues when touching a module?

In general, stdlib code isn't updated just for style reasons,

Which is a good enough reason, IMO, to vote -1 on the PEP if it's not pared 
back to reflect *only* modules with a valid use case for logging.

I think it would be a good idea to revisit the module list.  I can see a 
reasonable case for the BaseHTTP stuff and asyncore needing a logging 
framework, if you plan to make them part of some larger framework -- the 
configurability would be a plus, even if I personally don't like the way 
the logging module does configuration.  But most of the other modules, I 
just don't see why something more complex than prints are desirable.  As of 
Python 2.5, if you want stdout or stderr temporarily redirected, it's easy 
enough to wrap your calls in a with: block.


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] feature request: inspect.isgenerator

2006-06-01 Thread Phillip J. Eby
At 09:26 AM 6/1/2006 +, Michele Simionato wrote:
Terry Reedy tjreedy at udel.edu writes:
  To me, another obvious way is isinstance(object, gentype) where
  gentype = type(i for i in []) # for instance
  which should also be in types module.

No, that check would match generator objects, not generators tout court.
On a related notes, inspect.isfunction gives True on a generator, such
as

def g(): yield None

This could confuse people, however I am inclined to leave things as they are.
Any thoughts?

Yes, I think the whole concept of inspecting for this is broken.  *Any* 
function can return a generator-iterator.  A generator function is just a 
function that happens to always return one.

In other words, the confusion is in the idea of introspecting for this in 
the first place, not that generator functions are of FunctionType.  The 
best way to avoid the confusion is to avoid thinking that you can 
distinguish one type of function from another without explicit guidance 
from the function's author.

I'm -0 on having an isgenfunc(), but -1 on changing isfunction.  +1 on 
making the code flags available.  -1 on changing any other inspect stuff to 
handle generators specially.  They are not special and should not be 
treated specially - they are just functions that happen to always return 
generator-iterators -- and that is an *implementation detail* of the 
function.  Pushing that information out to introspection or doc is a bad 
idea in the general case.
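
For reference, exposing the code flag is all it takes for an author who
really wants the check; a minimal sketch (CO_GENERATOR is the relevant
compiler flag, and 'isgenfunc' is just a possible name):

    import types

    CO_GENERATOR = 0x20   # set on the code object of a generator function

    def isgenfunc(obj):
        return (isinstance(obj, types.FunctionType)
                and bool(obj.func_code.co_flags & CO_GENERATOR))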

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] SF patch #1473257: Add a gi_code attr to generators

2006-06-01 Thread Phillip J. Eby
At 09:53 PM 5/31/2006 +0200, Collin Winter wrote:
Hi Phillip,

Do you have any opinion on this patch (http://python.org/sf/1473257),
which is assigned to you?

I didn't know it was assigned to me.  I guess SF doesn't send any 
notifications, and neither did Georg, so your email is the very first time 
that I've heard of it.

I don't have any opinion, but perhaps Python-Dev does?

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bug in PEP 318

2006-05-30 Thread Phillip J. Eby
At 08:56 PM 5/30/2006 +0200, Alexander Bernauer wrote:
Hi

I found two bugs in example 4 of the PEP 318 [1]. People on #python
pointed me to this list. So here is my report. Additionally I appended
an afaics correct implementation for this task.

[1] http://www.python.org/dev/peps/pep-0318/

Bug 1)
The decorator "accepts" gets the function which is returned by the
decorator "returns". This is the function "new_f" which is defined
differently from the function "func". Because the first has an
argument count of zero, the assertion on line 3 is wrong.

The simplest fix for this would be to require that returns() be used 
*before* accepts().


Bug 2)
The assertion on line 6 does not work correctly for tuples. If the
second argument of isinstance is a tuple, the function returns true,
if the first argument is an instance of either type of the tuple.

This is intentional, to allow saying that the given argument may be (for 
example) an int or a float.  What it doesn't support is nested-tuple 
arguments, but that's a reasonable omission given the examples' nature as 
examples.


     def check(*args, **kwds):
         checktype(args, self.types)
         self.func(*args, **kwds)

This needs a 'return', since it otherwise loses the function's return value.
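
In other words, the corrected wrapper (using the names from the snippet
above) would read:

    def check(*args, **kwds):
        checktype(args, self.types)
        return self.func(*args, **kwds)   # don't discard the result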


To be honest, I didn't understand what the purpose of setting
func_name is, so I left it out.  If it's necessary please feel free to
correct me.

It's needed so that Python documentation tools such as help(), pydoc, and so on 
display something more correct for the decorated function's documentation, 
although copying the __doc__ attribute would also be required 
for that.
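
The idea, as a tiny standalone sketch ('copy_metadata' is just an
illustrative name; these are the 2.x attribute names):

    def copy_metadata(wrapper, wrapped):
        # Propagate the things help()/pydoc look at from the wrapped
        # function to the wrapper.
        wrapper.func_name = wrapped.func_name
        wrapper.__doc__ = wrapped.__doc__
        return wrapper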


In contrast to tuples lists and dictionaries are not inspected. The
reason is that I don't know how to express: the function accepts a list
of 3 or more integers or alike. Perhaps somebody has an idea for this.

I think perhaps you've mistaken the PEP examples for an attempt to 
implement some kind of typechecking feature.  They are merely examples to 
show an idea of what is possible with decorators, nothing more.  They are 
not even intended for anybody to actually use!


I wonder, how it can be, that those imho obvious mistakes go into a PEP
and stay undetected there for almost 3 years.

That's because nobody uses them; a PEP example is not intended or required 
to be a robust production implementation of the idea it sketches.  They are 
proofs-of-concept, not source code for a utility.  If they were intended to 
be used, they would be in a reference library or in the standard library, 
or otherwise offered in executable form.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib

2006-05-22 Thread Phillip J. Eby
It's not clear to me whether this means that Ian can just relicense his 
code for me to slap into wsgiref and thence into Python by virtue of my own 
PSF contribution form and the compatible license, or whether it means Ian 
has to sign a form too.

At 09:25 PM 5/22/2006 -0700, Guido van Rossum wrote:
This explains what to do, and which license to use:

http://www.python.org/psf/contrib/

--Guido

On 5/22/06, Ian Bicking [EMAIL PROTECTED] wrote:
Phillip J. Eby wrote:
  At 02:32 PM 4/28/2006 -0500, Ian Bicking wrote:
  I'd like to include paste.lint with that as well (as wsgiref.lint or
  whatever).  Since the last discussion I enumerated in the docstring all
  the checks it does.  There's still some outstanding issues, mostly where
  I'm not sure if it is too restrictive (marked with @@ in the source).
  It's at:
 
 http://svn.pythonpaste.org/Paste/trunk/paste/lint.py
 
  Ian, I see this is under the MIT license.  Do you also have a PSF
  contributor agreement (to license under AFL/ASF)?  If not, can you place
  a copy of this under a compatible license so that I can add this to the
  version of wsgiref that gets checked into the stdlib?

I don't have a contributor agreement.  I can change the license in
place, or sign an agreement, or whatever; someone should just tell me
what to do.


--
Ian Bicking  |  [EMAIL PROTECTED]  |  http://blog.ianbicking.org
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/guido%40python.org


--
--Guido van Rossum (home page: http://www.python.org/~guido/)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] context guards, context entry values, context managers, context contexts

2006-05-04 Thread Phillip J. Eby
At 11:20 PM 5/4/2006 +0200, Fredrik Lundh wrote:
fwiw, I just tested

 http://pyref.infogami.com/with

on a live audience, and most people seemed to grok the context
guard concept quite quickly.

not sure about the "context entry value" term, though.  anyone
has a better idea ?

"guarded value"?  That works for files, at least.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] global variable modification in functions [Re: elimination of scope bleeding of iteration variables]

2006-05-01 Thread Phillip J. Eby
At 07:32 AM 5/1/2006 -0700, Guido van Rossum wrote:
On 4/30/06, Ben Wing [EMAIL PROTECTED] wrote:
  [1] ideally, change this behavior, either for 2.6 or 3.0.  maybe have a
  `local' keyword if you really want a new scope.
  [2] until this change, python should always print a warning in this
  situation.
  [3] the current 'UnboundLocal' exception should probably be more
  helpful, e.g. suggesting that you might need to use a `global foo'
  declaration.

You're joking right?

While I agree that item #1 is a non-starter, it seems to me that in the 
case where the compiler statically knows a name is being bound in the 
module's globals, and there is a *non-argument* local variable being bound 
in a function body, the odds are quite high that the programmer forgot to 
use global.  I could almost see issuing a warning, or having a way to 
enable such a warning.

And for the case where the compiler can tell the variable is accessed 
before it's defined, there's definitely something wrong.  This code, for 
example, is definitely missing a global and the compiler could in 
principle tell:

     foo = 1

     def bar():
         foo += 1
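
For reference, that snippet fails at call time with the exception that
item [3] above was talking about:

    >>> bar()
    Traceback (most recent call last):
      ...
    UnboundLocalError: local variable 'foo' referenced before assignment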

So I see no problem (in principle, as opposed to implementation) with 
issuing a warning or even a compilation error for that code.  (And it's 
wrong even if the snippet I showed is in a nested function definition, 
although the error would be different.)

If I recall correctly, the new compiler uses a control-flow graph that 
could possibly be used to determine whether there is a path on which a 
local could be read before it's stored.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator

2006-05-01 Thread Phillip J. Eby
At 08:29 PM 5/1/2006 +1000, Nick Coghlan wrote:
'localcontext' would probably work as at least an interim name for such a 
function.

   with decimal.localcontext() as ctx:
   # use the new context here

And the as ctx should be unnecessary for most use cases, if localcontext 
has an appropriately designed API.
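
E.g. (a sketch of a possible API, not something the decimal module provides
today): if localcontext() accepted the attributes to change as keyword
arguments, the common case wouldn't need the bound name at all:

    with decimal.localcontext(prec=decimal.getcontext().prec + 2):
        do_calculation()   # 'do_calculation' is a placeholder; it runs with
                           # two extra digits of precision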

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] unittest argv

2006-05-01 Thread Phillip J. Eby
At 06:11 PM 5/1/2006 +0100, John Keyes wrote:
On 5/1/06, Guido van Rossum [EMAIL PROTECTED] wrote:
  Wouldn't this be an incompatible change? That would make it a no-no.
  Providing a dummy argv[0] isn't so hard is it?

It would be incompatible with existing code, but that code is
already broken (IMO) by passing a dummy argv[0].  I don't
think fixing it would affect much code, because normally
people don't specify the '-q' or '-v' in code, it is almost
exclusively used on the command line.

Speak for yourself - I have at least two tools that would have to change 
for this, at least one of which would have to grow version testing code, 
since it's distributed for Python 2.3 and up.  That's far more wasteful 
than providing an argv[0], which is already a common requirement for main 
program functions in Python.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] introducing the experimental pyref wiki

2006-05-01 Thread Phillip J. Eby
At 11:37 AM 5/1/2006 -0700, Guido van Rossum wrote:
Agreed. Is it too late to also attempt to bring Doc/ref/*.tex
completely up to date and remove confusing language from it? Ideally
that's the authoritative Language Reference -- admittedly it's been
horribly out of date but needn't stay so forever.

Well, I added stuff for PEP 343, but PEP 342 (yield expression plus 
generator-iterator methods) hasn't really been added yet, mostly because I 
was unsure of how to fit it in without one of those "first let's explain 
how it was, then how we changed it" sort of things.  :(

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] methods on the bytes object

2006-04-30 Thread Phillip J. Eby
At 08:22 AM 4/30/2006 -0700, Guido van Rossum wrote:
Still, I expect that having a bunch of string-ish methods on bytes
arrays would be convenient for certain types of data handling. Of
course, only those methods that don't care about character types would
be added, but that's a long list: startswith, endswith, index, rindex,
find, rfind, split, rsplit, join, count, replace, translate.

I've often wished *lists* had startswith and endswith, and somewhat less 
often wished they had split or rsplit.  Those seem like things that are 
generally applicable to sequences, not just strings or bytes.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator

2006-04-30 Thread Phillip J. Eby
At 09:53 AM 4/30/2006 -0700, Guido van Rossum wrote:
I have a counter-proposal: let's drop __context__. Nearly all use
cases have __context__ return self. In the remaining cases, would it
really be such a big deal to let the user make an explicit call to
some appropriately named method? The only example that I know of where
__context__ doesn't return self is the decimal module. So the decimal
users would have to type

   with mycontext.some_method() as ctx:  # ctx is a clone of mycontext
   ctx.prec += 2
   BODY

The implementation of some_method() could be exactly what we currently
have as the __context__ method on the decimal.Context object. Its
return value is a decimal.WithStatementContext() instance, whose
__enter__() method returns a clone of the original context object
which is assigned to the variable in the with-statement (here 'ctx').

This even has an additional advantage -- some_method() could have
keyword parameters to set the precision and various other context
parameters, so we could write this:

   with mycontext.some_method(prec=mycontext.prec+2):
   BODY

Note that we can drop the variable too now (unless we have another
need to reference it). An API tweak for certain attributes that are
often incremented or decremented could reduce writing:

   with mycontext.some_method(prec_incr=2):
   BODY

But what's an appropriate name for some_method?  Given that documentation 
is the sore spot that keeps us circling around this point, doesn't this 
just push the problem to finding a name to use in place of 
__context__?  And not only for this use case, but for others?

After all, for any library that has a notion of "the current X", it seems 
reasonable to want to be able to say "with some_X" to mean "use some_X as 
the current X for this block".  And it thus seems to me that people will 
want to have something like:

     def using(obj):
         if hasattr(obj, '__context__'):
             obj = obj.__context__()
         return obj

so they can do "with using(some_X)", because "with some_X.using()" or "with 
some_X.as_current()" is awkward.

If you can solve the naming issue for these use cases (and I notice you 
punted on that issue by calling it some_method), then +1 on removing 
__context__.  Otherwise, I'm -0; we're just fixing one 
documentation/explanation problem (that only people writing contexts will 
care about) by creating others (that will affect the people *using* 
contexts too).


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Problem with inspect and PEP 302

2006-04-30 Thread Phillip J. Eby
At 12:13 PM 4/30/2006 +0200, Georg Brandl wrote:
Recently, the inspect module was updated to conform with PEP 302.

Now this is broken:

  >>> import inspect
  >>> inspect.stack()

The traceback shows clearly what's going on. However, I don't know how
to resolve the problem.

The problem was that '<string>' and '<stdin>' filenames were producing an 
infinite regress.  I've checked in a fix.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator

2006-04-30 Thread Phillip J. Eby
At 08:08 PM 4/30/2006 -0700, Guido van Rossum wrote:
If you object against the extra typing, we'll first laugh at you
(proposals that *only* shave a few characters of a common idiom aren't
all that popular in these parts), and then suggest that you can spell
foo.some_method() as foo().

Okay, you've moved me to at least +0 for dropping __context__.  I have only 
one object myself that has a non-self __context__, and it doesn't have a 
__call__, so none of my code breaks beyond the need to add parentheses in a 
few places.  ;)

As for decimal contexts, I'm thinking maybe we should have a 
decimal.using(ctx=None, **kw) function, where ctx defaults to the current 
decimal context, and the keyword arguments are used to make a modified 
copy, seems like a reasonable best way to implement the behavior that 
__context__ was added for.  And then all of the existing special machinery 
can go away and be replaced with a single @contextfactory.
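
A rough sketch of what I have in mind, written against a generator-based
factory along the lines of contextlib.contextmanager (none of this is
existing decimal API; the decorator name is whatever we settle on):

    import decimal
    from contextlib import contextmanager

    @contextmanager
    def using(ctx=None, **kw):
        # Copy the given (or current) context, apply the keyword overrides,
        # install the copy for the duration of the with-block, then restore
        # whatever was current before.
        new_ctx = (ctx if ctx is not None else decimal.getcontext()).copy()
        for name, value in kw.items():
            setattr(new_ctx, name, value)
        saved = decimal.getcontext()
        decimal.setcontext(new_ctx)
        try:
            yield new_ctx
        finally:
            decimal.setcontext(saved)

so that e.g. "with using(prec=decimal.getcontext().prec + 2): ..." does what
the __context__ machinery was added to do.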

(I think we should stick with @contextfactory as the decorator name, btw, 
even if we go back to calling __enter__/__exit__ things context managers.)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2.5 open issues

2006-04-28 Thread Phillip J. Eby
At 07:38 AM 4/28/2006 -0700, Guido van Rossum wrote:
On 4/28/06, A.M. Kuchling [EMAIL PROTECTED] wrote:
- wsgiref to the standard library
  (Owner: Phillip Eby)

I still hope this can go in; it will help web framework authors do the
right thing long term.

I doubt I'll have time to write documentation for it before alpha 3.  If 
it's okay for the docs to wait for one of the beta releases -- or better 
yet, if someone could volunteer to create rough draft documentation that I 
could just then edit --  then it shouldn't be a problem getting it in and 
integrated.

However, just to avoid the sort of thing that happened with setuptools, I 
would suggest, Guido, that you make a last call for objections on the 
Web-SIG, which has previously voiced more criticism of wsgiref than the 
Distutils-SIG ever had about setuptools.  Granted, most of the Web-SIG 
comments were essentially feature requests, but some complained about the 
presence of the handler framework.  Anyway, after the setuptools flap I'm a 
little shy of checking in a new library without a little more visible 
process.  :)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2.5 open issues

2006-04-28 Thread Phillip J. Eby
At 11:54 AM 4/28/2006 -0400, A.M. Kuchling wrote:
On Fri, Apr 28, 2006 at 11:02:07AM -0400, Phillip J. Eby wrote:
  I doubt I'll have time to write documentation for it before alpha 3.  If
  it's okay for the docs to wait for one of the beta releases -- or better
  yet, if someone could volunteer to create rough draft documentation that I
  could just then edit --  then it shouldn't be a problem getting it in and
  integrated.

Barring some radical new thing in alpha3, the heavy lifting of the
What's New is done so I'm available to help with documentation.
(The functional programming howto can wait a little while longer.)  I
assume all we need is the module-level docs for the LibRef?

So, what's the scope of the proposed addition?  Everything in the
wsgiref package, including the simple_server module?

Yes.  simple_server is, coincidentally, the most controversial point on the 
Web-SIG, in that some argue for including a better web server.  However, 
nobody has come forth and said, "Here's my web server, it's stable and I 
want to put it in the stdlib," so the discussion wound down in general 
vagueness the last time it was brought up.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Internal documentation for egg formats now available

2006-04-28 Thread Phillip J. Eby
(Thank you, by the way, for actually reading some of the documentation 
before writing this post, and for asking questions instead of jumping to 
conclusions.)


At 06:43 PM 4/28/2006 +0200, M.-A. Lemburg wrote:
I've now found this section in the documentation which seems to
have the reason:

http://peak.telecommunity.com/DevCenter/EasyInstall#compressed-installation

Apart from the statement "because Python processes zipfile entries on
sys.path much faster than it does directories" being wrong,

Measure it.  Be sure to put the directories or zipfiles *first* on the 
path, not last.  The easiest way to do so accurately would be to 
easy_install some packages first using --zip-ok and then using 
--always-unzip, and compare the two.

Finally, try installing them --multi-version, and compare that, to get the 
speed when none of the packages are explicitly put on sys.path.


it looks like all you'd have to do, is make --always-unzip the
default.

You mean, all that *you'd* have to do is put it in your distutils 
configuration to make it the default for you, if for some reason you have a 
lot of programs that resemble python -c 'pass' in their import behavior.  :)


Another nit which seems to have been introduced in 0.6a11:
you now prepend egg directory entries to other sys.path entries,
instead of appending them.

What's the reason for that ?

It eliminates the possibility of conflicts with system-installed packages, 
or packages installed using the distutils, and provides the ability to 
install stdlib upgrades.


Egg directory should really be treated just like any other
site-package package and not be allowed to override stdlib
modules and packages

Adding them *after* site-packages makes it impossible for a user to install 
a local upgrade to a system-installed package in site-packages.

One problem that I think you're not taking into consideration, is that 
setuptools does not overwrite packages except with an identical 
version.  It thus cannot replace an existing raw installation of a 
package by the distutils, except by deleting it (which it used to support 
via --delete-conflicting) or by installing ahead of the conflict on sys.path.

Since one of the problems with using --delete-conflicting was that users 
often don't have write access to site-packages, it's far simpler to just 
organize sys.path so that eggs always take precedence over their parent 
directory.  Thus, eggs in site-packages get precedence over site-packages 
itself, and eggs on PYTHONPATH get precedence over PYTHONPATH.


without explicit user action by
e.g. adjusting PYTHONPATH.

Installing to a PYTHONPATH directory *is* an explicit user 
action.  Installing something anywhere is an explicit user request: I'd 
like this package to be importable, please.  If you don't want what you 
install to be importable by default, use --multi-version, which installs 
packages but doesn't put them on sys.path until you ask for them at runtime.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Adding wsgiref to stdlib

2006-04-28 Thread Phillip J. Eby
At 11:03 AM 4/28/2006 -0700, Guido van Rossum wrote:
(I'm asking Phillip to post the URL for the current
source; searching for it produces multiple repositories.)

Source browsing: http://svn.eby-sarna.com/wsgiref/
Anonymous SVN:   svn://svn.eby-sarna.com/svnroot/wsgiref


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib

2006-04-28 Thread Phillip J. Eby
At 02:32 PM 4/28/2006 -0500, Ian Bicking wrote:
Guido van Rossum wrote:
  PEP 333 specifies WSGI, the Python Web Server Gateway Interface v1.0;
  it's written by Phillip Eby who put a lot of effort in it to make it
  acceptable to very diverse web frameworks. The PEP has been well
  received by web framework makers and users.
 
  As a supplement to the PEP, Phillip has written a reference
  implementation, wsgiref. I don't know how many people have used
  wsgiref; I'm using it myself for an intranet webserver and am very
  happy with it. (I'm asking Phillip to post the URL for the current
  source; searching for it produces multiple repositories.)
 
  I believe that it would be a good idea to add wsgiref to the stdlib,
  after some minor cleanups such as removing the extra blank lines that
  Phillip puts in his code. Having standard library support will remove
  the last reason web framework developers might have to resist adopting
  WSGI, and the resulting standardization will help web framework users.

I'd like to include paste.lint with that as well (as wsgiref.lint or
whatever).  Since the last discussion I enumerated in the docstring all
the checks it does.  There's still some outstanding issues, mostly where
I'm not sure if it is too restrictive (marked with @@ in the source).
It's at:

http://svn.pythonpaste.org/Paste/trunk/paste/lint.py

+1, but lose the unused 'global_conf' parameter and 'make_middleware' 
functions.


I think another useful addition would be some prefix-based dispatcher,
similar to paste.urlmap (but probably a bit simpler):
http://svn.pythonpaste.org/Paste/trunk/paste/urlmap.py

I'd rather see something a *lot* simpler - something that just takes a 
dictionary mapping names to application objects, and parses path segments 
using wsgiref functions.  That way, its usefulness as an example wouldn't 
be obscured by having too many features.  Such a thing would still be quite 
useful, and would illustrate how to do more sophisticated 
dispatching.  Something more or less like:

     from wsgiref.util import shift_path_info

     # usage:
     #    main_app = AppMap(foo=part_one, bar=part_two, ...)

     class AppMap:
         def __init__(self, **apps):
             self.apps = apps

         def __call__(self, environ, start_response):
             name = shift_path_info(environ)
             if name is None:
                 return self.default(environ, start_response)
             elif name in self.apps:
                 return self.apps[name](environ, start_response)
             return self.not_found(environ, start_response)

         def default(self, environ, start_response):
             return self.not_found(environ, start_response)

         def not_found(self, environ, start_response):
             # code to generate a 404 response here
             start_response('404 Not Found', [('Content-Type', 'text/plain')])
             return ['Not Found']

This should be short enough to highlight the concept, while still providing 
a few hooks for subclassing.
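
For what it's worth, here's a rough usage sketch of the above (the two 
mini-apps are invented, and I'm assuming the make_server() helper from the 
simple_server module discussed elsewhere in this thread):

     from wsgiref.simple_server import make_server

     def part_one(environ, start_response):
         start_response('200 OK', [('Content-Type', 'text/plain')])
         return ['hello from foo\n']

     def part_two(environ, start_response):
         start_response('200 OK', [('Content-Type', 'text/plain')])
         return ['hello from bar\n']

     # /foo/... goes to part_one, /bar/... to part_two, anything else is a 404
     httpd = make_server('', 8000, AppMap(foo=part_one, bar=part_two))
     httpd.serve_forever()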

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib

2006-04-28 Thread Phillip J. Eby
At 01:19 PM 4/28/2006 -0700, Guido van Rossum wrote:
It still looks like an application of WSGI, not part of a reference
implementation. Multiple apps looks like an advanced topic to me; more
something that the infrastructure (Apache server or whatever) ought to
take care of.

I'm fine with a super-simple implementation that emphasizes the concept, 
not feature-richness.  A simple dict-based implementation showcases both 
the wsgiref function for path shifting, and the idea of composing an 
application out of mini-applications.  (The point is to demonstrate how 
people can compose WSGI applications *without* needing a framework.)

But I don't think that this demo should be a prefix mapper; people doing 
more sophisticated routing can use Paste or Routes.

If it's small enough, I'd say to add this mapper to wsgiref.util, or if 
Guido is strongly set against it being in the code, we should at least put 
it in the documentation as an example of how to use 'shift_path_info()' in 
wsgiref.util.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib

2006-04-28 Thread Phillip J. Eby
At 04:04 PM 4/28/2006 -0500, Ian Bicking wrote:
I don't see why not to use prefix matching.  It is more consistent with
the handling of the default application ('', instead of a method that
needs to be overridden), and more general, and the algorithm is only
barely more complex and not what I'd call sophisticated.  The default
application handling in particular means that AppMap isn't really useful
without subclassing or assigning to .default.

Prefix matching wouldn't show off anything else in wsgiref,

Right, that would be taking away one of the main reasons to include it.

To make the real dispatcher, I'd flesh out what I wrote a little bit, to 
handle the default method in a more meaningful way, including the 
redirect.  All that should only add a few lines, however.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib

2006-04-28 Thread Phillip J. Eby
At 05:47 PM 4/28/2006 -0500, Ian Bicking wrote:
It will still be only a couple lines less than prefix matching.

That's beside the point.  Prefix matching is inherently a more complex 
concept, and more likely to be confusing, without introducing much in the 
way of new features.  If I want to dispatch /foo/bar, why not just use:

 AppMap(foo=AppMap(bar=whatever))


So, I don't see prefix matching as introducing anything that's worth the 
extra complexity.  If somebody needs a high-performance prefix matcher, 
they can get yours from Paste.

If I was going to include a more sophisticated dispatcher, I'd add an 
ordered regular expression dispatcher, since that would support use cases 
that the simple or prefix dispatchers would not, but it would also support 
the prefix cases without nesting.


Another issue with your implementation is the use of keyword arguments for 
the path mappings, even though path mappings have no association with 
keyword arguments or valid Python identifiers.

That was for brevity; it should probably also take a mapping argument.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib

2006-04-28 Thread Phillip J. Eby
At 04:34 PM 4/28/2006 -0700, Titus Brown wrote:
Hi, Phillip,

I'm getting this error when I run the tests, with both Python 2.3 and
2.4:

==
FAIL: testHeaderFormats (wsgiref.tests.test_handlers.HandlerTests)
--
Traceback (most recent call last):
   File /disk/u/t/dev/misc/wsgiref/src/wsgiref/tests/test_handlers.py, 
 line 205, in testHeaderFormats
 (stdpat%(version,sw), h.stdout.getvalue())
AssertionError: ('HTTP/1.0 200 OK\\r\\nDate: \\w{3} \\w{3} [ 0123]\\d 
\\d\\d:\\d\\d:\\d\\d \\d{4}\\r\\nServer: FooBar/1.0\r\nContent-Length: 
0\\r\\n\\r\\n', 'HTTP/1.0 200 OK\r\nDate: Fri, 28 Apr 2006 23:28:11 
GMT\r\nServer: FooBar/1.0\r\nContent-Length: 0\r\n\r\n')

--

This is probably due to Guido's patch to make the Date: header more RFC 
compliant.  I'll take a look at it this weekend.


On a separate note, what are you actually proposing to include?  It'd be
good to remove the TODO list, for example, unless those are things To Be
Done before adding it into Python 2.5.

Well, it looks like the validate bit will be going in, and we're talking 
about what to put in router, so that'll take care of half the list right 
there.  :)

The other two items can wait, unless somebody wants to contribute them.



Will it be added as 'wsgi' or 'wsgiref'?

I assumed it would be wsgiref, which would allow compatibility with 
existing code that uses it.


I'd also personally suggest putting anything intended for common use
directly under the top level, i.e.

 wsgiref.WSGIServer

vs

 wsgiref.simple_server.WSGIServer

I'm against this, because it would force the handlers and simple_server 
modules to be imported, even for programs not using them.


And, finally, is there any documentation?

Only the docstrings.  Contributions are more than welcome.


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-27 Thread Phillip J. Eby
At 03:48 PM 4/27/2006 +0200, Bernhard Herzog wrote:
Gustavo Carneiro [EMAIL PROTECTED] writes:

Now the problem.  Suppose you have the source package python-foo-bar,
  which installs $pythondir/foo/__init__.py and $pythondir/foo/bar.py.  This
  would make a module called foo.bar available.  Likewise, you can have the
  source package python-foo-zbr, which installs 
 $pythondir/foo/__init__.py and
  $pythondir/foo/zbr.py.  This would make a module called foo.zbr 
 available.
 
The two packages above install the file $pythondir/foo/__init__.py.  If
  one of them adds some content to __init__.py, the other one will overwrite
  it.  Packaging these two packages for e.g. debian would be extremely
  difficult, because no two .deb packages are allowed to install the same 
 file.
 
One solution is to generate the __init__.py file with post-install hooks
  and shell scripts.  Another solution would be for example to have only
  python-foo-bar install the __init__.py file, but then python-foo-zbr would
  have to depend on python-foo-bar, while they're not really related.

Yet another solution would be to put foo/__init__.py into a third
package, e.g. python-foo, on which both python-foo-bar and
python-foo-zbr depend.

Or you can package them with setuptools, and declare foo to be a namespace 
package.  If installing in the mode used for building RPMs and debs, there 
will be no __init__.py.  Instead, each installs a .pth file that ensures a 
dummy package object is created in sys.modules with an appropriate 
__path__.  This solution is packaging-system agnostic and doesn't require 
any special support from the packaging tool.

(The downside, however, is that neither foo.bar nor foo.zbr's __init__.py 
will be allowed to have any content, since in some installation scenarios 
there will be no __init__.py at all.)
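
For the curious, the generated .pth file boils down to a single exec'd line 
along these lines (an illustration only, not the literal code setuptools 
emits; the hard-coded path stands in for one computed relative to the site 
directory):

     import sys, types; m = sys.modules.setdefault('foo', types.ModuleType('foo')); m.__dict__.setdefault('__path__', []).append('/usr/lib/python2.4/site-packages/foo')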

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Internal documentation for egg formats now available

2006-04-27 Thread Phillip J. Eby
At 06:47 PM 4/27/2006 +0200, M.-A. Lemburg wrote:
Just read that you are hijacking site.py for setuptools'
just works purposes.

"hijacking" isn't the word I'd use; "wrapping" is what it actually 
does.  The standard site.py is executed, there is just some pre- and 
post-processing of sys.path.


Please be aware that by allowing .pth files in all PYTHONPATH
directories you are opening up a security hole - anyone with
write-permission to one of these .pth files could manipulate
other user's use of Python.

FUD.  Write access to a PYTHONPATH-listed directory already implies 
complete control over the user's Python environment.  This doesn't 
introduce any issues that aren't implicit in the very existence of PYTHONPATH.


That's the reason why only site-packages .pth files are
taken into account, since normally only root has write
access to this directory.

False.  On OS X, Python processes any .pth files that are found in the 
~/Library/Python2.x/site-packages directory.  (Which means, by the way, 
that OS X users don't need most of these hacks; they just point their 
install directory to their personal site-packages, and it already Just 
Works.  Setuptools only introduces PYTHONPATH hacks to make this work on 
*other* platforms.)


The added startup time for scanning PYTHONPATH for .pth
files and processing them is also not mentioned in the
documentation. Every Python invocation would have to pay
for this - regardless of whether eggs are used or not.

FUD again.  This happens if and only if:

1. You used easy_install to install a package in such a way that the .pth 
file is required (i.e., you don't choose multi-version mode or 
single-version externally-managed mode)

2. You include the affected directory in PYTHONPATH

So the idea that every Python invocation would have to pay for this is 
false.  People who care about the performance have plenty of options for 
controlling this.

Is there a nice HOWTO that explains what to do if you care more about 
performance than convenience?  No.  Feel free to contribute one.


I really wish that we could agree on an installation format
for package (meta) data that does *not* rely on ZIP files.

There is one already, and it's used if you select single-version 
externally-managed mode explicitly, or if you install using --root.


All the unnecessary magic that you have in your design would
just go away - together with most of the issues people on this
list have with it.

Would you settle for a way to make a one-time ~/.pydistutils.cfg entry that 
will make setuptools act the way you want it to?  That is, a way to make 
setuptools-based packages default to --single-version-externally-managed 
mode for installation on a given machine or machine(s)?  That way, you'll 
never have to wonder whether a package uses setuptools or not, you can just 
setup.py install and be happy.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] traceback.py still broken in 2.5a2

2006-04-27 Thread Phillip J. Eby
At 11:38 AM 4/27/2006 -0700, Guido van Rossum wrote:
The change below was rolled back because it broke other stuff. But IMO
it is actually necessary to fix this,

Huh?  The change you showed wasn't reverted AFAICT; it's still on the trunk.


  otherwise those few exceptions
that don't derive from Exception won't be printed correctly by the
traceback module:

It looks like the original change (not the change you listed) should've 
been using issubclass(etype, BaseException).  (I only reverted the 
removal of 'isinstance()', which was causing string exceptions to break.)

Anyway, looks like a four-letter fix (i.e. add "Base" there), unless there 
was some other problem I'm missing?
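
For what it's worth, the shape of the check I have in mind is roughly this 
(a sketch only, with a made-up helper name, not the literal traceback.py 
code):

     import types

     def _is_exception_class(etype):
         # string exceptions are handled separately via isinstance()
         if isinstance(etype, str):
             return False
         # the four-letter fix: test against BaseException, not Exception
         return (isinstance(etype, (type, types.ClassType))
                 and issubclass(etype, BaseException))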

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Internal documentation for egg formats now available

2006-04-27 Thread Phillip J. Eby
At 09:54 PM 4/27/2006 +0200, M.-A. Lemburg wrote:
Note that I was talking about the .pth file being
writable, not the directory.

Please stop this vague, handwaving FUD.  You have yet to explain how this 
situation is supposed to arise.  Is there some platform on which files with 
a .pth extension are automatically writable by users *when .py files are 
not also*?

If not, then you are talking out of your... um, hat.  If files are writable 
by other users by default, then .py files are too.  Once again: *no new 
vector*.


Even if they are not
writable by non-admins, the .pth files can point
to directories which are.

Uh huh.  And how does that happen, exactly?  Um, the user puts them 
there?  What is your point, exactly?  That people can do things insecurely 
if they're allowed near a computer?


Here's a HOWTO to optimize startup time, without losing
convenience:

I meant, a HOWTO for setuptools users who care about this, although at the 
moment I have heard only from one -- and you're not, AFAIK, actually a 
setuptools user.



No, I'm talking about a format which has the same if not
more benefits as what you're trying to achieve with the
.egg file approach, but without all the magic and hacks.

It's not like this wouldn't be possible to achieve.

That may or may not be true.  Perhaps if you had participated in the 
original call to the distutils-sig for developing such a format (back in 
December 2004), perhaps the design would've been more to your liking.

Oh wait...  you did:

http://mail.python.org/pipermail/distutils-sig/2004-December/004351.html

And if you replace 'syspathtools.use()' in that email, with 
'pkg_resources.require()', then it describes *exactly how setuptools 
works  with .egg directories today*.
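
To spell out the runtime half of that in today's terms (the project and 
module names below are invented):

     import pkg_resources
     pkg_resources.require("FooBar>=1.2")   # locate the egg, activate it on sys.path
     import foobar                          # importable even if installed --multi-version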

Apparently, you hadn't yet thought of any the objections that you are now 
raising to *the very scheme that you yourself proposed*, until somebody 
else took the trouble to actually implement it.

And now you want to say that I never listen to or implement your 
proposals?  Please.  Your email is the first documentation on record of how 
this system works!


Not really.

Then I won't bother adding it, since nobody else asked for it.  But don't 
ever say that I didn't offer you a real solution to the behavior you 
complained about.

Meanwhile, I will take your choice as prima facie evidence that the things 
you are griping about have no real impact on you, and you are simply trying 
to make other people conform to your views of how things should be, 
rather than being interested in solving actual problems.

It also makes it clear that your opinion about setuptools default 
installation behavior isn't relevant to any future Python-Dev discussion 
about setuptools' inclusion in the standard library, because it's:

1. Obviously not a real problem for you (else you'd accept the offer of a 
feature that would permanently solve it for you)

2. Not something that anybody else has complained about since I made --root 
automatically activate distutils compatibility

In short, the credibility of your whining about this point and my supposed 
failure to accommodate you is now thoroughly debunked.  I added an option to 
enable this behavior, I made other options enable the behavior where it 
could be reasonably assumed valid, and I offered you an option you could 
use to globally disable it for *any* package using setuptools, so that it 
would never affect you again.

(And all of this... to disable the behavior that implements a scheme that 
you yourself proposed as the best way to do this sort of thing!)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Addressing Outstanding PEPs

2006-04-26 Thread Phillip J. Eby
At 11:55 PM 4/25/2006 -0700, Neal Norwitz wrote:
   S   243  Module Repository Upload Mechanism   Reifschneider

This one needs to be withdrawn or updated - it totally doesn't match the 
implementation in Python 2.5.


   S   302  New Import Hooks JvR, Moore

Unless somebody loans me the time machine, I won't be able to finish all of 
PEP 302's remaining items before the next alpha.  Some, like the idea of 
moving the built-in logic to sys.meta_path, can't be done until Py3K 
without having adverse impact on deployed software.


   S   334  Simple Coroutines via SuspendIteration   Evans

IIRC, this one's use cases can be met by using a coroutine trampoline as 
described in PEP 342.


   S   345  Metadata for Python Software Packages 1.2Jones

This one should not be accepted in its current state; too much 
specification of syntax, too little of semantics.  And the specifications 
that *are* there, are often wrong.  For example, the strict version syntax 
wouldn't even support *Python's own* version numbering scheme, let alone 
the many package versioning schemes in actual use.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 10:16 AM 4/26/2006 -0700, Guido van Rossum wrote:
So I have a very simple proposal: keep the __init__.py requirement for
top-level pacakages, but drop it for subpackages.

Note that many tools exist which have grown to rely on the presence of 
__init__ modules.  Also, although your proposal would allow imports to work 
reasonably well, tools that are actively looking for packages would need to 
have some way to distinguish package directories from others.

My counter-proposal: to be considered a package, a directory must contain 
at least one module (which of course can be __init__).  This allows the "is 
it a package?" question to be answered with only one directory read, as is 
the case now.  Think of it also as a nudge in favor of "flat is better than 
nested".
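
As a rough sketch of the check I have in mind (just an illustration, with a 
made-up helper name, not proposed stdlib code):

     import imp, os

     def looks_like_package(dirname):
         # "one directory read": stop at the first entry with a known suffix
         suffixes = [suffix for suffix, mode, kind in imp.get_suffixes()]
         for entry in os.listdir(dirname):
             for suffix in suffixes:
                 if entry.endswith(suffix):
                     return True
         return False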

This tweak would also make it usable for top-level directories, since the 
mere presence of a 'time' directory wouldn't get in the way of anything.

The thing more likely to have potential for problems is that many Python 
projects have a test directory that isn't intended to be a package, and 
thus may interfere with imports from the stdlib 'test' package.  Whether 
this is really a problem or not, I don't know.

But, we could treat packages without __init__ as namespace packages.  That 
is, set their __path__ to encompass similarly-named directories already on 
sys.path, so that the init-less package doesn't interfere with other 
packages that have the same name.

This would require a bit of expansion to PEP 302, but probably not 
much.  Most of the rest is existing technology, and we've already begun 
migrating stdlib modules away from doing their own hunting for __init__ and 
other files, towards using the pkgutil API.

By the way, one small precedent for packages without __init__: setuptools 
generates such packages using .pth files when a package is split between 
different distributions but are being installed by a system packaging 
tool.  In such cases, *both* parts of the package can't include an 
__init__, because the packaging tool (e.g. RPM) is going to complain that 
the shared file is a conflict.  So setuptools generates a .pth file that 
creates a module object with the right name and initializes its __path__ to 
point to the __init__-less directory.


This should be a small change.

Famous last words.  :)  There's a bunch of tools that it's not going to 
work properly with, and not just in today's stdlib.  (Think documentation 
tools, distutils extensions, IDEs...)

Are you sure you wouldn't rather just write a GoogleImporter class to fix 
this problem?  Append it to sys.path_hooks, clear sys.path_importer_cache, 
and you're all set.  For that matter, if you have only one top-level 
package, put the class and the installation code in that top-level 
__init__, and you're set to go.

And that approach will work with Python back to version 2.3; no waiting for 
an upgrade (unless Google is still using 2.2, of course).

Let's see, the code would look something like:

 import imp, os

 class GoogleImporter:
     def __init__(self, path):
         if not os.path.isdir(path):
             raise ImportError("Not for me")
         self.path = os.path.realpath(path)

     def find_module(self, fullname, path=None):
         # Note: we ignore 'path' argument since it is only used via meta_path
         subname = fullname.split(".")[-1]
         if os.path.isdir(os.path.join(self.path, subname)):
             return self
         path = [self.path]
         try:
             file, filename, etc = imp.find_module(subname, path)
         except ImportError:
             return None
         return ImpLoader(fullname, file, filename, etc)

     def load_module(self, fullname):
         import sys, new
         subname = fullname.split(".")[-1]
         path = os.path.join(self.path, subname)
         module = sys.modules.setdefault(fullname, new.module(fullname))
         module.__dict__.setdefault('__path__', []).append(path)
         return module

 class ImpLoader:
     def __init__(self, fullname, file, filename, etc):
         self.file = file
         self.filename = filename
         self.fullname = fullname
         self.etc = etc

     def load_module(self, fullname):
         try:
             mod = imp.load_module(fullname, self.file, self.filename, self.etc)
         finally:
             if self.file:
                 self.file.close()
         return mod

 import sys
 sys.path_hooks.append(GoogleImporter)
 sys.path_importer_cache.clear()

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 02:07 PM 4/26/2006 -0400, Phillip J. Eby wrote:
   def find_module(self, fullname, path=None):
       # Note: we ignore 'path' argument since it is only used via meta_path
       subname = fullname.split(".")[-1]
       if os.path.isdir(os.path.join(self.path, subname)):
           return self
       path = [self.path]
       try:
           file, filename, etc = imp.find_module(subname, path)
       except ImportError:
           return None
       return ImpLoader(fullname, file, filename, etc)

Feh.  The above won't properly handle the case where there *is* an __init__ 
module.  Trying again:

   def find_module(self, fullname, path=None):
       subname = fullname.split(".")[-1]
       path = [self.path]
       try:
           file, filename, etc = imp.find_module(subname, path)
       except ImportError:
           if os.path.isdir(os.path.join(self.path, subname)):
               return self
           else:
               return None
       return ImpLoader(fullname, file, filename, etc)

There, that should only fall back to __init__-less handling if there's no 
foo.py or foo/__init__.py present.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 11:50 AM 4/26/2006 -0700, Guido van Rossum wrote:
I'm not sure what you mean by "one directory read". You'd have to list
the entire directory, which may require reading more than one block if
the directory is large.

You have to do this to find an __init__.py too, don't you?  Technically, 
there's going to be a search for a .pyc or .pyo first, anyway.  I'm just 
saying you can stop as soon as you hit an extension that's in 
imp.get_suffixes().


  Are you sure you wouldn't rather just write a GoogleImporter class to fix
  this problem?

No, because that would require more setup code with a requirement to
properly enable it, etc., etc., more failure modes, etc., etc.

I don't understand.  I thought you said you had only *one* top-level 
package.  Fix *that* package, by putting the code in its __init__.py.  Job 
done.


   Append it to sys.path_hooks, clear sys.path_importer_cache,
  and you're all set.  For that matter, if you have only one top-level
  package, put the class and the installation code in that top-level
  __init__, and you're set to go.

I wish it were that easy.

Well, if there's more than one top-level package, put the code in a module 
called "google_imports" and do "import google_imports" in each top-level 
package's __init__.py.

I'm not sure I understand why a solution that works with released versions 
of Python that allows you to do exactly what you want, is inferior to a 
hypothetical solution for an unreleased version of Python that forces 
everybody else to update their tools.

Unless of course the problem you're trying to solve is a political one 
rather than a technical one, that is.  Or perhaps it wasn't clear from my 
explanation that my proposal will work the way you need it to, or I 
misunderstand what you're trying to do.

Anyway, I'm not opposed to the idea of supporting this in future Pythons, 
but I definitely think it falls under the "but sometimes never is better 
than RIGHT now" rule where 2.5 is concerned.  :)  In particular, I'm 
worried that you're shrugging off the extent of the collateral damage here, 
and I'd be happiest if we waited until 3.0 before changing this particular 
rule -- and if we changed it in favor of namespace packages, which will 
more closely match naive user expectations.

However, the fix the tools argument is weak, IMO.  Zipfile imports have 
been a fairly half-assed feature for 2.3 and 2.4 because nobody took the 
time to make the *rest* of the stdlib work with zip imports.  It's not a 
good idea to change machinery like this without considering at least what's 
going to have to be fixed in the stdlib.  At a minimum, pydoc and distutils 
have embedded assumptions regarding __init__ modules, and I wouldn't be 
surprised if ihooks, imputil, and others do as well.  If we can't keep the 
stdlib up to date with changes in the language, how can we expect anybody 
else to keep their code up to date?

Finally, as others have pointed out, requiring __init__ at the top level 
just means that this isn't going to help anybody but Google.  ISTM that in 
most cases outside Google, Python newbies are more likely to be creating 
top-level packages *first*, so the implicit __init__ doesn't help them.

So, to summarize:

1. It only really helps Google
2. It inconveniences others who have to update their tools in order to 
support people who end up using it (even if by accident)
3. It's not a small change, unless you leave the rest of the stdlib 
unreviewed for impacts
4. It could be fixed at Google by adding a very small amount of code to the 
top of your __init__.py files (although apparently this is prevented for 
mysterious reasons that can't be shared)

What's not to like?  ;)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 09:56 PM 4/26/2006 +0200, Martin v. Löwis wrote:
Phillip J. Eby wrote:
  My counter-proposal: to be considered a package, a directory must contain
  at least one module (which of course can be __init__).  This allows the 
 is
  it a package? question to be answered with only one directory read, as is
  the case now.  Think of it also as a nudge in favor of flat is better 
 than
  nested.

I assume you want

import x.y

to fail if y is an empty directory (or non-empty, but without .py
files). I don't see a value in implementing such a restriction.

No, I'm saying that tools which are looking for packages and asking, "Is 
this directory a package?" should decide "no" in the case where it contains 
no modules.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 04:33 PM 4/26/2006 -0400, Joe Smith wrote:
It seems to me that the right way to fix this is to simply make a small
change to the error message.
On a failed import, have the code check if there is a directory that would
have been the requested package if
it had contained an __init__ module. If there is, then append a message like
"You might be missing an __init__.py file."

It might also be good to check that the directory actually contained python
modules.

This is a great idea, but might be hard to implement in practice with the 
current C implementation of import, at least for the general case.

But if we're talking about subpackages only, the common case is a 
one-element __path__, and for that case there might be something we could do.

(The number of path items is relevant because the existence of a 
correctly-named but init-less directory should not stop later path items 
from being searched, so the actual error occurs far from the point where 
the empty directory would've been detected.)
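
For that common case, the check might look something like this (purely 
illustrative; the helper name and message wording are made up):

     import os

     def missing_init_hint(pkg_path, subname):
         # only worth attempting for the common one-element __path__ case
         if len(pkg_path) == 1:
             candidate = os.path.join(pkg_path[0], subname)
             if os.path.isdir(candidate):
                 return ("%r is a directory, but not a package; "
                         "you might be missing an __init__.py file" % candidate)
         return None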

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 01:49 PM 4/26/2006 -0700, Guido van Rossum wrote:
OK, forget it. I'll face the pitchforks.

I'm disappointed though -- it sounds like we can never change anything
about Python any more because it will upset the oldtimers.

I know exactly how you feel.  :)

But there's always Python 3.0, and if we're refactoring the import 
machinery there, we can do this the "right" way, not just the "right now"
way.  ;)  IMO, if Py3K does this, it can and should be inclusive of 
top-level packages and assemble __path__ using all the sys.path 
entries.  If we're going to break it, let's break it all the way.  :)

I'm still really curious why the importer solution (especially if tucked 
away in a Google-defined sitecustomize) won't work, though.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 01:10 AM 4/27/2006 +0200, Thomas Wouters wrote:

On 4/27/06, Guido van Rossum mailto:[EMAIL PROTECTED][EMAIL PROTECTED] 
wrote:
I'd worry that it'll cause complaints when the warning is incorrect
and a certain directory is being skipped intentionally. E.g. the
string directory that someone had. Getting a warning like this can
be just as upsetting to newbies!

I don't think getting a spurious warning is as upsetting as getting no 
warning but the damned thing just not working. At least you have something 
to google for. And the warning includes the original line of source that 
triggered it *and* the directory (or directories) it's complaining about, 
which is quite a lot of helpful hints.

+1.  If the warning is off-base, you can rename the directory or suppress 
the warning.

As for the newbie situation, ISTM that this warning will generally come 
close enough in time to an environment change (new path entry, newly 
created conflicting directory) to be seen as informative.  The only time it 
might be confusing is if you had just added an import foo after having a 
foo directory sitting around for a while.

But even then, the warning is saying, "hey, it looked like you might have 
meant *this* foo directory...  if so, you're missing an __init__."  So, at 
that point I rename the directory...  or maybe add the __init__ and break 
my code.  So then I back it out and put up with the warning and complain to 
c.l.p, or maybe threaten Guido with a pitchfork if I work at Google.  Or 
maybe just a regular-sized fork, since the warning is just annoying.  :)


Alrighty then. The list has about 12 hours to convince me (and you) that 
it's a bad idea to generate that warning. I'll be asleep by the time the 
trunk un-freezes, and I have a string of early meetings tomorrow. I'll get 
to it somewhere in the afternoon :)

I like the patch in general, but may I suggest PackageWarning or maybe 
BrokenPackageWarning instead of ImportWarning as the class name?


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dropping __init__.py requirement for subpackages

2006-04-26 Thread Phillip J. Eby
At 04:57 PM 4/26/2006 -0700, Guido van Rossum wrote:
On 4/26/06, Delaney, Timothy (Tim) [EMAIL PROTECTED] wrote:
  Possibly. Perhaps it would be useful to have `is_package(dirname)`,
  `is_rootpackage(dirname)` and `is_subpackage(dirname)` functions
  somewhere (pkgutils?).

YAGNI. Also note that not all modules or packages are represented by
pathnames -- they could live in zip files, or be accessed via whatever
other magic an import handler uses.

FYI, pkgutil in 2.5 has utilities to walk a package tree, starting from 
sys.path or a package __path__, and it's PEP 302 compliant.  pydoc now uses 
this in place of directory inspection, so that documenting zipped packages 
works correctly.

These functions aren't documented yet, though, and probably won't be until 
next week at the earliest.
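
In the meantime, here's a rough sketch of the sort of thing they let you do 
(treat the exact signature as provisional until the docs land):

     import pkgutil

     # walk everything importable from sys.path and report the packages found
     for importer, name, ispkg in pkgutil.walk_packages():
         if ispkg:
             print "package:", name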

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Updated context management documentation

2006-04-25 Thread Phillip J. Eby
At 12:08 AM 4/26/2006 +1000, Nick Coghlan wrote:
Secondly, the documentation now shows an example
of a class with a close() method using contextlib.closing directly as its own
__context__() method.

Sadly, that would only work if closing() were a function.  Classes don't 
get turned into methods, so you'll need to change that example to use:

 def __context__(self):
 return closing(self)

instead.
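
To illustrate why (the class names below are invented, purely for 
demonstration):

     from contextlib import closing

     class Broken:
         # closing is a class, not a function, so it is *not* turned into a
         # bound method here; Broken().__context__() calls closing() with no
         # argument and raises a TypeError.
         __context__ = closing
         def close(self): pass

     class Works:
         def __context__(self):
             return closing(self)   # a real method, so 'self' gets passed
         def close(self): pass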

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__?

2006-04-25 Thread Phillip J. Eby
At 11:37 AM 4/25/2006 -0700, Guido van Rossum wrote:
But what's the use case? Have we actually got an example where it
makes sense to use the thing with __enter__ and __exit__ methods in
a with-statement, other than the (many) examples where the original
__context__ method returns self?

Objects returned by @contextfactory-decorated functions must have __enter__ 
and __exit__ (so @contextfactory can be used to define __context__ methods) 
*and* they must also have __context__, so they can be used directly in a 
with statement.

I think that in all cases where you want this (enter/exit implies context 
method availability), it's going to be the case that you want the context 
method to return self, just as iterating an object with a next() method 
normally returns that object.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Internal documentation for egg formats now available

2006-04-25 Thread Phillip J. Eby
Please see 
http://svn.python.org/projects/sandbox/trunk/setuptools/doc/formats.txt for 
source, or http://peak.telecommunity.com/DevCenter/EggFormats for an 
HTML-formatted version.

Included are summary descriptions of the formats of all of the standard 
metadata produced by setuptools, along with pointers to the existing 
manuals that describe the syntax used for representing requirements, entry 
points, etc. as text.  The .egg, .egg-info, and .egg-link formats and 
layouts are also specified, along with the filename syntax used to embed 
project/version/Python version/platform metadata.  Last, but not least, 
there are detailed explanations of how resources (such as C extensions) are 
extracted on-the-fly and cached, how C extensions get imported from 
zipfiles, and how EasyInstall works around the limitations of Python's 
default sys.path initialization.

If there's anything else you'd like in there, please let me know.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__?

2006-04-25 Thread Phillip J. Eby
At 04:18 PM 4/25/2006 -0700, Guido van Rossum wrote:
But the question remains,
under what circumstances is it convenient to call __context__()
explicit, and pass the result to a with-statement?

Oh.  I don't know of any; I previously asked the same question myself.  I 
just eventually answered myself with "I don't care; we need to require 
self-returning __context__ on execution context objects so that the 
documentation can appear vaguely sane."  :)

So, I don't know of any non-self-returning use cases for __context__ on an 
object that has __enter__ and __exit__.  In fact, I suspect that we could 
strengthen the requirements to say that:

1. If you have __enter__ and __exit__, you MUST have a self-returning 
__context__

2. If you don't have __enter__ and __exit__, you MUST NOT have a 
self-returning __context__

#2 is obvious since you can't use the object with "with" otherwise.  #1 
reflects the fact that it doesn't make any sense to take the context of a 
context.
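
A minimal sketch of an object following rule #1 (the class is invented, 
just to show the shape):

     class ManagedFile(object):
         def __init__(self, name):
             self.name = name
         def __context__(self):
             return self              # self-returning, per rule #1
         def __enter__(self):
             self.f = open(self.name)
             return self.f
         def __exit__(self, exc_type, exc_val, exc_tb):
             self.f.close()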

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 343 update (with statement context terminology)

2006-04-25 Thread Phillip J. Eby
At 04:31 PM 4/25/2006 -0700, Aahz wrote:
Right -- I've already been chastised for that.  Unless someone has a
better idea, I'm going to call it a wrapper.

Better idea: just delete the parenthetical about a namespace and leave the 
rest of your text alone, at least until the dust settles.  I thought your 
original text was perfect except for the namespace thing.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__?

2006-04-25 Thread Phillip J. Eby
At 05:20 PM 4/25/2006 -0700, Guido van Rossum wrote:
I would augment #1 to clarify that if you have __enter__ and __exit__
you may not have __context__ at all; if you have all three,
__context__ must return self.

Well, requiring the __context__ allows us to ditch the otherwise complex 
problem of why @contextfactory functions' return value is usable by with, 
without having to explain it separately.  So, I don't think there's any 
reason to provide an option; there should be Only One Way To Do It.

Well, actually, two.  That is, you can have one method or all three.  Two 
is right out.  :)

(ObMontyPython: Wait, I'll come in again...)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Internal documentation for egg formats now available

2006-04-25 Thread Phillip J. Eby
At 04:41 PM 4/25/2006 -0700, Brent Fulgham wrote:
Included are summary descriptions of the formats of all of the standard
metadata produced by setuptools, along with pointers to the existing
manuals that describe the syntax used for representing requirements, entry
points, etc. as text.  The .egg, .egg-info, and .egg-link formats and
layouts are also specified

I also follow the chicken Scheme mailing list, and initially though this was
a mistaken reference to http://www.call-with-current-continuation.org/eggs/.

Is there any concern that the use of 'egg' might cause some confusion?

Not for the software, anyway.  As long as nobody asks EasyInstall to 
install something from that page, and as long as they're not in the habit 
of installing Scheme extensions to their Python directories, everything 
will be fine.  :)

Just for the heck of it, I tried asking easy_install to install some of the 
stuff on that page, and it griped about most of the eggs listed on that 
page not having version numbers, and then it barfed with a ZipImportError 
after downloading the .egg and observing that it was not a valid zip 
file.  (It hadn't actually installed the egg file yet, so no changes were 
made to the Python installation.)

So, also just for the heck of it, I'm tempted to add some code to 
easy_install to notice when it encounters a tarball .egg (which is what the 
Scheme/Chicken eggs are), and maybe have it explain that Scheme eggs aren't 
Python eggs, perhaps in humorous fashion.  If I did add such code, you 
might even call it an easter egg, I suppose...  ;)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__?

2006-04-25 Thread Phillip J. Eby
At 08:09 PM 4/25/2006 -0700, Guido van Rossum wrote:
On 4/25/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
  At 05:20 PM 4/25/2006 -0700, Guido van Rossum wrote:
  I would augment #1 to clarify that if you have __enter__ and __exit__
  you may not have __context__ at all; if you have all three,
  __context__ must return self.
 
  Well, requiring the __context__ allows us to ditch the otherwise complex
  problem of why @contextfactory functions' return value is usable by with,
  without having to explain it separately.

Really? I thought that that was due to the magic in the decorator (and
in the class it uses).

Actually, I got that explanation backwards above.  What I meant is that the 
hard thing to explain is why you can use @contextfactory to define a 
__context__ method.  All other examples of @contextfactory are perfectly 
fine; it's only the use of it to define a __context__ method that's hard to 
explain.

See, if @contextfactory functions return a thing *with* a __context__ 
method, how is that usable with with?  It isn't, unless the thing also 
happens to have __enter__/__exit__ methods.  This was the hole in the 
documentation that caused Nick to seek to revisit the decorator name in the 
first place.


But that *still* doesn't explain why we are recommending that
everything providing __enter__/__exit__ must also provide __context__!

Because it means that users never have to worry about which kind of object 
they have.  Either you pass a 1-method object to with, or a 3-method 
object to with.  If you have a 2-method object, you can never pass it to 
with.

Here's the thing: you're going to have 1-method objects and you're going to 
have 3-method objects, we know that.  But the only time a 2-method object 
is useful is if it's a tag-along to a 1-method object.  It's easier from a 
documentation perspective to just say you can have one method or three, 
and not get into this whole well, you can also have two, but only if you 
use it with a one.  And if you rule out the existence of the two-method 
variant, you don't have to veer into any complex questions of when you 
should use three methods instead of two, because the answer is simply 
always use three.

This isn't a technical problem, in other words.  I had exactly the same POV 
as you on this until I read enough of Nick's rants on the subject to 
finally see it from the education perspective; it's easier to explain two 
things, one of which is a subtype of the other, versus explaining two 
orthogonal things that sometimes go together and sometimes don't, and the 
reasons that you might or might not want to put them together are tricky to 
explain.

Of course, this approach opens a new hole, which is how to deal with people 
asking "why does it have to have a __context__ method if it's never 
called?"  So far, Nick's answer is "because we said so" (aka "deliberate 
design decision"), which isn't great, but at least it's honest.  :)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__?

2006-04-25 Thread Phillip J. Eby
At 11:29 PM 4/25/2006 -0400, Phillip J. Eby wrote:
See, if @contextfactory functions return a thing *with* a __context__
method, how is that usable with with?  It isn't, unless the thing also
happens to have __enter__/__exit__ methods.  This was the hole in the
documentation that caused Nick to seek to revisit the decorator name in the
first place.

Argh.  I seem to be tongue-tied this evening.  What I mean is, if 
@contextfactory functions' return value is usable as a with expression, 
that means it must have a __context__ method.  But, if you are using 
@contextfactory to *define* a __context__ method, the return value should 
clearly have __enter__ and __exit__ methods.

What this means is that if we describe the one method and the two methods 
as independent things, there is no *single* name we can use to describe the 
return value of a @contextfactory function.  It's a wave and a particle, so 
we either have to start talking about wavicles or have some detailed 
explanation of why @contextfactory function return values are both waves 
and particles at the same time.

However, if we say that particles are a kind of wave, and have all the same 
features as waves but just add a few others, then we can simply say 
@contextfactory functions return particles, and the waviness is implied, 
and all is right in the world.  At least, until AMK comes along and asks 
why you can't separate the particleness from the waveness, which was what 
started this whole thing in the first place...  :)


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 343 update (with statement context terminology)

2006-04-24 Thread Phillip J. Eby
At 10:26 AM 4/24/2006 +0100, Paul Moore wrote:
OK. At this point, the discussion seems to have mutated from a
Phillip vs Nick debate to a Paul vs Nick debate.

I only stepped aside so that other people would chime in.  I still don't 
think the new terminology makes anything clearer, and would rather see 
tweaks to address the one minor issue of @contextmanager producing an 
object that's also a context than a complete reworking of the 
documentation.  That was the only thing that was unclear in the a1 
terminology and docs, and it's an extremely minor point that could easily 
be addressed.

Throwing away an intuitive terminology because of a minor implementation 
issue in favor of a non-intutitive terminology that happens to be 
super-precise seems penny-wise and pound-foolish to me.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 343 update (with statement context terminology)

2006-04-24 Thread Phillip J. Eby
At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote:
Using two names to describe three different things isn't intuitive for 
anybody.

Um, what three things?  I only count two:

1. Objects with __context__
2. Objects with __enter__ and __exit__

What's the third thing?

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 343 update (with statement context terminology)

2006-04-24 Thread Phillip J. Eby
At 12:24 PM 4/24/2006 -0700, Aahz wrote:
On Mon, Apr 24, 2006, Phillip J. Eby wrote:
  At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote:
 
 Using two names to describe three different things isn't intuitive for
 anybody.
 
  Um, what three things?  I only count two:
 
  1. Objects with __context__
  2. Objects with __enter__ and __exit__
 
  What's the third thing?

The actual context that's used during the execution of BLOCK.  It does
not exist as a concrete object,

Um, huh?  It's a thing but it's not an object?  I'm lost now.  I don't see 
why we should introduce a concept that has no concrete existence into 
something that's hard enough to explain when you stick to the objects that 
actually exist.  :)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 343 update (with statement context terminology)

2006-04-24 Thread Phillip J. Eby
At 12:49 PM 4/24/2006 -0700, Aahz wrote:
On Mon, Apr 24, 2006, Phillip J. Eby wrote:
  At 12:24 PM 4/24/2006 -0700, Aahz wrote:
 On Mon, Apr 24, 2006, Phillip J. Eby wrote:
  At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote:
 
 Using two names to describe three different things isn't intuitive for
 anybody.
 
  Um, what three things?  I only count two:
 
  1. Objects with __context__
  2. Objects with __enter__ and __exit__
 
  What's the third thing?
 
 The actual context that's used during the execution of BLOCK.  It does
 not exist as a concrete object,
 
  Um, huh?  It's a thing but it's not an object?  I'm lost now.  I don't see
  why we should introduce a concept that has no concrete existence into
  something that's hard enough to explain when you stick to the objects that
  actually exist.  :)

Let's go back to a pseudo-coded with statement:

 with EXPRESSION [as NAME]:
 BLOCK

What happens while BLOCK is being executed?  Again, here's what I said
originally:

 EXPRESSION returns a value that the with statement uses to create a
 context (a special kind of namespace).  The context is used to
 execute the BLOCK.  The block might end normally, get terminated by
 a break or return, or raise an exception. No matter which of those
 things happens, the context contains code to clean up after the
 block.

Do you have an alternate proposal for describing this that works well for
newbies?

No, I like your phrasing -- but it's quite concrete.  EXPRESSION returns a 
value (object w/__context__) used to create a context (object w/__enter__ 
and __exit__).

That's only two things.  There is no *third* thing here.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

