Re: [Python-Dev] PEP 343 and __with__

2005-10-04 Thread Guido van Rossum
On 10/4/05, Jason Orendorff [EMAIL PROTECTED] wrote:
 Right after I sent the preceding message I got a funny feeling I'm
 wasting everybody's time here.  I apologize.  Guido's original concern
 about speedy C implementation for locks stands.  I don't see a good
 way around it.

OK. Our messages crossed, so you can ignore my response. Let's spend
our time implementing the PEPs as they stand, then see what else we
can do with the new APIs.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.5 and ast-branch

2005-10-05 Thread Guido van Rossum
On 10/5/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 Anyway, the question is: What do we want to do with ast-branch? Finish
 bringing it up to Python 2.4 equivalence, make it the HEAD, and only then
 implement the approved PEP's (308, 342, 343) that affect the compiler? Or
 implement the approved PEP's on the HEAD, and move the goalposts for
 ast-branch to include those features as well?

 I believe the latter is the safe option in terms of making sure 2.5 is a solid
 release, but doing it that way suggests to me that the ast compiler would need
 to be held over until 2.6, which would be somewhat unfortunate.

 Given that I don't particularly like that answer, I'd love for someone to
 convince me I'm wrong ;)

Given the total lack of response, I have a different suggestion. Let's
*abandon* the AST-branch. We're fooling ourselves believing that we
can ever switch to that branch, no matter how theoretically better it
is.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 343 and __with__

2005-10-06 Thread Guido van Rossum
Just a quick note. Nick convinced me that adding __with__ (without
losing __enter__ and __exit__!) is a good thing, especially for the
decimal context manager. He's got a complete proposal for PEP changes
which he'll post here. After a brief feedback period I'll approve his
changes and he'll check them into the PEP.

My apologies to Jason for missing the point he was making; thanks to
Nick for getting it and turning it into a productive change proposal.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Lexical analysis and NEWLINE tokens

2005-10-06 Thread Guido van Rossum
I think it is a relic from the distant past, when the lexer did
generate NEWLINE for every blank line. I think the only case where you
can still get a NEWLINE by itself is in interactive mode. This code is
extremely convoluted and may be buggy in end cases; this could explain
why you get a continuation prompt after entering a comment in
interactive mode...
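[This distinction is visible today in the tokenize module, which emits NL (rather than NEWLINE) for blank and comment-only lines; a quick sketch on a modern Python:]

```python
import io
import tokenize

src = "x = 1\n\n# comment only\ny = 2\n"
toks = [tokenize.tok_name[tok.type]
        for tok in tokenize.generate_tokens(io.StringIO(src).readline)]

# Only the two logical lines of code end in a NEWLINE token; the blank
# line and the comment-only line produce NL tokens instead.
assert toks.count("NEWLINE") == 2
assert toks.count("NL") == 2
```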

--Guido

On 10/6/05, Michael Hudson [EMAIL PROTECTED] wrote:
 Matthew F. Barnes [EMAIL PROTECTED] writes:

  I posted this question to python-help, but I think I have a better chance
  of getting the answer here.
 
  I'm looking for clarification on when NEWLINE tokens are generated during
  lexical analysis of Python source code.  In particular, I'm confused about
  some of the top-level components in Python's grammar (file_input,
  interactive_input, and eval_input).
 
  Section 2.1.7 of the reference manual states that blank lines (lines
  consisting only of whitespace and possibly a comment) do not generate
  NEWLINE tokens.  This is supported by the definition of a suite, which
  does not allow for standalone or consecutive NEWLINE tokens.
 
  suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT

 I don't have the spare brain cells to think about your real problem
 (sorry) but something to be aware of is that the pseudo EBNF of the
 reference manual is purely descriptive -- it is not actually used in
 the parsing of Python code at all.  Among other things this means it
 could well just be wrong :/

 The real grammar is Grammar/Grammar in the source distribution.

 Cheers,
 mwh

 --
   The Internet is full.  Go away.
   -- http://www.disobey.com/devilshat/ds011101.htm



--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Python 2.5 and ast-branch

2005-10-06 Thread Guido van Rossum
On 10/6/05, Neil Schemenauer [EMAIL PROTECTED] wrote:
 Nick Coghlan [EMAIL PROTECTED] wrote:
  If we kill the branch for now, then anyone that wants to bring up the idea
  again can write a PEP first

 I still have some (very) small hope that it can be finished.  If we
 don't get it done soon then I fear that it will never happen.  I had
 hoped that a SoC student would pick up the task or someone would ask
 for a grant from the PSF.  Oh well.

  A strategy that may work out better is [...]

 Another thought I've had recently is that most of the complexity
 seems to be in the CST to AST translator.  Perhaps having a parser
 that provided a nicer CST might help.

Dream on, Neil... Adding more work won't make it more likely to happen.

The only alternative to abandoning it that I see is to merge it back
into main NOW, using the time that remains us until the 2.5 release to
make it robust. That way, everybody can help out (and it may motivate
more people).

Even if this is a temporary regression (e.g. PEP 342), it might be
worth it -- but only if there are at least two people committed to
help out quickly when there are problems.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Python 2.5 and ast-branch

2005-10-06 Thread Guido van Rossum
[Kurt]
  Unless I'm missing something, we would need to merge HEAD to the AST
  branch once more to pick up the changes in MAIN since the last merge,
  and then make sure everything in the AST branch is passing the test
  suite.  Otherwise we risk having MAIN broken for awhile following a
  merge.

[Raymond]
 IMO, merging to the head is a somewhat dangerous strategy that doesn't
 have any benefits.  Whether done on the head or in the branch, the same
 amount of work needs to be done.

 If the stability of the head is disrupted, it may impede other
 maintenance efforts because it is harder to test bug fixes when the test
 suites are not passing.

Well, at some point it will HAVE to be merged into the head. The
longer we wait the more painful it will be. If we suffer a week of
instability now, I think that's acceptable, as long as all developers
are suitably alerted, and as long as the AST team works towards
resolving the issues ASAP.

I happen to agree with Kurt that we should first merge the head into
the branch; then the AST team can work on making sure the entire test
suite passes; then they can merge back into the head.

BUT this should only be done with a serious commitment from the AST
team (I think Neil and Jeremy are offering this -- I just don't know
how much time they will have available, realistically).

My main point is, we should EITHER abandon the AST branch, OR force a
quick resolution. I'm willing to suffer a week of instability in head
now, or in a week or two -- but I'm not willing to wait again.

Let's draw a line in the sand. The AST team (which includes whoever
will help) has up to three weeks to get the AST branch into a position
where it passes all the current unit tests merged in from the head.
Then they merge it into the head after which we can accept at most a
week of instability in the head. After that the AST team must remain
available to resolve remaining issues quickly.

How does this sound to the non-AST-branch developers who have to
suffer the inevitable post-merge instability? I think it's now or
never -- waiting longer isn't going to make this thing easier (not
with several more language changes approved: with-statement, extended
import, what else...)

What does the AST team think?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Sourceforge CVS access

2005-10-07 Thread Guido van Rossum
I will, if you tell me your sourceforge username.

On 10/7/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 Could one of the Sourceforge powers-that-be grant me check in access so I can
 update PEP 343 directly?


--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Extending tuple unpacking

2005-10-07 Thread Guido van Rossum
On 10/7/05, Gustavo Niemeyer [EMAIL PROTECTED] wrote:
 Not sure if this has been proposed before, but one thing
 I occasionally miss regarding tuple unpack is being able
 to do:

   first, second, *rest = something

 Also in for loops:

   for first, second, *rest in iterator:
   pass

 This seems to match the current meaning for starred
 variables in other contexts.

Someone should really write up a PEP -- this was just discussed a week
or two ago.

I personally think this is adequately handled by writing:

  (first, second), rest = something[:2], something[2:]

I believe that this wish is an example of hypergeneralization -- an
incorrect generalization based on a misunderstanding of the underlying
principle.

Argument lists are not tuples [*] and features of argument lists
should not be confused with features of tuple unpackings.

[*] Proof: f(1) is equivalent to f(1,) even though (1) is an int but
(1,) is a tuple.
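[Guido's suggested spelling and the footnote's "proof" can both be written out as a runnable snippet:]

```python
something = (1, 2, 3, 4, 5)

# The slicing spelling suggested above:
(first, second), rest = something[:2], something[2:]
assert (first, second) == (1, 2)
assert rest == (3, 4, 5)

# The footnote's proof that argument lists are not tuples:
# f(1) is equivalent to f(1,), yet (1) is an int while (1,) is a tuple.
def f(*args):
    return args

assert f(1) == f(1,) == (1,)
assert (1) == 1 and isinstance((1,), tuple)
```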

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] test_cmd_line failure on Kubuntu 5.10 with GCC 4.0

2005-10-08 Thread Guido van Rossum
On 10/8/05, Neal Norwitz [EMAIL PROTECTED] wrote:
 On 10/8/05, Nick Coghlan [EMAIL PROTECTED] wrote:
  Hye-Shik Chang wrote:
   On 10/8/05, Nick Coghlan [EMAIL PROTECTED] wrote:
  
  Anyone else seeing any problems with test_cmd_line? I've got a few 
  failures in
  test_cmd_line on Kubuntu 5.10 with GCC 4.0 relating to a missing \n 
  line ending.
 
  If I explicitly write Ctrl-D to the subprocess's stdin for the tests which
  open the interpreter, then the tests pass. So it looks like some sort of
  buffering problem with standard out not getting flushed before the test 
  tries
  to read the data.

 Sorry, that's a new test I added recently.  It works for me on gentoo.
  The test is very simple and shouldn't be hard to fix.  Can you fix
 it?  I assume Guido (or someone) added you as a developer.  If not, if
 you can give me enough info, I can try to fix it.

I guess Neil's test was expecting at least one line of output from
python at all times, but on most systems it is completely silent when
the input is empty. I fixed the test (also in 2.4) to allow empty
input as well as input ending in \n.
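[A sketch of the behaviour the fix relies on, using the modern subprocess API rather than the actual test_cmd_line code: with empty stdin the interpreter exits cleanly and is completely silent.]

```python
import subprocess
import sys

# Run the interpreter reading a script from empty stdin: it should exit
# cleanly and, per the fixed test's expectation, produce either no
# output at all or output ending in a newline.
proc = subprocess.run([sys.executable, "-E", "-"],
                      input=b"", capture_output=True)
assert proc.returncode == 0
assert proc.stdout == b"" or proc.stdout.endswith(b"\n")
```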

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 342 suggestion: start(), __call__() and unwind_call() methods

2005-10-08 Thread Guido van Rossum
On 10/7/05, Piet Delport [EMAIL PROTECTED] wrote:
 Earlier this week, i proposed legalizing "return Result" inside a generator,
 and making it act like "raise StopIteration(Result)", for exactly this
 reason.

 IMHO, this is an elegant and straightforward extension of the current
 semantics of returns inside generators, and is the final step toward making
 generator-based concurrent tasks[1] look just like the equivalent synchronous
 code (with the only difference, more-or-less, being the need for appropriate
 yield keywords, and a task runner/scheduler loop).

 This change would make a huge difference to the practical usability of these
 generator-based tasks.  I think they're much less likely to catch on if you
 have to write "raise StopIteration(Result)" (or "_return(Result)") all the
 time.

 [1] a.k.a. coroutines, which i don't think is an accurate name, anymore.

Before we do this I'd like to see you show some programming examples
that show how this would be used. I'm having a hard time understanding
where you would need this but I realize I haven't used this paradigm
enough to have a good feel for it, so I'm open for examples.

At least this makes more sense than mapping return X into yield X;
return as someone previously proposed. :)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 342 suggestion: start(), __call__() and unwind_call() methods

2005-10-08 Thread Guido van Rossum
 Guido van Rossum wrote:
  Before we do this I'd like to see you show some programming examples
  that show how this would be used. I'm having a hard time understanding
  where you would need this but I realize I haven't used this paradigm
  enough to have a good feel for it, so I'm open for examples.

On 10/8/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 It would be handy when the generators are being used as true pseudothreads
 with a scheduler like the one I posted earlier in this discussion. It allows
 these pseudothreads to call each other by yielding the call as a lambda or
 partial function application that produces a zero-argument callable. The
 called pseudothread can then yield as many times as it wants (either making
 its own calls, or just being a well-behaved member of a cooperatively MT
 environment), and then finally returning the value that the original caller
 requested.

 Using 'return' for this is actually a nice idea, and if we ever do make it
 legal to use 'return' in generators, these are the semantics it should have.

 However, I'm not sure its something we should be adding *right now* as part of
 PEP 342 - writing raise StopIteration and raise StopIteration(result), and
 saying that a generator includes an implied raise StopIteration after its
 last line of code really isn't that difficult to understand, and is completely
 explicit about what is going on.

 My basic concern is that I think replacing raise StopIteration with return
 and raise StopIteration(EXPR) with return EXPR would actually make such
 code easier to write at the expense of making it harder to *read*, because the
 fact that an exception is being raised is obscured. Consider the following two
 code snippets:

def function():
    try:
        return
    except StopIteration:
        print "We never get here."

def generator():
    yield
    try:
        return
    except StopIteration:
        print "But we would get here!"

Right.  Plus, Piet also remarked that the value is silently ignored
when the generator is used in a for-loop. Since that's likely to be
the majority of generators, I'd worry that accepting return X would
increase the occurrence of bugs caused by someone habitually writing
return X where they meant yield X. (Assuming there's another yield
in the generator, otherwise it wouldn't be a generator and the error
would reveal itself very differently.)

 So, instead of having return automatically map to raise StopIteration
 inside generators, I'd like to suggest we keep it illegal to use return
 inside a generator, and instead add a new attribute result to StopIteration
 instances such that the following three conditions hold:

   # Result is None if there is no argument to StopIteration
   try:
       raise StopIteration
   except StopIteration, ex:
       assert ex.result is None

   # Result is the argument if there is exactly one argument
   try:
       raise StopIteration(expr)
   except StopIteration, ex:
       assert ex.result == ex.args[0]

   # Result is the argument tuple if there are multiple arguments
   try:
       raise StopIteration(expr1, expr2)
   except StopIteration, ex:
       assert ex.result == ex.args

 This precisely parallels the behaviour of return statements:

     return                 # Call returns None
     return expr            # Call returns expr
     return expr1, expr2    # Call returns (expr1, expr2)

This seems a bit overdesigned; I'd expect that the trampoline
scheduler could easily enough pick the args tuple apart to get the
same effect without adding another attribute unique to StopIteration.
I'd like to keep StopIteration really lightweight so it doesn't slow
down its use in other places.
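[As a historical footnote: Python 3.3's PEP 380 eventually adopted something close to Nick's idea, allowing "return EXPR" in generators and carrying the value on a StopIteration attribute (named value, not result). A sketch on a modern Python:]

```python
def background_task():
    yield "step 1"
    yield "step 2"
    return 42   # legal in generators since Python 3.3 (PEP 380)

g = background_task()
assert next(g) == "step 1"
assert next(g) == "step 2"
try:
    next(g)
except StopIteration as ex:
    result = ex.value   # the return value rides on the exception
assert result == 42
```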

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed changes to PEP 343

2005-10-09 Thread Guido van Rossum
On 10/9/05, Anders J. Munch [EMAIL PROTECTED] wrote:
 Nick Coghlan wrote:
  Anders J. Munch wrote:
  
  Note that __with__ and __enter__ could be combined into one with no
  loss of functionality:
  
  abc,VAR = (EXPR).__with__()
  
  
  They can't be combined, because they're invoked on different objects.
  

 Sure they can.  The combined method first does what __with__ would
 have done to create abc, and then does whatever abc.__enter__ would
 have done.  Since the type of 'abc' is always known to the author of
 __with__, this is trivial.

I'm sure it can be done, but I find this ugly API design. While I'm
not keen on complicating the API, the decimal context example has
convinced me that it's necessary. The separation into __with__ which
asks EXPR for a context manager and __enter__ / __exit__ which handle
try/finally feels right. An API returning a tuple is asking for bugs.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] New PEP 342 suggestion: result() and allow return with arguments in generators (was Re: PEP 342 suggestion: start(), __call__() and unwind_call() methods)

2005-10-09 Thread Guido van Rossum
On 10/9/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 Sometimes I miss the obvious. There's a *much*, *much* better place to store
 the return value of a generator than on the StopIteration exception that it
 raises when it finishes. Just save the return value in the *generator*.

 And then provide a method on generators that is the functional equivalent of:

  def result(self):
      # Finish the generator if it isn't finished already
      for step in self:
          pass
      return self._result  # Return the result saved when the block finished

 It doesn't matter that a for loop swallows the StopIteration exception any
 more, because the return value is retrieved directly from the generator.

Actually, I don't like this at all. It harks back to earlier proposals
where state was stored on the generator (e.g. PEP 288).

 I also like that this interface could still be used even if the work of
 getting the result is actually farmed off to a separate thread or process
 behind the scenes.

That seems an odd use case for generators, better addressed by
creating an explicit helper object when the need exists. I bet that
object will need to exist anyway to hold other information related to
the exchange of information between threads (like a lock or a Queue).

Looking at your example, I have to say that I find the trampoline
example from PEP 342 really hard to understand. It took me several
days to get it after Phillip first put it in the PEP, and that was
after having reconstructed the same functionality independently. (I
have plans to replace or augment it with a different set of examples,
but haven't gotten the time. Old story...) I don't think that
something like that ought to be motivating generator extensions. I
also think that using a thread for async I/O is the wrong approach --
if you wanted to use threads you should be using threads and you
wouldn't be dealing with generators. There's a solution that uses
select() which can handle as many sockets as you want without threads
and without the clumsy polling (is it ready yet? is it ready yet? is
it ready yet?).

I urge you to leave well enough alone. There's room for extensions
after people have built real systems with the raw material provided by
PEP 342 and 343.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] defaultproperty (was: Re: RFC: readproperty)

2005-10-09 Thread Guido van Rossum
On 10/9/05, Jim Fulton [EMAIL PROTECTED] wrote:
 Based on the discussion, I think I'd go with defaultproperty.

Great.

 Questions:

 - Should this be in builtins, alongside property, or in
a library module? (Oleg suggested propertytools.)

 - Do we need a short PEP?

I think so. From the responses I'd say there's at most lukewarm
interest (including from me). You might also want to drop it and just
add it to your personal (or Zope's) library.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Extending tuple unpacking

2005-10-10 Thread Guido van Rossum
On 10/10/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 It also works for situations where the first n items are mandatory, the rest
 are optional. This usage was brought up in the context of a basic line
 interpreter:

cmd, *args = input.split()

That's a really poor example though.  You really don't want a line
interpreter to bomb if the line is empty!
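[Guido's objection is easy to demonstrate on a Python that supports the starred form (3.0+); the input line here is hypothetical:]

```python
line = ""   # an empty line typed at the hypothetical interpreter

failed = False
try:
    cmd, *args = line.split()   # "".split() == [], nothing to bind to cmd
except ValueError:
    failed = True
assert failed
```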

 Another usage is to have a Python function which doesn't support keywords for
 its positional arguments (to avoid namespace clashes in the keyword dict), but
 can still unpack the mandatory arguments easily:

def func(*args, **kwds):
arg1, arg2, *rest = args # Unpack the positional arguments

Again, I'd be more comfortable if this was preceded by a check for
len(args) >= 2.

I should add that I'm just -0 on this. I think proponents ought to
find better motivating examples that aren't made-up.

Perhaps Raymond's requirement would help -- find places in the
standard library where this would make code more
readable/maintainable.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] New PEP 342 suggestion: result() and allow return with arguments in generators (was Re: PEP 342 suggestion: start(), __call__() and unwind_call() methods)

2005-10-10 Thread Guido van Rossum
On 10/10/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 I'm starting to think we want to let PEP 342 bake for at least one release
 cycle before deciding what (if any) additional behaviour should be added to
 generators.

Yes please!

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 3000 and exec

2005-10-10 Thread Guido van Rossum
My idea was to make the compiler smarter so that it would recognize
exec() even if it was just a function.

Another idea might be to change the exec() spec so that you are
required to pass in a namespace (and you can't use locals() either!).
Then the whole point becomes moot.
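[Python 3 did turn exec into a function; a sketch, on a modern Python, of the explicit-namespace style suggested above, which makes the LOAD_NAME/LOAD_GLOBAL question moot:]

```python
# Names defined by the executed code land in a dict we control, so the
# compiler never needs to treat the enclosing function specially.
ns = {}
exec("import string\nq = string.ascii_lowercase[:3]", ns)
assert ns["q"] == "abc"

def g():
    local_ns = {}
    exec("q = 7", {}, local_ns)   # cannot rebind g's actual locals
    return local_ns["q"]

assert g() == 7
```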

On 10/10/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  This might be minor-- but I didn't see anyone mentioning it so far.
  If `exec` functionality is to be provided, then I think it still
  should be a keyword for the parser to know; currently bytecode
  generation is affected if `exec` is present.  Even if that changes
  for Python 3k (we don't know yet), the paragraph for exec should be
  annotated with a note about this issue.

 Brett But the PEP says that 'exec' will become a function and thus no
 Brett longer become a built-in, so changing the grammar is not needed.

 I don't think that was the OP's point though it might not have been terribly
 clear.  Today, the presence of the exec statement in a function changes how
 non-local load instructions are generated.  Consider f and g with their
 dis.dis output:

  def f(a):
 ...   exec "import %s" % a
 ...   print q
 ...
  def g(a):
 ...   __import__(a)
 ...   print q
 ...
  dis.dis(f)
   2   0 LOAD_CONST   1 ('import %s')
   3 LOAD_FAST0 (a)
   6 BINARY_MODULO
   7 LOAD_CONST   0 (None)
  10 DUP_TOP
  11 EXEC_STMT

   3  12 LOAD_NAME1 (q)
  15 PRINT_ITEM
  16 PRINT_NEWLINE
  17 LOAD_CONST   0 (None)
  20 RETURN_VALUE
  dis.dis(g)
   2   0 LOAD_GLOBAL  0 (__import__)
   3 LOAD_FAST0 (a)
   6 CALL_FUNCTION1
   9 POP_TOP

   3  10 LOAD_GLOBAL  2 (q)
  13 PRINT_ITEM
  14 PRINT_NEWLINE
  15 LOAD_CONST   0 (None)
  18 RETURN_VALUE

 If the exec statement is replaced by a function, how will the bytecode
 generator know that q should be looked up using LOAD_NAME instead of
 LOAD_GLOBAL?  Maybe it's a non-issue, but even if so, a note to that affect
 on the wiki page might be worthwhile.

 Skip



--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Pythonic concurrency

2005-10-10 Thread Guido van Rossum
On 10/10/05, Greg Ewing [EMAIL PROTECTED] wrote:
 I'm wondering whether Python threads should be
 non-preemptive by default. Preemptive threading is
 massive overkill for many applications. You don't
 need it, for example, if you just want to use threads
 to structure your program, overlap processing with I/O,
 etc.

I recall using a non-preemptive system in the past; in Amoeba, to be precise.

Initially it worked great.

But as we added more powerful APIs to the library, we started to run
into bugs that were just as if you had preemptive scheduling: it
wouldn't always be predictable whether a call into the library would
need to do I/O or not (it might use some sort of cache) so it would
sometimes allow other threads to run and sometimes not. Or a change to
the library would change this behavior (making a call that didn't use
to block into sometimes-blocking).

Given the tendency of Python developers to build layers of
abstractions I don't think it will help much.

 Preemptive threading would still be there as an option
 to turn on when you really need it.

 Or perhaps there could be a priority system, with a
 thread only able to be preempted by a thread of higher
 priority. If you ignore priorities, all your threads
 default to the same priority, so there's no preemption.
 If you want a thread that can preempt others, you give
 it a higher priority.

If you ask me, priorities are worse than the problem they are trying to solve.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Extending tuple unpacking

2005-10-10 Thread Guido van Rossum
On 10/10/05, Ron Adam [EMAIL PROTECTED] wrote:
 The problem is the '*' means different things depending on where it's
 located.  In a function def, it means to group or to pack, but from the
 calling end it's used to unpack.  I don't expect it to change as it's
 been a part of Python for a long time and as long as it's only used with
 argument passing it's not too difficult to keep straight.

 My concern is if it's used outside of functions, then on the left hand
 side of assignments, it will be used to pack, but if used on the right
 hand side it will be to unpack.  And if it becomes as common place as I
 think it will, it will present confusing uses and or situations where
 you may have to think, oh yeah, it's umm... unpacking here and umm...
 packing there, but multiplying there.  The point is it could be a
 stumbling block, especially for new Python users.  So I think a certain
 amount of caution should be in order on this item.  At least check that
 it doesn't cause confusing situations.

This particular concern, I believe, is a fallacy. If you squint the
right way, using *rest for both packing and unpacking is totally
logical. If

a, b, *rest = (1, 2, 3, 4, 5)

puts 1 into a, 2 into b, and (3, 4, 5) into rest, then it's totally
logical and symmetrical  if after that

x = a, b, *rest

puts (1, 2, 3, 4, 5) into x.

BTW, what should

[a, b, *rest] = (1, 2, 3, 4, 5)

do? Should it set rest to (3, 4, 5) or to [3, 4, 5]? Suppose the
latter. Then should we allow

[*rest] = x

as alternative syntax for

rest = list(x)

? And then perhaps

*rest = x

should mean

rest = tuple(x)

Or should that be disallowed and would we have to write

*rest, = x

analogous to singleton tuples?

There certainly is a need for doing the same from the end:

*rest, a, b = (1, 2, 3, 4, 5)

could set rest to (1, 2, 3), a to 4, and b to 5. From there it's a
simple step towards

a, b, *rest, d, e = (1, 2, 3, 4, 5)

meaning

a, b, rest, d, e = (1, 2, (3,), 4, 5)

and so on. Where does it stop?
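[As it happens, Python 3.0 later settled these questions via PEP 3132: the starred target always receives a list, at most one star is allowed per target list, and a bare "*rest = x" remains a syntax error. A sketch on a modern Python:]

```python
a, b, *rest = (1, 2, 3, 4, 5)
assert (a, b, rest) == (1, 2, [3, 4, 5])  # the starred target gets a list

*rest, a, b = (1, 2, 3, 4, 5)
assert (rest, a, b) == ([1, 2, 3], 4, 5)

a, b, *rest, d, e = (1, 2, 3, 4, 5)       # a star in the middle is allowed
assert (a, b, rest, d, e) == (1, 2, [3], 4, 5)

[*rest] = (1, 2)                          # legal; same as rest = list((1, 2))
assert rest == [1, 2]

*rest, = (1, 2)                           # the trailing comma makes this legal;
assert rest == [1, 2]                     # a bare "*rest = x" is a SyntaxError
```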

BTW, and quite unrelated, I've always felt uncomfortable that you have to write

f(a, b, foo=1, bar=2, *args, **kwds)

I've always wanted to write that as

f(a, b, *args, foo=1, bar=2, **kwds)

but the current grammar doesn't allow it.
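[For what it's worth, later versions of Python did extend the grammar to allow this ordering, both in calls (keywords after *args) and in definitions (keyword-only parameters, PEP 3102). A sketch that runs on Python 3:]

```python
def f(a, b, *args, foo=None, bar=None, **kwds):
    # foo and bar are keyword-only: they sit after *args in the def.
    return (a, b, args, foo, bar, kwds)

# Keyword arguments may follow *args at the call site as well:
result = f(1, 2, *(3, 4), foo=1, bar=2, baz=3)
assert result == (1, 2, (3, 4), 1, 2, {"baz": 3})
```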

Still -0 on the whole thing,

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Making Queue.Queue easier to use

2005-10-11 Thread Guido van Rossum
On 10/11/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 The multi-processing discussion reminded me that I have a few problems I run
 into every time I try to use Queue objects.

 My first problem is finding it:

 Py> from threading import Queue # Nope
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 ImportError: cannot import name Queue
 Py> from Queue import Queue # Ah, there it is

I don't think that's a reason to move it.

 Py> from sys import Queue
 ImportError: cannot import name Queue
 Py> from os import Queue
 ImportError: cannot import name Queue
 Py> # Well where the heck is it?!

 What do people think of the idea of adding an alias to Queue into the
 threading module so that:
 a) the first line above works; and

I see no need. Code that *doesn't* need Queue but does use threading
shouldn't have to pay for loading Queue.py.

 b) Queue can be documented with all of the other threading primitives,
rather than being off somewhere else in its own top-level section.

Do top-level sections have to limit themselves to a single module?

Even if they do, I think it's fine to plant a prominent link to the
Queue module. You can't really expect people to learn how to use
threads wisely from reading the library reference anyway.

 My second problem is with the current signatures of the put() and get()
 methods. Specifically, the following code blocks forever instead of raising an
 Empty exception after 500 milliseconds as one might expect:
from Queue import Queue
x = Queue()
x.get(0.5)

I'm not sure if I have much sympathy with a bug due to refusing to
read the docs... :)

 I assume the current signature is there for backward compatibility with the
 original version that didn't support timeouts (considering the difficulty of
  telling the difference between "x.get(1)" and "True = 1; x.get(True)" from
 inside the get() method)

Huh? What a bizarre idea. Why would you do that? I guess I don't
understand where you're coming from.

 However, the need to write x.get(True, 0.5) seems seriously redundant, given
 that a single paramater can actually handle all the options (as is currently
 the case with Condition.wait()).

So write x.get(timeout=0.5). That's clear and unambiguous.
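
For illustration, under the Python 3 module name (queue), the keyword spelling behaves the way Nick expected the positional one to; a minimal sketch:

```python
import queue

q = queue.Queue()
try:
    q.get(timeout=0.1)   # block for at most 0.1 seconds
except queue.Empty:
    print("timed out")   # raised once the timeout elapses on an empty queue
```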

 The put_nowait and get_nowait functions are fine, because they serve a
 useful documentation purpose at the calling point (particularly given the
 current clumsy timeout signature).

 What do people think of the idea of adding put_wait and get_wait methods
 with the signatures:
put_wait(item, [timeout=None])
get_wait([timeout=None])

-1. I'd rather not tweak the current Queue module at all until Python
3000. Then we could force people to use keyword args.

 Optionally, the existing put and get methods could be deprecated, with the
 goal of eventually changing their signature to match the put_wait and get_wait
 methods above.

Apart from trying to guess the API without reading the docs (:-), what
are the use cases for using put/get with a timeout? I have a feeling
it's not that common.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PythonCore\CurrentVersion

2005-10-11 Thread Guido van Rossum
On 10/11/05, Tim Peters [EMAIL PROTECTED] wrote:
 Well, that's in interactive mode, and I see sys.path[0] == '' on both
 Windows and Linux then.  I don't see '' in sys.path on either box in
 batch mode, although I do see the absolutized path to the current
 directory in sys.path in batch mode on Windows but not on Linux -- but
 Mark Hammond says he doesn't see (any form of) the current directory
 in sys.path in batch mode on Windows.

 It's a bit confusing ;-)

How did you test batch mode?

All:

sys.path[0] is *not* defined to be the current directory.

It is defined to be the directory of the script that was used to
invoke python (sys.argv[0], typically). If there is no script, or it
is being read from stdin, the default is ''.
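
The definition is easy to check with a throwaway script; the sketch below (the temp-file plumbing is purely illustrative) runs a script from a different working directory and shows sys.path[0] tracking the script's directory:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A script that reports its own sys.path[0].
    script = os.path.join(d, "show_path0.py")
    with open(script, "w") as f:
        f.write("import sys\nprint(sys.path[0])\n")
    # Run it from the current directory, which is *not* the script's directory.
    out = subprocess.run([sys.executable, script],
                         capture_output=True, text=True)
    reported = out.stdout.strip()
    # sys.path[0] is the script's directory, not the current directory.
    print(os.path.realpath(reported) == os.path.realpath(d))  # True
```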

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PythonCore\CurrentVersion

2005-10-11 Thread Guido van Rossum
On 10/11/05, Tim Peters [EMAIL PROTECTED] wrote:
 [Tim]
  Well, that's in interactive mode, and I see sys.path[0] == '' on both
  Windows and Linux then.  I don't see '' in sys.path on either box in
  batch mode, although I do see the absolutized path to the current
  directory in sys.path in batch mode on Windows but not on Linux -- but
  Mark Hammond says he doesn't see (any form of) the current directory
  in sys.path in batch mode on Windows.
 
  It's a bit confusing ;-)

 [Guido]
  How did you test batch mode?

 I gave full code (it's brief) and screen-scrapes from Windows and
 Linux yesterday:

 http://mail.python.org/pipermail/python-dev/2005-October/057162.html

 By batch mode, I meant invoking

 path_to_python   path_to_python_script.py

 from a shell prompt.

  All:
 
  sys.path[0] is *not* defined to be the current directory.
 
  It is defined to be the directory of the script that was used to
  invoke python (sys.argv[0], typically).

 In my runs, sys.argv[0] was the path to the Python executable, not to
 the script being run.

I tried your experiment but added 'print sys.argv[0]' and didn't see
that. sys.argv[0] is the path to the script.

 The directory of the script being run was
 nevertheless in sys.path[0] on both Windows and Linux.  On Windows,
 but not on Linux, the _current_ directory (the directory I happened to
 be in at the time I invoked Python) was also on sys.path; Mark Hammond
 said it was not when he tried, but he didn't show exactly what he did
 so I'm not sure what he saw.

I see what you see.  The first entry is the script's directory, the
2nd is a nonexistent zip file, the 3rd is the current directory, then
the rest is standard library stuff.

I suppose PC/getpathp.c puts it there, per your post quoted above?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Making Queue.Queue easier to use

2005-10-11 Thread Guido van Rossum
On 10/11/05, Tim Peters [EMAIL PROTECTED] wrote:
 Guido understands use cases for blocking and non-blocking put/get, and
 Queue always supported those possibilities.  The timeout argument got
 added later, and it's not really clear _why_ it was added.  timeout=0
 isn't a sane use case (because the same effect can be gotten with
 non-blocking put/get).

In the socket world, a similar bifurcation of the API has happened
(also under my supervision, even though the idea and prototype code
were contributed by others). The API there is very different because
the blocking or timeout is an attribute of the socket, not passed in
to every call.

But one lesson we can learn from sockets (or perhaps the reason why
people kept asking for timeout=0 to be fixed :) is that timeout=0 is
just a different way to spell blocking=False. The socket module makes
sure that the socket ends up in exactly the same state no matter which
API is used; and in fact the setblocking() API is redundant.
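
In the Python 3 socket module the equivalence is observable directly; a minimal sketch:

```python
import socket

s = socket.socket()
s.settimeout(0.0)        # timeout=0 ...
print(s.getblocking())   # False: same state as setblocking(False)

s.setblocking(False)     # ... is just another spelling of blocking=False
print(s.gettimeout())    # 0.0: the module keeps the two in sync

s.setblocking(True)
print(s.gettimeout())    # None: blocking mode means "no timeout"
s.close()
```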

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Autoloading? (Making Queue.Queue easier to use)

2005-10-12 Thread Guido van Rossum
On 10/12/05, Michael Chermside [EMAIL PROTECTED] wrote:
 I'm not familiar with the clever trick Greg is proposing, but I
 do agree that _IF_ everything else were equal, then Queue seems
 to belong in the threading module. My biggest reason is that I
 think anyone who is new to threading probably shouldn't use any
 communication mechanism OTHER than Queue or something similar
 which has been carefully designed by someone knowledgeable.

I *still* disagree. At some level, Queue is just an application of
threading, while the threading module provides the basic API (never
mind that there's an even more basic API, the thread module -- it's
too low-level to consider and we actively recommend against it, at
least I hope we do).

While at this point there may be no other applications of threading
in the standard library, that may not remain the case; it's quite
possible that some of the discussions of threading APIs will eventually
lead to a PEP proposing a different threading paradigm built on top of
the threading module.

 I'm using the word application loosely here because I realize one
person's application is another's primitive operation. But I object to
the idea that just because A and B are often used together or A is
recommended for programs using B that A and B should live in the same
module. We don't put urllib and httplib in the socket module either!

Now, if we had a package structure, I would sure like to see threading
and Queue end up as neighbors in the same package. But I don't think
it's right to package them all up in the same module.

(Not to say that autoloading is a bad idea; I'm -0 on it for myself,
but I can see use cases; but it doesn't change my mind on whether
Queue should become threading.Queue. I guess I didn't articulate my
reasoning for being against that well previously and tried to hide
behind the load time argument.)

BTW, Queue.Queue violates a recent module naming standard; it is now
considered bad style to name the class and the module the same.
Modules and packages should have short all-lowercase names, classes
should be CapWords. Even the same but different case is bad style.
(I'd suggest queueing.Queue except nobody can type that right. :)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Autoloading? (Making Queue.Queue easier to use)

2005-10-12 Thread Guido van Rossum
On 10/12/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Is the Queue class very useful outside a multithreaded context?

No. It was designed specifically for inter-thread communication.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Autoloading? (Making Queue.Queue easier to use)

2005-10-12 Thread Guido van Rossum
On 10/12/05, Aahz [EMAIL PROTECTED] wrote:
  (Python 3.0
 should deprecate ``thread`` by renaming it to ``_thread``).

+1. (We could even start doing this before 3.0.)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Autoloading? (Making Queue.Queue easier to use)

2005-10-12 Thread Guido van Rossum
On 10/12/05, Aahz [EMAIL PROTECTED] wrote:
 Note carefully the deprecation in quotes.  It's not going to be
 literally deprecated, only renamed, similar to the way _socket and
 socket work together.  We could also rename to _threading, but I prefer
 the simpler change of only a prepended underscore.

Could you specify exactly what you have in mind? How would backwards
compatibility be maintained in 2.x?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Autoloading? (Making Queue.Queue easier to use)

2005-10-13 Thread Guido van Rossum
On 10/13/05, Fredrik Lundh [EMAIL PROTECTED] wrote:
 Guido van Rossum wrote:

  BTW, Queue.Queue violates a recent module naming standard; it is now
  considered bad style to name the class and the module the same.
  Modules and packages should have short all-lowercase names, classes
  should be CapWords. Even the same but different case is bad style.

 unfortunately, this standard seem to result in generic spamtools modules
 into which people throw everything that's even remotely related to spam,
 followed by complaints about bloat and performance from users, followed by
 various more or less stupid attempts to implement lazy loading of hidden
 internal modules, followed by more complaints from users who no longer have
 a clear view of what's really going on in there...

 I think I'll stick to the old standard for a few more years...

Yeah, until you've learned to use packages. :(

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Threading and synchronization primitives

2005-10-13 Thread Guido van Rossum
On 10/13/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Greg All right then, how about putting it in a module called
 Greg threadutils or something like that, which is clearly related to
 Greg threading, but is open for the addition of future thread-related
 Greg features that might arise.

 Then Lock, RLock, Semaphore, etc belong there instead of in threading don't
 they?

No. Locks and semaphores are the lowest-level threading primitives.
They go in the basic module.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] AST branch update

2005-10-14 Thread Guido van Rossum
[Jeremy]
  Neil and I have been working on the AST branch for the last week.
  We're nearly ready to merge the changes to the head.

[Raymond]
 Nice work.

Indeed. I should've threatened to kill the AST branch long ago! :)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Sourceforge CVS access

2005-10-15 Thread Guido van Rossum
Somebody help Nick! This is beyond my SF-fu! :-(

On 10/15/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 Guido van Rossum wrote:
  You're in. Use it wisely. Let me know if there are things you still
  cannot do. (But I'm not used to being SF project admin any more; other
  admins may be able to help you quicker...)

 Almost there - checking out over SSH failed to work. I checked the python SF
 admin page, and I still only have read access to the CVS repository. So if one
 of the SF admins could flip that last switch, that would be great :)

 Regards,
 Nick.

 --
 Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
 ---
  http://boredomandlaziness.blogspot.com




--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Sourceforge CVS access

2005-10-15 Thread Guido van Rossum
With Neal's help I've fixed Nick's permissions. Enjoy, Nick!

On 10/15/05, Guido van Rossum [EMAIL PROTECTED] wrote:
 Somebody help Nick! This is beyond my SF-fu! :-(

 On 10/15/05, Nick Coghlan [EMAIL PROTECTED] wrote:
  Guido van Rossum wrote:
   You're in. Use it wisely. Let me know if there are things you still
   cannot do. (But I'm not used to being SF project admin any more; other
   admins may be able to help you quicker...)
 
  Almost there - checking out over SSH failed to work. I checked the python SF
  admin page, and I still only have read access to the CVS repository. So if 
  one
  of the SF admins could flip that last switch, that would be great :)
 
  Regards,
  Nick.
 
  --
  Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
  ---
   http://boredomandlaziness.blogspot.com
 
 


 --
 --Guido van Rossum (home page: http://www.python.org/~guido/)




--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 343 updated

2005-10-16 Thread Guido van Rossum
On 10/16/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 PEP 343 has been updated on python.org.

 Highlights of the changes:

- changed the name of the PEP to be simply The 'with' Statement
- added __with__() method
- added section on standard terminology (that is, contexts/context 
 managers)
- changed generator context decorator name to context
- Updated Resolved Issues section
- updated decimal.Context() example
- updated closing() example so it works for objects without close methods

 I also added a new Open Issues section with the questions:

- should the decorator be called context or something else, such as the
  old contextmanager? (The PEP currently says context)
- should the decorator be a builtin? (The PEP currently says yes)
- should the decorator be applied automatically to generators used to write
  __with__ methods? (The PEP currently says yes)

I hope you reverted the status to Proposed...

On the latter: I think it shouldn't; I don't like this kind of magic.
I'll have to read it before I can comment on the rest.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Definining properties - a use case for class decorators?

2005-10-16 Thread Guido van Rossum
On 10/16/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 On and off, I've been looking for an elegant way to handle properties using
 decorators.

 It hasn't really worked, because decorators are inherently single function,
 and properties span multiple functions.

 However, it occurred to me that Python already contains a construct for
 grouping multiple related functions together: classes.

Nick, and everybody else trying to find a solution for this
problem, please don't. There's nothing wrong with having the three
accessor methods explicitly in the namespace, it's clear, and probably
less typing (and certainly less indenting!). Just write this:

class C:
def getFoo(self): ...
    def setFoo(self, value): ...
foo = property(getFoo, setFoo)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Definining properties - a use case for class decorators?

2005-10-16 Thread Guido van Rossum
On 10/16/05, Calvin Spealman [EMAIL PROTECTED] wrote:
 On 10/16/05, Guido van Rossum [EMAIL PROTECTED] wrote:
  Nick, and everybody else trying to find a solution for this
  problem, please don't. There's nothing wrong with having the three
  accessor methods explicitly in the namespace, it's clear, and probably
  less typing (and certainly less indenting!). Just write this:
 
  class C:
  def getFoo(self): ...
  def setFoo(self): ...
  foo = property(getFoo, setFoo)

 Does this necessisarily mean a 'no' still for class decorators, or do
 you just not like this particular use case for them. Or, are you
 perhaps against this proposal due to its use of nested classes?

I'm still -0 on class decorators pending good use cases. I'm -1 on
using a class decorator (if we were to introduce them) for get/set
properties; it doesn't save writing and it doesn't save reading.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Definining properties - a use case for class decorators?

2005-10-16 Thread Guido van Rossum
[Guido]
  Nick, and everybody else trying to find a solution for this
  problem, please don't.

[Greg Ewing]
 Denying that there's a problem isn't going to make it
 go away. Many people, including me, have the feeling that
 the standard way of defining properties at the moment leaves
 something to be desired, for all the same reasons that have
 led to @-decorators.

My challenge to many people, including you, is to make that feeling
more concrete. Sometimes when you have such a feeling it just means
you haven't drunk the kool-aid yet. :)

With decorators there was a concrete issue: the modifier trailed after
the function body, in a real sense hiding from the reader. I don't
see such an issue with properties. Certainly the proposed solutions so
far are worse than the problem.

 However, I agree that trying to keep the accessor method
 names out of the class namespace isn't necessary, and may
 not even be desirable. The way I'm defining properties in
 PyGUI at the moment looks like this:

class C:

   foo = overridable_property('foo', 'The foo property')

  def get_foo(self):
...

  def set_foo(self, x):
...

 This has the advantage that the accessor methods can be
 overridden in subclasses with the expected effect. This
 is particularly important in PyGUI, where I have a generic
 class definition which establishes the valid properties
 and their docstrings, and implementation subclasses for
 different platforms which supply the accessor methods.

But since you define the API, are you sure that you need properties at
all? Maybe the users would be happy to write widget.get_foo() and
widget.set_foo(x) instead of widget.foo or widget.foo = x?

 The only wart is the necessity of mentioning the property
 name twice, once on the lhs and once as an argument.
 I haven't thought of a good solution to that, yet.

To which Tim Delaney responded, have a look at my response here:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/408713

I looked at that, and now I believe it's actually *better* to mention
the property name twice, at least compared to Tim's approach. Looking
at that version, I think it's obscuring the semantics; it (ab)uses the
fact that a function's name is accessible through its __name__
attribute. But (unlike Greg's version) it breaks down when one of the
arguments is not a plain function. This makes it brittle in the
context of renaming operations, e.g.:

getx = lambda self: 42
def sety(self, value): self._y = value
setx = sety
x = LateBindingProperty(getx, setx)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Autoloading? (Making Queue.Queue easier to use)

2005-10-17 Thread Guido van Rossum
On 10/17/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 Ah well, another idea runs aground on the harsh rocks of reality.

I should point out that it's intentional that there are very few
similarities between modules and classes. Many attempts have been made
to unify the two, but these never work right, because the module can't
decide whether it behaves like a class or like an instance. Also the
direct access to global variables prevents you from putting any kind of code
in the get-attribute path.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 3000 and exec

2005-10-17 Thread Guido van Rossum
On 10/17/05, Jim Jewett [EMAIL PROTECTED] wrote:
 Guido van Rossum wrote:

  Another idea might be to change the exec() spec so that you are
  required to pass in a namespace (and you can't use locals() either!).
  Then the whole point becomes moot.

 I think of exec as having two major uses:

 (1)  A run-time compiler
 (2)  A way to change the local namespace, based on run-time
 information (such as a config file).

 By turning exec into a function with its own namespace (and
 enforcing a readonly locals()), the second use is eliminated.

 Is this intentional for security/style/efficiency/predictability?

Yes, there are lots of problems with (2); both the human reader and
the compiler often don't quite know what the intended effect is.

 If so, could exec/eval at least

 (1)  Be treatable as nested functions, so that they can *read* the
 current namespace.

There will be a way to get the current namespace (similar to locals()
but without its bugs). But it's probably better to create an empty
namespace and explicitly copy into it only those things that you are
willing to expose to the exec'ed code (or the things it needs).

 (2)  Grow a return value, so that they can more easily pass
 information back to at least a (tuple of) known variable name(s).

You can easily build that functionality yourself; after running
exec(), you just pick certain things out of the namespace that you
expect it to create.
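
A minimal sketch of that pattern with the Python 3 exec() function:

```python
# Build an explicit namespace, expose only what the code needs, run it,
# then pick the result back out of the namespace.
ns = {"radius": 2.0}
exec("import math\narea = math.pi * radius ** 2", ns)
print(round(ns["area"], 3))  # 12.566
```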

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Definining properties - a use case for class decorators?

2005-10-17 Thread Guido van Rossum
[Guido]
  I looked at that, and now I believe it's actually *better* to mention
  the property name twice, at least compared to Tim' s approach.

[Greg Ewing]
 I'm inclined to agree. Passing functions that you're not
 going to use as functions but just use the name of doesn't
 seem right.

 And in my version, it's not *really* redundant, since the
 name is only used to derive the names of the accessor methods.
 It doesn't *have* to be the same as the property name, although
 using anything else could justifiably be regarded as insane...

OK, so how's this for a radical proposal.

Let's change the property built-in so that its arguments can be either
functions or strings (or None). If they are functions or None, it
behaves exactly like it always has.

If an argument is a string, it should be a method name, and the method
is looked up by that name each time the property is used. Because this
is late binding, it can be put before the method definitions, and a
subclass can override the methods. Example:

class C:

foo = property('getFoo', 'setFoo', None, 'the foo property')

def getFoo(self):
return self._foo

def setFoo(self, foo):
self._foo = foo

What do you think?
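
The string form is straightforward to prototype today with a small descriptor. The sketch below uses a hypothetical named_property helper (not the built-in property) to show the late-binding behavior:

```python
# A hypothetical "named_property" descriptor illustrating the proposal:
# accessors are looked up by name on each access, so subclass overrides
# are honoured.
class named_property:
    def __init__(self, fget=None, fset=None, fdel=None, doc=None):
        self.fget, self.fset, self.fdel = fget, fset, fdel
        self.__doc__ = doc

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.fget)()   # late-bound lookup by name

    def __set__(self, obj, value):
        getattr(obj, self.fset)(value)

class C:
    foo = named_property('getFoo', 'setFoo', None, 'the foo property')

    def getFoo(self):
        return self._foo

    def setFoo(self, foo):
        self._foo = foo

class D(C):
    def getFoo(self):                      # the override is picked up late
        return C.getFoo(self) * 2

d = D()
d.foo = 21           # goes through setFoo
print(d.foo)         # 42: D's override of getFoo is used
```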

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Definining properties - a use case for class decorators?

2005-10-17 Thread Guido van Rossum
[Guido]
  Let's change the property built-in so that its arguments can be either
  functions or strings (or None). If they are functions or None, it
  behaves exactly like it always has.
 
  If an argument is a string, it should be a method name, and the method
  is looked up by that name each time the property is used. Because this
  is late binding, it can be put before the method definitions, and a
  subclass can override the methods. Example:
 
  class C:
 
  foo = property('getFoo', 'setFoo', None, 'the foo property')
 
  def getFoo(self):
  return self._foo
 
  def setFoo(self, foo):
  self._foo = foo
 
  What do you think?

[Barry]
 Ick, for all the reasons that strings are less appealing than names.

I usually wholeheartedly agree with that argument, but here I don't
see an alternative.

 IMO, there's not enough advantage in having the property() call before
 the functions than after.

Maybe you didn't see the use case that Greg had in mind? He wants to
be able to override the getter and/or setter in a subclass, without
changing the docstring or having to repeat the property() call. That
requires us to do a late binding lookup based on a string.

Tim Delaney had a different solution where you would pass in the
functions but all it did was use their __name__ attribute to look up
the real function at runtime. The problem with that is that the
__name__ attribute may not be what you expect, as it may not
correspond to the name of the object passed in. Example:

class C:
def getx(self): ...something...
gety = getx
y = property(gety)

class D(C):
def gety(self): ...something else...

Here, the intention is clearly to override the way the property's
value is computed, but it doesn't work right -- gety.__name__ is
'getx', and D doesn't override getx, so D().y calls C.getx() instead
of D.gety().
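
The pitfall is easy to demonstrate: rebinding a function under a new name leaves its __name__ untouched:

```python
class C:
    def getx(self):
        return "C.getx"
    gety = getx            # rebinding does not change __name__

print(C.gety.__name__)     # getx
# So a __name__-based lookup finds "getx", and an override of gety
# in a subclass would never be seen.
```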

If you can think of a solution that looks better than mine, you're a genius.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Definining properties - a use case for class decorators?

2005-10-17 Thread Guido van Rossum
On 10/17/05, Steven Bethard [EMAIL PROTECTED] wrote:
 I'm not sure if you'll like it any better, but I combined Michael
 Urman's suggestion with my late-binding property recipe to get:
 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/442418
 It solves the name-repetition problem and the late-binding problem (I
 believe), at the cost of either adding an extra argument to the
 functions forming the property or confusing the self argument a
 little.

That is probably as good as you can get it *if* you prefer the nested
class over a property call with string arguments. Personally, I find
the nested class inheriting from Property a lot more magical than
the call to property() with string arguments.

I wonder if at some point in the future Python will have to develop a
macro syntax so that you can write

Property foo:
def get(self): return self._foo
...etc...

which would somehow translate into code similar to your recipe.

But until then, I prefer the simplicity of

foo = property('get_foo', 'set_foo')

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Definining properties - a use case for classdecorators?

2005-10-19 Thread Guido van Rossum
On 10/19/05, Fredrik Lundh [EMAIL PROTECTED] wrote:
 letting class inject a slightly magic "self" variable into the class
 namespace?

 class C:

 foo = property(self.getFoo, self.setFoo, None, 'the foo property')

 def getFoo(self):
 return self._foo

 def setFoo(self, foo):
 self._foo = foo

 (figuring out exactly what self should be is left as an exercise etc)

It's magical enough to deserve to be called __self__. But even so:

I've seen proposals like this a few times in other contexts. I may
even have endorsed the idea at one time. The goal is always the same:
forcing delayed evaluation of a getattr operation without using either
a string literal or a lambda. But I find it quite a bit too magical,
for all values of xyzzy, that xyzzy.foo would return a function of one
argument that, when called with an argument x, returns x.foo. Even if
it's easy enough to write the solution (*), that sentence describing
it gives me goosebumps. And the logical consequence, xyzzy.foo(x),
which is an obfuscated way to write x.foo, makes me nervous.

(*) Here's the solution:

class XYZZY(object):
    def __getattr__(self, name):
        return lambda arg: getattr(arg, name)

xyzzy = XYZZY()
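[Editor's note: to make the "goosebumps" sentence concrete, here is the recipe in runnable form; Point is just an illustrative class added for the demonstration.]

```python
class XYZZY(object):
    def __getattr__(self, name):
        # xyzzy.<name> returns a one-argument function: arg -> arg.<name>
        return lambda arg: getattr(arg, name)

xyzzy = XYZZY()

class Point(object):
    def __init__(self, x):
        self.x = x

get_x = xyzzy.x          # a function of one argument...
print(get_x(Point(3)))   # → 3, i.e. the obfuscated Point(3).x
```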

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Pre-PEP: Task-local variables

2005-10-20 Thread Guido van Rossum
Whoa, folks! Can I ask the gentlemen to curb their enthusiasm?

PEP 343 is still (back) on the drawing table, PEP 342 has barely been
implemented (did it survive the AST-branch merge?), and already you
are talking about adding more stuff. Please put on the brakes!

If there's anything this discussion shows me, it's that implicit
contexts are a dangerous concept, and should be treated with much
skepticism.

I would recommend that if you find yourself needing context data while
programming an asynchronous application using generator trampolines
simulating coroutines, you ought to refactor the app so that the
context is explicitly passed along rather than grabbed implicitly.
Zope doesn't *require* you to get the context from a thread-local, and
I presume that SQLObject also has a way to explicitly use a specific
connection (I'm assuming cursors and similar data structures have an
explicit reference to the connection used to create them). Heck, even
Decimal allows you to invoke every operation as a method on a
decimal.Context object!

I'd rather not tie implicit contexts to the with statement,
conceptually. Most uses of the with-statement are purely local (e.g.
with open(fn) as f), or don't apply to coroutines (e.g. with
my_lock). I'd say that with redirect_stdout(f) also doesn't apply
-- we already know it doesn't work in threaded applications, and that
restriction is easily and logically extended to coroutines.

If you're writing a trampoline for an app that needs to modify decimal
contexts, the decimal module already provides the APIs for explicitly
saving and restoring contexts.

I know that somewhere in the proto-PEP Phillip argues that the context
API needs to be made a part of the standard library so that his
trampoline can efficiently swap implicit contexts required by
arbitrary standard and third-party library code. My response to that
is that library code (whether standard or third-party) should not
depend on implicit context unless it assumes it can assume complete
control over the application. (That rules out pretty much everything
except Zope, which is fine with me. :-)

Also, Nick wants the name 'context' for PEP-343 style context
managers. I think it's overloading too much to use the same word for
per-thread or per-coroutine context.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Pre-PEP: Task-local variables

2005-10-20 Thread Guido van Rossum
On 10/20/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 04:04 PM 10/20/2005 -0400, Jeremy Hylton wrote:
 On 10/20/05, Guido van Rossum [EMAIL PROTECTED] wrote:
   Whoa, folks! Can I ask the gentlemen to curb their enthusiasm?
  
   PEP 343 is still (back) on the drawing table, PEP 342 has barely been
   implemented (did it survive the AST-branch merge?), and already you
   are talking about adding more stuff. Please put on the brakes!
 
 Yes.  PEP 342 survived the merge of the AST branch.  I wonder, though,
 if the Grammar for it can be simplified at all.  I haven't read the
 PEP closely, but I found the changes a little hard to follow.  That
 is, why was the grammar changed the way it was -- or how would you
 describe the intent of the changes?

 The intent was to make it so that '(yield optional_expr)' always works, and
 also that   [lvalue =] yield optional_expr works.  If you can find another
 way to hack the grammar so that both of 'em work, it's certainly okay by
 me.  The changes I made were just the simplest things I could figure out to 
 do.

Right.

 I seem to recall that the hard part was the need for 'yield expr,expr' to
 be interpreted as '(yield expr,expr)', not '(yield expr),expr', for
 backward compatibility reasons.

But only at the statement level.

These should be errors IMO:

  foo(yield expr, expr)
  foo(expr, yield expr)
  foo(1 + yield expr)
  x = yield expr, expr
  x = expr, yield expr
  x = 1 + yield expr

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Pre-PEP: Task-local variables

2005-10-20 Thread Guido van Rossum
On 10/20/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 08:57 AM 10/20/2005 -0700, Guido van Rossum wrote:
 Whoa, folks! Can I ask the gentlemen to curb their enthusiasm?
 
 PEP 343 is still (back) on the drawing table, PEP 342 has barely been
 implemented (did it survive the AST-branch merge?), and already you
 are talking about adding more stuff. Please put on the brakes!

 Sorry.  I thought that 343 was just getting a minor tune-up.

Maybe, but the issues on the table are naming issues -- is __with__
the right name, or should it be __context__? Should the decorator be
applied implicitly? Should the decorator be called @context or
@contextmanager?

 In the months
 since the discussion and approval (and implementation; Michael Hudson
 actually had a PEP 343 patch out there),

Which he described previously as a hack and apparently didn't feel
comfortable checking in. At least some of it will have to be redone,
(a) for the AST code, and (b) for the revised PEP.

 I've been doing a lot of thinking
 about how they will be used in applications, and thought that it would be a
 good idea to promote people using task-specific variables in place of
 globals or thread-locals.

That's clear, yes. :-)

I still find it unlikely that a lot of people will be using trampoline
frameworks. You and Twisted, that's all I expect.

 The conventional wisdom is that global variables are bad, but the truth is
 that they're very attractive because they allow you to have one less thing
 to pass around and think about in every line of code.

Which doesn't make them less bad -- they're still there and perhaps
more likely to trip you up when you least expect it. I think there's a
lot of truth in that conventional wisdom.

 Without globals, you
 would sooner or later end up with every function taking twenty arguments to
 pass through states down to other code, or else trying to cram all this
 data into some kind of context object, which then won't work with code
 that doesn't know about *your* definition of what a context is.

Methinks you are exaggerating for effect.

 Globals are thus extremely attractive for practical software
 development.  If they weren't so useful, it wouldn't be necessary to warn
 people not to use them, after all.  :)

 The problem with globals, however, is that sometimes they need to be
 changed in a particular context.  PEP 343 makes it safer to use globals
 because you can always offer a context manager that changes them
 temporarily, without having to hand-write a try-finally block.  This will
 make it even *more* attractive to use globals, which is not a problem as
 long as the code has no multitasking of any sort.

Hm. There are different kinds of globals. Most globals don't need to
be context-managed at all, because they can safely be shared between
threads, tasks or coroutines. Caches usually fall in this category
(e.g. the compiled regex cache). A little locking is all it takes.
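[Editor's note: for illustration, a shareable global cache of the kind described needs only a lock around its updates. The sketch below uses a hypothetical compile_cached helper, not the actual re module internals; it is safe to share precisely because it only accumulates values and carries no per-task state.]

```python
import re
import threading

_regex_cache = {}
_regex_lock = threading.Lock()

def compile_cached(pattern):
    # A global that does NOT need context management: it never holds
    # per-thread or per-coroutine state, so a little locking suffices.
    with _regex_lock:
        try:
            return _regex_cache[pattern]
        except KeyError:
            regex = _regex_cache[pattern] = re.compile(pattern)
            return regex

print(compile_cached("a+") is compile_cached("a+"))  # → True
```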

The globals that need to be context-managed are the pernicious kind of
which you can never have too few. :-)

They aren't just accumulating global state, they are implicit
parameters, thereby truly invoking the reasons why globals are frowned
upon.

 Of course, the multithreading scenario is usually fixed by using
 thread-locals.  All I'm proposing is that we replace thread locals with
 task locals, and promote the use of task-local variables for managed
 contexts (such as the decimal context) *that would otherwise be a global or
 a thread-local variable*.  This doesn't seem to me like a very big deal;
 just an encouragement for people to make their stuff easy to use with PEP
 342 and 343.

I'm all for encouraging people to make their stuff easy to use with
these PEPs, and with multi-threading use.

But IMO the best way to accomplish those goals is to refrain from
global (or thread-local or task-local) context as much as possible,
for example by passing along explicit context.

The mere existence of a standard library module to make handling
task-specific contexts easier sends the wrong signal; it suggests that
it's a good pattern to use, which it isn't -- it's a last-resort
pattern, when all other solutions fail.

If it weren't for Python's operator overloading, the decimal module
would have used explicit contexts (like the Java version); but since
it would be really strange to have such a fundamental numeric type
without the ability to use the conventional operator notation, we
resorted to per-thread context. Even that doesn't always do the right
thing -- handling decimal contexts is surprisingly subtle (as Nick can
testify based on his experiences attempting to write a decimal context
manager for the with-statement!).

Yes, coroutines make it even subtler.

But I haven't seen the use case yet for mixing coroutines with changes
to decimal context settings; somehow it doesn't strike me as a likely
use case (not that you can't construct one, so don't bother -- I can
imagine it too, I just think YAGNI).

 By the way, I don't

Re: [Python-Dev] Questionable AST wibbles

2005-10-21 Thread Guido van Rossum
On 10/21/05, Jeremy Hylton [EMAIL PROTECTED] wrote:
 On 10/21/05, Neal Norwitz [EMAIL PROTECTED] wrote:
  This probably is not a big deal, but I was surprised by this change:
 
  +++ test_repr.py20 Oct 2005 19:59:24 -  1.20
  @@ -123,7 +123,7 @@
 
  def test_lambda(self):
  self.failUnless(repr(lambda x: x).startswith(
   -"<function <lambda>"))
   +"<function lambda"))

if this means that the __name__ attribute of a lambda now says
"lambda" instead of "<lambda>", please change it back. The angle
brackets make it stand out more, and I imagine people might be
checking for this to handle it specially.

  This one may be only marginally worse (names w/parameter unpacking):
 
  test_grammar.py
 
  -verify(f4.func_code.co_varnames == ('two', '.2', 'compound',
  -'argument',  'list'))
  +vereq(f4.func_code.co_varnames,
  +  ('two', '.1', 'compound', 'argument',  'list'))

This doesn't bother me.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] AST branch is in?

2005-10-21 Thread Guido van Rossum
On 10/21/05, Neil Schemenauer [EMAIL PROTECTED] wrote:
 Also, the concrete syntax tree (CST) generated by Python's parser is
 not a convenient data structure to deal with. Anyone who's used the
 'parser' module probably experienced the pain:

  >>> parser.ast2list(parser.suite('a = 1'))
 [257, [266, [267, [268, [269, [320, [298, [299, [300, [301,
 [303, [304, [305, [306, [307, [308, [309, [310, [311, [1,
 'a']]], [22, '='], [320, [298, [299, [300, [301, [303,
 [304, [305, [306, [307, [308, [309, [310, [311, [2,
 '1'], [4, '']]], [0, '']]

That's the fault of the 'parser' extension module though, and this
affects tools using the parser module, not the bytecode compiler
itself. The CST exposed to C programmers is slightly higher level.
(But the new AST is higher level still, of course.)

BTW, Elemental is letting me open-source a reimplementation of pgen in
Python. This also includes a nifty way to generate ASTs. This should
become available within a few weeks.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] DRAFT: python-dev Summary for 2005-09-01 through 2005-09-16

2005-10-21 Thread Guido van Rossum
On 10/21/05, Tony Meyer [EMAIL PROTECTED] wrote:
 This is over a month late, sorry, but here it is (Steve did his
 threads ages ago; I've fallen really behind).

Better late than never! These summaries are awesome.

Just one nit:
 --
 Responsiveness of IDLE development
 --

 Noam Raphael posted a request for help getting a large patch to IDLE
 committed to CVS.  He was concerned that there hasn't been any IDLE
 development recently, and that patches are not being considered.  He
 indicated that his group was considering offering a fork of IDLE with
 the improvements, but that they would much prefer integrating the
 improvements into the core distribution.

 It was pointed out that a fork might be the best solution, for
 various reasons (e.g. the improvements may not be of general
 interest, the release time would be much quicker), and that this was
 how the current version of IDLE was developed.  The discussion died
 out, so it seems likely that a fork will be the resulting solution.

Later, it turned out that Kurt Kaiser had missed this message on
python-dev (which he only reads occasionally); he redirected the
thread to idle-dev where it seems that his issues with the
contribution are being resolved and a fork is averted. Whew!

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Comparing date+time w/ just time

2005-10-22 Thread Guido van Rossum
On 10/22/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 With significant input from Fred I made some changes to xmlrpclib a couple
 months ago to better integrate datetime objects into xmlrpclib.  That raised
 some problems because I neglected to add support for comparing datetime
 objects with xmlrpclib.DateTime objects.  (The problem showed up in
 MoinMoin.)  I've been working on that recently (adding rich comparison
 methods to DateTime while retaining __cmp__ for backward compatibility), and
 have second thoughts about one of the original changes.

 I tried to support datetime, date and time objects.  My problems are with
 support for time objects.  Marshalling datetimes as xmlrpclib.DateTime
 objects is no problem (though you lose fractions of a second).  Marshalling
 dates is reasonable if you treat the time as 00:00:00.  I decided to marshal
 datetime.time objects by fixing the day portion of the xmlrpclib.DateTime
 object as today's date.  That's the suspect part.

 When I went back recently to add better comparison support, I decided to
 compare xmlrpclib.DateTime objects with time objects by simply comparing the
 HH:MM:SS part of the DateTime with the time object.  That's making me a bit
 queasy now. datetime.time(hour=23) would compare equal to any DateTime with
 its time equal to 11PM. Under the rule, "in the face of ambiguity, refuse
 the temptation to guess", I'm inclined to dump support for marshalling and
 comparison of time objects altogether.  Do others agree that was a bad idea?

Agreed.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-22 Thread Guido van Rossum
On 10/22/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 I'm still looking for more feedback on the issues raised in the last update of
 PEP 343. There hasn't been much direct feedback so far, but I've rephrased and
 suggested resolutions for the outstanding issues based on what feedback I have
 received, and my own thoughts over the last week of so.

Thanks for bringing this up again. It's been at the back of my mind,
but hasn't had much of a chance to come to the front lately...

 For those simply skimming, my proposed issue resolutions are:

1. Use the slot name __context__ instead of __with__

+1

2. Reserve the builtin name context for future use as described below

+0.5. I don't think we'll need that built-in, but I do think that the
term context is too overloaded to start using it for anything in
particular.

3a. Give generator-iterators a native context that invokes self.close()

I'll have to think about this one more, and I don't have time for that
right now.

3b. Use contextmanager as a builtin decorator to get generator-contexts

+1

4. Special case the __context__ slot to avoid the need to decorate it

-1. I expect that we'll also see generator *functions* (not methods)
as context managers. The functions need the decorator. For consistency
the methods should also be decorated explicitly.

For example, while I'm now okay (at the +0.5 level) with having files
automatically behave like context managers, one could still write an
explicit context manager 'opening':

@contextmanager
def opening(filename):
    f = open(filename)
    try:
        yield f
    finally:
        f.close()

Compare to

class FileLike:

    def __init__(self, ...): ...

    def close(self): ...

    @contextmanager
    def __context__(self):
        try:
            yield self
        finally:
            self.close()
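[Editor's note: the decorator discussed here did eventually land in the standard library as contextlib.contextmanager, so the 'opening' example runs essentially as written today. A self-contained sketch, using a temporary file for the demonstration:]

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def opening(filename):
    # Hand-written equivalent of the native file context manager.
    f = open(filename)
    try:
        yield f
    finally:
        f.close()

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

with opening(path) as f:
    data = f.read()

print(data, f.closed)  # → hello True
os.remove(path)
```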

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-22 Thread Guido van Rossum
Here's another argument against automatically decorating __context__.

What if I want to have a class with a __context__ method that returns
a custom context manager that *doesn't* involve applying
@contextmanager to a generator?

While technically this is possible with your proposal (since such a
method wouldn't be a generator), it's exceedingly subtle for the human
reader. I'd much rather see the @contextmanager decorator to emphasize
the difference.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-23 Thread Guido van Rossum
On 10/23/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 However, I'm still concerned about the fact that the following class has a
 context manager that doesn't actually work:

 class Broken(object):
     def __context__(self):
         print "This never gets executed"
         yield
         print "Neither does this"

That's only because of your proposal to endow generators with a
default __context__ manager. Drop that idea and you're golden.

(As long as nobody snuck the proposal back in to let the
with-statement silently ignore objects that don't have a __context__
method -- that was rejected long ago.)

In my previous mail I said I had to think about that one more -- well,
I have, and I'm now -1 on it. Very few generators (that aren't used as
context managers) will need the immediate explicit close() call, and
it will happen eventually when they are GC'ed anyway. Too much magic
is bad for your health.

 So how about if type_new simply raises a TypeError if it finds a
 generator-iterator function in the __context__ slot?

No. type should not bother with understanding what the class is trying
to do. __new__ is only special because it is part of the machinery
that type itself invokes in order to create a new class.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-23 Thread Guido van Rossum
On 10/23/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 Actually, you've just pointed out a new complication introduced by having
 __context__.  The return value of __context__ is supposed to have an
 __enter__ and an __exit__.  Is it a type error if it doesn't?  How do we
 handle that, exactly?

Of course it's an error! The translation in the PEP should make that
quite clear (there's no testing for whether __context__, __enter__
and/or __exit__ exist before they are called). It would be an
AttributeError.

 That is, assuming generators don't have enter/exit/context methods, then
 the above code is broken because its __context__ returns an object without
 enter/exit, sort of like an __iter__ that returns something without a 
 'next()'.

Right. That was my point. Nick's worried about undecorated __context__
because he wants to endow generators with a different default
__context__. I say no to both proposals and the worries cancel each
other out. EIBTI.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Divorcing str and unicode (no more implicit conversions).

2005-10-23 Thread Guido van Rossum
Folks, please focus on what Python 3000 should do.

I'm thinking about making all character strings Unicode (possibly with
different internal representations a la NSString in Apple's Objective
C) and introduce a separate mutable bytes array data type. But I could
use some validation or feedback on this idea from actual
practitioners.

I don't want to see proposals to mess with the str/unicode semantics
in Python 2.x. Let's leave the Python 2.x str/unicode semantics alone
until Python 3000 -- we don't need multiple transitions. (Although we
could add the mutable bytes array type sooner.)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-24 Thread Guido van Rossum
On 10/24/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 That makes the resolutions for the posted issues:

 1. The slot name __context__ will be used instead of __with__
 2. The builtin name context is currently off-limits due to its ambiguity
 3a. generator-iterators do NOT have a native context
 3b. Use contextmanager as a builtin decorator to get generator-contexts
 4. The __context__ slot will NOT be special cased

+1

 I'll add those into the PEP and reference this thread after Martin is done
 with the SVN migration.

 However, those resolutions bring up the following issues:

5 a. What exception is raised when EXPR does not have a __context__ method?
  b.  What about when the returned object is missing __enter__ or __exit__?
 I suggest raising TypeError in both cases, for symmetry with for loops.
 The slot check is made in C code, so I don't see any difficulty in raising
 TypeError instead of AttributeError if the relevant slots aren't filled.

Why are you so keen on TypeError? I find AttributeError totally
appropriate. I don't see symmetry with for-loops as a valuable
property here. AttributeError and TypeError are often interchangeable
anyway.

6 a. Should a generic closing context manager be provided?

No. Let's provide the minimal mechanisms FIRST.

  b. If yes, should it be a builtin or in a contexttools module?
 I'm not too worried about this one for the moment, and it could easily be
 left out of the PEP itself. Of the sample managers, it seems the most
 universally useful, though.

Let's leave some examples just be examples.

I think I'm leaning towards adding __context__ to locks (all types
defined in thread or threading, including condition variables), files,
and decimal.Context, and leave it at that.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent Use of Buffer Interface in stringobject.c

2005-10-24 Thread Guido van Rossum
On 10/24/05, Phil Thompson [EMAIL PROTECTED] wrote:
 I'm implementing a string-like object in an extension module and trying to
 make it as interoperable with the standard string object as possible. To do
 this I'm implementing the relevant slots and the buffer interface. For most
 things this is fine, but there are a small number of methods in
 stringobject.c that don't use the buffer interface - and I don't understand
 why.

 Specifically...

 string_contains() doesn't which means that...

 MyString("foo") in "foobar"

 ...doesn't work.

 s.join(sequence) only allows sequence to contain string or unicode objects.

 s.strip([chars]) only allows chars to be a string or unicode object. Same for
 lstrip() and rstrip().

 s.ljust(width[, fillchar]) only allows fillchar to be a string object (not
 even a unicode object). Same for rjust() and center().

 Other methods happily allow types that support the buffer interface as well as
 string and unicode objects.

 I'm happy to submit a patch - I just wanted to make sure that this behaviour
 wasn't intentional for some reason.

A concern I'd have with fixing this is that Unicode objects also
support the buffer API. In any situation where either str or unicode
is accepted I'd be reluctant to guess whether a buffer object was
meant to be str-like or Unicode-like. I think this covers all the
cases you mention here.

We need to support this better in Python 3000; but I'm not sure you
can do much better in Python 2.x; subclassing from str is unlikely to
work for you because then too many places are going to assume the
internal representation is also the same as for str.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent Use of Buffer Interface in stringobject.c

2005-10-24 Thread Guido van Rossum
On 10/24/05, M.-A. Lemburg [EMAIL PROTECTED] wrote:
 Guido van Rossum wrote:
  A concern I'd have with fixing this is that Unicode objects also
  support the buffer API. In any situation where either str or unicode
  is accepted I'd be reluctant to guess whether a buffer object was
  meant to be str-like or Unicode-like. I think this covers all the
  cases you mention here.

 This situation is a little better than that: the buffer
 interface has a slot called getcharbuffer which is what
 the string methods use in case they find that a string
 argument is not of type str or unicode.

I stand corrected!

 As a first step, I'd suggest implementing the getcharbuffer
 slot. That will already go a long way.

Phil, if anything still doesn't work after doing what Marc-Andre says,
those would be good candidates for fixes!

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Divorcing str and unicode (no more implicit conversions).

2005-10-24 Thread Guido van Rossum
On 10/24/05, Martin v. Löwis [EMAIL PROTECTED] wrote:
 Indeed. My guess is that indexing is more common than you think,
 especially when iterating over the string. Of course, iteration
 could also operate on UTF-8, if you introduced string iterator
 objects.

Python's slice-and-dice model pretty much ensures that indexing is
common. Almost everything is ultimately represented as indices: regex
search results have the index in the API, find()/index() return
indices, many operations take a start and/or end index. As long as
that's the case, indexing better be fast.

Changing the APIs would be much work, although perhaps not impossible
for Python 3000. For example, Raymond Hettinger's partition() API
doesn't refer to indices at all, and can replace many uses of find()
or index().
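[Editor's note: for illustration, here is a typical find()-based split next to its index-free partition() equivalent.]

```python
s = "key=value"

# Index-based idiom with find():
i = s.find("=")
if i >= 0:
    key, value = s[:i], s[i + 1:]
else:
    key, value = s, ""

# The same split with partition(): no indices involved.
head, sep, tail = s.partition("=")
print((head, tail) == (key, value))  # → True
```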

Still, the mere existence of __getitem__ and __getslice__ on strings
makes it necessary to implement them efficiently. How realistic would
it be to drop them? What should replace them? Some kind of abstract
pointers-into-strings perhaps, but that seems much more complex.

The trick seems to be to support both simple programs manipulating
short strings (where indexing is probably the easiest API to
understand, and the additional copying is unlikely to cause
performance problems), as well as programs manipulating very large
buffers containing text and doing sophisticated string processing on
them. Perhaps we could provide a different kind of API to support the
latter, perhaps based on a mutable character buffer data type without
direct indexing?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Divorcing str and unicode (no more implicit conversions).

2005-10-24 Thread Guido van Rossum
On 10/24/05, Martin v. Löwis [EMAIL PROTECTED] wrote:
 Guido van Rossum wrote:
  Changing the APIs would be much work, although perhaps not impossible
  for Python 3000. For example, Raymond Hettinger's partition() API
  doesn't refer to indices at all, and can replace many uses of find()
  or index().

 I think Neil's proposal is not to make them go away, but to implement
 them less efficiently. For example, if the internal representation
 is UTF-8, indexing requires linear time, as opposed to constant time.
 If the internal representation is UTF-16, and you have a flag to
 indicate whether there are any surrogates on the string, indexing
 is constant if the flag is false, else linear.

I understand all that. My point is that it's a bad idea to offer an
indexing operation that isn't O(1).
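[Editor's note: the linear cost is easy to see: locating the i-th character of UTF-8 data means scanning from the start, because characters occupy one to four bytes. A sketch with a hypothetical helper, not a proposed API; UTF-8 continuation bytes have the bit pattern 10xxxxxx.]

```python
def utf8_char_at(data, index):
    """Return the index-th character of UTF-8 bytes by linear scan."""
    count = -1   # characters seen so far
    start = 0    # byte offset where the wanted character begins
    for pos, byte in enumerate(data):
        if byte & 0xC0 != 0x80:      # not a continuation byte: new char
            count += 1
            if count == index:
                start = pos
            elif count == index + 1:
                return data[start:pos].decode("utf-8")
    if count == index:               # wanted char runs to end of data
        return data[start:].decode("utf-8")
    raise IndexError(index)

print(utf8_char_at("héllo".encode("utf-8"), 1))  # → é
```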

  Perhaps we could provide a different kind of API to support the
  latter, perhaps based on a mutable character buffer data type without
  direct indexing?

 There are different design goals conflicting here:
 - some think: all my data is ASCII, so I want to only use one
byte per character.
 - others think: all my data goes to the Windows API, so I want
to use 2 byte per character.
 - yet others think: I want all of Unicode, with proper, efficient
indexing, so I want four bytes per char.

I doubt the last one though. Probably they really don't want efficient
indexing, they want to perform higher-level operations that currently
are only possible using efficient indexing or slicing. With the right
API, perhaps they could work just as efficiently with an internal
representation of UTF-8.

 It's not so much a matter of API as a matter of internal
 representation. The API doesn't have to change (except for the
 very low-level C API that directly exposes Py_UNICODE*, perhaps).

I think the API should reflect the representation *to some extent*,
namely it shouldn't claim to have operations that are typically
thought of as O(1) that can only be implemented as O(n). An internal
representation of UTF-8 might make everyone happy except heavy Windows
users; but it requires changes to the API so people won't be writing
Python 2.x-style string slinging code.
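[To make the index-free style concrete: a small sketch using the partition() API mentioned earlier in the thread. The helper below is hypothetical, not from the discussion.]

```python
def split_header(line):
    # partition() returns (head, sep, tail) without exposing any integer
    # index, so it stays equally cheap even if indexing by code point
    # were O(n) under a UTF-8 internal representation.
    name, sep, value = line.partition(":")
    if not sep:
        raise ValueError("no colon in header line: %r" % line)
    return name.strip(), value.strip()

print(split_header("Content-Type: text/plain"))
```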

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Divorcing str and unicode (no more implicit conversions).

2005-10-24 Thread Guido van Rossum
On 10/24/05, Bill Janssen [EMAIL PROTECTED] wrote:
   - yet others think: I want all of Unicode, with proper, efficient
  indexing, so I want four bytes per char.
 
  I doubt the last one though. Probably they really don't want efficient
  indexing, they want to perform higher-level operations that currently
  are only possible using efficient indexing or slicing. With the right
  API, perhaps they could work just as efficiently with an internal
  representation of UTF-8.

 I just got mail this morning from a researcher who wants exactly what
 Martin described, and wondered why the default MacPython 2.4.2 didn't
 provide it by default. :-)

Oh, I don't doubt that they want it. But often they don't *need* it,
and the higher-level goal they are trying to accomplish can be dealt
with better in a different way. (Sort of my response to people asking
for static typing in Python as well. :-)

Did they tell you what they were trying to do that MacPython 2.4.2
wouldn't let them, beyond represent a large Unicode string as an
array of 4-byte integers?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-25 Thread Guido van Rossum
On 10/25/05, Eric Nieuwland [EMAIL PROTECTED] wrote:
 Hmmm... Would it be reasonable to introduce a ProtocolError exception?

And which perceived problem would that solve? The problem of Nick and
Guido disagreeing in public?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-25 Thread Guido van Rossum
[Eric "are all your pets called Eric?" Nieuwland]
  Hmmm... Would it be reasonable to introduce a ProtocolError exception?

[Guido]
  And which perceived problem would that solve?

[Eric]
 It was meant to be a bit more informative about what is wrong.

 ProtocolError: lacks __enter__ or __exit__

That's exactly what I'm trying to avoid. :)

I find "AttributeError: __exit__" just as informative. In either case,
if you know what __exit__ means, you'll know what you did wrong. And
if you don't know what it means, you'll have to look it up anyway. And
searching for ProtocolError doesn't do you any good -- you'll have to
learn about what __exit__ is and where it is required.
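[A sketch of the failure mode under discussion. The helper below only approximates what the with-statement machinery does; it is illustrative, not the actual implementation.]

```python
class Broken:
    pass

def lookup_context_methods(obj):
    # PEP 343 looks the protocol methods up on the type; a missing one
    # surfaces as an ordinary AttributeError naming the absent slot.
    return type(obj).__enter__, type(obj).__exit__

try:
    lookup_context_methods(Broken())
except AttributeError as exc:
    print("AttributeError:", exc)
```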

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] AST branch is in?

2005-10-25 Thread Guido van Rossum
On 10/25/05, Frank Wierzbicki [EMAIL PROTECTED] wrote:
  My name is Frank Wierzbicki and I'm working on the Jython project.  Does
 anyone on this list know more about the history of this Grammar sharing
 between the two projects?  I've heard about some Grammar sharing between
 Jython and Python, and I've noticed that (most of) the jython code in
 /org/python/parser/ast is commented Autogenerated AST node.  I would
 definitely like to look at (eventually) coordinating with this effort.

  I've cross-posted to the Jython-dev list in case someone there has some
 insight.

Your best bet is to track down Jim Hugunin and see if he remembers.
He's jimhug at microsoft.com or jim at hugunin.net.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Divorcing str and unicode (no more implicit conversions).

2005-10-25 Thread Guido van Rossum
On 10/25/05, Bill Janssen [EMAIL PROTECTED] wrote:
 I think he was more interested in the invariant Martin proposed, that

  len(\U0001)

 should always be the same and should always be 1.

Yes but why? What does this invariant do for him?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] AST branch is in?

2005-10-25 Thread Guido van Rossum
On 10/25/05, Samuele Pedroni [EMAIL PROTECTED] wrote:
  Your best bet is to track down Jim Hugunin and see if he remembers.
  He's jimhug at microsoft.com or jim at hugunin.net.

 no. this is all after Jim, it's indeed a derived effort from the CPython
 own AST effort, just that we started using it quite a while ago.
 This is all after Jim was not involved with Jython anymore, Finn Bock
 started this.

Oops! Sorry for the misinformation. Shows how much I know. :(

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Divorcing str and unicode (no more implicitconversions).

2005-10-25 Thread Guido van Rossum
On 10/25/05, Josiah Carlson [EMAIL PROTECTED] wrote:
 Identically drawn glyphs are a problem, and pretending that they aren't
 a problem, doesn't make it so.  Right now, all possible name glyphs are
 visually distinct, which would not be the case if any unicode character
 could be used as a name (except for numerals).  Speaking of which, would
 we then be offering support for arabic/indic numeric literals, and/or
 support it in int()/float()?  Ideally I would like to say yes, but I
 could see the confusion if such were allowed.

This problem isn't new. There are plenty of fonts where 1 and l are
hard to distinguish, or l and I for that matter, or O and 0.

Yes, we need better tools to diagnose this.

No, we shouldn't let this stop us from adding such a feature if it is
otherwise a good feature.

I'm not so sure about this for other reasons -- it hampers code
sharing, and as soon as you add right-to-left character sets to the
mix (or top-to-bottom, for that matter), displaying source code is
going to be near impossible for most tools (since the keywords and
standard library module names will still be in the Latin alphabet).
This actually seems a killer even for allowing Unicode in comments,
which I'd otherwise favor. What do Unicode-aware apps generally do
with right-to-left characters?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Proposed resolutions for open PEP 343 issues

2005-10-25 Thread Guido van Rossum
On 10/25/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 Maybe there's a design principle in there somewhere:

 Failed duck-typing -> AttributeError (or TypeError for complex checks)
 Failed instance or subtype check -> TypeError

Doesn't convince me.

If there are principles at work here (and not just coincidences), they
are (a) don't lightly replace an exception by another, and (b) don't
raise AttributeError; the getattr operation raises it for you. (a) says
that we should let the AttributeError bubble up in the case of the
with-statement; (b) explains why you see TypeError when a slot isn't
filled.

 Most of the functions in abstract.c handle complex protocols, so a simple
 attribute error wouldn't convey the necessary meaning. The context protocol,
 on the other hand, is fairly simple, and an AttributeError tells you
 everything you really need to know.

That's what I've been saying all the time. :-)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Divorcing str and unicode (no more implicitconversions).

2005-10-25 Thread Guido van Rossum
On 10/25/05, Josiah Carlson [EMAIL PROTECTED] wrote:
 Indeed, they are similar, but _different_ in my font as well.  The trick
 is that the glyphs are not different in the case of certain greek or
 cyrillic letters.  They don't just /look/ similar they /are identical/.

Well, in the font I'm using to read this email, I and l are /identical/.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] ? operator in python

2005-10-26 Thread Guido van Rossum
Dear Lucky,

You are correct. Python 2.5 will have a conditional operator. The
syntax will be different than C; it will look like this:

  (EXPR1 if TEST else EXPR2)

(which is the equivalent of TEST?EXPR1:EXPR2 in C). For more
information, see PEP 308 (http://www.python.org/peps/pep-0308.html).
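A couple of illustrative uses (the variable names are made up):

```python
x = -7
# The condition sits in the middle; the parentheses are optional
# but often aid readability.
sign = ("negative" if x < 0 else "non-negative")
print(sign)

# Nested conditionals read left to right:
label = ("zero" if x == 0 else ("neg" if x < 0 else "pos"))
print(label)
```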

Python 2.5 will be released some time next year; we hope to have
alphas available in the 2nd quarter. That's about as firm as we can
currently be about the release date.

Enjoy,

--Guido van Rossum

On 10/25/05, Lucky Wankhede [EMAIL PROTECTED] wrote:


  Dear sir,

  I'm a student of the Computer Science Dept.,
  University of Pune (M.S.), India. We are learning
 Python as a course for our semester. We found it not
 only useful but a heart-touching language.

   Sir, I have found that Python is going
 to have a new feature, the ? : operator, same as in the C
 language.


   Kindly provide me with the information on
 which version of Python will have that
 feature and when it is about to be released.
Considering your best of sympathetic
 consideration. Hoping for an early response.



  Thank You.


 Mr. Lucky R. Wankhede
 M.C,A, Ist,
 Dept. Of Comp.
 Sciende,
 University of Pune,
 India.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Conversion to Subversion is complete

2005-10-27 Thread Guido van Rossum
On 10/27/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 The Python source code repository is now converted to subversion;
 please feel free to start checking out new sandboxes.

Woo hoo! Thanks for all the hard work and good thinking, Martin.

 Most of you are probably interested in checking out one of these
 folders:

 svn+ssh://[EMAIL PROTECTED]/python/trunk
 svn+ssh://[EMAIL PROTECTED]/python/branches/release24-maint
 svn+ssh://[EMAIL PROTECTED]/peps

This doesn't work for me. I'm sure the problem is on my end, but my
svn skills are too rusty to figure it out. I get this:

$ svn checkout svn+ssh://[EMAIL PROTECTED]/peps
Permission denied (publickey,keyboard-interactive).

svn: Connection closed unexpectedly
$ svn --version
svn, version 1.2.0 (r14790)
   compiled Jun 13 2005, 18:51:32

Copyright (C) 2000-2005 CollabNet.
Subversion is open source software, see http://subversion.tigris.org/
This product includes software developed by CollabNet (http://www.Collab.Net/).

The following repository access (RA) modules are available:

* ra_dav : Module for accessing a repository via WebDAV (DeltaV) protocol.
  - handles 'http' scheme
  - handles 'https' scheme
* ra_svn : Module for accessing a repository using the svn network protocol.
  - handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' scheme

$

I can ssh to svn.python.org just fine, with no password (it says it's
dinsdale). I can checkout the read-only versions just fine. I can work
with the pydotorg svn repository just fine (checked something in last
week).

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 352: Required Superclass for Exceptions

2005-10-28 Thread Guido van Rossum
On 10/28/05, Nick Coghlan [EMAIL PROTECTED] wrote:
 Brett Cannon wrote:
  Anyway, as soon as the cron job posts the PEP to the web site (already
  checked into the new svn repository) have a read and start expounding
  about how wonderful it is and that there are no qualms with it
  whatsoever.  =)

 You mean aside from the implementation of __getitem__ being broken in
 BaseException*? ;)

Are you clairvoyant?! The cronjob was broken due to the SVN
transition and the file wasn't on the site yet. (Now fixed BTW.) Oh,
and here's the URL just in case:
http://www.python.org/peps/pep-0352.html

 Aside from that, I actually do have one real problem and one observation.

 The problem: The value of ex.args

The PEP as written significantly changes the semantics of ex.args - instead
 of being an empty tuple when no arguments are provided, it is instead a
 singleton tuple containing the empty string.

A backwards compatible definition of BaseException.__init__ would be:

   def __init__(self, *args):
       self.args = args
       self.message = '' if not args else args[0]

But does anyone care? As long as args exists and is a tuple, does it
matter that it doesn't match the argument list when the latter was
empty? IMO the protocol mostly says that ex.args exists and is a tuple
-- the values in there can't be relied upon in pre-2.5-Python.
Exceptions that have specific information should store it in a
different place, not in ex.args.

 The observation: The value of ex.message

Under PEP 352 the concept of allowing return x to be used in a generator
 to mean raise StopIteration(x) would actually align quite well. A bare
 return, however, would need to be changed to translate to raise
 StopIteration(None) rather than its current raise StopIteration in order to
 get the correct value (None) into ex.message.

Since ex.message is new, how can you say that it should have the value
None? IMO the whole idea is that ex.message should always be a string
going forward (although I'm not going to add a typecheck to enforce
this).

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 352 Transition Plan

2005-10-28 Thread Guido van Rossum
On 10/28/05, Raymond Hettinger [EMAIL PROTECTED] wrote:
 I don't follow why the PEP deprecates catching a category of exceptions
 in a different release than it deprecates raising them.  Why would a
 release allow catching something that cannot be raised?  I must be
 missing something here.

So conforming code can catch exceptions raised by not-yet conforming code.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 352: Required Superclass for Exceptions

2005-10-28 Thread Guido van Rossum
[Trying to cut this short... We have too many threads for this topic. :-( ]

On 10/28/05, Nick Coghlan [EMAIL PROTECTED] wrote:
[on making args b/w compatible]
 I agree changing the behaviour is highly unlikely to cause any serious
 problems (mainly because anyone *caring* about the contents of args is rare),
 the current behaviour is relatively undocumented, and the PEP now proposes
 deprecating ex.args immediately, so Guido's well within his rights if he wants
 to change the behaviour.

I take it back. Since the feature will disappear in Python 3.0 and is
maintained only for b/w compatibility, we should keep it as b/w
compatible as possible. That means it should default to () and always
have as its value exactly the positional arguments that were passed.

OTOH, I want message to default to '', not to None (even though it
will be set to None if you explicitly pass None as the first
argument). So the constructor could be like this (until Python 3000):

def __init__(self, *args):
    self.args = args
    if args:
        self.message = args[0]
    else:
        self.message = ''

I think Nick proposed this before as well, so let's just do this.
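[To illustrate the intended behavior, a stand-in class using that constructor; the class name is made up, since the real change is to BaseException.]

```python
class Exc:
    def __init__(self, *args):
        self.args = args
        if args:
            self.message = args[0]
        else:
            self.message = ''

# b/w compatible: args is exactly the positional arguments passed,
# defaulting to the empty tuple, while message defaults to ''.
assert Exc().args == ()
assert Exc().message == ''
assert Exc('boom', 42).args == ('boom', 42)
assert Exc('boom', 42).message == 'boom'
print('ok')
```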

 I'm talking about the specific context of the behaviour of 'return' in
 generators, not on the behaviour of ex.message in general. For normal
 exceptions, I agree '' is the correct default.

 For that specific case of allowing a return value from generators, and using
 it as the message on the raised StopIteration, *then* it makes sense for
 return to translate to raise StopIteration(None), so that generators have
 the same 'default return value' as normal functions.

I don't like that (not-even-proposed) feature anyway. I see no use for
it; it only gets proposed by people who are irked by the requirement
that generators can contain 'return' but not 'return value'. I think
that irkedness is unwarranted; 'return' is useful to cause an early
exit, but generators don't have a return value so 'return value' is
meaningless.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 352 Transition Plan

2005-10-31 Thread Guido van Rossum
I've made a final pass over PEP 352, mostly fixing the __str__,
__unicode__ and __repr__ methods to behave more reasonably. I'm all
for accepting it now. Does anybody see any last-minute show-stopping
problems with it?

As always, http://python.org/peps/pep-0352.html

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Freezing the CVS on Oct 26 for SVN switchover

2005-10-31 Thread Guido van Rossum
Help!

What's the magic to get $Revision$ and $Date$ to be expanded upon
checkin? Comparing pep-0352.txt and pep-0343.txt, I noticed that the
latter has the svn revision and date in the headers, while the former
still has Brett's original revision 1.5 and a date somewhere in June.
I tried to fix this by rewriting the fields as $Revision$ and $Date$
but that doesn't seem to make a difference.

Googling for this is a bit tricky because Google collapses $Revision
and Revision, which makes any query for svn and $Revision rather
non-specific. :-(  It's also not yet in our Wiki.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] python-dev sprint at PyCon

2005-11-01 Thread Guido van Rossum
On 11/1/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 09:35 AM 11/1/2005 -0500, A.M. Kuchling wrote:
 Every PyCon has featured a python-dev sprint.  For the past few years,
 hacking on the AST branch has been a tradition, but we'll have to come
 up with something new for this year's conference (in Dallas, Texas;
 sprints will be Monday Feb. 27 through Thursday March 2).
 
 According to Anthony's release plan, a first alpha of 2.5 would be
 released in March, hence after PyCon and the sprints.  We should
 discuss possible tasks for a python-dev sprint.  What could we do?

 * PEP 343 implementation ('with:')
 * PEP 308 implementation ('x if y else z')
 * A bytes type

* PEP 328 - absolute/relative import
* PEP 341 - unifying try/except and try/finally (I believe this was
accepted; it's still marked Open in PEP 0)

 Or perhaps some of the things that have been waiting for the AST branch to
 be finished, i.e.:

 * One of the global variable speedup PEPs
 * Guido's instance variable speedup idea (LOAD_SELF_IVAR and
 STORE_SELF_IVAR, see
 http://mail.python.org/pipermail/python-dev/2002-February/019854.html)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] python-dev sprint at PyCon

2005-11-01 Thread Guido van Rossum
On 11/1/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 10:22 AM 11/1/2005 -0700, Guido van Rossum wrote:
 * PEP 328 - absolute/relative import

 I assume that references to 2.4 in that PEP should be changed to 2.5, and
 so on.

For the part that hasn't been implemented yet, yes.

 It also appears to me that the PEP doesn't record the issue brought up by
 some people about the current absolute/relative ambiguity being useful for
 packaging purposes.  i.e., being able to nest third-party packages such
 that they end up seeing their dependencies, even though they're not
 installed at the root package level.

 For example, I have a package that needs Python 2.4's version of pyexpat,
 and I need it to run in 2.3, but I can't really overwrite the 2.3 pyexpat,
 so I just build a backported pyexpat and drop it in the package, so that
 the code importing it just ends up with the right thing.

 Of course, that specific example is okay since 2.3 isn't going to somehow
 grow absolute importing.  :)  But I think people brought up other examples
 besides that, it's just the one that I personally know I've done.

I guess this ought to be recorded. :-(

The issue has been beaten to death and my position remains firm:
rather than playing namespace games, consistent renaming is the right
thing to do here. This becomes a trivial source edit, which beats the
problems of debugging things when it doesn't work out as expected
(which is very common due to the endless subtleties of loading
multiple versions of the same code).

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] python-dev sprint at PyCon

2005-11-01 Thread Guido van Rossum
On 11/1/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 11:14 AM 11/1/2005 -0700, Guido van Rossum wrote:
 I guess this ought to be recorded. :-(
 
 The issue has been beaten to death and my position remains firm:
 rather than playing namespace games, consistent renaming is the right
 thing to do here. This becomes a trivial source edit,

 Well, it's not trivial if you're (in my case) trying to support 2.3 and 2.4
 with the same code base.

You should just bite the bullet and make a privatized copy of the
package(s) on which you depend part of your own distributions.

 It'd be nice to have some other advice to offer people besides, go edit
 your code.  Of course, if the feature hadn't already existed, I suppose a
 PEP to add it would have been shot down, so it's a reasonable decision.

I agree it would be nice if we could do something about deep version
issues. But it's hard, and using the absolute/relative ambiguity isn't
a solution but a nasty hack. I don't have a solution either except
copying code (which IMO is a *fine* solution in most cases as long as
copyright issues don't prevent you).

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] a different kind of reduce...

2005-11-01 Thread Guido van Rossum
 [Greg Ewing]
  Maybe ** should be defined for functions so that you
  could do things like
 
 up3levels = dirname ** 3

[Raymond Hettinger]
 Hmm, using the function's own namespace is an interesting idea.  It
 might also be a good place to put other functionals:

results = f.map(data)
newf = f.partial(somearg)

Sorry to rain on everybody's parade, but I don't think so. There are
many different types of callables. This stuff would only work if they
all implemented the same API. That's unlikely to happen. A module with
functions to implement the various functional operations has much more
potential.
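[This is, as it happens, the shape the functools module later took; a sketch of how a module of functionals applies uniformly to any callable, with no common method API required of the callables themselves:]

```python
from functools import partial, reduce

# partial() and reduce() accept *any* callable -- plain functions,
# bound methods, builtins -- precisely because the functionality lives
# in a module rather than as methods on the callables.
add_ten = partial(int.__add__, 10)
print(add_ten(5))                                     # 15
print(reduce(lambda a, b: a + b, [1, 2, 3, 4], 0))    # 10
print(list(map(str.upper, ["a", "b"])))               # ['A', 'B']
```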

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] No more problems with new SVN repository

2005-11-03 Thread Guido van Rossum
I have a question after this exhilarating exchange.

Is there a way to prevent this kind of thing in the future, e.g. by
removing or rejecting change log messages with characters that are
considered invalid in XML?

(Or should perhaps the fix be to suppress or quote these characters
somehow in XML?)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Plea to distribute debugging lib

2005-11-04 Thread Guido van Rossum
I vaguely recall that there were problems with distributing the debug
version of the MS runtime.

Anyway, why can't you do this yourself for all Boost users? It's all
volunteer time, you know...

--Guido

On 11/4/05, Charles Cazabon [EMAIL PROTECTED] wrote:
 David Abrahams [EMAIL PROTECTED] wrote:
 
  For years, Boost.Python has been doing some hacks to work around the fact
  that a Windows Python distro doesn't include the debug build of the library.
 [...]
  Having to download the Python source and build the debug DLL was deemed
  unacceptable.

 I'm curious: why was this deemed unacceptable?  Python's license is about as
 liberal as it gets, and the code is almost startlingly easy to compile --
 easier than any other similarly-sized codebase I've had to work with.

 Charles
 --
 ---
 Charles Cazabon   [EMAIL PROTECTED]
 GPL'ed software available at:   http://pyropus.ca/software/
 ---



--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PEP 352 Transition Plan

2005-11-05 Thread Guido van Rossum
 [Guido van Rossum]

  I've made a final pass over PEP 352, mostly fixing the __str__,
  __unicode__ and __repr__ methods to behave more reasonably. I'm all
  for accepting it now. Does anybody see any last-minute show-stopping
  problems with it?

[François]
 I did not follow the thread, so maybe I'm out in order, be kind with me.

 After having read PEP 352, it is not crystal clear whether in:

 try:
     ...
 except:
     ...

 the 'except:' will mean 'except BaseException:' or 'except Exception:'.
 I would except the first, but the text beginning the section titled
 Exception Hierarchy Changes suggests it could mean the second, without
 really stating it.

This is probably a leftover from PEP 348, which did have a change for
bare 'except:' in mind. PEP 352 doesn't propose to change its meaning,
and if there are words that suggest this, they should be removed.

Until Python 3.0, it will not change its meaning from what it is now;
this is because until then, it is still *possible* (though it will
become deprecated behavior) to raise string exceptions or classes that
don't inherit from BaseException.

 Let me argue that 'except BaseException:' is preferable.  First, because
 there is no reason to load a bare except: by anything but a very
 simple and clean meaning, like the real base of the exception hierarchy.
 Second, as a bare except: is not considered good practice on average,
 it would be counter-productive trying to figure out ways to make it more
 frequently _usable_.

What bare 'except:' will mean in Python 3.0, and whether it is even
allowed at all, is up for discussion -- it will have to be a new PEP.

Personally, I think bare 'except:' should be removed from the language
in Python 3.0, so that all except clauses are explicit in what they
catch and there isn't any confusion over whether KeyboardInterrupt,
SystemExit etc. are included or not.
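[The distinction at stake, sketched against the post-PEP 352 hierarchy that Python 3 ultimately adopted, where SystemExit and KeyboardInterrupt derive only from BaseException:]

```python
def attempt(exc):
    try:
        raise exc
    except Exception as caught:          # explicit: ordinary errors only
        return "caught %s" % type(caught).__name__

print(attempt(ValueError("oops")))       # caught ValueError

try:
    attempt(SystemExit(1))               # not an Exception: propagates
except SystemExit:
    print("SystemExit escaped the handler")
```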

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] For Python 3k, drop default/implicit hash, and comparison

2005-11-06 Thread Guido van Rossum
On 11/6/05, Jim Fulton [EMAIL PROTECTED] wrote:
 IMO, the provision of defaults for hash, eq and other comparisons
 was a mistake.

I agree with you for 66%. Default hash and inequalities were a
mistake. But I wouldn't want to do without a default ==/!=
implementation (and of course it should be defined so that an object
is only equal to itself).

In fact, the original hash() was clever enough to complain when __eq__
(or __cmp__) was overridden but __hash__ wasn't; but this got lost by
accident for new-style classes when I added a default __hash__ to the
new universal base class (object). But I think the original default
hash() isn't particularly useful, so I think it's better to just not
be hashable unless __hash__ is defined explicitly.
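[This is in fact the rule Python 3 eventually adopted: defining __eq__ without __hash__ leaves instances unhashable. A sketch:]

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return isinstance(other, Point) and \
               (self.x, self.y) == (other.x, other.y)
    # No __hash__ defined: Python 3 sets __hash__ to None here,
    # so instances cannot be used as dict keys or set members.

assert Point(1, 2) == Point(1, 2)
try:
    hash(Point(1, 2))
except TypeError as exc:
    print("unhashable:", exc)
```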

 I'm especially sensitive to this because I do a lot
 of work with persistent data that outlives program execution. For such
 objects, memory address is meaningless.  In particular, the default
 ordering of objects based in address has caused a great deal of pain
 to people who store data in persistent BTrees.

This argues against the inequalities (<, <=, >, >=) and I agree.

 Oddly, what I've read in these threads seems to be arguing about
 which implicit method is best.  The answer, IMO, is to not do this
 implicitly at all.  If programmers want their objects to be
 hashable, comparable, or orderable, then they should implement operators
 explicitly.  There could even be a handy, but *optional*, base class that
 provides these operators based on ids.

I don't like that final suggestion. Before you know it, a meme
develops telling newbies that all classes should inherit from that
optional base class, and then later it's impossible to remove it
because you can't tell whether it's actually needed or not.

 This would be too big a change for Python 2 but, IMO, should definately
 be made for Python 3k.  I doubt any change in the default definition
 of these operations is practical for Python 2.  Too many people rely on
 them, usually without really realizing it.

Agreed.

 Lets plan to stop guessing how to do hash and comparison.

 Explicit is better than implicit. :)

Except that I really don't think that there's anything wrong with a
default __eq__ that uses object identity. As Martin pointed out, it's
just too weird that an object wouldn't be considered equal to itself.
It's the default __hash__ and __cmp__ that mess things up.
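The identity-based default == that this argues for keeping behaves like this (and has in every Python version):

```python
class Widget:
    pass  # no __eq__ or __hash__ defined; defaults apply

a, b = Widget(), Widget()
same = (a == a)       # an object is always equal to itself
different = (a == b)  # distinct objects compare unequal by default
```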

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] For Python 3k, drop default/implicit hash, and comparison

2005-11-06 Thread Guido van Rossum
On 11/6/05, John Williams [EMAIL PROTECTED] wrote:
 (This is kind of on a tangent to the original discussion, but I don't
 want to create yet another subject line about object comparisons.)

 Lately I've found that virtually all my implementations of __cmp__,
 __hash__, etc. can be factored into this form inspired by the key
 parameter to the built-in sorting functions:

 class MyClass:

     def __key(self):
         # Return a tuple of attributes to compare.
         return (self.foo, self.bar, ...)

     def __cmp__(self, that):
         return cmp(self.__key(), that.__key())

     def __hash__(self):
         return hash(self.__key())

The main way this breaks down is when comparing objects of different
types. While most comparisons typically are defined in terms of
comparisons on simpler or contained objects, two objects of different
types that happen to have the same key shouldn't necessarily be
considered equal.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] For Python 3k, drop default/implicit hash, and comparison

2005-11-06 Thread Guido van Rossum
On 11/6/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 12:58 PM 11/6/2005 -0800, Guido van Rossum wrote:
 The main way this breaks down is when comparing objects of different
 types. While most comparisons typically are defined in terms of
 comparisons on simpler or contained objects, two objects of different
 types that happen to have the same key shouldn't necessarily be
 considered equal.

 When I use this pattern, I often just include the object's type in the
 key.  (I call it the 'hashcmp' value, but otherwise it's the same pattern.)

But how do you make that work with subclassing? (I'm guessing your
answer is that you don't. :-)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] For Python 3k, drop default/implicit hash, and comparison

2005-11-07 Thread Guido van Rossum
Two more thoughts in this thread.

(1) The key idiom is a great pattern but I don't think it would work
well to make it a standard language API.

(2) I'm changing my mind about the default hash().

The original default hash() (which would raise TypeError if __eq__ was
overridden but __hash__ was not) is actually quite useful in some
situations. Basically, simplifying a bit, there are two types of
objects: those that represent *values* and those that do not. For
value-ish objects, overriding __eq__ is common and then __hash__ needs
to be overridden in order to get the proper dict and set behavior. In
a sense, __eq__ defines an equivalence class in the mathematical
sense.

But in many applications I've used objects for which object identity
is important.

Let me construct a hypothetical example: suppose we represent a car
and its parts as objects. Let's say each wheel is an object. Each
wheel is unique and we don't have equivalency classes for them.
However, it would be useful to construct sets of wheels (e.g. the set
of wheels currently on my car that have never had a flat tire). Python
sets use hashing just like dicts. The original hash() and __eq__
implementation would work exactly right for this purpose, and it seems
silly to have to add it to every object type that could possibly be
used as a set member (especially since this means that if a third
party library creates objects for you that don't implement __hash__
you'd have a hard time of adding it).
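A minimal sketch of that wheel example under the defaults being defended here: identity-based __eq__ and __hash__ make sets of unique objects work with no extra code (this is how plain classes behave in today's Python).

```python
class Wheel:
    pass  # relies on the default identity-based __eq__ and __hash__

w1, w2, w3 = Wheel(), Wheel(), Wheel()

# Membership is by identity: exactly the wheels we put in.
never_flat = {w1, w3}
in_set = w1 in never_flat      # True
not_in_set = w2 in never_flat  # False
```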

In short, I agree something's broken, but the fix should not be to
remove the default __hash__ and __eq__ altogether. Instead, the
default __hash__ should be made smarter (and perhaps the only way to
do this is to build the smarts into hash() again). I do agree that
__cmp__, __gt__ etc. should be left undefined by default. All of this
is Python 3000 only.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] cross-compiling

2005-11-07 Thread Guido van Rossum
I know some folks have successfully used cross-compilation before. But
this was in a distant past. There was some support for it in the
configure script; surely you're using that? I believe it lets you
specify defaults for the TRY_RUN macros. But it's probably very
primitive.
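Autoconf's standard mechanism for this is a pre-seeded cache: the answers that AC_TRY_RUN would normally compute by executing a test program are supplied up front, which is impossible to avoid when host != target. A hypothetical sketch of such a cross-build invocation (the ac_cv_* variable names and target triplet are illustrative; the exact set varies by Python version):

```shell
# Pre-seed the answers configure would otherwise compute by
# running test programs on the build machine.
cat > config.site <<'EOF'
ac_cv_file__dev_ptmx=no
ac_cv_file__dev_ptc=no
EOF

# Point configure at the seeded answers and name both platforms.
CONFIG_SITE=config.site ./configure \
    --build=x86_64-pc-linux-gnu \
    --host=arm-linux-gnueabihf
```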

About using distutils to build the extensions, this is because some
extensions require quite a bit of logic to determine the build
commands (e.g. look at BSDDB or Tkinter). There was a pre-distutils
way of building extensions using Modules/Setup* but this required
extensive manual editing if tools weren't in the place where they were
expected, and they never were.

I don't have time to look into this further right now, but I hope I
will in the future. Keep me posted!

--Guido

On 11/7/05, Neal Norwitz [EMAIL PROTECTED] wrote:
 We've been having some issues and discussions at work about cross
 compiling.  There are various people that have tried (are) cross
 compiling python.  Right now the support kinda sucks due to a couple
 of reasons.

 First, distutils is required to build all the modules.  This means
 that python must be built twice.  Once for the target machine and once
 for the host machine.  The host machine is really not desired since
 its only purpose is to run distutils.  I don't know the history of
 why distutils is used.  I haven't had much of an issue with it since
 I've never needed to cross compile.  What are the issues with not
 requiring python to be built on the host machine (ie, not using
 distutils)?

 Second, in configure we try to run little programs (AC_TRY_RUN) to
 determine what to set.  I don't know of any good alternative but to
 force those to be defined manually for cross-compiled environments.
 Any suggestions here?  I'm thinking we can skip the AC_TRY_RUNs
 if host != target and we pickup the answers to those from a user
 supplied file.

 I'm *not* suggesting that normal builds see any change in behaviour.
 Nothing will change for most developers.  ie, ./configure ; make ;
 ./python will continue to work the same.  I only want to make it
 possible to cross compile python by building it only on the target
 platform.

 n

 PS.  I would be interested to hear from others who are doing cross
 compiling and know more about it than me.
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 http://mail.python.org/mailman/options/python-dev/guido%40python.org



--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-08 Thread Guido van Rossum
You didn't show us what's in the zip file.  Can you show a zipinfo output?

My intention with import was always that without -O, *.pyo files are
entirely ignored; and with -O, *.pyc files are entirely ignored.

It sounds like you're saying that you want to change this so that .pyc
and .pyo are always honored (with .pyc preferred if -O is not present
and .pyo preferred if -O is present). I'm not sure that I like that
better. If that's how zipimport works, I think it's broken!

--Guido

On 11/8/05, Osvaldo Santana Neto [EMAIL PROTECTED] wrote:
 Hi,

 I'm working on Python[1] port for Maemo Platform[2] and I've found a
 inconsistent behavior in zipimport and import hook with '.pyc' and
 '.pyo' files. The shell section below show this problem using a
 'module_c.pyc', 'module_o.pyo' and 'modules.zip' (with module_c and
 module_o inside):

 $ ls
 module_c.pyc  module_o.pyo  modules.zip

 $ python
 >>> import module_c
 >>> import module_o
 ImportError: No module named module_o

 $ python -O
 >>> import module_c
 ImportError: No module named module_c
 >>> import module_o

 $ rm *.pyc *.pyo
 $ PYTHONPATH=modules.zip python
 >>> import module_c
 >>> module_c
 <module 'module_c' from ...>
 >>> import module_o
 >>> module_o
 <module 'module_o' from ...>

 $ PYTHONPATH=modules.zip python -O
 >>> import module_c
 >>> module_c
 <module 'module_c' from ...>
 >>> import module_o
 >>> module_o
 <module 'module_o' from ...>

 I've create a patch suggestion to remove this inconsistency[3] (*I* think
 zipimport behaviour is better).

 [1] http://pymaemo.sf.net/
 [2] http://www.maemo.org/
 [3] http://python.org/sf/1346572

 --
 Osvaldo Santana Neto (aCiDBaSe)
 icq, url = (11287184, "http://www.pythonbrasil.com.br")
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 http://mail.python.org/mailman/options/python-dev/guido%40python.org



--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-09 Thread Guido van Rossum
Maybe it makes more sense to deprecate .pyo altogether and instead
have a post-load optimizer optimize .pyc files according to the
current optimization settings?

Unless others are interested in this nothing will happen.

I've never heard of a third party making their code available only as
.pyo, so the use case for changing things isn't very strong. In fact
the only use cases I know for not making .py available are in
situations where a proprietary canned application is distributed to
end users who have no intention or need to ever add to the code.

--Guido

On 11/9/05, Osvaldo Santana [EMAIL PROTECTED] wrote:
 On 11/9/05, Guido van Rossum [EMAIL PROTECTED] wrote:
  You didn't show us what's in the zip file.  Can you show a zipinfo output?

 $ zipinfo modules.zip
 Archive:  modules.zip   426 bytes   2 files
 -rw-r--r--  2.3 unx  109 bx defN 31-Oct-05 14:49 module_o.pyo
 -rw-r--r--  2.3 unx  109 bx defN 31-Oct-05 14:48 module_c.pyc
 2 files, 218 bytes uncompressed, 136 bytes compressed:  37.6%

  My intention with import was always that without -O, *.pyo files are
  entirely ignored; and with -O, *.pyc files are entirely ignored.
 
  It sounds like you're saying that you want to change this so that .pyc
  and .pyo are always honored (with .pyc preferred if -O is not present
  and .pyo preferred if -O is present). I'm not sure that I like that
  better. If that's how zipimport works, I think it's broken!

 Yes, this is how zipimport works and I think this is good in cases
 where a third-party binary module/package is available only with .pyo
 files and others only with .pyc files (without .py source files, of
 course).

 I know we can rename the files, but is this a good solution? Well, I
 don't have a strong opinion about the solution adopted and I really
 like to see other alternatives and opinions.

 Thanks,
 Osvaldo

 --
 Osvaldo Santana Neto (aCiDBaSe)
 icq, url = (11287184, "http://www.pythonbrasil.com.br")
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 http://mail.python.org/mailman/options/python-dev/guido%40python.org



--
--Guido van Rossum (home page: http://www.python.org/~guido/)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Weak references: dereference notification

2005-11-09 Thread Guido van Rossum
  Gustavo J. A. M. Carneiro wrote:
 I have come across a situation where I find the current weak
   references interface for extension types insufficient.
  
 Currently you only have a tp_weaklistoffset slot, pointing to a
   PyObject with weak references.  However, in my case[1] I _really_ need
   to be notified when a weak reference is dereferenced.

I find reading through the bug discussion a bit difficult to
understand your use case. Could you explain it here? If you can't
explain it you certainly won't get your problem solved! :-)

   What happens now
   is that, when you call a weakref object, a simple Py_INCREF is done on
   the referenced object.  It would be easy to implement a new slot to
   contain a function that should be called when a weak reference is
   dereferenced.  Or, alternatively, a slot or class attribute that
   indicates an alternative type that should be used to create weak
   references: instead of the builtin weakref object, a subtype of it, so
   you can override tp_call.
  
 Does this sounds acceptable?

[Jim Fulton]
  Since you can now (as of 2.4) subclass the weakref.ref class, you should be 
  able to
  do this yourself in Python.  See for example, weakref.KeyedRef.

  I know I can subclass it, but it doesn't change anything.  If people
 keep writing code like weakref.ref(myobj) instead of myweakref(myobj),
 it still won't work.

   I wouldn't want to have to teach users of the library that they need
to use an alternative type; that seldom works.

   Now, if there was a place in the type that contained information like

 for creating weak references of instances of this type, use this
 weakref class

 and weakref.ref was smart enough to lookup this type and use it, only
 _then_ it could work.

Looks like what you're looking for is a customizable factory function.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-09 Thread Guido van Rossum
On 11/9/05, Osvaldo Santana [EMAIL PROTECTED] wrote:
 I've noticed this inconsistency when we stop to use zipimport in our
 Python For Maemo distribution. We've decided to stop using zipimport
 because the device (Nokia 770) uses a compressed filesystem.

I won't comment further on the brainstorm that's going on (this is
becoming a topic for c.l.py) but I think you are misunderstanding the
point of zipimport. It's not done (usually) for the compression but
for the index. Finding a name in the zipfile index is much more
efficient than doing a directory search; and the zip index can be
cached.
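The index advantage is easy to demonstrate: put a zip archive itself on sys.path and the import machinery resolves names against the zip's central directory rather than scanning the filesystem. A minimal sketch (the module name module_c echoes the example from this thread):

```python
import importlib
import os
import sys
import tempfile
import zipfile

# Build a zip archive containing one tiny module.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "modules.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("module_c.py", "VALUE = 42\n")

# Adding the archive to sys.path hands lookups to zipimport, which
# works from the zip's (cacheable) index, not a directory search.
sys.path.insert(0, zip_path)
module_c = importlib.import_module("module_c")
```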

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-09 Thread Guido van Rossum
On 11/9/05, Brett Cannon [EMAIL PROTECTED] wrote:
 On 11/9/05, Guido van Rossum [EMAIL PROTECTED] wrote:
  Maybe it makes more sense to deprecate .pyo altogether and instead
  have a post-load optimizer optimize .pyc files according to the
  current optimization settings?

 But I thought part of the point of .pyo files was that they left out
 docstrings and thus had a smaller footprint?

Very few people care about the smaller footprint (although one piped up here).

 Plus I wouldn't be
 surprised if we started to move away from bytecode optimization and
 instead tried to do more AST transformations which would remove
 possible post-load optimizations.

 I would have no issue with removing .pyo files and having .pyc files
 just be as optimized as the current settings specify, and leaving it at
 that.  Could have some metadata listing what optimizations occurred,
 but do we really need to have a specific way to denote if bytecode has
 been optimized?  Binary files compiled from C don't note what -O
 optimization they were compiled with.  If someone distributes
 optimized .pyc files chances are they are going to have a specific
 compile step with py_compile and they will know what optimizations
 they are using.

Currently, .pyo files have some important semantic differences with
.pyc files; -O doesn't remove docstrings (that's -OO) but it does
remove asserts. I wouldn't want to accidentally use a .pyc file
without asserts compiled in unless the .py file wasn't around.

For application distribution, the following probably would work:

- instead of .pyo files, we use .pyc files
- the .pyc file records whether optimizations were applied, whether
asserts are compiled, and whether docstrings are retained
- if the compiler finds a .pyc that is inconsistent with the current
command line, it ignores it and rewrites it (if it is writable) just
as if the .py file were newer

However, this would be a major pain for the standard library and other
shared code -- there it's really nice to have a cache for each of the
optimization levels since usually regular users can't write the
.py[co] files there, meaning very slow always-recompilation if the
standard .pyc files aren't of the right level, causing unacceptable
start-up times.

The only solutions I can think of that use a single file actually
*increase* the file size by having unoptimized and optimized code
side-by-side, or some way to quickly skip the assertions -- the -OO
option is a special case that probably needs to be done differently
anyway and only for final distribution.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-09 Thread Guido van Rossum
On 11/9/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 03:25 PM 11/9/2005 -0800, Guido van Rossum wrote:
 The only solutions I can think of that use a single file actually
 *increase* the file size by having unoptimized and optimized code
 side-by-side, or some way to quickly skip the assertions -- the -OO
 option is a special case that probably needs to be done differently
 anyway and only for final distribution.

 We could have a JUMP_IF_NOT_DEBUG opcode to skip over asserts and if
 __debug__ blocks.  Then under -O we could either patch this to a plain
 jump, or compact the bytecode to remove the jumped-over part(s).

That sounds very reasonable.

 By the way, while we're on this subject, can we make the optimization
 options be part of the compile() interface?  Right now the distutils has to
 actually exec another Python process whenever you want to compile
 code with
 a different optimization level than what's currently in effect, whereas if
 it could pass the desired level to compile(), this wouldn't be necessary.

Makes sense to me; we need a patch of course.
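This eventually happened: since Python 3.2, compile() accepts an optimize argument, so no subprocess is needed. A sketch of the assert-stripping behavior being discussed, observed through dis:

```python
import dis

src = "assert x > 0"

# optimize=0 keeps asserts (like running without -O);
# optimize=1 strips them (like -O).
plain = compile(src, "<example>", "exec", optimize=0)
optimized = compile(src, "<example>", "exec", optimize=1)

ops_plain = [i.opname for i in dis.get_instructions(plain)]
ops_opt = [i.opname for i in dis.get_instructions(optimized)]

# An assert compiles down to a conditional raise of AssertionError.
has_assert = "RAISE_VARARGS" in ops_plain
assert_removed = "RAISE_VARARGS" not in ops_opt
```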

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-09 Thread Guido van Rossum
[Guido]
  However, this would be a major pain for the standard library and other
  shared code -- there it's really nice to have a cache for each of the
  optimization levels since usually regular users can't write the
  .py[co] files there, meaning very slow always-recompilation if the
  standard .pyc files aren't of the right level, causing unacceptable
  start-up times.
[Brett]
 What if PEP 304 came into being?  Then people would have a place to
 have the shared code's recompiled version stored and thus avoid the
 overhead from repeated use.

Still sounds suboptimal for the standard library; IMO it should just work.

  The only solutions I can think of that use a single file actually
  *increase* the file size by having unoptimized and optimized code
  side-by-side, or some way to quickly skip the assertions -- the -OO
  option is a special case that probably needs to be done differently
  anyway and only for final distribution.

 One option would be to introduce an ASSERTION bytecode that has an
 argument specifying the amount of bytecode for the assertion.  The
 eval loop can then just ignore the bytecode if assertions are being
 evaluated and fall through to the bytecode for the assertions (and
 thus be the equivalent of NOP) or use the argument to jump forward
 that number of bytes in the bytecode and completely skip over the
 assertion (and thus be just like a JUMP_FORWARD).  Either way
 assertions becomes slightly more costly but it should be very minimal.

I like Phillip's suggestion -- no new opcode, just a conditional jump
that can be easily optimized out.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-09 Thread Guido van Rossum
  I like Phillip's suggestion -- no new opcode, just a conditional jump
  that can be easily optimized out.

 Huh?  But Phillip is suggesting a new opcode that is essentially the
 same as my proposal but naming it differently and saying the bytecode
 should get changed directly instead of having the eval loop handle the
 semantic differences based on whether -O is being used.

Sorry. Looking back they look pretty much the same to me. Somehow I
glanced over Phillip's code and thought he was proposing to use a
regular JUMP_IF opcode with the special __debug__ variable (which
would be a 3rd possibility, good if we had backwards compatibility
requirements for bytecode -- which we don't, fortunately :-).

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-09 Thread Guido van Rossum
On 11/9/05, Brett Cannon [EMAIL PROTECTED] wrote:
 On 11/9/05, Guido van Rossum [EMAIL PROTECTED] wrote:
I like Phillip's suggestion -- no new opcode, just a conditional jump
that can be easily optimized out.
  
   Huh?  But Phillip is suggesting a new opcode that is essentially the
   same as my proposal but naming it differently and saying the bytecode
   should get changed directly instead of having the eval loop handle the
   semantic differences based on whether -O is being used.
 
  Sorry.

 No problem.  Figured you just misread mine.

  Looking back they look pretty much the same to me. Somehow I
  glanced over Phillip's code and thought he was proposing to use a
  regular JUMP_IF opcode with the special __debug__ variable (which
  would be a 3rd possibility, good if we had backwards compatibility
  requirements for bytecode -- which we don't, fortunately :-).
 

 Fortunately.  =)

 So does this mean you like the idea?  Should this all move forward somehow?

I guess so. :-)

It will need someone thinking really hard about all the use cases,
edge cases, etc., implementation details, and writing up a PEP. Feel
like volunteering? You might squeeze Phillip as a co-author. He's a
really good one.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Event loops, PyOS_InputHook, and Tkinter

2005-11-10 Thread Guido van Rossum
On 11/9/05, Michiel Jan Laurens de Hoon [EMAIL PROTECTED] wrote:
 My application doesn't need a toolkit at all. My problem is that because
 of Tkinter being the standard Python toolkit, we cannot have a decent
 event loop in Python. So this is the disadvantage I see in Tkinter.

That's a non-sequitur if I ever saw one. Who gave you that idea? There
is no connection.

(If there's *any* reason for Python not having a standard event loop
it's probably because I've never needed one.)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-10 Thread Guido van Rossum
On 11/10/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 04:33 PM 11/9/2005 -0800, Guido van Rossum wrote:
 On 11/9/05, Phillip J. Eby [EMAIL PROTECTED] wrote:
   By the way, while we're on this subject, can we make the optimization
   options be part of the compile() interface?  Right now the distutils has 
   to
   actually exec another Python process whenever you want to compile
   code with
   a different optimization level than what's currently in effect, whereas if
   it could pass the desired level to compile(), this wouldn't be necessary.
 
 Makes sense to me; we need a patch of course.

 But before we can do that, it's not clear to me if it should be part of the
 existing flags argument, or whether it should be separate.  Similarly,
 whether it's just going to be a level or an optimization bitmask in its own
 right might be relevant too.

 For the current use case, obviously, a level argument suffices, with 'None'
 meaning whatever the command-line level was for backward
 compatibility.  And I guess we could go with that for now easily enough,
 I'd just like to know whether any of the AST or optimization mavens had
 anything they were planning in the immediate future that might affect how
 the API addition should be structured.

I'm not a big user of this API, please design as you see fit.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-11 Thread Guido van Rossum
On 11/11/05, Ulrich Berning [EMAIL PROTECTED] wrote:
 Guido, if it was intentional to separate slightly different generated
 bytecode into different files and if you have good reasons for doing
 this, why have I never seen a .pyoo file :-)

Because -OO was an afterthought and not implemented by me.

 For instance, nobody would give the output of a C compiler a different
 extension when different compiler flags are used.

But the usage is completely different. With C you explicitly manage
when compilation happens. With Python you don't. When you first run
your program with -O but it crashes, and then you run it again without
-O to enable assertions, you would be very unhappy if the bytecode
cached in a .pyo file would be reused!

 I would appreciate to see the generation of .pyo files completely
 removed in the next release.

You seem to forget the realities of backwards compatibility. While
there are ways to cache bytecode without having multiple extensions,
we probably can't do that until Python 3.0.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Iterating a closed StringIO

2005-11-17 Thread Guido van Rossum
On 11/17/05, Walter Dörwald [EMAIL PROTECTED] wrote:
 Currently StringIO.StringIO and cStringIO.StringIO behave differently
 when iterating a closed stream:

 s = StringIO.StringIO(foo)
 s.close()
 s.next()

 gives StopIteration, but

 s = cStringIO.StringIO(foo)
 s.close()
 s.next()

 gives ValueError: I/O operation on closed file.

 Should they raise the same exception? Should this be fixed for 2.5?

I think cStringIO is doing the right thing; real files behave the same way.

Submit a patch for StringIO (also docs please) and assign it to me and
I'll make sure it goes in.
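The modern io.StringIO (which replaced both modules in Python 3) follows the cStringIO behavior endorsed here, matching real files. A quick check:

```python
import io

s = io.StringIO("foo\nbar\n")
first = next(s)  # file-like objects iterate line by line: "foo\n"

s.close()
try:
    next(s)
    error = None
except ValueError as exc:  # raised on a closed stream, like real files
    error = str(exc)
```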

--
--Guido van Rossum (home page: http://www.python.org/~guido/)

