Re: [Python-Dev] Fwd: Translating docs

2006-02-26 Thread Alexander Schremmer
On Sun, 26 Feb 2006 08:50:57 +0100, Georg Brandl wrote:

> Martin: There aren't any German docs, are there?

There is e.g. http://starship.python.net/~gherman/publications/tut-de/

Kind regards,
Alexander



Re: [Python-Dev] Fwd: Translating docs

2006-02-26 Thread martin
Quoting Georg Brandl <[EMAIL PROTECTED]>:

> Martin: There aren't any German docs, are there?

I started translating the doc strings once, but never got to complete
it. I still believe that the doc string translation is the only approach
that could work in a reasonable way - you would have to use pydoc to
view the translations, though.
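
A minimal sketch of how that idea could look (not of any actual work): the
translated doc strings would live in a gettext catalog and be looked up
before pydoc displays them.  The 'pydoc-de' domain and the localized_help()
helper below are made-up names.

import gettext
import pydoc

_ = gettext.translation('pydoc-de', fallback=True).ugettext

def localized_help(obj):
    original = pydoc.getdoc(obj)   # the English doc string, cleaned up by pydoc
    pydoc.pager(_(original))       # show the translated text if the catalog has it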

There are, of course, various German books.

Regards,
Martin







Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Massimiliano Leoni
Why would you change the Python scoping rules instead of using function
attributes, available since release 2.1 (PEP 232)?
For example, you may write:

def incgen(start, inc):
    def incrementer():
        incrementer.a += incrementer.b
        return incrementer.a
    incrementer.a = start - inc
    incrementer.b = inc
    return incrementer

f = incgen(100, 2)
g = incgen(200, 3)
for i in range(5):
    print f(), g()

The result is:

100 200
102 203
104 206
106 209
108 212



Re: [Python-Dev] bytes.from_hex()

2006-02-26 Thread Stephen J. Turnbull
> "Greg" == Greg Ewing <[EMAIL PROTECTED]> writes:

Greg> I think we need some concrete use cases to talk about if
Greg> we're to get any further with this. Do you have any such use
Greg> cases in mind?

I gave you one, MIME processing in email, and a concrete bug that is
possible with the design you propose, but not in mine.  You said, "the
programmers need to try harder."  If that's an acceptable answer, I
have to concede it beats any use case I can imagine.

I think it's your turn.  Give me a use case where it matters
practically that the output of the base64 codec be Python unicode
characters rather than 8-bit ASCII characters.

I don't think you can.  Everything you have written so far is based on
defending your maintained assumption that because Python implements
text processing via the unicode type, everything that is described as
a "character" must be coerced to that type.

If you give up that assumption, you get

1.  an automatic reduction in base64.upcase() bugs because it's a type
error, ie, binary objects are not text objects, no matter what their
representation

2.  encouragement to programmer teams to carry along binary objects as
opaque blobs until they're just about to put them on the wire,
then let the wire protocol guy implement the conversion at that point

3.  efficiency for a very common case where ASCII octets are the wire
representation

4.  efficient and clear implementation and documentation using the
codec framework and API

I don't really see a downside, except for the occasional double
conversion ASCII -> unicode -> UTF-16, as is allowed (but not
mandated) in XML's use of base64.  What downside do you see?

-- 
School of Systems and Information Engineering http://turnbull.sk.tsukuba.ac.jp
University of Tsukuba      Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
   Ask not how you can "do" free software business;
  ask what your business can "do for" free software.


Re: [Python-Dev] cProfile prints to stdout?

2006-02-26 Thread Guido van Rossum
On 2/25/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> >> It is currently impossible to separate profile output from the
> >> program's output.
>
> Guido> It is if you use the "advanced" use of the profiler -- the
> Guido> profiling run just saves the profiling data to a file, and the
> Guido> pstats module invoked separately prints the output.
>
> Sure, but then it's not "simple".  Your original example was "... > file".
> I'd like it to be (nearly) as easy to do it right yet keep it simple.

OK. I believe the default should be stdout though, and the convenience
method print_stats() in profile.py should be the only place that
references stderr. The smallest code mod would be to redirect stdout
temporarily inside print_stats(); but I won't complain if you're more
ambitious and modify pstats.py.

def print_stats(self, sort=-1, stream=None):
    import pstats
    if stream is None:
        stream = sys.stderr
    save = sys.stdout
    try:
        if stream is not None:
            sys.stdout = stream
        pstats.Stats(self).strip_dirs().sort_stats(sort). \
            print_stats()
    finally:
        sys.stdout = save
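
For reference, the two-step "advanced" usage mentioned above looks roughly
like this ('prof.out' is just an illustrative file name):

import cProfile
import pstats

cProfile.run('sum(xrange(100000))', 'prof.out')   # profiling data goes to a file
stats = pstats.Stats('prof.out')                  # ...and pstats prints it separately
stats.strip_dirs().sort_stats('time').print_stats()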

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Thomas Wouters
On Sun, Feb 26, 2006 at 03:27:34PM +0100, Massimiliano Leoni wrote:

> Why would you change the Python scoping rules, instead of using the 
> function attributes, available from release 2.1 (PEP 232) ?

Because closures allow for data that isn't trivially reachable by the caller
(or anyone but the function itself).  You can argue that that's unpythonic or
whatnot, but the fact is that the current closures allow that.
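
A minimal sketch of that point, using the usual mutable-container workaround
(an enclosing name can't be re-bound directly today): the counter state lives
only inside the closure, where callers can't reach it.

def incgen(start, inc):
    state = [start - inc]        # visible only to the closure below
    def incrementer():
        state[0] += inc
        return state[0]
    return incrementer

f = incgen(100, 2)
print f(), f()                   # 100 102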

-- 
Thomas Wouters <[EMAIL PROTECTED]>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Steven Bethard
On 2/25/06, Almann T. Goo <[EMAIL PROTECTED]> wrote:
> On 2/23/06, Steven Bethard <[EMAIL PROTECTED]> wrote:
> > On 2/22/06, Almann T. Goo <[EMAIL PROTECTED]> wrote:
> > > def incrementer_getter(val):
> > >     def incrementer():
> > >         val = 5
> > >         def inc():
> > >             ..val += 1
> > >             return val
> > >         return inc
> > >     return incrementer
> >
> > Sorry, what way did the user think?  I'm not sure what you think was
> > supposed to happen.
>
> My apologies ... I shouldn't use vague terms like what the "user
> thinks."  My problem, as is demonstrated in the above example, is that
> the implicit nature of evaluating a name in Python conflicts with the
> explicit nature of the proposed "dot" notation.  It makes it easier
> for a user to write obscure code (until Python 3K when we force users
> to use "dot" notation for all enclosing scope access ;-) ).

Then do you also dislike the original proposal: that only a single dot
be allowed, and that the '.' would mean "this name, but in the nearest
outer scope that defines it"?  Then:

def incrementer_getter(val):
    def incrementer():
        val = 5
        def inc():
            .val += 1
            return val
        return inc
    return incrementer

would do what I think you want it to[1].  Note that I only suggested
extending the dot-notation to allow multiple dots because of Greg
Ewing's complaint that it wasn't enough like the relative import
notation.  Personally I find PJE's original proposal more intuitive,
and based on your example, I suspect so do you.

[1] That is, increment the ``val`` in incrementer(), return the same
``val``, and never modify the ``val`` in incrementer_getter().
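
For comparison, the effect described in [1] can be approximated in today's
Python with the usual mutable-container workaround:

def incrementer_getter(val):
    def incrementer():
        val = [5]               # a one-element list stands in for the rebindable name
        def inc():
            val[0] += 1
            return val[0]
        return inc
    return incrementer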

STeVe
--
Grammar am for people who can't think for myself.
--- Bucky Katt, Get Fuzzy


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Terry Reedy

"Almann T. Goo" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> On 2/26/06, Greg Ewing <[EMAIL PROTECTED]> wrote:
>> Alternatively, 'global' could be redefined to mean
>> what we're thinking of for 'outer'. Then there would
>> be no change in keywordage.
>> Given the rarity of global statement usage to begin
>> with, I'd say that narrows things down to something
>> well within the range of acceptable breakage in 3.0.
>
> You read my mind--I made a reply similar to this on another branch of
> this thread just minutes ago :).
>
> I am curious to see what the community thinks about this.

I *think* I like this better than more complicated proposals.  I don't 
think I would ever have a problem with the intermediate scope masking the 
module scope.  After all, if I really meant to access the current global 
scope from a nested function, I simply would not use that name in the 
intermediate scope.

tjr





Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Almann T. Goo
On 2/26/06, Steven Bethard <[EMAIL PROTECTED]> wrote:
> Then do you also dislike the original proposal: that only a single dot
> be allowed, and that the '.' would mean "this name, but in the nearest
> outer scope that defines it"?  Then:
>
> def incrementer_getter(val):
>     def incrementer():
>         val = 5
>         def inc():
>             .val += 1
>             return val
>         return inc
>     return incrementer
>
> would do what I think you want it to[1].  Note that I only suggested
> extending the dot-notation to allow multiple dots because of Greg
> Ewing's complaint that it wasn't enough like the relative import
> notation.  Personally I find PJE's original proposal more intuitive,
> and based on your example, I suspect so do you.
>
> [1] That is, increment the ``val`` in incrementer(), return the same
> ``val``, and never modify the ``val`` in incrementer_getter().

I'm not sure if I find this more intuitive, but I think it is more
convenient than the "explicit dots" for each scope.  However my
biggest issue is still there.  I am not a big fan of letting users
have synonyms for names.  Notice how ".var" means the same as "var" in
some contexts in the example above--that troubles me.  PEP 227
addresses this concern with regard to the class scope:

Names in class scope are not accessible.  Names are resolved in
the innermost enclosing function scope.  If a class definition
occurs in a chain of nested scopes, the resolution process skips
class definitions.  This rule prevents odd interactions between
class attributes and local variable access.

As the PEP further states:

An alternative would have been to allow name binding in class
scope to behave exactly like name binding in function scope.  This
rule would allow class attributes to be referenced either via
attribute reference or simple name.  This option was ruled out
because it would have been inconsistent with all other forms of
class and instance attribute access, which always use attribute
references.  Code that used simple names would have been obscure.

I especially don't want to add an issue that is similar to one that
PEP 227 went out of its way to avoid.
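
A small illustration of the PEP 227 rule quoted above:

x = "module"

class C:
    x = "class"
    def method(self):
        return x        # the class-scope x is skipped; this finds the module-level x

print C().method()      # prints 'module'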

-Almann

--
Almann T. Goo
[EMAIL PROTECTED]


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Ron Adam
Terry Reedy wrote:
> "Almann T. Goo" <[EMAIL PROTECTED]> wrote in message 
> news:[EMAIL PROTECTED]
>> On 2/26/06, Greg Ewing <[EMAIL PROTECTED]> wrote:
>>> Alternatively, 'global' could be redefined to mean
>>> what we're thinking of for 'outer'. Then there would
>>> be no change in keywordage.
>>> Given the rarity of global statement usage to begin
>>> with, I'd say that narrows things down to something
>>> well within the range of acceptable breakage in 3.0.
>> You read my mind--I made a reply similar to this on another branch of
>> this thread just minutes ago :).
>>
>> I am curious to see what the community thinks about this.
> 
> I *think* I like this better than more complicated proposals.  I don't 
> think I would ever have a problem with the intermediate scope masking the 
> module scope.  After all, if I really meant to access the current global 
> scope from a nested function, I simply would not use that name in the 
> intermediate scope.
> 
> tjr

Would this apply to reading intermediate scopes without the global keyword?

How would you know you aren't inadvertently masking a name in a
function you call?

In most cases it will probably break something in an obvious way, but I 
suppose in some cases it won't be so obvious.

Ron



[Python-Dev] Exposing the abstract syntax

2006-02-26 Thread martin
At PyCon, there was general reluctance to incorporate
the ast-objects branch, primarily because people were
concerned about what the reference counting would do to
maintainability, and about the (potentially troublesome)
options that direct exposure of AST objects would open up.

OTOH, the approach of creating a shadow tree did not
find opposition, so I implemented that.

Currently, you can use compile() to create an AST
out of source code, by passing PyCF_ONLY_AST (0x400)
to compile. The mapping of AST to Python objects
is as follows:

- There is a Python type for every sum, product,
  and constructor.
- The constructor types inherit from their sum
  types (e.g. ClassDef inherits from stmt).
- Each constructor and product type has an
  _fields member, giving the names of the fields
  of the product.
- Each node in the AST has members with the names
  given in _fields.
- If the field is optional, it might be None.
- If the field is zero-or-more, it is represented
  as a list.

It might be reasonable to expose this through
a separate module, in particular to provide
access to the type objects.
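
A rough interactive sketch of the mapping just described, using the 0x400
flag value given above (the printed output shapes are indicative, not exact):

mod = compile("x = 1\n", "<string>", "exec", 0x400)
print mod               # an instance of the Module node type
print mod._fields       # the names of its fields, e.g. ('body',)
print mod.body[0]       # an Assign node, a subclass of stmt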

Regards,
Martin






Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Alex Martelli
On Feb 26, 2006, at 11:47 AM, Ron Adam wrote:
...
> How would you know you aren't inadvertently masking a name in a
> function you call?

What does calling have to do with it?  Nobody's proposing a move to  
(shudder) dynamic scopes, we're talking of saner concepts such as  
lexical scopes anyway.  Can you give an example of what you mean?

For the record: I detest the existing 'global' (could I change but  
ONE thing in Python, that would be the one -- move from hated  
'global' to a decent namespace use, e.g. glob.x=23 rather than global  
x;x=23), and I'd detest a similar 'outer' just as intensely (again,  
what I'd like instead is a decent namespace) -- so I might well be  
sympathetic to your POV, if I could but understand it;-).


Alex



Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Almann T. Goo
> Would this apply to reading intermediate scopes without the global keyword?

Using a name from an enclosing scope without re-binding to it would
not require the "global" keyword.  This actually is the case today
with "global" and accessing a name from the global scope versus
re-binding to it--this would make "global" more general than
explicitly overriding to the global scope.
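
A small illustration of the current rules being described:

def outer():
    n = 0
    def read():
        return n        # reading the enclosing n needs no keyword
    def write():
        n = 1           # binds a brand-new local n; outer's n is untouched
    write()
    return read()       # still 0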

> How would you know you aren't inadvertently masking a name in a
> function you call?

I think this is really an issue with the name binding semantics in Python.
There are benefits to not having variable declarations, but with
assignment meaning "bind locally", you can already shadow a name in a
nested scope inadvertently today.

> In most cases it will probably break something in an obvious way, but I
> suppose in some cases it won't be so obvious.

Having the "global" keyword semantics changed to be "lexically global"
would break in the cases that "global" is used on a name within a
nested scope that has an enclosing scope with the same name.  I would
suppose that actual instances in real code of this would be rare.

Consider:
>>> x = 1
>>> def f():
...     x = 2
...     def inner():
...         global x
...         print x
...     inner()
...
>>> f()
1

Under the proposed rules:
>>> f()
2

PEP 227 also had backwards incompatibilities that were similar and I
suggest handling them the same way by issuing a warning in these cases
when the new semantics are not being used (i.e. no "from __future__").

-Almann
--
Almann T. Goo
[EMAIL PROTECTED]


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Almann T. Goo
On 2/26/06, Alex Martelli <[EMAIL PROTECTED]> wrote:
> For the record: I detest the existing 'global' (could I change but
> ONE thing in Python, that would be the one -- move from hated
> 'global' to a decent namespace use, e.g. glob.x=23 rather than global
> x;x=23), and I'd detest a similar 'outer' just as intensely (again,
> what I'd like instead is a decent namespace) -- so I might well be
> sympathetic to your POV, if I could but understand it;-).

I would prefer a more explicit means to accomplish this too (I sort of
like the prefix dot in this regard), however the fundamental problem
with allowing this lies in how accessing and binding names works in
Python today (sorry if I sound like a broken record in this regard).

Unless we change how names can be accessed/re-bound (very bad for
backwards compatibility), any proposal that forces explicit name
spaces would have to allow for both accessing "simple names" (like
just "var") and names via attribute access (name spaces) like
"glob.var"--I think this adds the problem of introducing obscurity to
the language.

-Almann

--
Almann T. Goo
[EMAIL PROTECTED]


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Ron Adam
Alex Martelli wrote:
> On Feb 26, 2006, at 11:47 AM, Ron Adam wrote:
>...
>> How would you know you aren't inadvertently masking a name in a
>> function you call?
> 
> What does calling have to do with it?  Nobody's proposing a move to 
> (shudder) dynamic scopes, we're talking of saner concepts such as 
> lexical scopes anyway.  Can you give an example of what you mean?

(sigh of relief) Ok, so the following example will still be true.


def foo(n):              # foo is a global
    return n

def bar(n):
    return foo(n)        # foo is looked up in bar's globals at call time

def baz(n):
    foo = lambda x: 7    # will not replace the foo called in bar
    return bar(n)

print baz(42)

I guess I don't quite get what they are proposing yet.

It seems to me that adding intermediate scopes makes functions act more
like classes.  After you add naming conventions to functions, they begin
to look like this:

""" Multiple n itemiter """
class baz(object):
 def getn(baz, n):
 start = baz.start
 baz.start += n
 return baz.lst[start:start+n]
 def __init__(baz, lst):
 baz.lst = lst
 baz.start = 0

b = baz(range(100))

for n in range(1,10):
 print b.getn(n)


> For the record: I detest the existing 'global' (could I change but ONE 
> thing in Python, that would be the one -- move from hated 'global' to a 
> decent namespace use, e.g. glob.x=23 rather than global x;x=23), and I'd 
> detest a similar 'outer' just as intensely (again, what I'd like instead 
> is a decent namespace) -- so I might well be sympathetic to your POV, if 
> I could but understand it;-).

Maybe something explicit like:

>>> import __main__ as glob
>>> glob.x = 10
>>> globals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__',
 'glob': <module '__main__' (built-in)>, '__doc__': None, 'x': 10}
>>>


That could eliminate the global keyword.

I'm -1 on adding the intermediate (outer) scopes to functions. I'd even 
like to see closures gone completely, but there's probably a reason they 
are there.  What I like about functions is they are fast, clean up 
behind themselves, and act *exactly* the same on consecutive calls.

Cheers,

Ron














[Python-Dev] PEP 308

2006-02-26 Thread Thomas Wouters

Since I was on a streak of implementing not-quite-the-right-thing, I checked
in my PEP 308 implementation *with* backward compatibility -- just to spite
Guido's latest change to the PEP. It jumps through a minor hoop (two new
grammar rules) in order to be backwardly compatible, but that hoop can go
away in Python 3.0, and that shouldn't be too long from now. I apologize for
the test failures of compile, transform and parser: they seem to all depend
on the parsermodule being updated. If no one feels responsible for it, I'll
do it later in the week (I'll be sprinting until Thursday anyway.)
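
For readers skimming the archive: the construct being implemented is PEP 308's
conditional expression.  A tiny sketch of the new form next to the old and/or
idiom it replaces:

verbose = False
# PEP 308 conditional expression:
level = "DEBUG" if verbose else "INFO"
# the older and/or spelling, which misbehaves whenever the "true"
# branch is itself a false value:
level = verbose and "DEBUG" or "INFO"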

-- 
Thomas Wouters <[EMAIL PROTECTED]>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


Re: [Python-Dev] bytes.from_hex()

2006-02-26 Thread Greg Ewing
Stephen J. Turnbull wrote:

 > I gave you one, MIME processing in email

If implementing a mime packer is really the only use case
for base64, then it might as well be removed from the
standard library, since 99.9% of all programmers will
never touch it. Those that do will need to have boned up
on the subject of encoding until it's coming out their
ears, so they'll know what they're doing in any case. And
they'll be quite competent to write their own base64
encoder that works however they want it to.

I don't have any real-life use cases for base64 that a
non-mime-implementer might come across, so all I can do
is imagine what shape such a use case might have.

When I do that, I come up with what I've already described.
The programmer wants to send arbitrary data over a channel
that only accepts text. He doesn't know, and doesn't want
to have to know, how the channel encodes that text --
it might be ASCII or EBCDIC or morse code, it shouldn't
matter. If his Python base64 encoder produces a Python
character string, and his Python channel interface accepts
a Python character string, he doesn't have to know.
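
A minimal sketch of the shape of that use case (send_blob and the channel
object are hypothetical, not an existing API):

import base64

def send_blob(channel, data):
    # channel.write() accepts only text; the caller neither knows nor
    # cares how the channel encodes that text on the wire.
    channel.write(base64.b64encode(data))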

> I think it's your turn.  Give me a use case where it matters
> practically that the output of the base64 codec be Python unicode
> characters rather than 8-bit ASCII characters.

I'd be perfectly happy with ascii characters, but in Py3k,
the most natural place to keep ascii characters will be in
character strings, not byte arrays.

 > Everything you have written so far is based on
> defending your maintained assumption that because Python implements
> text processing via the unicode type, everything that is described as
> a "character" must be coerced to that type.

I'm not just blindly assuming that because the RFC happens
to use the word "character". I'm also looking at how it uses
that word in an effort to understand what it means. It
*doesn't* specify what bit patterns are to be used to
represent the characters. It *does* mention two "character
sets", namely ASCII and EBCDIC, with the implication that
the characters it is talking about could be taken as being
members of either of those sets. Since the Unicode character
set is a superset of the ASCII character set, it doesn't
seem unreasonable that they could also be thought of as
Unicode characters.

> I don't really see a downside, except for the occasional double
> conversion ASCII -> unicode -> UTF-16, as is allowed (but not
> mandated) in XML's use of base64.  What downside do you see?

It appears that all your upsides I see as downsides, and
vice versa. We appear to be mutually upside-down. :-)

XML is another example. Inside a Python program, the most
natural way to represent an XML document is as a character string.
Your way, embedding base64 in it would require converting
the bytes produced by the base64 encoder into a character
string in some way, taking into account the assumed ascii
encoding of said bytes. My way, you just use the result
directly, with no coding involved at all.

-- 
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | Carpe post meridiam!                 |
Christchurch, New Zealand          | (I'm not a morning person.)          |
[EMAIL PROTECTED]                  +--------------------------------------+


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Alex Martelli

On Feb 26, 2006, at 4:20 PM, Ron Adam wrote:
...
> (sigh of relief) Ok, so the following example will still be true.

Yep, no danger of dynamic scoping, be certain of that.

> Maybe something explicit like:
>
>  >>> import __main__ as glob

Sure, or the more general ''glob=__import__(__name__)''.

> I'm -1 on adding the intermediate (outer) scopes to functions. I'd even
> like to see closures gone completely, but there's probably a reason they
> are there.  What I like about functions is they are fast, clean up
> behind themselves, and act *exactly* the same on consecutive calls.

Except that the latter assertion is just untrue in Python -- we  
already have a bazilion ways to perform side effects, and, since  
there is no procedure/function distinction, side effects in functions  
are an extremely common thing.  If you're truly keen on having the  
"exactly the same" property, you may want to look into functional  
languages, such as Haskell -- there, all data is immutable, so the  
property does hold (any *indispensable* side effects, e.g. I/O, are  
packed into 'monads' -- but that's another story).

Closures in Python are often extremely handy, as long as you use them  
much as you would in Haskell -- treating data as immutable (and in  
particular outer names as unrebindable). You'd think that functional  
programming fans wouldn't gripe so much about Python closures being  
meant for use like Haskell ones, hm?-)  But, of course, they do want  
to have their closure and rebind names too...
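
A sketch of that Haskell-ish style: the enclosing name is only ever read,
never re-bound.

def make_adder(n):
    def add(x):
        return x + n    # n is read from the enclosing scope and never rebound
    return add

add3 = make_adder(3)
print add3(4)           # 7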


Alex



Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Almann T. Goo
On 2/26/06, Ron Adam <[EMAIL PROTECTED]> wrote:
> I'm -1 on adding the intermediate (outer) scopes to functions. I'd even
> like to see closures gone completely, but there's probably a reason they
> are there.

We have had enclosing scopes since Python 2.1--this is PEP 227
(http://www.python.org/peps/pep-0227.html).  The proposal is for a
mechanism to allow re-binding of names in enclosing scopes, which seems
like a logical step to me.  The rest of the scoping semantics would
remain as they are today in Python.

-Almann

--
Almann T. Goo
[EMAIL PROTECTED]


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Ron Adam
Alex Martelli wrote:

>> I'm -1 on adding the intermediate (outer) scopes to functions. I'd even
>> like to see closures gone completely, but there's probably a reason they
>> are there.  What I like about functions is they are fast, clean up
>> behind themselves, and act *exactly* the same on consecutive calls.
> 
> Except that the latter assertion is just untrue in Python -- we already 
> have a bazilion ways to perform side effects, and, since there is no 
> procedure/function distinction, side effects in functions are an 
> extremely common thing.  If you're truly keen on having the "exactly the 
> same" property, you may want to look into functional languages, such as 
> Haskell -- there, all data is immutable, so the property does hold (any 
> *indispensable* side effects, e.g. I/O, are packed into 'monads' -- but 
> that's another story).

True, I should have said they mostly act the same when used in a common
and direct way.  I know we can change all sorts of behaviors fairly
easily if we choose to.


> Closures in Python are often extremely handy, as long as you use them 
> much as you would in Haskell -- treating data as immutable (and in 
> particular outer names as unrebindable). You'd think that functional 
> programming fans wouldn't gripe so much about Python closures being 
> meant for use like Haskell ones, hm?-)  But, of course, they do want to 
> have their closure and rebind names too...

So far everywhere I've seen closures used, a class would work.  But 
maybe not as conveniently or as fast?
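
Roughly the class counterpart of the incgen example from earlier in the
thread, for comparison:

class IncGen(object):
    def __init__(self, start, inc):
        self.a = start - inc
        self.b = inc
    def __call__(self):
        self.a += self.b
        return self.a

f = IncGen(100, 2)
print f(), f()          # 100 102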

On the other side of the coin, there are those who want to get rid of the
"self" variable in classes too, which would make classes look more like
nested functions.

Haskell sounds interesting, maybe I'll try a bit of it sometime.  But I
like Python. ;-)

Ron


Re: [Python-Dev] Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes)

2006-02-26 Thread Alex Martelli
On Feb 26, 2006, at 5:43 PM, Ron Adam wrote:
...
> So far everywhere I've seen closures used, a class would work.  But
> maybe not as conveniently or as fast?

Yep.  In this, closures are like generators: much more convenient  
than purpose-built classes, but not as general.

> Haskell sounds interesting, maybe I'll try a bit of it sometime.  But I
> like Python. ;-)

So do I, so do many others: the first EuroHaskell was held the day  
right after a EuroPython, in the same venue (a Swedish University,  
Chalmers) -- that was convenient because so many delegates were  
interested in both languages, see.

We stole list comprehensions and genexps from Haskell (the idea and  
most of the semantics, not the syntax, which was Pythonized  
relentlessly) -- and the two languages share the concept of  
indentation being significant for grouping, with some minor  
differences in details since they developed these concepts  
independently. Hey, what more do you need?-)


Alex



[Python-Dev] Current trunk test failures

2006-02-26 Thread Tim Peters
The buildbot shows that the debug-build test_grammar is dying with a C
assert failure on all boxes.

In case it helps, in a Windows release build test_transformer is also failing:

test_transformer
test test_transformer failed -- Traceback (most recent call last):
  File "C:\Code\python\lib\test\test_transformer.py", line 16, in
testMultipleLHS
a = transformer.parse(s)
  File "C:\Code\python\lib\compiler\transformer.py", line 52, in parse
return Transformer().parsesuite(buf)
  File "C:\Code\python\lib\compiler\transformer.py", line 129, in parsesuite
return self.transform(parser.suite(text))
  File "C:\Code\python\lib\compiler\transformer.py", line 125, in transform
return self.compile_node(tree)
  File "C:\Code\python\lib\compiler\transformer.py", line 158, in compile_node
return self.file_input(node[1:])
  File "C:\Code\python\lib\compiler\transformer.py", line 189, in file_input
self.com_append_stmt(stmts, node)
  File "C:\Code\python\lib\compiler\transformer.py", line 1036, in
com_append_stmt
result = self.lookup_node(node)(node[1:])
  File "C:\Code\python\lib\compiler\transformer.py", line 305, in stmt
return self.com_stmt(nodelist[0])
  File "C:\Code\python\lib\compiler\transformer.py", line 1029, in com_stmt
result = self.lookup_node(node)(node[1:])
  File "C:\Code\python\lib\compiler\transformer.py", line 315, in simple_stmt
self.com_append_stmt(stmts, nodelist[i])
  File "C:\Code\python\lib\compiler\transformer.py", line 1036, in
com_append_stmt
result = self.lookup_node(node)(node[1:])
  File "C:\Code\python\lib\compiler\transformer.py", line 305, in stmt
return self.com_stmt(nodelist[0])
  File "C:\Code\python\lib\compiler\transformer.py", line 1029, in com_stmt
result = self.lookup_node(node)(node[1:])
  File "C:\Code\python\lib\compiler\transformer.py", line 353, in expr_stmt
exprNode = self.lookup_node(en)(en[1:])
  File "C:\Code\python\lib\compiler\transformer.py", line 763, in lookup_node
return self._dispatch[node[0]]
KeyError: 324

Also test_parser:

C:\Code\python\PCbuild>python  -E -tt ../lib/test/regrtest.py -v test_parser
test_parser
test_assert (test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_basic_import_statement
(test.test_parser.RoundtripLegalSyntaxTestCase) ... ok
test_class_defs (test.test_parser.RoundtripLegalSyntaxTestCase) ... ok
test_expressions (test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_function_defs (test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_import_from_statement
(test.test_parser.RoundtripLegalSyntaxTestCase) ... ok
test_pep263 (test.test_parser.RoundtripLegalSyntaxTestCase) ... ok
test_print (test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_simple_assignments (test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_simple_augmented_assignments
(test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_simple_expression (test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_yield_statement (test.test_parser.RoundtripLegalSyntaxTestCase) ... FAIL
test_a_comma_comma_c (test.test_parser.IllegalSyntaxTestCase) ... ok
test_illegal_operator (test.test_parser.IllegalSyntaxTestCase) ... ok
test_illegal_yield_1 (test.test_parser.IllegalSyntaxTestCase) ... ok
test_illegal_yield_2 (test.test_parser.IllegalSyntaxTestCase) ... ok
test_junk (test.test_parser.IllegalSyntaxTestCase) ... ok
test_malformed_global (test.test_parser.IllegalSyntaxTestCase) ... ok
test_print_chevron_comma (test.test_parser.IllegalSyntaxTestCase) ... ok
test_compile_error (test.test_parser.CompileTestCase) ... ok
test_compile_expr (test.test_parser.CompileTestCase) ... ok
test_compile_suite (test.test_parser.CompileTestCase) ... ok

==
FAIL: test_assert (test.test_parser.RoundtripLegalSyntaxTestCase)
--
Traceback (most recent call last):
  File "C:\Code\python\lib\test\test_parser.py", line 180, in test_assert
self.check_suite("assert alo < ahi and blo < bhi\n")
  File "C:\Code\python\lib\test\test_parser.py", line 28, in check_suite
self.roundtrip(parser.suite, s)
  File "C:\Code\python\lib\test\test_parser.py", line 19, in roundtrip
self.fail("could not roundtrip %r: %s" % (s, why))
AssertionError: could not roundtrip 'assert alo < ahi and blo <
bhi\n': Expected node type 303, got 302.

==
FAIL: test_expressions (test.test_parser.RoundtripLegalSyntaxTestCase)
--
Traceback (most recent call last):
  File "C:\Code\python\lib\test\test_parser.py", line 50, in test_expressions
self.check_expr("foo(1)")
  File "C:\Code\python\lib\test\test_parser.py", line 25, in check_expr
self.roundtrip(parser.expr, s)
  File "C:\Code\python\lib\test\test_parser.py", line 19, in roundtrip
self.fail("co

Re: [Python-Dev] bytes.from_hex()

2006-02-26 Thread Stephen J. Turnbull
> "Greg" == Greg Ewing <[EMAIL PROTECTED]> writes:

Greg> Stephen J. Turnbull wrote:

>> I gave you one, MIME processing in email

Greg> If implementing a mime packer is really the only use case
Greg> for base64, then it might as well be removed from the
Greg> standard library, since 99.9% of all programmers will
Greg> never touch it.  I don't have any real-life use cases for
Greg> base64 that a non-mime-implementer might come across, so all
Greg> I can do is imagine what shape such a use case might have.

I guess we don't have much to talk about, then.

>> Give me a use case where it matters practically that the output
>> of the base64 codec be Python unicode characters rather than
>> 8-bit ASCII characters.

Greg> I'd be perfectly happy with ascii characters, but in Py3k,
Greg> the most natural place to keep ascii characters will be in
Greg> character strings, not byte arrays.

Natural != practical.

Anyway, I disagree, and I've lived with the problems that come with an
environment that mixes objects with various underlying semantics into
a single "text stream" for a decade and a half.

That doesn't make me authoritative, but as we agree to disagree, I
hope you'll keep in mind that someone with real-world experience that
is somewhat relevant[1] to the issue doesn't find that natural at all.

Greg> Since the Unicode character set is a superset of the ASCII
Greg> character set, it doesn't seem unreasonable that they could
Greg> also be thought of as Unicode characters.

I agree.  However, as soon as I go past that intuition to thinking
about what that implies for _operations_ on the base64 string, it
begins to seem unreasonable, unnatural, and downright dangerous.  The
base64 string is a representation of an object that doesn't have text
semantics.  Nor do base64 strings have text semantics: they can't even
be concatenated as text (the pad character '=' is typically a syntax
error in a profile of base64, except as terminal padding).  So if you
wish to concatenate the underlying objects, the base64 strings must be
decoded, concatenated, and re-encoded in the general case.  IMO it's
not worth preserving the very superficial coincidence of "character
representation" in the face of such semantics.

I think the fact that favoring the coincidence of representation
leads you to also deprecate the very natural use of the codec API to
implement and understand base64 is indicative of a deep problem with
the idea of implementing base64 as bytes->unicode.


Footnotes: 
[1]  That "somewhat" is intended literally; my specialty is working
with codecs for humans in Emacs, but I've also worked with more
abstract codecs such as base64 in contexts like email, in both LISP
and Python.

-- 
School of Systems and Information Engineering http://turnbull.sk.tsukuba.ac.jp
University of Tsukuba      Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
   Ask not how you can "do" free software business;
  ask what your business can "do for" free software.