Re: [Python-Dev] Tkinter lockups.
Thomas Wouters wrote: It seems that, on my platform at least, Tk_Init() doesn't like being called twice even when the first call resulted in an error. That's Tcl and Tk 8.4.12. Tkapp_Init() (which is the Tkinter part that calls Tk_Init()) does its best to guard against calling Tk_Init() twice when the first call was successful, but it doesn't remember failure cases. I don't know enough about Tcl/Tk or Tkinter to know how this is best handled, but it would be mightily convenient if it were. ;-) I've created a bug report on it, and I hope someone with Tkinter knowledge can step in and fix it. (It looks like SF auto-assigned it to Martin already, hmm.) I have now reported the underlying Tk bug at http://sourceforge.net/tracker/index.php?func=detail&aid=1479587&group_id=12997&atid=112997 and worked around it in _tkinter.c. Regards, Martin ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] speeding up function calls
Results: 2.86% for 1 arg (len), 11.8% for 2 args (min), and 1.6% for pybench. ./python.exe -m timeit 'for x in xrange(1): len([])' ./python.exe -m timeit 'for x in xrange(1): min(1,2)' One part of it is a little dangerous though. http://python.org/sf/1479611 The general idea is to preallocate arg tuples and never dealloc. This saves a fair amount of work. I'm not sure it's entirely safe though. I noticed in doing this patch that PyTuple_Pack() calls _New() which initializes each item to NULL, then in _Pack() each item is set to the appropriate value. If we could get rid of duplicate work like that (or checking values in both callers and callees), we could get more speed. In order to try and find functions where this is more important, you can use Walter's coverage results: http://coverage.livinglogic.de n
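For anyone wanting to reproduce the measurement, the timeit module can also be driven from Python rather than the shell; a minimal sketch (the loop counts here are illustrative, not the ones used for the quoted numbers):

```python
import timeit

# Time a 1-argument and a 2-argument builtin call; the statements
# mirror the shell commands quoted above.
one_arg = timeit.timeit("len([])", number=100000)
two_arg = timeit.timeit("min(1, 2)", number=100000)

print("len([]):  %.6f s" % one_arg)
print("min(1,2): %.6f s" % two_arg)
```

Comparing the two timings before and after a patch like sf/1479611 is how a per-call speedup of a few percent would show up.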
Re: [Python-Dev] methods on the bytes object
Martin v. Löwis [EMAIL PROTECTED] wrote: Josiah Carlson wrote: I mean unicode strings, period. I can't imagine what unicode strings which do not contain data could be. Binary data as opposed to text. Input to an array.fromstring(), struct.unpack(), etc. You can't/shouldn't put such data into character strings: you need an encoding first. Certainly that is the case. But how would you propose embedded bytes data be represented? (I talk more extensively about this particular issue later). Neither array.fromstring nor struct.unpack will produce/consume type 'str' in Python 3; both will operate on the bytes type. So fromstring should probably be renamed frombytes. Um...struct.unpack() already works on unicode...

>>> struct.unpack('L', u'work')
(2003792491L,)

As does array.fromstring...

>>> a = array.array('B')
>>> a.fromstring(u'work')
>>> a
array('B', [119, 111, 114, 107])

... assuming that all characters are in the 0...127 range. But that's a different discussion. Certainly it is the case that right now strings are used to contain 'text' and 'bytes' (binary data, encodings of text, etc.). The problem is in the ambiguity of Python 2.x str containing text where it should only contain bytes. But in 3.x, there will continue to be an ambiguity, as strings will still contain bytes and text (parsing literals, see the somewhat recent argument over bytes.encode('base64'), etc.). No. In Python 3, type 'str' cannot be interpreted to contain bytes. Operations that expect bytes and are given type 'str', and no encoding, should raise TypeError. I am apparently not communicating this particular idea effectively enough. How would you propose that I store parsing literals for non-textual data, and how would you propose that I set up a dictionary to hold some non-trivial number of these parsing literals? I don't want a vague "you can't do X", I want a "here's the code you would use". From what I understand, it would seem that you would suggest that I use something like the following...
handler = {bytes('...', encoding=...).encode('latin-1'): ..., #or
           '\u\u...': ..., #or even without bytes/str
           (0xXX, 0xXX, ...): ...,
           }

Note how two of those examples have non-textual data inside of a Python 3.x string? Yeah. We've not removed the problem, only changed it from being contained in non-unicode strings to be contained in unicode strings (which are 2 or 4 times larger than their non-unicode counterparts). We have removed the problem. Excuse me? People are going to use '...' to represent literals of all different kinds. Whether these are text literals, binary data literals, encoded binary data blobs (see the output of img2py.py from wxPython), whatever. We haven't removed the problem, we've only forced all string literals to be unicode; foolish consistency and all that. Within the remainder of this email, there are two things I'm trying to accomplish: 1. preserve the Python 2.x string type I would expect that people try that. I'm -1. I also expect that people will try to make it happen; I am (and I'm certainly not a visionary when it comes to programming language features). I would also hope that others are able to see that immutable unicode and mutable bytes aren't necessarily sufficient, especially when the standard line will be something like if you are putting binary data inside of a unicode string, you are doing it wrong. Especially considering that unless one jumps through hoops of defining their bytes data as a bytes(list/tuple), and not bytes('...', encoding=...), that technically, they are still going to be storing bytes data as unicode strings. 2. make the bytes object more palatable regardless of #1 This might be good, but we have to be careful to not create a type that people would casually use to represent text. Certainly. But by lacking #1, we will run into a situation where Python 3.x strings will be used to represent bytes.
Understand that I'm also trying to differentiate the two cases (and thinking further, a bytes literal would allow users to differentiate them without needing to use bytes('...', ...) ). In the realm of palatability, giving bytes objects most of the current string methods, (with perhaps .read(), .seek(), (and .write() for mutable bytes) ), I think, would go a long ways (if not all the way) towards being more than satisfactory. I do, however, believe that the Python 2.x string type is very useful from a data parsing/processing perspective. You have to explain your terminology somewhat better here: What applications do you have in mind when you are talking about parsing/processing? To me, parsing always means text, never raw bytes. I'm thinking of the Chomsky classification of grammars, EBNF, etc. when I hear parsing. What does pickle.load(...) do to the files that are passed into it? It reads the (possibly binary) data it reads in
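The kind of dispatch table of binary parsing literals Josiah is asking about can be written with the bytes type keyed on byte sequences rather than text; a sketch in Python 3 syntax (the magic numbers and handler names below are illustrative only, not from the thread):

```python
# Binary parsing literals built from integer sequences, so no text
# encoding is ever involved.  (Python 3 ultimately also grew b'...'
# literals for exactly this purpose.)
PNG_MAGIC = bytes([0x89, 0x50, 0x4E, 0x47])
GIF_MAGIC = bytes([0x47, 0x49, 0x46, 0x38])

handlers = {
    PNG_MAGIC: "png",
    GIF_MAGIC: "gif",
}

def identify(data):
    # data is bytes, never str: there is no encoding ambiguity to resolve.
    for magic, kind in handlers.items():
        if data.startswith(magic):
            return kind
    return None

print(identify(bytes([0x89, 0x50, 0x4E, 0x47, 0x0D])))  # png
```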
Re: [Python-Dev] More on contextlib - adding back a contextmanagerdecorator
Guido van Rossum wrote: Things should be as simple as possible but no simpler. It's pretty clear to me that dropping __context__ approaches this ideal. I'm sorry I didn't push back harder when __context__ was first proposed -- in retrospect, the first 5 months of PEP 343's life, before __context__ (or __with__, as it was originally called) was invented, were by far its happiest times. I've posted two versions of the with page from the language reference: http://pyref.infogami.com/with (current) http://pyref.infogami.com/with-alt (simplified) (the original is a slightly tweaked version of the current development docs) /F
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Terry Reedy wrote: My Why? was and is exactly a request for that further discussion. Again: if a function has a fixed number n of params, why say that the first k can be passed by position, while the remaining n-k *must* be passed by name? have you designed APIs for others than yourself, and followed up how they are used? /F
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Terry Reedy wrote: Nick Coghlan [EMAIL PROTECTED] wrote in message Because for some functions (e.g. min()/max()) you want to use *args, but support some additional keyword arguments to tweak a few aspects of the operation (like providing a key=x option). This and the rest of your 'explanation' is about Talin's first proposal, to which I already had said The rationale for this is pretty obvious. Actually, I misread Talin's PEP more so than your question - I thought the first syntax change was about the earlier Py3k discussion of permitting '*args' before keyword arguments in a function call. It seems that one is actually non-controversial enough to not really need a PEP at all :) Reading the PEP again, I realise what you were actually asking, and have to say I agree the only use case that has been identified for keyword-only arguments is functions which accept an arbitrary number of positional arguments. So +1 for the first change, -1 for the second. Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://www.boredomandlaziness.org
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
Phillip J. Eby wrote: At 08:08 PM 4/30/2006 -0700, Guido van Rossum wrote: If you object against the extra typing, we'll first laugh at you (proposals that *only* shave a few characters of a common idiom aren't all that popular in these parts), and then suggest that you can spell foo.some_method() as foo(). Okay, you've moved me to at least +0 for dropping __context__. I have only one object myself that has a non-self __context__, and it doesn't have a __call__, so none of my code breaks beyond the need to add parentheses in a few places. ;) At least +0 here, too. I've just been so deep in this lately that it is taking a while to wind my thinking back a year or so. Still, far better to be having this discussion now than in 6 months time :) It sure has been a long and winding road back to Guido's original version of PEP 343, though! As for decimal contexts, I'm thinking maybe we should have a decimal.using(ctx=None, **kw) function, where ctx defaults to the current decimal context, and the keyword arguments are used to make a modified copy, seems like a reasonable best way to implement the behavior that __context__ was added for. And then all of the existing special machinery can go away and be replaced with a single @contextfactory. 'localcontext' would probably work as at least an interim name for such a function. with decimal.localcontext() as ctx: # use the new context here This is really an all-round improvement over the current SVN approach, where the fact that a new decimal context object is being created by the existing decimal context object is thoroughly implicit and unobvious. (I think we should stick with @contextfactory as the decorator name, btw, even if we go back to calling __enter__/__exit__ things context managers.) Agreed. And that decorator will still be useful for defining methods as well as functions. Cheers, Nick. 
-- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://www.boredomandlaziness.org
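The decimal module did grow a localcontext() helper essentially as discussed above (it appeared in Python 2.5); a minimal sketch of the usage pattern Nick describes:

```python
from decimal import Decimal, localcontext

# Temporarily switch to 5 digits of precision; the outer context is
# restored automatically when the with block exits.
with localcontext() as ctx:
    ctx.prec = 5
    inside = Decimal(1) / Decimal(7)

# Back under the default context (28 significant digits).
outside = Decimal(1) / Decimal(7)

print(inside)   # 0.14286
print(outside)
```

This makes the creation of the new, modified context explicit at the with statement, which is exactly the improvement over the implicit-copy behaviour being criticised above.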
[Python-Dev] global variable modification in functions [Re: elimination of scope bleeding of iteration variables]
Nick Coghlan wrote: Ben Wing wrote: apologies if this has been brought up on python-dev already. a suggestion i have, perhaps for python 3.0 since it may break some code (but imo it could go into 2.6 or 2.7 because the likely breakage would be very small, see below), is the elimination of the misfeature whereby the iteration variable used in for-loops, list comprehensions, etc. bleeds out into the surrounding scope. [i'm aware that there is a similar proposal for python 3.0 for list comprehensions specifically, but that's not enough.] List comprehensions will be fixed in Py3k. However, the scoping of for loop variables won't change, as the current behaviour is essential for search loops that use a break statement to terminate the loop when the item is found. Accordingly, there is plenty of code in the wild that *would* break if the for loop variables were constrained to the for loop, even if your own code wouldn't have such a problem. Outside pure scripts, significant control flow logic (like for loops) should be avoided at module level. You are typically much better off moving the logic inside a _main() function and invoking it at the end of the module. This avoids the 'accidental global' problem for all of the script-only variables, not only the ones that happen to be used as for loop variables. i did in fact end up doing that. however, in the process i ran into another python annoyance i've tripped over repeatedly: you can't assign to a global variable without explicitly declaring it as `global'. instead, you magically get a shadowing local variable. this behavior is extremely hostile to newcomers: e.g.

foo = 1
def set_foo():
    foo = 2
print foo
-- 1

the worst part is, not a single warning from Python about this. in a large program, such a bug can be very tricky to track down.
now i can see how an argument against changing this behavior might hinge upon global names like `hash' and `list'; you certainly wouldn't want an intended local variable called `hash' or `list' to trounce upon these. but this argument confuses lexical and dynamic scope: global variables declared inside a module are (or can be viewed as) globally lexically scoped in the module, whereas `hash' and `list' are dynamically scoped. so i'd suggest: [1] ideally, change this behavior, either for 2.6 or 3.0. maybe have a `local' keyword if you really want a new scope. [2] until this change, python should always print a warning in this situation. [3] the current 'UnboundLocal' exception should probably be more helpful, e.g. suggesting that you might need to use a `global foo' declaration. ben
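Ben's example, expanded into a runnable demonstration of the shadowing behaviour and of the `global' declaration that avoids it (the function names here are illustrative):

```python
foo = 1

def set_foo_local():
    foo = 2   # binds a new local variable; the module-level foo is untouched

def set_foo_global():
    global foo  # explicitly rebind the module-level name
    foo = 2

set_foo_local()
print(foo)   # 1 -- the assignment inside set_foo_local was local
set_foo_global()
print(foo)   # 2
```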
[Python-Dev] python syntax additions to support indentation insensitivity/generated code
recently i've been writing code that generates a python program from a source file containing intermixed python code and non-python constructs, which get converted into python. similar things have been done in many other languages -- consider, for example, the way php is embedded into web pages. unfortunately, doing this is really hard in python because of its whitespace sensitivity. i suggest the following simple change: in addition to constructs like this:

block:
    within-block statement
    within-block statement
    ...
out-of-block statement

python should support

block::
    within-block statement
    within-block statement
    ...
end
out-of-block statement

the syntax is similar, but has an extra colon, and is terminated by an `end'. indentation of immediate-level statements within the block is unimportant. mixed-mode code should be possible, e.g.:

block-1::
    within-block-1 statement
    within-block-1 statement
    block-2:
        within-block-2 statement
        within-block-2 statement
    within-block-1 statement
    ...
end

in other words, code within block-2 is indentation-sensitive; block-2 is terminated by the first immediate-level statement at or below the indentation of the `block-2' statement. similarly, in this:

[A] block-1::
[B]     within-block-1 statement
[C]     within-block-1 statement
[D]     block-2:
[E]         within-block-2 statement
[F]         within-block-2 statement
[G]         block-3::
[H]             within-block-3 statement
[I]             within-block-3 statement
[J]         end
[K]         within-block-2 statement
[L]     within-block-1 statement
[M]     ...
[N] end

the indentation of lines [D], [E], [F], [G], [K] and [L] is significant, but not any others. that is, [E], [F], [G], and [K] must be at the same level, which is greater than the level of line [D], and line [L] must be at a level less than or equal to line [D]. all other lines, including [H], [I] and [J], can be at any indentation level. also, line [D] can be at any level with respect to line [C]. the idea is that a python code generator could easily mix generated and hand-written code.
hand-written code written in normal python style could be wrapped by generated code using the indentation-insensitive style; if the generated code had no indentation, everything would work as expected without the generator having to worry about indentation. i don't see too many problems with backward-compatibility here. the double-colon shouldn't cause any problems, since that syntax isn't legal currently. `end' could be recognized as a keyword only following a double-colon block; elsewhere, it could still be a variable. ben
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
Greg Ewing wrote: I've been thinking about the terms guarded context and context guard. We could say that the with-statement executes its body in a guarded context (an abstract notion, not a concrete object). To do this, it creates a context guard (a concrete object) with __enter__ and __exit__ methods that set up and tear down the guarded context. This seems clearer to me, since I can more readily visualise a guard object being specially commissioned to deal with one particular job (guarding a particular invocation of a context). With only one object, there wouldn't be a need for any more terms. contrast and compare: http://pyref.infogami.com/with http://pyref.infogami.com/with-alt http://pyref.infogami.com/with-guard a distinct term for whatever the __enter__ method returns (i.e. the thing assigned to the target list) would still be nice. /F
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
Greg Ewing wrote: Also a thought on terminology. Even though it seems I may have been the person who thought it up originally, I'm not sure I like the term manager. It seems rather wooly, and it's not clear whether a context manager is supposed to manage just one context or multiple contexts. I think getting rid of __context__ should clear up most of this confusion (which is further evidence that Guido is making the right call). Once that change is made, the context expression in the with statement produces a context manager with __enter__ and __exit__ methods which set up and tear down a managed context for the body of the with statement. This is very similar to your later suggestion of context guard and guarded context. I believe this is actually going back to using the terminology as you originally suggested it (one concrete object, one abstract concept). We really only got into trouble once we tried to add a second kind of concrete object into the mix (objects with only a __context__ method). Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://www.boredomandlaziness.org
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
Nick Coghlan wrote: the context expression in the with statement produces a context manager with __enter__ and __exit__ methods which set up and tear down a managed context for the body of the with statement. This is very similar to your later suggestion of context guard and guarded context. Currently I think I still prefer the term guard, since it does a better job of conjuring up the same sort of idea as a try-finally. There's also one other issue, what to call the decorator. I don't like @contextfactory, because it sounds like something that produces contexts, yet we have no such object. With only one object, it should probably be named after that object, i.e. @contextmanager or @contextguard. That's if we think it's really worth making it easy to abuse a generator in this way -- which I'm not convinced about. It's not as if people are going to be implementing context managers/guards every day. -- Greg
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
Nick Coghlan wrote: Greg Ewing wrote: Also a thought on terminology. Even though it seems I may have been the person who thought it up originally, I'm not sure I like the term manager. It seems rather wooly, and it's not clear whether a context manager is supposed to manage just one context or multiple contexts. I think getting rid of __context__ should clear up most of this confusion (which is further evidence that Guido is making the right call). Once that change is made, the context expression in the with statement produces a context manager with __enter__ and __exit__ methods which set up and tear down a managed context for the body of the with statement. This is very similar to your later suggestion of context guard and guarded context. Thinking about it a bit further. . . 1. PEP 343, 2.5 alpha 1, 2.5 alpha 2 and the discussions here have no doubt seriously confused the meaning of the term 'context manager' for a lot of people (you can certainly put me down as one such person). Anyone not already confused is likely to *become* confused if we subtly change the meaning in alpha 3. 2. The phrase managed context is unfortunately close to .NET's term managed code, and would likely lead to confusion for IronPython folks (and other programmers with .NET experience) 3. manager is an extremely generic term that is already used in a lot of different ways in various programming contexts Switching to Greg's suggestion of context guard and guarded context as the terms would allow us to hit the reset button and start the documentation afresh without terminology confusion resulting from the evolution of PEP 343 and its implementation and documentation. I think context guard also works better in terms of guarding entry to and exit from the guarded context, whereas I always wanted to call those operations set up and tear down for context managers. 
The current @contextfactory decorator could be renamed to @guardfactory to make it explicit that it results in a factory function for context guards. Cheers, Nick. P.S. I think I can hear anguished howls coming from the offices of various book publishers around the world ;) -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://www.boredomandlaziness.org
Re: [Python-Dev] Tkinter lockups.
Thanks Martin! Jeff
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
Greg Ewing wrote: Nick Coghlan wrote: the context expression in the with statement produces a context manager with __enter__ and __exit__ methods which set up and tear down a managed context for the body of the with statement. This is very similar to your later suggestion of context guard and guarded context. Currently I think I still prefer the term guard, since it does a better job of conjuring up the same sort of idea as a try-finally. See the other message I wrote while you were writing this one for the various reasons I now agree with you :) There's also one other issue, what to call the decorator. I don't like @contextfactory, because it sounds like something that produces contexts, yet we have no such object. Agreed. With only one object, it should probably be named after that object, i.e. @contextmanager or @contextguard. That's if we think it's really worth making it easy to abuse a generator in this way -- which I'm not convinced about. It's not as if people are going to be implementing context managers/guards every day. I suggested renaming it to guardfactory in my other message. Keeping the term 'factory' in the name emphasises that the decorator results in a callable that returns a context guard, rather than producing a context guard directly. As for whether or not we should provide this ability, I think we definitely should. It allows try/finally boilerplate code to be replaced with a guarded context almost mechanically, whereas converting the same code to a manually written context guard could involve significant effort in changing from local variable based storage to instance attribute based storage. IOW, the feature is provided for the same reason that generator functions are provided: it is typically *much* easier to write a generator function than it is to write the same iterator manually. Cheers, Nick. 
-- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://www.boredomandlaziness.org
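Nick's point about boilerplate can be seen by writing the same guard both ways; a sketch using contextlib.contextmanager (the name the decorator eventually kept) next to a hand-written equivalent:

```python
from contextlib import contextmanager

# Generator-based: the try/finally boilerplate carries over almost
# mechanically, with state held in local variables.
@contextmanager
def tag(name):
    print("<%s>" % name)
    try:
        yield name
    finally:
        print("</%s>" % name)

# The manual guard needs an explicit class, with the same state moved
# into instance attributes.
class Tag(object):
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        print("<%s>" % self.name)
        return self.name
    def __exit__(self, exc_type, exc_val, exc_tb):
        print("</%s>" % self.name)
        return False  # never suppress exceptions

with tag("b") as t:
    print(t)
```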
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
Fredrik Lundh wrote: a distinct term for whatever the __enter__ method returns (i.e. the thing assigned to the target list) would still be nice. I've called that the context entry value in a few places (I don't think any of them were in the actual documentation though). A sample modification to the reference page:

--
Here's a more detailed description:
1. The context expression is evaluated, to obtain a context guard.
2. The guard object's __enter__ method is invoked to obtain the context entry value.
3. If a target list was included in the with statement, the context entry value is assigned to it.
4. The suite is executed.
5. The guard object's __exit__ method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__. Otherwise, three None arguments are supplied.
--

Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://www.boredomandlaziness.org
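The five steps map directly onto a concrete guard object; a minimal sketch (the class name and return value are illustrative):

```python
class Guard(object):
    # Step 1: evaluating the context expression yields this object.
    def __enter__(self):
        # Step 2: invoked to obtain the context entry value...
        return "entry value"  # Step 3: ...which is bound to the target list
    def __exit__(self, exc_type, exc_val, exc_tb):
        # Step 5: always invoked; all three arguments are None on a normal
        # exit, and describe the exception otherwise.
        print("exit:", exc_type)
        return False  # do not suppress exceptions

with Guard() as value:   # steps 1-3
    print(value)         # step 4: the suite runs
```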
Re: [Python-Dev] global variable modification in functions [Re: elimination of scope bleeding of iteration variables]
On Sun, Apr 30, 2006 at 10:47:07PM -0500, Ben Wing wrote:

foo = 1
def set_foo():
    foo = 2

PyLint gives a warning here: "local foo shadows global variable". Oleg. -- Oleg Broytmann http://phd.pp.ru/ [EMAIL PROTECTED] Programmers don't die, they just GOSUB without RETURN.
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Martin v. Löwis wrote: One reason I see is to have keyword-only functions, i.e. with no positional arguments at all: def make_person(*, name, age, phone, location): pass which also works for methods: def make_person(self, *, name, age, phone, location): pass In these cases, you don't *want* name, age to be passed in a positional way. How else would you formulate that if this syntax wasn't available? But is it necessary to syntactically *enforce* that the arguments be used as keywords? I.e., why not just document that the arguments should be used as keyword arguments, and leave it at that. If users insist on using them positionally, then their code will be less readable, and might break if you decide to change the order of the parameters, but we're all consenting adults. (And if you *do* believe that the parameters should all be passed as keywords, then I don't see any reason why you'd ever be motivated to change their order.) -Edward
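Martin's signature can be exercised directly under the syntax PEP 3102 proposes (it was ultimately accepted for Python 3); a sketch of what the bare * enforces, with an illustrative body added:

```python
# Everything after the bare * is keyword-only.
def make_person(*, name, age, phone, location):
    return (name, age, phone, location)

# Keyword calls work as intended.
make_person(name="Guido", age=50, phone="555-0100", location="USA")

# Positional calls are rejected at the interpreter level.
try:
    make_person("Guido", 50, "555-0100", "USA")
except TypeError as exc:
    print(exc)
```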
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Edward Loper wrote: One reason I see is to have keyword-only functions, i.e. with no positional arguments at all: def make_person(*, name, age, phone, location): pass which also works for methods: def make_person(self, *, name, age, phone, location): pass In these cases, you don't *want* name, age to be passed in a positional way. How else would you formulate that if this syntax wasn't available? But is it necessary to syntactically *enforce* that the arguments be used as keywords? I.e., why not just document that the arguments should be used as keyword arguments, and leave it at that. and how do you best do that, in a way that automatic introspection tools understand, unless you invent some kind of standard syntax for it? and if you have a standard syntax for it, why not support it at the interpreter level? /F
Re: [Python-Dev] PEP 3102: Keyword-only arguments
On 4/30/06, Edward Loper [EMAIL PROTECTED] wrote (referring to keyword-only arguments): I see two possible reasons: - A function's author believes that calls to the function will be easier to read if certain parameters are passed by name, rather than positionally; and they want to enforce that calling convention on their users. This seems to me to go against the consenting adults principle. - A function's author believes they might change the signature in the future to accept new positional arguments, and they will want to put them before the args that they declare keyword-only. Both of these motivations seem fairly weak. Certainly, neither seems to warrant a significant change to function definition syntax. I disagree. I think the use cases are more significant than you suggest, and the proposed change less significant. Readability and future-compatibility are key factors in API design. How well a language supports them determines how sweet its libraries can be. Even relatively simple high-level functions often have lots of clearly inessential options. When I design this kind of function, I often wish for keyword-only arguments. path.py's write_lines() is an example. In fact... it feels as though I've seen keyword-only arguments in a few places in the stdlib. Am I imagining this? Btw, I don't think the term consenting adults applies. To me, that refers to the agreeable state of affairs where you, the programmer about to do something dangerous, know it's dangerous and indicate your consent somehow in your source code, e.g. by typing an underscore. That underscore sends a warning. It tells you to think twice. It tells you the blame is all yours if this doesn't work. It makes consent explicit (both mentally and syntactically). I'm +1 on the use cases but -0 on the PEP. The proposed syntax isn't clear; I think I want a new 'explicit' keyword or something. (Like that'll happen. Pfft.) 
-j ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
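Jonathan's write_lines() example from path.py can be sketched in the proposed style; the helper below is hypothetical, shown only to illustrate the "one essential argument plus inessential options" shape:

```python
# Hypothetical signature in the proposed style: 'lines' is the one argument
# everyone knows; the rarely-used options cannot be passed positionally.
def write_lines(lines, *, sep='\n', append=False):
    mode = 'a' if append else 'w'
    return (mode, sep.join(lines))

# Readable at the call site -- no guessing what a bare True would mean:
result = write_lines(['a', 'b'], append=True)
```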
Re: [Python-Dev] PEP 3102: Keyword-only arguments
On Sunday 30 April 2006 22:50, Edward Loper wrote: I see two possible reasons: Another use case, observed in the wild: - A library function is written to take an arbitrary number of positional arguments using *args syntax. The library is released, presumably creating dependencies on the specific signature of the function. In a subsequent version, the function is determined to need additional information. The only way to add an argument is to use a keyword for which there is no positional equivalent. -Fred -- Fred L. Drake, Jr. fdrake at acm.org
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Fred L. Drake, Jr. wrote: On Sunday 30 April 2006 22:50, Edward Loper wrote: I see two possible reasons: Another use case, observed in the wild: - A library function is written to take an arbitrary number of positional arguments using *args syntax. The library is released, presumably creating dependencies on the specific signature of the function. In a subsequent version, the function is determined to need additional information. The only way to add an argument is to use a keyword for which there is no positional equivalent. This falls under the first subproposal from Terry's email: There are two subproposals: first, keyword-only args after a variable number of positional args, which requires allowing keyword parameter specifications after the *args parameter, and second, keyword-only args after a fixed number of positional args, implemented with a naked '*'. To the first, I said The rationale for this is pretty obvious. To the second, I asked, and still ask, Why?. I was trying to come up with use cases for the second subproposal. -Edward
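For reference, a tiny sketch of the first subproposal (keyword parameters after *args), as the syntax eventually shipped:

```python
# Subproposal one: a keyword parameter declared after *args. 'sep' can
# never be filled positionally, because *fields absorbs every positional
# argument -- exactly the "observed in the wild" evolution Fred describes.
def join_fields(*fields, sep=', '):
    return sep.join(fields)
```

Adding `sep` this way cannot break any existing positional caller of `join_fields`.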
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
On May 1, 2006, at 8:15 AM, Nick Coghlan wrote: 1. PEP 343, 2.5 alpha 1, 2.5 alpha 2 and the discussions here have no doubt seriously confused the meaning of the term 'context manager' for a lot of people (you can certainly put me down as one such person). Anyone not already confused is likely to *become* confused if we subtly change the meaning in alpha 3. Don't forget that the majority of users will never have heard any of these discussions nor have used 2.5a1 or 2.5a2. Choose the best term for them, not for the readers of python-dev. James ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
On 5/1/06, James Y Knight [EMAIL PROTECTED] wrote: Don't forget that the majority of users will never have heard any of these discussions nor have used 2.5a1 or 2.5a2. Choose the best term for them, not for the readers of python-dev. I couldn't agree more! (Another thought, occasionally useful, is to consider that the number of Python programs yet to be written, and the number of Python programmers who yet have to learn the language, must surely exceed the current count. At least, one would hope so -- if that's not true, we might as well stop now. :-) Nick, do you have it in you to fix PEP 343? Or at least come up with a draft patch? We can take this off-line; with all the +0's and +1's coming in I'm pretty comfortable with this change now, although we should probably wait until later today to commit. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] PEP 3102: Keyword-only arguments
On 5/1/06, Edward Loper [EMAIL PROTECTED] wrote: There are two subproposals: first, keyword-only args after a variable number of positional args, which requires allowing keyword parameter specifications after the *args parameter, and second, keyword-only args after a fixed number of positional args, implemented with a naked '*'. To the first, I said The rationale for this is pretty obvious. To the second, I asked, and still ask, Why?. I was trying to come up with use cases for the second subproposal. A function/method could have one argument that is obviously needed and a whole slew of options that few people care about. For most people, the signature they know is foo(arg). It would be nice if all the options were required to be written as keyword arguments, so the reader will not have to guess what foo(arg, True) means. This signature style could perhaps be used for the join() builtin that some folks are demanding: join(iterable, sep=' ', auto_str=False). For the record, I'm +1 on Talin's PEP, -1 on join. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] global variable modification in functions [Re: elimination of scope bleeding of iteration variables]
On 4/30/06, Ben Wing [EMAIL PROTECTED] wrote: [1] ideally, change this behavior, either for 2.6 or 3.0. maybe have a `local' keyword if you really want a new scope. [2] until this change, python should always print a warning in this situation. [3] the current 'UnboundLocal' exception should probably be more helpful, e.g. suggesting that you might need to use a `global foo' declaration. You're joking right? -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] unittest argv
Wouldn't this be an incompatible change? That would make it a no-no. Providing a dummy argv[0] isn't so hard is it? On 4/30/06, John Keyes [EMAIL PROTECTED] wrote: Hi, main() in unittest has an optional parameter called argv. If it is not present in the invocation, it defaults to None. Later in the function a check is made to see if argv is None and if so sets it to sys.argv. I think the default should be changed to sys.argv[1:] (i.e. the command line arguments minus the name of the python file being executed). The parseArgs() function then uses getopt to parse argv. It currently ignores the first item in the argv list, but this causes a problem when it is called from another python function and not from the command line. So using the current code if I call: python mytest.py -v then argv in parseArgs is ['mytest.py', '-v'] But, if I call: unittest.main(module=None, argv=['-v','mytest']) then argv in parseArgs is ['mytest'], as you can see the verbosity option is now gone and cannot be used. Here's a diff to show the code changes I have made:

744c744
<     argv=None, testRunner=None, testLoader=defaultTestLoader):
---
>     argv=sys.argv[1:], testRunner=None, testLoader=defaultTestLoader):
751,752d750
<         if argv is None:
<             argv = sys.argv
757c755
<         self.progName = os.path.basename(argv[0])
---
>         #self.progName = os.path.basename(argv[0])
769c767
<         options, args = getopt.getopt(argv[1:], 'hHvq',
---
>         options, args = getopt.getopt(argv, 'hHvq',

You may notice I have commented out the self.progName line. This variable is not used anywhere in the module so I guess it could be removed. To keep it, the conditional check on argv would have to remain and be moved after the self.progName line.
I hope this makes sense, and it's my first post so go easy on me ;) Thanks, -John K -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Adding functools.decorator
On 4/30/06, Georg Brandl [EMAIL PROTECTED] wrote: Guido van Rossum wrote: I expect that at some point people will want to tweak what gets copied by _update_wrapper() -- e.g. some attributes may need to be deep-copied, or personalized, or skipped, etc. What exactly do you have in mind there? If someone wants to achieve this, she can write his own version of @decorator. I meant that the provided version should make writing your own easier than copying the source and editing it. Some form of subclassing might make sense, or a bunch of smaller functions that can be called for various actions. You'll probably have to discover some real use cases before you'll be able to design the right API for this. -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
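The helper under discussion ultimately shipped as functools.update_wrapper() and the functools.wraps() decorator; their `assigned` and `updated` parameters are precisely the tweak-what-gets-copied hooks Guido anticipates here. A minimal sketch:

```python
import functools

def logged(func):
    # wraps() copies __name__, __doc__, __dict__, etc. onto the wrapper;
    # the sets of copied/merged attributes can be customized through the
    # assigned= and updated= parameters of update_wrapper().
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def greet(name):
    """Return a greeting."""
    return 'hello ' + name
```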
Re: [Python-Dev] socket module recvmsg/sendmsg
Is there a question or a request in here somewhere? If not, c.l.py.ann would be more appropriate. If you want that code integrated into core Python, read python.org/dev and prepare a patch for SF! --Guido On 4/30/06, Heiko Wundram [EMAIL PROTECTED] wrote: Hi all! I've implemented recvmsg and sendmsg for the socket module in my private Python tree for communication between two forked processes, which are essentially wrappers for proper handling of SCM_RIGHTS and SCM_CREDENTIALS Unix-Domain-Socket messages (which are the two types of messages that are defined on Linux). The main reason I need these two primitives is that I require (more or less transparent) file/socket descriptor exchange between two forked processes, where one process accepts a socket, and delegates processing of the socket connection to another process of a set of processes; this is much like a ForkingTCPServer, but with the Handler-process prestarted. As connection to the Unix-Domain-Socket is openly available, the receiving process needs to check the identity of the first process; this is done using a getsockopt(SO_PEERCRED) call, which is also handled more specifically by my socket extension to return a socket._ucred-type structure, which wraps the pid/uid/gid-structure returned by the corresponding getsockopt call, and also the socket message (SCM_CREDENTIALS) which passes or sets this information for the remote process. I'd love to see these two socket message primitives (of which the first, SCM_RIGHTS, is available on pretty much any Unix derivative) included in a Python distribution at some point in time, and as I've not got the time to push for an inclusion in the tree (and even less time to work on other Python patches myself) at the moment, I thought that I might just post here so that someone interested might pick up the work I've done so far and check the implementation for bugs, and at some stage these two functions might actually find their way into the Python core. 
Anyway, my private Python tree (which has some other patches which aren't of general interest, I'd think) is available at: http://dev.modelnine.org/hg/python and I can, if anyone is interested, post a current diff of socketmodule.* against 2.4.3 to the Python bug tracker at sourceforge. I did that some time ago (about half a year) when socket-passing code wasn't completely functioning yet, but at least at that point there didn't seem much interest in the patch. The patch should apply pretty cleanly against the current HEAD too, at least it did the last time I checked. I'll add a small testsuite for both functions to my tree some time tomorrow. --- Heiko. ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
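The primitives Heiko describes did eventually land in the stdlib: sendmsg()/recvmsg() in Python 3.3, with the send_fds()/recv_fds() convenience wrappers in 3.9. A Unix-only sketch of the SCM_RIGHTS descriptor-passing use case:

```python
import os
import socket

# Hand a pipe's read end to a peer over an AF_UNIX socket pair -- the
# "delegate a connection to a prestarted worker" pattern from the post.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
rfd, wfd = os.pipe()

# send_fds()/recv_fds() wrap the SCM_RIGHTS ancillary-data machinery.
socket.send_fds(parent, [b'one fd follows'], [rfd])
msg, fds, _flags, _addr = socket.recv_fds(child, 1024, maxfds=1)

os.write(wfd, b'payload')
payload = os.read(fds[0], 7)   # read via the *received* descriptor
```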
Re: [Python-Dev] methods on the bytes object (was: Crazy idea for str.join)
Please take this to the py3k list. It's still open which methods to add; it'll depend on the needs we discover while using bytes to write the I/O library. I don't believe we should add everything we can; rather, I'd like to keep the API small until we have a clear need for a particular method. For the record, I'm holding off adding join() for now; I'd rather speed up the += operation. --Guido On 4/30/06, Josiah Carlson [EMAIL PROTECTED] wrote: Guido van Rossum [EMAIL PROTECTED] wrote: On 4/29/06, Josiah Carlson [EMAIL PROTECTED] wrote: I understand the underlying implementation of str.join can be a bit convoluted (with the auto-promotion to unicode and all), but I don't suppose there is any chance to get str.join to support objects which implement the buffer interface as one of the items in the sequence? In Py3k, buffers won't be compatible with strings -- buffers will be about bytes, while strings will be about characters. Given that future I don't think we should mess with the semantics in 2.x; one change in the near(ish) future is enough of a transition. This brings up something I hadn't thought of previously. While unicode will obviously keep its .join() method when it becomes str in 3.x, will bytes objects get a .join() method? Checking the bytes PEP, very little is described about the type other than it basically being an array of 8 bit integers. That's fine and all, but it kills many of the parsing and modification use-cases that are performed on strings via the non __xxx__ methods. Specifically in the case of bytes.join(), the current common use-case of literal.join(...) would become something similar to bytes(literal).join(...), unless bytes objects got a syntax... Or maybe I'm missing something? Anyways, when the bytes type was first being discussed, I had hoped that it would basically become array.array(B, ...) + non-unicode str. 
Allowing for bytes to do everything that str was doing before, plus a few new tricks (almost like an mmap...), minus those operations which require immutability. - Josiah -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
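As it turned out, the bytes type in released Python 3 kept join(), and b'...' literals supplied the syntax whose absence Josiah worries about; a quick illustration:

```python
# The literal.join(...) idiom survives unchanged for bytes:
parts = [b'GET', b'/index.html', b'HTTP/1.0']
request_line = b' '.join(parts)

# The += accumulation Guido preferred to speed up is spelled the same way:
buf = b''
for p in parts:
    buf += p
```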
Re: [Python-Dev] python syntax additions to support indentation insensitivity/generated code
On Sun, Apr 30, 2006, Ben Wing wrote: recently i've been writing code that generates a python program from a source file containing intermixed python code and non-python constructs, which get converted into python. Please take this to comp.lang.python (this is not in-and-of-itself inappropriate for python-dev, but experience indicates that this discussion will probably not be productive and therefore belongs on the general newsgroup). -- Aahz ([EMAIL PROTECTED]) * http://www.pythoncraft.com/ Argue for your limitations, and sure enough they're yours. --Richard Bach ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] global variable modification in functions [Re: elimination of scope bleeding of iteration variables]
At 07:32 AM 5/1/2006 -0700, Guido van Rossum wrote: On 4/30/06, Ben Wing [EMAIL PROTECTED] wrote: [1] ideally, change this behavior, either for 2.6 or 3.0. maybe have a `local' keyword if you really want a new scope. [2] until this change, python should always print a warning in this situation. [3] the current 'UnboundLocal' exception should probably be more helpful, e.g. suggesting that you might need to use a `global foo' declaration. You're joking right? While I agree that item #1 is a non-starter, it seems to me that in the case where the compiler statically knows a name is being bound in the module's globals, and there is a *non-argument* local variable being bound in a function body, the odds are quite high that the programmer forgot to use global. I could almost see issuing a warning, or having a way to enable such a warning. And for the case where the compiler can tell the variable is accessed before it's defined, there's definitely something wrong. This code, for example, is definitely missing a global and the compiler could in principle tell:

    foo = 1
    def bar():
        foo += 1

So I see no problem (in principle, as opposed to implementation) with issuing a warning or even a compilation error for that code. (And it's wrong even if the snippet I showed is in a nested function definition, although the error would be different.) If I recall correctly, the new compiler uses a control-flow graph that could possibly be used to determine whether there is a path on which a local could be read before it's stored.
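Running Phillip's snippet shows the behavior that items [2] and [3] complain about: the failure is a runtime UnboundLocalError rather than a compile-time diagnostic. A sketch:

```python
# Assignment makes 'foo' local to bar(), so 'foo += 1' reads the local
# name before any local binding exists -- an error only at call time.
foo = 1

def bar():
    foo += 1   # raises UnboundLocalError when called

try:
    bar()
    caught = False
except UnboundLocalError:
    caught = True

# The fix the exception could helpfully suggest: declare the name global.
def bar_fixed():
    global foo
    foo += 1

bar_fixed()
```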
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
At 08:29 PM 5/1/2006 +1000, Nick Coghlan wrote: 'localcontext' would probably work as at least an interim name for such a function.

    with decimal.localcontext() as ctx:
        # use the new context here

And the as ctx should be unnecessary for most use cases, if localcontext has an appropriately designed API.
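As shipped, decimal.localcontext() behaves exactly this way; the `as ctx` clause is needed only when the new context is being modified:

```python
import decimal

# Precision changes made inside the block are confined to it.
with decimal.localcontext() as ctx:
    ctx.prec = 4
    third = decimal.Decimal(1) / decimal.Decimal(3)

restored_prec = decimal.getcontext().prec   # outer context is untouched
```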
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Edward Loper wrote: Martin v. Löwis wrote: One reason I see is to have keyword-only functions, i.e. with no positional arguments at all: def make_person(*, name, age, phone, location): pass But is it necessary to syntactically *enforce* that the arguments be used as keywords? This really challenges the whole point of the PEP: keyword-only arguments (at least, it challenges the title of the PEP, although probably not the specified rationale). I.e., why not just document that the arguments should be used as keyword arguments, and leave it at that. Because they wouldn't be keyword-only arguments, then. Regards, Martin ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] global variable modification in functions [Re: elimination of scope bleeding of iteration variables]
On 5/1/06, Phillip J. Eby [EMAIL PROTECTED] wrote: While I agree that item #1 is a non-starter, it seems to me that in the case where the compiler statically knows a name is being bound in the module's globals, and there is a *non-argument* local variable being bound in a function body, the odds are quite high that the programmer forgot to use global. I could almost see issuing a warning, or having a way to enable such a warning. And for the case where the compiler can tell the variable is accessed before it's defined, there's definitely something wrong. This code, for example, is definitely missing a global and the compiler could in principle tell: foo = 1 def bar(): foo+=1 So I see no problem (in principle, as opposed to implementation) with issuing a warning or even a compilation error for that code. (And it's wrong even if the snippet I showed is in a nested function definition, although the error would be different.) If I recall correctly, the new compiler uses a control-flow graph that could possibly be used to determine whether there is a path on which a local could be read before it's stored. Sure. This is a quality of implementation issue. -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Martin v. Löwis wrote: I.e., why not just document that the arguments should be used as keyword arguments, and leave it at that. Because they wouldn't be keyword-only arguments, then. which reminds me of the following little absurdity gem from the language reference: The following identifiers are used as keywords of the language, and cannot be used as ordinary identifiers. They must be spelled exactly as written here: /.../ (maybe it's just me). btw, talking about idioms used in the language reference, can any of the native speakers on this list explain if A is a nicer way of spelling B means that A is preferred over B, B is preferred over A, A and B are the same word and whoever wrote this is just being absurd, or anything else ? /F ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3102: Keyword-only arguments
On 5/1/06, Fredrik Lundh [EMAIL PROTECTED] wrote: btw, talking about idioms used in the language reference, can any of the native speakers on this list explain if A is a nicer way of spelling B means that A is preferred over B, B is preferred over A, A and B are the same word and whoever wrote this is just being absurd, or anything else ? Without any context, I'd read it as A is preferred over B. Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Martin v. Löwis [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Terry Reedy wrote: There are two subproposals: first, keyword-only args after a variable number of positional args, which requires allowing keyword parameter specifications after the *args parameter, and second, keyword-only args after a fixed number of positional args, implemented with a naked '*'. To the first, I said The rationale for this is pretty obvious. To the second, I asked, and still ask, Why?. One reason I see is to have keyword-only functions, i.e. with no positional arguments at all: This is not a reason for subproposal two, but a special case, as you yourself note below, and hence does not say why you want to have such. def make_person(*, name, age, phone, location): pass And again, why would you *make* me, the user-programmer, type make_person(name=namex, age=agex, phone=phonex, location=locationx) #instead of make_person(namex, agex, phonex, locationx) ? Ditto for methods. In these cases, you don't *want* name, age to be passed in a positional way. I'm sure you know what I am going to ask, that you did not answer ;-) Terry Jan Reedy PS. I see that Guido finally gave a (different) use case for bare * that does make sense to me.
Re: [Python-Dev] PEP 3102: Keyword-only arguments
On Mon, May 01, 2006, Edward Loper wrote: But is it necessary to syntactically *enforce* that the arguments be used as keywords? I.e., why not just document that the arguments should be used as keyword arguments, and leave it at that. If users insist on using them positionally, then their code will be less readable, and might break if you decide to change the order of the parameters, but we're all consenting adults. (And if you *do* believe that the parameters should all be passed as keywords, then I don't see any reason why you'd ever be motivated to change their order.) IIRC, part of the motivation for this is to make it easier for super() to work correctly. -- Aahz ([EMAIL PROTECTED]) * http://www.pythoncraft.com/ Argue for your limitations, and sure enough they're yours. --Richard Bach ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Terry Reedy wrote: And again, why would you *make* me, the user-programmer, type make_person(name=namex, age=agex, phone=phonex, location = locationx) #instead of make_person(namex,agex,phonex,locationx) ? because a good API designer needs to consider more than just the current release. I repeat my question: have you done API design for others, and have you studied how your API:s are used (and how they evolve) over a longer period of time ? /F ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Martin v. Löwis [EMAIL PROTECTED] wrote in message I clipped it because I couldn't understand your question: Why what? (the second question only gives Why not) I then assumed that the question must have applied to the text that immediately preceded the question - hence that's the text that I left. Now I understand, though. For future reference, when I respond to a post, I usually try to help readers by snipping away inessentials, leaving only the essential context of my responses. Terry ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] unittest argv
On 5/1/06, Guido van Rossum [EMAIL PROTECTED] wrote: Wouldn't this be an incompatible change? That would make it a no-no. Providing a dummy argv[0] isn't so hard is it? It would be incompatible with existing code, but that code is already broken (IMO) by passing a dummy argv[0]. I don't think fixing it would affect much code, because normally people don't specify the '-q' or '-v' in code, it is almost exclusively used on the command line. The only reason I came across it was that I was modifying an ant task (py-test) so it could handle all of the named arguments that TestProgram.__init__ supports. If the list index code can't change, at a minimum the default value for argv should change from None to sys.argv. Are the tests for unittest.py? Thanks, -John K On 4/30/06, John Keyes [EMAIL PROTECTED] wrote: Hi, main() in unittest has an optional parameter called argv. If it is not present in the invocation, it defaults to None. Later in the function a check is made to see if argv is None and if so sets it to sys.argv. I think the default should be changed to sys.argv[1:] (i.e. the command line arguments minus the name of the python file being executed). The parseArgs() function then uses getopt to parse argv. It currently ignores the first item in the argv list, but this causes a problem when it is called from another python function and not from the command line. So using the current code if I call: python mytest.py -v then argv in parseArgs is ['mytest.py', '-v'] But, if I call: unittest.main(module=None, argv=['-v','mytest']) then argv in parseArgs is ['mytest'], as you can see the verbosity option is now gone and cannot be used. 
Here's a diff to show the code changes I have made: 744c744 argv=None, testRunner=None, testLoader=defaultTestLoader): --- argv=sys.argv[1:], testRunner=None, testLoader=defaultTestLoader): 751,752d750 if argv is None: argv = sys.argv 757c755 self.progName = os.path.basename(argv[0]) --- #self.progName = os.path.basename(argv[0]) 769c767 options, args = getopt.getopt(argv[1:], 'hHvq', --- options, args = getopt.getopt(argv, 'hHvq', You may notice I have commented out the self.progName line. This variable is not used anywhere in the module so I guess it could be removed. To keep it then conditional check on argv would have to remain and be moved after the self.progName line. I hope this makes sense, and it's my first post so go easy on me ;) Thanks, -John K ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Fredrik Lundh wrote: And again, why would you *make* me, the user-programmer, type make_person(name=namex, age=agex, phone=phonex, location=locationx) #instead of make_person(namex,agex,phonex,locationx) ? because a good API designer needs to consider more than just the current release. I believe that it's quite possible that you're right, but I think more concrete answers would be helpful. I.e., how does having parameters that are syntactically keyword-only (as opposed to being simply documented as keyword-only) help you develop an API over time? I gave some thought to it, and can come up with a few answers. In all cases, assume a user BadUser, who decided to use positional arguments for arguments that you documented as keyword-only.

- You would like to deprecate, and eventually remove, an argument to a function. For people who read your documentation, and use keyword args for the keyword-only arguments, their code will break in a clean, easy-to-understand way. But BadUser's code may break in strange hard-to-understand ways, since their positional arguments will get mapped to the wrong function arguments.

- You would like to add a new parameter to a function, and would like to make that new parameter available for positional argument use. So you'd like to add it before all the keyword arguments. But this will break BadUser's code.

- You have a function that takes one argument and a bunch of keyword options, and would like to change it to accept *varargs instead of just one argument. But this will break BadUser's code.

I think that this kind of *concrete* use-case provides a better justification for the feature than just saying it will help API design. As someone who has had a fair amount of experience designing and maintaining APIs over time, perhaps you'd care to contribute some more use cases where you think having syntactically keyword-only arguments would help?
-Edward
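Edward's second bullet can be made concrete with a small hypothetical API (fetch_v1/fetch_v2 and their parameters are invented purely for illustration):

```python
# v1 of a hypothetical API: 'timeout' is keyword-only by documentation
# alone. BadUser ignores the docs and passes it positionally.
def fetch_v1(url, timeout=30):
    return (url, timeout)

bad_call_v1 = fetch_v1('http://example.org', 10)   # works, unfortunately

# v2 inserts a new positional parameter before the options. BadUser's 10
# now silently lands in 'retries' -- no error, just wrong behavior.
def fetch_v2(url, retries=3, *, timeout=30):
    return (url, retries, timeout)

bad_call_v2 = fetch_v2('http://example.org', 10)

# Had 'timeout' been syntactically keyword-only from the start, the bad
# call would have raised TypeError in v1, surfacing the mistake early.
```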
Re: [Python-Dev] unittest argv
On 5/1/06, John Keyes [EMAIL PROTECTED] wrote: On 5/1/06, Guido van Rossum [EMAIL PROTECTED] wrote: Wouldn't this be an incompatible change? That would make it a no-no. Providing a dummy argv[0] isn't so hard is it? It would be incompatible with existing code, but that code is already broken (IMO) by passing a dummy argv[0]. That's a new meaning of broken, one that I haven't heard before. It's broken because it follows the API?!?! I don't think fixing it would affect much code, because normally people don't specify the '-q' or '-v' in code, it is almost exclusively used on the command line. Famous last words. The only reason I came across it was that I was modifying an ant task (py-test) so it could handle all of the named arguments that TestProgram.__init__ supports. If the list index code can't change, at a minimum the default value for argv should change from None to sys.argv. No. Late binding of sys.argv is very important. There are plenty of uses where sys.argv is dynamically modified. Are the tests for unittest.py? Assuming you meant Are there tests, yes: test_unittest.py. But it needs work. -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] unittest argv
At 06:11 PM 5/1/2006 +0100, John Keyes wrote: On 5/1/06, Guido van Rossum [EMAIL PROTECTED] wrote: Wouldn't this be an incompatible change? That would make it a no-no. Providing a dummy argv[0] isn't so hard is it? It would be incompatible with existing code, but that code is already broken (IMO) by passing a dummy argv[0]. I don't think fixing it would affect much code, because normally people don't specify the '-q' or '-v' in code, it is almost exclusively used on the command line. Speak for yourself - I have at least two tools that would have to change for this, at least one of which would have to grow version testing code, since it's distributed for Python 2.3 and up. That's far more wasteful than providing an argv[0], which is already a common requirement for main program functions in Python. ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Fredrik Lundh [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] which reminds me of the following little absurdity gem from the language reference: The following identifiers are used as keywords of the language, and cannot be used as ordinary identifiers. They must be spelled exactly as written here: /.../ (maybe it's just me). I am not sure of what you see as absurdity, as opposed to clumsiness. Keywords are syntactically identifiers, but are reserved for predefined uses, and thus semantically are not identifiers. Perhaps 'ordinary identifiers' should be replaced by 'names'. The second sentence is referring to case variations, and that could be more explicit. Before the elevation of None to reserved word status, one could have just added , in lower case letters btw, talking about idioms used in the language reference, can any of the native speakers on this list explain if A is a nicer way of spelling B means that A is preferred over B, [alternatives snipped] Yes. Terry Jan Reedy
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Terry Reedy wrote: This is not a reason for subproposal two, but a special case, as you yourself note below, and hence does say why you want to have such. def make_person(*, name, age, phone, location): pass You weren't asking for a reason, you were asking for an example: this is one. And again, why would you *make* me, the user-programmer, type make_person(name=namex, age=agex, phone=phonex, location = locationx) #instead of make_person(namex,agex,phonex,locationx) ? Because there should be preferably only one obvious way to call that function. Readers of the code should not need to remember the order of parameters, instead, the meaning of the parameters should be obvious in the call. This is the only sane way of doing functions with many arguments. PS. I see that Guido finally gave a (different) use case for bare * that does make sense to me. It's actually the same use case: I don't *want* callers to pass these parameters positionally, to improve readability. Regards, Martin ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
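Under the PEP 3102 syntax being discussed (now standard in Python 3), the bare * makes the keyword-only contract enforceable rather than merely documented; a minimal sketch:

```python
# With a bare * in the signature, all following parameters are
# keyword-only: positional calls raise TypeError instead of
# silently mis-mapping arguments.
def make_person(*, name, age, phone, location):
    return (name, age, phone, location)

# The one obvious way to call it:
p = make_person(name="Ann", age=30, phone="555", location="NYC")
assert p == ("Ann", 30, "555", "NYC")

# Positional use is rejected outright:
try:
    make_person("Ann", 30, "555", "NYC")
    positional_rejected = False
except TypeError:
    positional_rejected = True
assert positional_rejected
```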
[Python-Dev] signature object issues (to discuss while I am out of contact)
Signature objects (which have been lightly discussed on python-3000, but I realize should be retargeted to 2.6 since there are no incompatibility problems) are the idea of having an object that represents the parameters of a function for easy introspection. But there are two things that I can't quite decide upon. One is whether a signature object should be automatically created for every function. As of right now the PEP I am drafting has it on a per-need basis and have it assigned to __signature__ through a built-in function or putting it in 'inspect'. Now automatically creating the object would possibly make it more useful, but it could also be considered overkill. Also not doing it automatically allows signature objects to possibly make more sense for classes (to represent __init__) and instances (to represent __call__). But having that same support automatically feels off for some reason to me. The second question is whether it is worth providing a function that will figure out whether a tuple and dict representing arguments would work in calling the function. Some have even suggested a function that returns the actual bindings if the call were to occur. Personally I don't see a huge use for either, but even less for the latter version. If people have a legit use case for either please speak up, otherwise I am tempted to keep the object simple. Now, I probably won't be participating in this discussion for the rest of the week. I am driving down to the Bay Area from Seattle for the next few days and have no idea what my Internet access will be like. But I wanted to get this discussion going since it kept me up last night thinking about it and I would like to sleep knowing python-dev, in its infinite wisdom grin, is considering the issues. =) -Brett
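For reference, the standard library eventually answered Brett's second question in this direction: today's inspect.signature (which post-dates this thread) provides bind(), which both validates a prospective (args, kwargs) call and returns the actual bindings; a sketch:

```python
import inspect

def f(a, b=2, *rest, **opts):
    return a, b, rest, opts

sig = inspect.signature(f)

# bind() returns the actual bindings when the call would be valid...
bound = sig.bind(1, 2, 3, flag=True)
assert bound.arguments == {'a': 1, 'b': 2, 'rest': (3,), 'opts': {'flag': True}}

# ...and raises TypeError when it would not (here, 'a' is missing):
try:
    sig.bind()
    invalid = False
except TypeError:
    invalid = True
assert invalid
```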
Re: [Python-Dev] introducing the experimental pyref wiki
On Sat, Apr 29, 2006 at 08:54:00PM +0200, Fredrik Lundh wrote: http://pyref.infogami.com/ I find this work very exciting. Time hasn't been kind to the reference guide -- as language features were added to 2.x, not everything has been applied to the RefGuide, and users will probably have been forced to read a mixture of the RefGuide and various PEPs. The Reference Guide tries to provide a formal specification of the language. A while ago I wondered if we needed a User's Guide that explains all the keywords, lists special methods, and that sort of thing, in a style that isn't as formal and as complete as the Reference Guide. Now maybe we don't -- maybe the RefGuide can be tidied bit by bit into something more readable. (Or are the two goals -- completeness and readability -- incompossible, unable to be met at the same time by one document?) --amk ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] unittest argv
On 5/1/06, Guido van Rossum [EMAIL PROTECTED] wrote: On 5/1/06, John Keyes [EMAIL PROTECTED] wrote: On 5/1/06, Guido van Rossum [EMAIL PROTECTED] wrote: Wouldn't this be an incompatible change? That would make it a no-no. Providing a dummy argv[0] isn't so hard is it? It would be incompatible with existing code, but that code is already broken (IMO) by passing a dummy argv[0]. That's a new meaning of broken, one that I haven't heard before. It's broken because it follows the API?!?! Fair enough, a bad use of language on my part. I don't think fixing it would affect much code, because normally people don't specify the '-q' or '-v' in code, it is almost exclusively used on the command line. Famous last words. Probably ;) The only reason I came across it was that I was modifying an ant task (py-test) so it could handle all of the named arguments that TestProgram.__init__ supports. If the list index code can't change, at a minimum the default value for argv should change from None to sys.argv. No. Late binding of sys.argv is very important. There are plenty of uses where sys.argv is dynamically modified. Can you explain this some more? If it all happens in the same function call so how can it be late binding? Are the tests for unittest.py? Assuming you meant Are there tests, yes: test_unittest.py. But it needs work. Ok thanks, -John K ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] More on contextlib - adding back a contextmanager decorator
On Tue, May 02, 2006, Greg Ewing wrote: Nick Coghlan wrote: the context expression in the with statement produces a context manager with __enter__ and __exit__ methods which set up and tear down a managed context for the body of the with statement. This is very similar to your later suggestion of context guard and guarded context. Currently I think I still prefer the term guard, since it does a better job of conjuring up the same sort of idea as a try-finally. Guard really doesn't work for me. It seems clear (especially in light of Fredrik's docs) that wrapper comes much closer to what's going on. -- Aahz ([EMAIL PROTECTED]) * http://www.pythoncraft.com/ Argue for your limitations, and sure enough they're yours. --Richard Bach ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] __getslice__ usage in sre_parse
I've also opened a bug for supporting __getslice__ in IronPython. Do you want to help develop Dynamic languages on CLR? (http://members.microsoft.com/careers/search/details.aspx?JobID=6D4754DE-11F0-45DF-8B78-DC1B43134038) -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Guido van Rossum Sent: Sunday, April 30, 2006 8:31 AM To: Sanghyeon Seo Cc: Discussion of IronPython; python-dev@python.org Subject: Re: [Python-Dev] __getslice__ usage in sre_parse On 4/28/06, Sanghyeon Seo [EMAIL PROTECTED] wrote: Hello, Python language reference 3.3.6 deprecates __getslice__. I think it's okay that UserList.py has it, but sre_parse shouldn't use it, no? Well, as long as the deprecated code isn't removed, there's no reason why other library code shouldn't use it. So I disagree that technically there's a reason why sre_parse shouldn't use it. __getslice__ is not implemented in IronPython and this breaks usage of _sre.py, a pure-Python implementation of _sre, on IronPython: http://ubique.ch/code/_sre/ _sre.py is needed for me because IronPython's own regex implementation using the underlying .NET implementation is not compatible enough for my applications. I will write a separate bug report for this. It should be a matter of removing __getslice__ and adding an isinstance(index, slice) check in __getitem__. I would very much appreciate it if this is fixed before Python 2.5. You can influence the fix yourself -- please write a patch (relative to Python 2.5a2 that was just released), submit it to Python's patch tracker on SourceForge (read python.org/dev first), and then send an email here to alert the developers. This ought to be done well before the planned 2.5b1 release (see PEP 356 for the 2.5 release timeline). You should make sure that the patched Python 2.5 passes all unit tests before submitting your patch. Good luck!
-- --Guido van Rossum (home page: http://www.python.org/~guido/)
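The fix Sanghyeon describes can be sketched as follows (Source is a made-up stand-in for the sequence type in _sre.py, not its actual code):

```python
# Instead of relying on the deprecated __getslice__, handle slice
# objects directly in __getitem__.
class Source:
    def __init__(self, data):
        self.data = data

    def __getitem__(self, index):
        if isinstance(index, slice):
            # Normalize start/stop/step against our length, then
            # collect the selected items.
            return [self.data[i]
                    for i in range(*index.indices(len(self.data)))]
        return self.data[index]

s = Source("hello")
assert s[1:4] == ['e', 'l', 'l']
assert s[0] == 'h'
```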
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Guido van Rossum [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] A function/method could have one argument that is obviously needed and a whole slew of options that few people care about. For most people, the signature they know is foo(arg). It would be nice if all the options were required to be written as keyword arguments, so the reader will not have to guess what foo(arg, True) means. Ok, this stimulates an old memory of written IBM JCL statements something like DD FILENAME,2,,HOP where you had to carefully count commas to correctly write and later read which defaulted options were being overridden. dd(filename, unit=2, meth='hop') is much nicer. So it seems to me now that '*, name1=def1, name2=def2, ...' in the signature is really a substitute for, and usually an improvement upon (easier to write, read, and programmatically extract), '**names' in the signature followed by

    name1 = names.get('name1', def1)
    name2 = names.get('name2', def2)
    ...

(with the semantic difference being when defs are calculated) (But I still don't quite see why one would require that args for required, non-defaulted params be passed by name instead of position ;-). As something of an aside, the use of 'keyword' to describe arguments passed by name instead of position conflicts with and even contradicts the Ref Man definition of keyword as an identifier, with a reserved use, that cannot be used as a normal identifier. The parameter names used to pass an argument by name are normal identifiers and cannot be keywords in the other sense of the word. (Yes, I understand that this is usual CS usage, but Python docs can be more careful.) So I would like to see the name of this PEP changed to 'Name-only arguments' Terry Jan Reedy
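Terry's substitution can be sketched side by side (dd and its options are hypothetical names echoing the JCL example):

```python
# The **names idiom Terry describes: defaults are pulled out of the
# dict at call time, each time the function runs.
def dd_old(filename, **names):
    unit = names.get('unit', 1)
    meth = names.get('meth', 'hop')
    return (filename, unit, meth)

# Versus keyword-only parameters with defaults (PEP 3102 syntax),
# which are easier to write, read, and extract programmatically.
# Note the semantic difference Terry flags: these defaults are
# evaluated once, when the def statement executes.
def dd_new(filename, *, unit=1, meth='hop'):
    return (filename, unit, meth)

assert dd_old('data', unit=2) == dd_new('data', unit=2) == ('data', 2, 'hop')
assert dd_new('data') == ('data', 1, 'hop')
```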
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Terry Reedy [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Fredrik Lundh [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] which reminds me of the following little absurdity gem from the language reference: I am not sure of what you see as absurdity, Perhaps I do. Were you referring to what I wrote in the last paragraph of my response to Guido? tjr
Re: [Python-Dev] more pyref: continue in finally statements
Strange. I thought this was supposed to be fixed? (But I can confirm that it isn't.) BTW there's another bug in the compiler: it doesn't diagnose this inside while 0. --Guido On 5/1/06, Fredrik Lundh [EMAIL PROTECTED] wrote: the language reference says: continue may only occur syntactically nested in a for or while loop, but not nested in a function or class definition or finally statement within that loop. /.../ It may occur within an except or else clause. The restriction on occurring in the try clause is implementor's laziness and will eventually be lifted. and it looks like the new compiler still has the same issue: $ python test.py File test.py, line 5: continue SyntaxError: 'continue' not supported inside 'finally' clause how hard would it be to fix this ? (shouldn't the try clause in the note read finally clause, btw? continue within the try suite seem to work just fine...) /F ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] unittest argv
On 5/1/06, John Keyes [EMAIL PROTECTED] wrote: No. Late binding of sys.argv is very important. There are plenty of uses where sys.argv is dynamically modified. Can you explain this some more? If it all happens in the same function call so how can it be late binding? You seem to be unaware of the fact that defaults are computed once, when the 'def' is executed (typically when the module is imported). Consider module A containing this code:

    import sys
    def foo(argv=sys.argv):
        print argv

and module B doing

    import sys
    import A
    sys.argv = ["a", "b", "c"]
    A.foo()

This will print the initial value for sys.argv, not ["a", "b", "c"]. With the late binding version it will print ["a", "b", "c"]:

    def foo(argv=None):
        if argv is None:
            argv = sys.argv
        print argv

-- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] introducing the experimental pyref wiki
Agreed. Is it too late to also attempt to bring Doc/ref/*.tex completely up to date and remove confusing language from it? Ideally that's the authoritative Language Reference -- admittedly it's been horribly out of date but needn't stay so forever. --Guido On 5/1/06, A.M. Kuchling [EMAIL PROTECTED] wrote: On Sat, Apr 29, 2006 at 08:54:00PM +0200, Fredrik Lundh wrote: http://pyref.infogami.com/ I find this work very exciting. Time hasn't been kind to the reference guide -- as language features were added to 2.x, not everything has been applied to the RefGuide, and users will probably have been forced to read a mixture of the RefGuide and various PEPs. The Reference Guide tries to provide a formal specification of the language. A while ago I wondered if we needed a User's Guide that explains all the keywords, lists special methods, and that sort of thing, in a style that isn't as formal and as complete as the Reference Guide. Now maybe we don't -- maybe the RefGuide can be tidied bit by bit into something more readable. (Or are the two goals -- completeness and readability -- incompossible, unable to be met at the same time by one document?) --amk ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] introducing the experimental pyref wiki
At 11:37 AM 5/1/2006 -0700, Guido van Rossum wrote: Agreed. Is it too late to also attempt to bring Doc/ref/*.tex completely up to date and remove confusing language from it? Ideally that's the authoritative Language Reference -- admittedly it's been horribly out of date but needn't stay so forever. Well, I added stuff for PEP 343, but PEP 342 (yield expression plus generator-iterator methods) hasn't really been added yet, mostly because I was unsure of how to fit it in without one of those first let's explain it how it was, then how we changed it sort of things. :( ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] signature object issues (to discuss while I am out of contact)
Brett Cannon wrote: The second question is whether it is worth providing a function that will either figure out if a tuple and dict representing arguments would work in calling the function. Some have even suggested a function that returns the actual bindings if the call were to occur. Personally I don't see a huge use for either, but even less for the latter version. If people have a legit use case for either please speak up, otherwise I am tempted to keep the object simple. One use case that comes to mind is a type-checking decorator (or precondition-checking decorator, etc):

    @precondition(lambda x, y: x > y)
    @precondition(lambda y, z: y > z)
    def foo(x, y, z):
        ...

where precondition is something like:

    def precondition(test):
        def add_precondition(func):
            def f(*args, **kwargs):
                bindings = func.__signature__.bindings(args, kwargs)
                if not test(**bindings):
                    raise ValueError, 'Precondition not met'
                return func(*args, **kwargs)
            return f
        return add_precondition

-Edward
Re: [Python-Dev] introducing the experimental pyref wiki
A.M. Kuchling wrote: On Sat, Apr 29, 2006 at 08:54:00PM +0200, Fredrik Lundh wrote: http://pyref.infogami.com/ I find this work very exciting. Time hasn't been kind to the reference guide -- as language features were added to 2.x, not everything has been applied to the RefGuide, and users will probably have been forced to read a mixture of the RefGuide and various PEPs. The Reference Guide tries to provide a formal specification of the language. A while ago I wondered if we needed a User's Guide that explains all the keywords, lists special methods, and that sort of thing, in a style that isn't as formal and as complete as the Reference Guide. Now maybe we don't -- maybe the RefGuide can be tidied bit by bit into something more readable. At my company we recently got badly bitten because the language syntax as defined in the 'language reference' varies wildly from the grammar in SVN. We had to implement it twice ! When we tried to check syntax usage (as distinct from the BNF type specification in the grammar) we found that things like keyword arguments (etc) are not documented anywhere, except possibly in the tutorial. Currently the language reference seems to neither reflect the language definition in the grammar, nor be a good reference for users (except parts that are excellent). A users guide which straddles the middle would be very useful, and with some shepherding can probably be mainly done by community input. I also find that when trying to implement objects with 'the magic methods', I have to search in several places in the documentation. For example, to implement a mapping type I will probably need to refer to the following pages : http://docs.python.org/ref/sequence-types.html http://docs.python.org/lib/typesmapping.html http://docs.python.org/ref/customization.html Michael Foord (Or are the two goals -- completeness and readability -- incompossible, unable to be met at the same time by one document?) 
--amk
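As an illustration of why that scattering hurts, even a minimal mapping type touches methods documented across all three of the pages Michael lists (Registry here is a hypothetical example, not from any library):

```python
class Registry:
    """Minimal mapping: item access and len() come from the
    sequence/mapping protocol pages; __contains__ and keys() from
    the mapping-types page; __repr__ from the customization page."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __len__(self):
        return len(self._data)

    def __contains__(self, key):
        return key in self._data

    def keys(self):
        return list(self._data)

    def __repr__(self):
        return 'Registry(%r)' % (self._data,)

r = Registry()
r['x'] = 1
assert r['x'] == 1
assert 'x' in r and len(r) == 1
```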
Re: [Python-Dev] introducing the experimental pyref wiki
Guido van Rossum wrote: Agreed. Is it too late to also attempt to bring Doc/ref/*.tex completely up to date and remove confusing language from it? Ideally that's the authoritative Language Reference -- admittedly it's been horribly out of date but needn't stay so forever. It's never too late to update the specification. I really think there should be a specification, and I really think it should be as precise as possible - where possible takes both of these into account: - it may get out of date due to lack of contributors. This is free software, and you don't always get what you want unless you do it yourself (and even then, sometimes not). - it might be deliberately vague to allow for different implementation strategies. Ideally, it would be precise in pointing out where it is deliberately vague. So I think the PEPs all should be merged into the documentation, at least their specification parts (rationale, history, examples might stay in the PEPs). To some degree, delivery of documentation can be enforced by making acceptance of the PEP conditional upon creation of documentation patches. Once the feature is committed, we can only hope for (other) volunteers to provide the documentation, or keep nagging the author of the code to produce it. Regards, Martin ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] methods on the bytes object
Before I get into my reply, I'm going to start out by defining a new term: operationX - the operation of interpreting information differently than how it is presented, generally by constructing a data structure based on the input information. E.g.: programming language source file -> parse tree, natural language -> parse tree or otherwise, structured data file -> data structure (tree, dictionary, etc.), etc. Synonyms: parsing, unmarshalling, interpreting, ... Any time I would previously describe something as some variant of 'parse', replace that with 'operationX'. I will do that in all of my further replies. Martin v. Löwis [EMAIL PROTECTED] wrote: Josiah Carlson wrote: Certainly that is the case. But how would you propose embedded bytes data be represented? (I talk more extensively about this particular issue later). Can't answer: I don't know what embedded bytes data are. I described this before as the output of img2py from wxPython. Here's a sample which includes the py.ico from Python 2.3.
#-- # This file was generated by C:\Python23\Scripts\img2py # from wx import ImageFromStream, BitmapFromImage import cStringIO, zlib def getData(): return zlib.decompress( 'x\xda\x01\x14\x02\xeb\xfd\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \ \x00\x00\x00 \x08\x06\x00\x00\x00szz\xf4\x00\x00\x00\x04sBIT\x08\x08\x08\x08\ |\x08d\x88\x00\x00\x01\xcbIDATX\x85\xb5W\xd1\xb6\x84 \x08\x1ct\xff;\xf6\xc3\ \x93\xfb`\x18\x92f\xb6^:\x9e\xd4\x94\x19\x04\xd4\x88BDO$\xed\x02\x00\x14i]\ \xdb\xddI\x93B=\x02\x92va\xceu\xe6\\T\x98\xd7\x91h\x12\xb0\xd6z\xb1\xa4V\x90\ \xf8\xf4\xc8A\x81\xe8\xack\xdb\xae\xc6\xbf\x11\xc8`\x02\x80L\x1d\xa5\xbdJ\ \xc2Rm\xab\t\x88PU\xb7m\xe0V^\x13(\xa9G\xa7\xbf\xb5n\xfd\xafoI\xbbhyC\xa0\ \xc4\x80*h\x05\xd8]\xd0\xd5\xe9y\xee\x1bO\t\x10\x85X\xe5\xfc\x8c\xf0\x06\xa0\ \x91\x153)\x1af\xc1y\xab\xdf\x906\x81\xa7.)1\xe0w\xba\x1e\xb0\xaf\xff*C\x02\ \x17`\xc2\xa3\xad\xe0\xe9*\x04U\xec\x97\xb6i\xb1\x02\x0b\xc0_\xd3\xf7C2\xe6\ [EMAIL PROTECTED];\xdd\x125s\xf0\ \x8c\x94\xd3\xd0\xfa\xab\xb5\xeb{\xcb\xcb\x1d\xe1\xd7\x15\xf0\x1d\x1e\x9c9wz\ p\x0f\xfa\x06\x1cp\xa7a\x01?\x82\x8c7\x80\xf5\xe3\xa1J\x95\xaa\xf5\xdc\x00\ \x9f\x91\xe2\x82\xa4g\x80\x0f\xc8\x06p9\x0f\xb66\xf8\xccNH\x14\xe2\t\xde\x1a\ `\x14\x8d|\x0b\x0e\x00\x9f\x94v\t!RJ\xbb\xf4VV/\x04\x97\xb4K\xe5\x82\xe0\ \x97\xcc\x18X\xfd\x16\x1cxx+\x06\xfa\xfeVp+\x17\xb7\xb9~\xd5\xcd\xb8\x13V\ \xdb\xf1\r\xf8\xf54\xcc\xee\xbc\x18\xc1\xd7;G\x93\x80\x0f\xb6.\xc1\x06\xf8\ \xd9\x7f=\xe6[c\xbb\xff\x05O\x97\xff\xadh\xcct]\xb0\xf2\xcc/\xc6\x98mV\xe3\ \xe1\xf1\xb5\xbcGhDT--\x87\x9e\xdb\xca\xa7\xb2\xe0\xe6~\xd0\xfb6LM\n\xb1[\ \x90\xef\n\xe5aj\x19R\xaaq\xae\xdc\xe9\xad\xca\xdd\xef\xb9\xaeD\x83\xf4\xb2\ \xff\xb3?\x1c\xcd1U-7%\x96\x00\x00\x00\x00IEND\xaeB`\x82\xdf\x98\xf1\x8f' ) def getBitmap(): return BitmapFromImage(getImage()) def getImage(): stream = cStringIO.StringIO(getData()) return ImageFromStream(stream) That data is non-textual. It is bytes within a string literal. And it is embedded (within a .py file). 
I am apparently not communicating this particular idea effectively enough. How would you propose that I store parsing literals for non-textual data, and how would you propose that I set up a dictionary to hold some non-trivial number of these parsing literals? I can't answer that question: I don't know what a parsing literal for non-textual data is. If you are asking how you represent bytes object in source code: I would encode them as a list of integers, then use, say, parsing_literal = bytes([3,5,30,99]) An operationX literal is a symbol that describes how to interpret the subsequent or previous data. For an example of this, see the pickle module (portions of which I include below). From what I understand, it would seem that you would suggest that I use something like the following... handler = {bytes('...', encoding=...).encode('latin-1'): ..., #or '\u\u...': ..., #or even without bytes/str (0xXX, 0xXX, ...): ..., } Note how two of those examples have non-textual data inside of a Python 3.x string? Yeah. Unfortunately, I don't notice. I assume you don't mean a literal '...'; if this is what you represent, I would write handler = { '...': some text } But I cannot guess what you want to put into '...' instead. I described before how you would use this kind of thing to perform operationX on structured information. It turns out that pickle (in Python) uses a dictionary of operationX symbols/literals - unbound instance methods to perform operationX on the pickled representation of Python objects (literals where = '...' are defined, and symbols using the names). The relevant code for unpickling is the while 1: section of the following. def load(self): Read a pickled object representation from the open file.
Re: [Python-Dev] more pyref: continue in finally statements
Guido van Rossum wrote: Strange. I thought this was supposed to be fixed? (But I can confirm that it isn't.) Perhaps you were confusing it with this HISTORY entry? - A 'continue' statement can now appear in a try block within the body of a loop. It is still not possible to use continue in a finally clause. This was added as r19261 | jhylton | 2001-02-01 23:53:15 +0100 (Do, 01 Feb 2001) | 2 lines Geänderte Pfade: M /python/trunk/Misc/NEWS continue now allowed in try block r19260 | jhylton | 2001-02-01 23:48:12 +0100 (Do, 01 Feb 2001) | 3 lines Geänderte Pfade: M /python/trunk/Doc/ref/ref7.tex M /python/trunk/Include/opcode.h M /python/trunk/Lib/dis.py M /python/trunk/Lib/test/output/test_exceptions M /python/trunk/Lib/test/output/test_grammar M /python/trunk/Lib/test/test_exceptions.py M /python/trunk/Lib/test/test_grammar.py M /python/trunk/Python/ceval.c M /python/trunk/Python/compile.c Allow 'continue' inside 'try' clause SF patch 102989 by Thomas Wouters Regards, Martin ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] introducing the experimental pyref wiki
A.M. Kuchling wrote: I find this work very exciting. Time hasn't been kind to the reference guide -- as language features were added to 2.x, not everything has been applied to the RefGuide, and users will probably have been forced to read a mixture of the RefGuide and various PEPs. or as likely, mailing list archives. The Reference Guide tries to provide a formal specification of the language. A while ago I wondered if we needed a User's Guide that explains all the keywords, lists special methods, and that sort of thing, in a style that isn't as formal and as complete as the Reference Guide. Now maybe we don't -- maybe the RefGuide can be tidied bit by bit into something more readable. (Or are the two goals -- completeness and readability -- incompossible, unable to be met at the same time by one document?) well, I'm biased, but I'm convinced that the pyref material (which consists of the entire language reference plus portions of the library reference) can be both complete and readable. I don't think it can be complete, readable, and quite as concise as before, though ;-) (see the a bit more inviting part on the front-page for the guidelines I've been using this far). /F ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] introducing the experimental pyref wiki
Guido van Rossum wrote: Agreed. Is it too late to also attempt to bring Doc/ref/*.tex completely up to date and remove confusing language from it? Ideally that's the authoritative Language Reference -- admittedly it's been horribly out of date but needn't stay so forever. it's perfectly possible to generate a list of changes from the wiki which some volunteer could apply to the existing document. or we could generate a complete new set of Latex documents from the wiki contents. it's just a small matter of programming... I'm not sure I would bother, though -- despite Fred's heroic efforts, the toolchain is nearly as dated as the language reference itself. XHTML and ODT should be good enough, really. /F ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] more pyref: continue in finally statements
Fredrik Lundh wrote: the language reference says:

    continue may only occur syntactically nested in a for or while loop,
    but not nested in a function or class definition or finally statement
    within that loop. /.../ It may occur within an except or else clause.
    The restriction on occurring in the try clause is implementor's
    laziness and will eventually be lifted.

and it looks like the new compiler still has the same issue:

    $ python test.py
      File "test.py", line 5
        continue
    SyntaxError: 'continue' not supported inside 'finally' clause

how hard would it be to fix this? (shouldn't "the try clause" in the note read "finally clause", btw? continue within the try suite seems to work just fine...)

For the latter: the documentation apparently wasn't fully updated in r19260: it only changed ref7.tex, but not ref6.tex. IOW, it really means to say "in the try clause", and it is out-of-date in saying so.

As for continue in the 'finally' clause: what would that mean? Given

    def f():
        raise Exception

    while 1:
        try:
            f()
        finally:
            g()
            continue

then what should be the meaning of continue here? The finally block *eventually* needs to re-raise the exception. When should that happen?

So I would say: it's very easy to fix, just change the message to

    SyntaxError: 'continue' not allowed inside 'finally' clause

:-) Regards, Martin
Re: [Python-Dev] methods on the bytes object
Josiah Carlson wrote: Certainly that is the case. But how would you propose embedded bytes data be represented? (I talk more extensively about this particular issue later). Can't answer: I don't know what embedded bytes data are. Ok. I think I would use base64, of possibly compressed content. It's more compact than your representation, as it only uses 1.3 characters per byte, instead of the up-to-four bytes that the img2py uses. If ease-of-porting is an issue, img2py should just put an .encode(latin-1) at the end of the string. return zlib.decompress( 'x\xda\x01\x14\x02\xeb\xfd\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \ [...] That data is non-textual. It is bytes within a string literal. And it is embedded (within a .py file). In Python 2.x, it is that, yes. In Python 3, it is a (meaningless) text. I am apparently not communicating this particular idea effectively enough. How would you propose that I store parsing literals for non-textual data, and how would you propose that I set up a dictionary to hold some non-trivial number of these parsing literals? An operationX literal is a symbol that describes how to interpret the subsequent or previous data. For an example of this, see the pickle module (portions of which I include below). I don't think there can be, or should be, a general solution for all operationX literals, because the different applications of operationX all have different requirements wrt. their literals. In binary data, integers are the most obvious choice for operationX literals. In text data, string literals are. I described before how you would use this kind of thing to perform operationX on structured information. It turns out that pickle (in Python) uses a dictionary of operationX symbols/literals - unbound instance methods to perform operationX on the pickled representation of Python objects (literals where = '...' are defined, and symbols using the names). The relevant code for unpickling is the while 1: section of the following. 
Right. I would convert the top of pickle.py to read MARK= ord('(') STOP= ord('.') ... def load(self): Read a pickled object representation from the open file. Return the reconstituted object hierarchy specified in the file. self.mark = object() # any new unique object self.stack = [] self.append = self.stack.append read = self.read dispatch = self.dispatch try: while 1: key = read(1) and then this to key = ord(read(1)) dispatch[key](self) except _Stop, stopinst: return stopinst.value For an example of where people use '...' to represent non-textual information in a literal, see the '# Protocol 2' section of pickle.py ... Right. # Protocol 2 PROTO = '\x80' # identify pickle protocol This should be changed to PROTO = 0x80 # identify pickle protocol etc. The point of this example was to show that operationX isn't necessarily the processing of text, but may in fact be the interpretation of binary data. It was also supposed to show how one may need to define symbols for such interpretation via literals of some kind. In the pickle module, this is done in two parts: XXX = literal; dispatch[XXX] = fcn. I've also seen it as dispatch = {literal: fcn} Yes. For pickle, the ordinals of the type code make good operationX literals. See any line-based socket protocol for where .find() is useful. Any line-based protocol is textual, usually based on ASCII. Regards, Martin ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] signature object issues (to discuss while I am out of contact)
On Mon, May 01, 2006, Brett Cannon wrote: But there are two things that I can't quite decide upon. One is whether a signature object should be automatically created for every function. As of right now the PEP I am drafting has it on a per-need basis and has it assigned to __signature__ through a built-in function or by putting it in 'inspect'. Now automatically creating the object would possibly make it more useful, but it could also be considered overkill. Also not doing it automatically allows signature objects to possibly make more sense for classes (to represent __init__) and instances (to represent __call__). But having that same support automatically feels off for some reason to me.

My take is that we should do it automatically and provide a helper function that does additional work. The class case is already complicated by __new__(); we probably don't want to automatically sort out __init__() vs __new__(), but I think we do want regular functions and methods to automatically have a __signature__ attribute. Aside from the issue with classes, are there any other drawbacks to automatically creating __signature__? -- Aahz ([EMAIL PROTECTED]) * http://www.pythoncraft.com/ Argue for your limitations, and sure enough they're yours. --Richard Bach
Re: [Python-Dev] introducing the experimental pyref wiki
[A.M. Kuchling] ... (Or are the two goals -- completeness and readability -- incompossible, unable to be met at the same time by one document?) No, but it's not easy, and it's not necessarily succinct. For an existence proof, see Guy Steele's Common Lisp the Language. I don't think it's a coincidence that Steele worked on the readable The Java Language Specification either, or on the original Scheme spec. Google should hire him to work on Python docs now ;-) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] introducing the experimental pyref wiki
Tim Peters wrote: (Or are the two goals -- completeness and readability -- incompossible, unable to be met at the same time by one document?) No, but it's not easy, and it's not necessarily succinct. For an existence proof, see Guy Steele's Common Lisp the Language. I don't think it's a coincidence that Steele worked on the readable The Java Language Specification either, or on the original Scheme spec. Google should hire him to work on Python docs now ;-)

on the other hand, it's important to realize that the Python audience has changed a lot since Guido wrote the first (carefully crafted, and mostly excellent) version of the language reference. I'm sure Guy could create a document that even a martian could read [1], and I'm pretty sure that we could untangle the huge pile of peephole tweaks that the reference has accumulated and get back to something close to Guido's original, but I'm not sure that is what the Python community needs.

(my goal is to turn pyref into more of a random-access encyclopedia, and less of an ISO-style "it's all there; just keep reading it over and over again until you get it" specification. it should be possible to link from the tutorial to a reference page without causing brain implosions)

/F

1) see http://pyref.infogami.com/introduction
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Terry Reedy wrote: "which reminds me of the following little absurdity gem from the language reference": I am not sure of what you see as absurdity. Perhaps I do. Were you referring to what I wrote in the last paragraph of my response to Guido?

I don't know; I've lost track of all the subthreads in this subthread. it wasn't quite obvious to me that "be spelled exactly" meant "use the same case" rather than "must have the same letters in the same order". if you use the latter interpretation, the paragraph looks a bit... odd. /F
[Python-Dev] New methods for weakref.Weak*Dictionary types
I'd like to commit this for Python 2.5: http://www.python.org/sf/1479988 The WeakKeyDictionary and WeakValueDictionary don't provide any API to get just the weakrefs out, instead of the usual mapping API. This can be desirable when you want to get a list of everything without creating new references to the underlying objects at that moment. This patch adds methods to make the references themselves accessible using the API, avoiding requiring client code to have to depend on the implementation. The WeakKeyDictionary gains the .iterkeyrefs() and .keyrefs() methods, and the WeakValueDictionary gains the .itervaluerefs() and .valuerefs() methods. The patch includes tests and docs. -Fred -- Fred L. Drake, Jr. fdrake at acm.org
[Python-Dev] More Path comments (PEP 355)
I just read over the changes to the proposed Path class since the discussion last summer. A big thanks to Bjorn Lindqvist for writing a PEP, Jason Orendorff for the original path.py and his suggestions on how the Path class should be different, and the writers of the Python-Dev Summary for bringing the discussion to my attention. I've been testing/using the interim Path class in the Python subversion (/sandbox/trunk/path, last modified in September), and have a few comments about PEP 355:

- .walk*() return a list rather than an iterator. Was this an intentional change or a typo? Most typical uses yield thousands of paths which do not need to be in memory simultaneously.

- An equivalent to os.listdir() is frequently useful in applications. This would return a list of filenames (strings) without the parent info. Path.listdir() calls os.listdir() and wraps all the items into Paths, and then I have to unwrap them again, which seems like a waste. I end up calling os.listdir(my_path) instead. If we decide not to subsume many os.* functions into Path, that's fine, but if we deprecate os.listdir(), it's not.

- -1 on removing .joinpath(), whatever it's called. Path(basepath, *args) is good but not the same. (1) it's less intuitive: I expect this to be a method on a directory. (2) the class name is hardcoded: do I really have to do self.__class__(self, *args) to make my code forward compatible with whatever nifty subclasses might appear?

- +1 on renaming .directory back to .parent.

- -1 on losing a 1-liner to read/iterate a file's contents. This is a frequent operation, and having to write a 2-liner or a custom function is a pain.

- +1 on consolidating mkdir/makedirs and rmdir/rmdirs. I'd also suggest not raising an error if the operation is already done, and a .purge() method that deletes recursively no matter what it is. This was suggested last summer as a rename for my .delete_dammit() proposal.
Unsure what to do if permission errors prevent the operation; I guess propagating the exception is best. This would make .rmtree() redundant, which chokes if the item is a file.

- +1 for rationalizing .copy*().

- +1 for .chdir(). This is a frequent operation, and it makes no sense not to include it.

-- Mike Orr [EMAIL PROTECTED] ([EMAIL PROTECTED] address is semi-reliable)
[Python-Dev] more pyref: a better term for string conversion
for some reason, the language reference uses the term "string conversion" for the backtick form of repr: http://docs.python.org/ref/string-conversions.html any suggestions for a better term? should backticks be deprecated, and documented in terms of repr (rather than the other way around)? /F
Re: [Python-Dev] more pyref: a better term for string conversion
Fredrik Lundh wrote: for some reason, the language reference uses the term "string conversion" for the backtick form of repr: The language reference also says that trailing commas for expressions work with backticks. This is incorrect. I think this is necessary to allow nested 'string conversions', so it is a doc error rather than an implementation error. I can't think of a better term than "string conversion". At least it is distinct from 'string formatting'. Personally I think that backticks in code look like an ugly hack and ``repr(expression)`` is clearer. If backticks were documented as a hackish shortcut for repr then great. :-) Michael Foord http://docs.python.org/ref/string-conversions.html any suggestions for a better term? should backticks be deprecated, and documented in terms of repr (rather than the other way around)? /F
Re: [Python-Dev] methods on the bytes object
This discussion seems to have gotten a bit out of hand. I believe it belongs on the python-3000 list. As a quick commentary, I see good points made by both sides. My personal view is that we should *definitely* not introduce a third type, and that *most* text-based activities should be done in the (Unicode) string domain. That said, I expect a certain amount of parsing to happen on bytes objects -- for example, I would say that CPython's current parser is parsing bytes since its input is UTF-8. There are also plenty of text-based socket protocols that are explicitly defined in terms of octets (mostly containing ASCII bytes only); I can see why some people would want to write handlers that parse the bytes directly. But instead of analyzing or arguing the situation to death, I'd like to wait until we have a Py3k implementation that implements something approximating the proposed end goal, where 'str' represents unicode characters, and 'bytes' represents bytes, and we have separate I/O APIs for binary (bytes) and character (str) data. I'm hoping to make some progress towards this goal in the p3yk (sic) branch. It appears that before we can switch the meaning of 'str' we will first have to implement the new I/O library, which is what I'm focusing on right now. I already have a fairly minimal but functional bytes type, which I'll modify as I go along and understand more of the requirements. -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] Path.ancestor()
This is a potentially long discussion so I'm putting it in a separate thread. When finding one file relative to another, it's difficult to read multiple .parent attributes stacked together. Worse, if you have the wrong number, you end up at the wrong directory level, potentially causing destructive damage. Doubly worse, the number of .parent is non-intuitive for those used to the near-universal '.' and '..' conventions. Say I'm in apps/myapp/bin/myprogram.py and want to add apps/myapp/lib and apps/shared/lib to sys.path in a portable way.

    app_root = Path(__file__).realpath().abspath().parent.parent
    assert app_root.parent.name == 'apps'
    sys.path.insert(0, app_root.parent / 'shared/lib')
    sys.path.insert(0, app_root / 'lib')

Yikes! At least it's better than:

    lib = os.path.join(os.path.dirname(os.path.dirname(x)), 'lib')

which is completely unreadable. (Silence to those who say __path__ is obsolete now that setuptools has a function for finding a file in an egg. (1) I don't understand that part of the setuptools docs. (2) It will be many months before most Python programmers are ready to switch to it.)

The tricky thing with '.' and '..' is they have a different meaning depending on whether the original path is a file or directory. With a directory there's one less .parent. I've played a bit with the argument and come up with this:

    # N is the number of '..'; None (default arg) is a special case for '.'
    .ancestor()  = '.'     = p.parent or d
    .ancestor(0) = ValueError
    .ancestor(1) = '..'    = p.parent.parent or d.parent
    .ancestor(2) = '../..' = p.parent.parent.parent or d.parent.parent

The simplest alternative is making N the number of .parent. This has some merit, and would solve the original problem of too many .parent stacking up. But it means Path wouldn't have any equivalent to '.' and '..' behavior. Another alternative is to make .ancestor(0) mean '.'. I don't like this because '.' is a special case, and this should be shown in the syntax.
Another alternative is to move every number down by 1, so .ancestor(0) is equivalent to '..'. The tidiness of this is outweighed by the difficulty of remembering that N is not the number of '..'. -- Mike Orr [EMAIL PROTECTED] ([EMAIL PROTECTED] address is semi-reliable)
Re: [Python-Dev] more pyref: a better term for string conversion
Fredrik Lundh wrote: for some reason, the language reference uses the term "string conversion" for the backtick form of repr: http://docs.python.org/ref/string-conversions.html any suggestions for a better term? should backticks be deprecated, and documented in terms of repr (rather than the other way around)? I vaguely recall that they are deprecated, but I can't remember the details. The one obvious way to invoke that functionality is the repr() builtin. Regards, Martin
Re: [Python-Dev] New methods for weakref.Weak*Dictionary types
[Fred L. Drake, Jr.] I'd like to commit this for Python 2.5: http://www.python.org/sf/1479988 The WeakKeyDictionary and WeakValueDictionary don't provide any API to get just the weakrefs out, instead of the usual mapping API. This can be desirable when you want to get a list of everything without creating new references to the underlying objects at that moment. This patch adds methods to make the references themselves accessible using the API, avoiding requiring client code to have to depend on the implementation. The WeakKeyDictionary gains the .iterkeyrefs() and .keyrefs() methods, and the WeakValueDictionary gains the .itervaluerefs() and .valuerefs() methods. The patch includes tests and docs. +1. A real need for this is explained in ZODB's ZODB/util.py's WeakSet class, which contains a WeakValueDictionary: # Return a list of weakrefs to all the objects in the collection. # Because a weak dict is used internally, iteration is dicey (the # underlying dict may change size during iteration, due to gc or # activity from other threads). as_weakref_list() is safe. # # Something like this should really be a method of Python's weak dicts. # If we invoke self.data.values() instead, we get back a list of live # objects instead of weakrefs. If gc occurs while this list is alive, # all the objects move to an older generation (because they're strongly # referenced by the list!). They can't get collected then, until a # less frequent collection of the older generation. Before then, if we # invoke self.data.values() again, they're still alive, and if gc occurs # while that list is alive they're all moved to yet an older generation. # And so on. Stress tests showed that it was easy to get into a state # where a WeakSet grows without bounds, despite that almost all its # elements are actually trash. By returning a list of weakrefs instead, # we avoid that, although the decision to use weakrefs is now very # visible to our clients. 
    def as_weakref_list(self):
        # We're cheating by breaking into the internals of Python's
        # WeakValueDictionary here (accessing its .data attribute).
        return self.data.data.values()

As that implementation suggests, though, I'm not sure there's real payback for the extra time taken in the patch's `valuerefs` implementation to weed out weakrefs whose referents are already gone: the caller has to make this check anyway when it iterates over the returned list of weakrefs. Iterating inside the implementation, to build the list via itervalues(), also creates that much more vulnerability to "dict changed size during iteration" multi-threading surprises. For that last reason, if the patch went in as-is, I expect ZODB would still need to cheat; obtaining the list of weakrefs directly via plain .data.values() is atomic, and so immune to these multi-threading surprises.
Re: [Python-Dev] more pyref: a better term for string conversion
Backticks certainly are deprecated -- Py3k won't have them (nor will they become available for other syntax; they are undesirable characters due to font issues and the tendency of word processing tools to generate backticks in certain cases where you type forward ticks). So it would be a good idea to document them as a deprecated syntax for spelling repr(). --Guido On 5/1/06, Martin v. Löwis [EMAIL PROTECTED] wrote: Fredrik Lundh wrote: for some reason, the language reference uses the term "string conversion" for the backtick form of repr: http://docs.python.org/ref/string-conversions.html any suggestions for a better term? should backticks be deprecated, and documented in terms of repr (rather than the other way around)? I vaguely recall that they are deprecated, but I can't remember the details. The one obvious way to invoke that functionality is the repr() builtin. Regards, Martin -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] New methods for weakref.Weak*Dictionary types
On Monday 01 May 2006 16:57, Tim Peters wrote: +1. A real need for this is explained in ZODB's ZODB/util.py's WeakSet class, which contains a WeakValueDictionary: ... As that implementation suggests, though, I'm not sure there's real payback for the extra time taken in the patch's `valuerefs` implementation to weed out weakrefs whose referents are already gone: the caller has to make this check anyway when it iterates over the Good point; I've updated the patch accordingly. -Fred -- Fred L. Drake, Jr. fdrake at acm.org
[Python-Dev] more pyref: comparison precedence
one last one for tonight; the operator precedence summary says that "in" and "not in" have lower precedence than "is" and "is not", which have lower precedence than <, <=, >, >=, <>, !=, ==: http://docs.python.org/ref/summary.html but the comparisons chapter http://docs.python.org/ref/comparisons.html says that they all have the same priority. which one is right? /F
Re: [Python-Dev] more pyref: comparison precedence
They're all the same priority. On 5/1/06, Fredrik Lundh [EMAIL PROTECTED] wrote: one last one for tonight; the operator precedence summary says that "in" and "not in" have lower precedence than "is" and "is not", which have lower precedence than <, <=, >, >=, <>, !=, ==: http://docs.python.org/ref/summary.html but the comparisons chapter http://docs.python.org/ref/comparisons.html says that they all have the same priority. which one is right? /F -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] methods on the bytes object
Martin v. Löwis [EMAIL PROTECTED] wrote: Josiah Carlson wrote: Certainly that is the case. But how would you propose embedded bytes data be represented? (I talk more extensively about this particular issue later). Can't answer: I don't know what embedded bytes data are. Ok. I think I would use base64, of possibly compressed content. It's more compact than your representation, as it only uses 1.3 characters per byte, instead of the up-to-four bytes that the img2py uses. I never said it was the most efficient representation, just one that was being used (and one in which I had no control over previously defining). What I provided was automatically generated by a script provided with wxPython. If ease-of-porting is an issue, img2py should just put an .encode(latin-1) at the end of the string. Ultimately, this is still the storage of bytes in a textual string. It may be /encoded/ as text, but it is still conceptually bytes in text, which is at least as confusing as text in bytes. return zlib.decompress( 'x\xda\x01\x14\x02\xeb\xfd\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \ [...] That data is non-textual. It is bytes within a string literal. And it is embedded (within a .py file). In Python 2.x, it is that, yes. In Python 3, it is a (meaningless) text. Toss an .encode('latin-1'), and it isn't meaningless. type(x) type 'unicode' zlib.decompress(x.encode('latin-1'))[:4] '\x89PNG' I am apparently not communicating this particular idea effectively enough. How would you propose that I store parsing literals for non-textual data, and how would you propose that I set up a dictionary to hold some non-trivial number of these parsing literals? An operationX literal is a symbol that describes how to interpret the subsequent or previous data. For an example of this, see the pickle module (portions of which I include below). 
I don't think there can be, or should be, a general solution for all operationX literals, because the different applications of operationX all have different requirements wrt. their literals. In binary data, integers are the most obvious choice for operationX literals. In text data, string literals are. [snip] Yes. For pickle, the ordinals of the type code make good operationX literals. But, as I brought up before, while single integers are sufficient for some operationX literals, that may not be the case for others. Say, for example, a tool which discovers the various blobs from quicktime .mov files (movie portions, audio portions, images, etc.). I don't remember all of the precise names to parse, but I do remember that they were all 4 bytes long. This means that we would generally use the following... dispatch = {(ord(ch), ord(ch), ord(ch), ord(ch)): ..., #or tuple(ord(i) for i in '...'): ..., } And in the actual operationX process... #if we are reading bytes... key = tuple(read(4)) #if we are reading str... key = tuple(bytes(read(4), 'latin-1')) #or tuple(read(4).encode('latin-1')) #or tuple(ord(i) for i in read(4)) There are, of course, other options which could use struct and 8, 16, 32, and/or 64 bit integers (with masks and/or shifts), for the dispatch = ... or key = ... cases, but those, again, would rely on using Python 3.x strings as a container for non-text data. I described before how you would use this kind of thing to perform operationX on structured information. It turns out that pickle (in Python) uses a dictionary of operationX symbols/literals - unbound instance methods to perform operationX on the pickled representation of Python objects (literals where = '...' are defined, and symbols using the names). The relevant code for unpickling is the while 1: section of the following. Right. I would convert the top of pickle.py to read MARK= ord('(') STOP= ord('.') ... For an example of where people use '...' 
to represent non-textual information in a literal, see the '# Protocol 2' section of pickle.py ... Right. # Protocol 2 PROTO = '\x80' # identify pickle protocol This should be changed to PROTO = 0x80 # identify pickle protocol etc. I see that you don't see ord(...) as a case where strings are being used to hold bytes data. I would disagree, in much the same way that I would disagree with the idea that bytes.encode('base64') only holds text. But then again, I also see that the majority of this rethink your data structures and dispatching would be unnecessary if there were an immutable bytes literal in Python 3.x. People could then use... MARK= b'(' STOP= b'.' ... PROTO = b'\x80' ... dispatch = {b'...': fcn} key = read(X) dispatch[X](self) #regardless of X ... etc., as they have already been doing (only
Re: [Python-Dev] methods on the bytes object
Guido van Rossum [EMAIL PROTECTED] wrote: This discussion seems to have gotten a bit out of hand. I believe it belongs on the python-3000 list. I accidentally jumped the gun on hitting 'send' on my most recent reply, I'll repost it in the Py3k list and expect further discussion to proceed there. - Josiah
Re: [Python-Dev] methods on the bytes object
Martin v. Löwis wrote: Ok. I think I would use base64, of possibly compressed content. It's more compact than your representation, as it only uses 1.3 characters per byte, instead of the up-to-four bytes that the img2py uses. only if you're shipping your code as PY files. in PYC format (ZIP, PY2EXE, etc), the img2py format is more efficient. /F
Re: [Python-Dev] more pyref: comparison precedence
Guido van Rossum wrote: They're all the same priority. yet another description that is obvious only if you already know what it says, in other words: Operators in the same box have the same precedence. /.../ Operators in the same box group left to right (except for comparisons, including tests, which all have the same precedence and chain from left to right /.../ I think I'll do something about this one too ;-) /F
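The chaining behavior the quoted passage describes can be seen directly (a quick illustration, not from the thread):

```python
x = 5

# A chained comparison is equivalent to the 'and' of its pairs,
# with each middle operand evaluated only once.
print(1 < x < 10)            # True
print((1 < x) and (x < 10))  # True

# Because all comparisons share one precedence level and chain left to
# right, this parses as (3 > 2) and (2 == 1), not 3 > (2 == 1):
print(3 > 2 == 1)            # False
```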
[Python-Dev] Assigning Group on SF tracker?
When opening patches on the SF tracker for bugs that affect Python 2.5, but may be candidates for backporting (to 2.4 ATM), should I leave Group as None, or set it to Python 2.5 to indicate it affects 2.5? If it's known to be a candidate for backporting, should I set it to Python 2.4 to indicate that? I'm guessing I should always select Python 2.5 if it affects 2.5, but I've been using None up till now, I think... John
Re: [Python-Dev] elimination of scope bleeding ofiteration variables
Greg Ewing wrote:

    for x in stuff:
        for x in otherstuff:
            dosomethingelse(x)

would be a SyntaxError because the inner loop is trying to use x while it's still in use by the outer loop. So would this also be a SyntaxError?

    for x in stuff:
        x = somethingelse

Tim Delaney
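For context, current CPython happily allows both forms: the iteration variable bleeds out of the loop and may be freely rebound. A quick illustration (not from the thread):

```python
# The loop variable survives the loop in the enclosing scope.
for x in range(3):
    pass
print(x)  # 2

# Reusing the same name in a nested loop is legal; the inner loop
# simply clobbers the outer loop's variable on every iteration.
for x in 'ab':
    for x in (10, 20):
        pass
print(x)  # 20
```

Making either pattern a SyntaxError, as proposed, would break code like this.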
Re: [Python-Dev] Assigning Group on SF tracker?
[John J Lee] When opening patches on the SF tracker for bugs that affect Python 2.5, but may be candidates for backporting (to 2.4 ATM), should I leave Group as None, or set it to Python 2.5 to indicate it affects 2.5? If it's known to be a candidate for backporting, should I set it to Python 2.4 to indicate that? I'm guessing I should always select Python 2.5 if it affects 2.5, but I've been using None up till now, I think... I think it's best to set it to the earliest still-maintained Python version to which it applies. So that would be 2.4 now. The body of the report should say that the problem still exists in 2.5 (assuming it does). Or ;-) you could set it to 2.5, and note in the body that it's also a bug in 2.4. The _helpful_ part is that it be clear the bug exists in both 2.4 and 2.5 (when you know that), so that the next helpful elf doesn't have to figure that out again.
Re: [Python-Dev] PEP 3102: Keyword-only arguments
Martin v. Löwis [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] You weren't asking for a reason, you were asking for an example: No wonder we weren't connecting very well. You somehow have it backwards. 'Why' means for what reason. But to continue with examples: my way to call your example (given the data in separate variables):

    make_person(name, age, phone, location)

your way:

    make_person(name=name, age=age, phone=phone, location=location)

my way (given the data in one sequence):

    make_person(*person_data)

your way:

    make_person(name=person_data[0], age=person_data[1],
                phone=person_data[2], location=person_data[3])

Because there should be preferably only one obvious way to call that function. It is a feature of Python that arguments can usually be matched to parameters either by position or name, as the *caller* chooses. But if you want to (ab)use (my opinion) the 'one obvious way' mantra to disable that, then 'my way' above is obviously the more obvious way to do so ;-). It is to me anyway. Try typing the 'your way' version of the second pair without making and having to correct typos. Readers of the code should not need to remember the order of parameters, And they need not; it is right there in front of them. As for writers, modern IDEs should try to list the parameter signature upon typing 'func('. instead, the meaning of the parameters should be obvious in the call. With good naming of variables in the calling code, I think it is. I don't *want* callers to pass these parameters positionally, to improve readability. The code I write is my code, not yours, and I consider your version to be less readable, as well as harder to type without mistake. Do you think Python should be changed to prohibit more than one, or maybe two, named positional parameters?
Terry Jan Reedy
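The feature under debate, PEP 3102's keyword-only arguments, can be sketched as follows (Python 3 syntax; the function body is a made-up illustration):

```python
# A bare '*' makes every parameter after it keyword-only (PEP 3102),
# so the author of the function can force callers to name the arguments.
def make_person(*, name, age, phone, location):
    return {'name': name, 'age': age, 'phone': phone, 'location': location}

# Callers must name each argument...
person = make_person(name='Ann', age=30, phone='555-0100', location='Oslo')
print(person['name'])  # Ann

# ...and positional calls are rejected at call time:
try:
    make_person('Ann', 30, '555-0100', 'Oslo')
except TypeError as exc:
    print('rejected:', type(exc).__name__)  # rejected: TypeError
```

This is exactly the caller/callee tension in the thread: the '*' takes the position-vs-name choice away from the caller and gives it to the function author.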
Re: [Python-Dev] signature object issues (to discuss while I am out of contact)
On 5/1/06, Aahz [EMAIL PROTECTED] wrote: On Mon, May 01, 2006, Brett Cannon wrote: But there are two things that I can't quite decide upon. One is whether a signature object should be automatically created for every function. As of right now the PEP I am drafting has it on a per-need basis and have it assigned to __signature__ through a built-in function or putting it in 'inspect'. Now automatically creating the object would possibly make it more useful, but it could also be considered overkill. Also not doing it automatically allows signature objects to possibly make more sense for classes (to represent __init__) and instances (to represent __call__). But having that same support automatically feels off for some reason to me. My take is that we should do it automatically and provide a helper function that does additional work. The class case is already complicated by __new__(); we probably don't want to automatically sort out __init__() vs __new__(), but I think we do want regular functions and methods to automatically have a __signature__ attribute. Aside from the issue with classes, are there any other drawbacks to automatically creating __signature__? Well, one issue is the dichotomy between Python and C functions not both providing a signature object. There is no good way to provide a signature object automatically for C functions (at least at the moment; we could add the signature string for PyArg_ParseTuple() to the PyMethodDef and have it passed in to the wrapped C function so that initialization of the class can get to the parameters string). So you can't fully rely on the object being available for all functions and methods unless a worthless signature object is placed for C functions. -Brett
Re: [Python-Dev] signature object issues (to discuss while I am out of contact)
On Mon, May 01, 2006, Brett Cannon wrote: On 5/1/06, Aahz [EMAIL PROTECTED] wrote: On Mon, May 01, 2006, Brett Cannon wrote: But there are two things that I can't quite decide upon. One is whether a signature object should be automatically created for every function. As of right now the PEP I am drafting has it on a per-need basis and have it assigned to __signature__ through a built-in function or putting it in 'inspect'. Now automatically creating the object would possibly make it more useful, but it could also be considered overkill. Also not doing it automatically allows signature objects to possibly make more sense for classes (to represent __init__) and instances (to represent __call__). But having that same support automatically feels off for some reason to me. My take is that we should do it automatically and provide a helper function that does additional work. The class case is already complicated by __new__(); we probably don't want to automatically sort out __init__() vs __new__(), but I think we do want regular functions and methods to automatically have a __signature__ attribute. Aside from the issue with classes, are there any other drawbacks to automatically creating __signature__? Well, one issue is the dichotomy between Python and C functions not both providing a signature object. There is no good way to provide a signature object automatically for C functions (at least at the moment; we could add the signature string for PyArg_ParseTuple() to the PyMethodDef and have it passed in to the wrapped C function so that initialization of the class can get to the parameters string). So you can't fully rely on the object being available for all functions and methods unless a worthless signature object is placed for C functions. From my POV, that suggests changing the C API rather than not having automatic signatures. That probably requires Py3K, though.
-- Aahz ([EMAIL PROTECTED]) * http://www.pythoncraft.com/ Argue for your limitations, and sure enough they're yours. --Richard Bach
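A minimal sketch of the per-need approach Brett describes: a helper that builds a signature object on demand and caches it on __signature__. For illustration it uses the modern inspect.signature(), which did not exist at the time of this thread; the helper name is made up.

```python
import inspect

def get_signature(func):
    """Return a signature object, caching it on func.__signature__."""
    sig = getattr(func, '__signature__', None)
    if sig is None:
        sig = inspect.signature(func)
        try:
            func.__signature__ = sig
        except (AttributeError, TypeError):
            # Some C functions reject attribute assignment -- the very
            # dichotomy between Python and C functions raised above.
            pass
    return sig

def make_person(name, age, phone, location):
    return (name, age, phone, location)

sig = get_signature(make_person)
print(list(sig.parameters))  # ['name', 'age', 'phone', 'location']
```

Note that even today some C builtins raise ValueError from inspect.signature(), so the "can't fully rely on it for C functions" concern still holds.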
Re: [Python-Dev] [Python-checkins] r45850 - in python/trunk: Doc/lib/libfuncs.tex Lib/test/test_subprocess.py Misc/NEWS Objects/fileobject.c Python/bltinmodule.c
Author: neal.norwitz Date: Tue May 2 06:43:14 2006 New Revision: 45850 Modified: python/trunk/Doc/lib/libfuncs.tex python/trunk/Lib/test/test_subprocess.py python/trunk/Misc/NEWS python/trunk/Objects/fileobject.c python/trunk/Python/bltinmodule.c Log: SF #1479181: split open() and file() from being aliases for each other. Umm ... why? I suppose I wouldn't care, except it left test_subprocess failing on all the Windows buildbots, and I don't feel like figuring out why. To a first approximation, test_universal_newlines_communicate() now takes the "# Interpreter without universal newline support" branch on Windows, but shouldn't.