Re: [Python-Dev] refcounting vs PyModule_AddObject
"Martin v. Löwis" <[EMAIL PROTECTED]> writes:
> Michael Hudson wrote:
>
>> if (ProfilerError == NULL)
>>     ProfilerError = PyErr_NewException("hotshot.ProfilerError",
>>                                        NULL, NULL);
>> if (ProfilerError != NULL) {
>>     Py_INCREF(ProfilerError);
>>     PyModule_AddObject(module, "ProfilerError", ProfilerError);
>> }
>>
>>
>> I think the Py_INCREF should just be removed, but I'm wondering if I'm
>> missing something...
>
> It may be me who is missing something, but...
Well, quite possibly not.
> One reference is added to the dictionary; this is the one the explicit
> INCREF creates. The other reference is held in the C variable
> ProfilerError; this is the one that creating the exception object
> creates.
>
> It is convention that C variables which are explicitly used also
> hold their own references, even if they are global, and even if there
> is no procedure to clear them. The reason is that they would become
> stale if the module object went away. As there is no way to protect
> against this case, they just keep a garbage reference.
This means two things, as I see it:

1) Py_Initialize()/Py_Finalize() loops are going to leak quite a lot.
   Maybe we don't care about this.

2) In the case of the init_hotshot code above and such a loop, the
   ProfilerError object from the first interpreter will be reused by
   the second, which doesn't seem like a good idea (won't it be
   inheriting from the wrong PyExc_Exception?).
Currently, running Demo/embed/loop 'import gc' crashes for a similar
kind of reason -- the gc.garbage object is shared between
interpreters, but the only reference to it is in the module's
__dict__ (well, if the module exists...).
I've been looking at this area partly to try and understand this bug:

    [ 1163563 ] Sub threads execute in restricted mode
but I'm not sure the whole idea of multiple interpreters isn't
inherently doomed :-/
Cheers,
mwh
--
Premature optimization is the root of all evil.
-- Donald E. Knuth, Structured Programming with goto Statements
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Thoughts on stdlib evolvement
Fredrik Lundh wrote:
[snip]
> in my experience, any external library that supports more than one
> Python version on more than one platform is likely to be more robust
> than code in the core. add the multilevel volunteer approach
> described by Steven (with the right infrastructure, things like that
> just appear), and you get more competent manpower contributing
> to the standard distribution than you can get in any other way.

In this context PEP 2 might be useful to look at again:

http://www.python.org/peps/pep-0002.html

It distinguishes between library integrators (into the Python standard
library) and library maintainers, and tries to ensure maintenance
happens on a continuing basis. A multi-level setup to develop the
Python standard library could take other forms, of course.

I sometimes feel the Python-dev community is more focused on the
development of the interpreter than of the library, and that this
priority tends to be reversed outside the Python-dev community. So, it
might be nice if the Python standard library integrators and
maintainers could be more separate from the Python core developers. A
python-library-dev, say.

Then again, this shouldn't result in large changes in the standard
library, as old things should continue to work for the foreseeable
future. So for larger reorganizations and refactorings, such
development should likely take place entirely outside the scope of the
core distribution, at least for the time being.

Finally, I haven't really seen much in the way of effort by developers
to actually do such a large-scale cleanup. Nobody seems to have stepped
up, taken the standard library, and made it undergo a radical
refactoring (and just released it separately). That this hasn't
happened seems to indicate the priority is not very high in people's
minds, so the problem might not be that pressing either.

:)

Regards,

Martijn
Re: [Python-Dev] PEP 343 - next steps
On Sun, 2005-06-12 at 00:52, Nick Coghlan wrote:
> The idea behind 'with' is that the block is executed while holding
> (i.e. 'with') the resource.
>
> I think the '-ing' form of the example templates is a holdover from
> the days when the PEP used the 'do' keyword - I find that past tense
> or noun forms tend to read better than present tense for custom built
> with templates named after the action that occurs on entry:
>
>     # Past tense
>     with locked(my_lock):
>     with opened(my_file, mode):
>     with redirected_stdout(my_stream):

Works for me. Thanks.

-Barry
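[Editorial note: templates like locked and opened above can be written with the contextlib.contextmanager decorator that shipped with PEP 343's implementation in Python 2.5. A minimal sketch of opened, with illustrative names, for readers following along:]

```python
from contextlib import contextmanager

@contextmanager
def opened(path, mode="r"):
    # Acquire the resource on entry to the with-block...
    f = open(path, mode)
    try:
        yield f      # ...the block runs "with" the open file...
    finally:
        f.close()    # ...and it is closed on exit, even on error.

# Usage, matching the past-tense naming discussed above:
with opened("demo.txt", "w") as f:
    f.write("hello")
```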
Re: [Python-Dev] iter alternate form and *args and **kwargs (Was: Wishlist: dowhile)
On 6/15/05, Benji York <[EMAIL PROTECTED]> wrote:
> Steven Bethard wrote:
> > I would prefer that the alternate iter() form was broken off into
> > another separate function, say, iterfunc(), that would let me write
> > Jp's solution something like:
> >
> > for chunk in iterfunc('', f1.read, CHUNK_SIZE):
> >     f2.write(chunk)
>
> How about 2.5's "partial":
>
> for chunk in iter(partial(f1.read, CHUNK_SIZE), ''):
>     f2.write(chunk)
Yeah, there are a number of workarounds. Using partial, def-ing a
function, or using a lambda will all work. My point was that, with
the right API, these workarounds wouldn't be necessary. Look at
unittest.TestCase.assertRaises to see another example of the kind of
API I think should be supported (when possible, of course).
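[Editorial note: both spellings in this thread work; here is a self-contained sketch of the iter(callable, sentinel) form with functools.partial. f1, f2 and CHUNK_SIZE are from the thread; io.StringIO is substituted for real files to keep the demo runnable:]

```python
from functools import partial
from io import StringIO

CHUNK_SIZE = 4
f1 = StringIO("abcdefghij")   # stand-in for a real input file
f2 = StringIO()               # stand-in for a real output file

# iter(callable, sentinel) calls the callable repeatedly until it
# returns the sentinel -- here, '' at end-of-file.
for chunk in iter(partial(f1.read, CHUNK_SIZE), ''):
    f2.write(chunk)

print(f2.getvalue())  # abcdefghij
```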
Steve
--
You can wordify anything if you just verb it.
--- Bucky Katt, Get Fuzzy
Re: [Python-Dev] iter alternate form and *args and **kwargs
Steven Bethard <[EMAIL PROTECTED]> writes:
> On 6/15/05, Benji York <[EMAIL PROTECTED]> wrote:
>> Steven Bethard wrote:
>> > I would prefer that the alternate iter() form was broken off into
>> > another separate function, say, iterfunc(), that would let me write
>> > Jp's solution something like:
>> >
>> > for chunk in iterfunc('', f1.read, CHUNK_SIZE):
>> >     f2.write(chunk)
>>
>> How about 2.5's "partial":
>>
>> for chunk in iter(partial(f1.read, CHUNK_SIZE), ''):
>>     f2.write(chunk)
>
> Yeah, there are a number of workarounds. Using partial, def-ing a
> function, or using a lambda will all work. My point was that, with
> the right API, these workarounds wouldn't be necessary.
Well, I dunno. I can see where you're coming from, but I think you
could make the argument that the form using partial is clearer to read
-- it's not absolutely clear that the CHUNK_SIZE argument is intended
to be passed to f1.read. Also, the partial approach works better when
there is more than one callable.
Cheers,
mwh
--
Like most people, I don't always agree with the BDFL (especially
when he wants to change things I've just written about in very
large books), ...
-- Mark Lutz, http://python.oreilly.com/news/python_0501.html
Re: [Python-Dev] iter alternate form and *args and **kwargs (Was: Wishlist: dowhile)
On Jun 16, 2005, at 10:50 AM, Steven Bethard wrote:
> On 6/15/05, Benji York <[EMAIL PROTECTED]> wrote:
>
>> Steven Bethard wrote:
>>
>>> I would prefer that the alternate iter() form was broken off into
>>> another separate function, say, iterfunc(), that would let me write
>>> Jp's solution something like:
>>>
>>> for chunk in iterfunc('', f1.read, CHUNK_SIZE):
>>>     f2.write(chunk)
>>>
>>
>> How about 2.5's "partial":
>>
>> for chunk in iter(partial(f1.read, CHUNK_SIZE), ''):
>>     f2.write(chunk)
>>
>
> Yeah, there are a number of workarounds. Using partial, def-ing a
> function, or using a lambda will all work. My point was that, with
> the right API, these workarounds wouldn't be necessary. Look at
> unittest.TestCase.assertRaises to see another example of the kind of
> API I think should be supported (when possible, of course).
I think it's really the other way around. Forcing every API that
takes a callable to also take a *args, **kwargs is a workaround for
not having partial.
James
Re: [Python-Dev] Multiple interpreters not compatible with current thread module
Jeremy Maxfield <[EMAIL PROTECTED]> writes:

> The current threadmodule.c does not seem to correctly support multiple
> (sub) interpreters.

This would seem to be an accurate statement. A short history:

The GILState functions were implemented. The way they work is that
when you call PyGILState_Ensure, (essentially) a thread local variable
is checked to see if a thread state is known for this thread. If one
is found, fine, it is used. If not, one is created (using the
interpreter state that was passed to _PyGILState_Init() by
Py_Initialize()) and stored in the thread local variable.

This had a (pretty obvious in hindsight) problem (bug #1010677): when
you create a thread via thread.start_new_thread a thread state is
created, but not stored in the thread local variable consulted by
PyGILState_Ensure. So if you call PyGILState_Ensure another thread
state is created for the thread (generally a no-no) and if you already
hold the GIL PyGILState_Ensure will attempt to acquire it again -- a
deadlock.

This was fixed by essentially using PyGILState_Ensure to create the
thread state.

This has a (pretty obvious in hindsight) problem (bug #1163563):
PyGILState_Ensure always uses the interpreter state created by
Py_Initialize, ignoring the interpreter state carefully supplied to
t_bootstrap. Hilarity ensues.

So, what's the fix? Well, the original problem was only the lack of
association between a C-level thread and a thread state. This can be
fixed by setting up this association in t_bootstrap (and I've posted a
patch that does this to the report of #1163563). This suffices for all
known problems, but it's a bit hackish. Another approach is to set up
this association in PyThreadState_New(), which is possibly a bit more
elegant, but carries a risk: is PyThreadState_New always called from
the new thread? ISTM that it is, but I'm not sure.

I'm not expecting anyone else to think hard about this on recent form,
so I'll think about it for a bit and then fix it in the way that seems
best after that. Feel free to surprise me.

Cheers,
mwh

--
I would hereby duly point you at the website for the current pedal
powered submarine world underwater speed record, except I've lost the
URL.
-- Callas, cam.misc
[Python-Dev] Please spread the word about OSCON early reg deadline
FYI. Check it out, the Python program is really good!

-- Forwarded message --
From: Gina Blaber <[EMAIL PROTECTED]>
Date: Jun 16, 2005 11:31 AM
Subject: [Oscon] please spread the word about OSCON early reg deadline
To: OSCON Committee Mailing List <[EMAIL PROTECTED]>
Cc: Gina Blaber <[EMAIL PROTECTED]>

Can all of you on the OSCON program committee please take a moment
today to either blog OSCON or mention it on some relevant lists?
Timing is important because the Early Registration period (with the
early reg discount) ends on Monday June 20 (end of day). It would be
great if you could mention the early reg deadline.

Here are a few relevant links to include, if you're so inclined:

http://conferences.oreillynet.com/os2005/index_new.csp (OSCON home page)
http://conferences.oreillynet.com/pub/w/38/speakers.html (OSCON speakers)
http://conferences.oreillynet.com/cs/os2005/create/ord_os05 (OSCON attendee registration page)

Thanks,
-Gina

---
Gina Blaber, Director of Conferences
O'Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
[EMAIL PROTECTED]
(707) 827-7185
http://conferences.oreilly.com

___
Oscon mailing list
[EMAIL PROTECTED]
http://labs.oreilly.com/mailman/listinfo/oscon

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] iter alternate form and *args and **kwargs
Steven Bethard wrote:
> I would prefer that the alternate iter() form was broken off into
> another separate function, say, iterfunc(), that would let me write
> Jp's solution something like:
>
> for chunk in iterfunc('', f1.read, CHUNK_SIZE):
>     f2.write(chunk)
Benji York wrote:
> for chunk in iter(partial(f1.read, CHUNK_SIZE), ''):
>     f2.write(chunk)
Steven Bethard wrote:
> Yeah, there are a number of workarounds. Using partial, def-ing a
> function, or using a lambda will all work. My point was that, with
> the right API, these workarounds wouldn't be necessary.
Michael Hudson wrote:
> Well, I dunno. I can see where you're coming from, but I think you
> could make the argument that the form using partial is clearer to read
> -- it's not absolutely clear that the CHUNK_SIZE argument is intended
> to be passed to f1.read.
True, but the same argument could be made for
unittest.TestCase.assertRaises, and I don't find that confusing at
all. YMMV, of course.
> Also, the partial approach works better when there is more than one
> callable.
Yeah, I thought of that. But how often are there multiple callables?
Just scanning through the builtin functions I see that filter, iter,
map, and reduce all take functions to be called, and none of them take
multiple functions. (Note that classmethod, property and staticmethod
are not good examples because they don't take callables to be
*called*, they take callables to be *wrapped*.) OTOH, filter, map and
reduce are all basically deprecated at this point thanks to list
comprehensions and generator expressions, so I guess YMMV.
Steve
--
You can wordify anything if you just verb it.
--- Bucky Katt, Get Fuzzy
[Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
PEP 288 is now withdrawn. The generator exceptions portion is subsumed
by PEP 343, and the generator attributes portion never garnered any
support.

The fate of generator attributes is interesting vis-à-vis PEP 342. The
motivation was always related to supporting advanced generator uses
such as emulating coroutines and writing generator-based data consumer
functions. At the time, Guido and everyone else found those use cases
to be less than persuasive. Also, people countered that that
functionality could easily be simulated with class-based iterators,
global variables, or passing a mutable argument to a generator.
Amazingly, none of those objections seem to be directed toward 342,
which somehow seems on the verge of acceptance even without use cases,
clear motivation, examples, or a draft implementation.

Looking back at the history of 288, generator attributes surfaced only
in later drafts. In the earlier drafts, the idea for passing arguments
to and from running generators used an argument to next() and a return
value for yield. If this sounds familiar, it is because it is not much
different from the new PEP 342 proposal. However, generator argument
passing via next() was shot down early on. The insurmountable concept
flaw was an off-by-one issue. The very first call to next() does not
correspond to a yield statement; instead, it corresponds to the first
lines of a generator (those run *before* the first yield). All of the
proposed use cases needed to have the data passed in earlier.

With the death of that idea, generator attributes were born as a way of
being able to pass in data before the first yield was encountered and
to receive data after the yield. This was workable and satisfied the
use cases. Coroutine simulations such as those in Dr. Mertz's articles
were easily expressed with generator attributes.

As a further benefit, using attributes was a natural approach because
that same technique has long been used with classes (so no new syntax
was needed and the learning curve was zero).

In contrast to PEP 288's low-impact approach, PEP 342 changes the
implementation of the for-loop, alters the semantics of "continue",
introduces new- and old-style iterators, and creates a new magic
method. Meanwhile, it hasn't promised any advantages over the dead
PEP 288 proposals.

IOW, I don't follow how 342 got this far, how 342 intends to overcome
the off-by-one issue, how it addresses all of the other objections
leveled at the now-dead PEP 288, and why no one appears concerned about
introducing yet another new-style/old-style issue that will live in
perpetuity.

Raymond

Sidenote: generator attributes also failed because generators lacked a
sufficiently elegant way to refer to running instances of themselves
(there is no self argument, so we would need an access function or have
a dynamic function attribute accessible only from within a running
generator).
[Python-Dev] Propose to reject PEP 265 -- Sorting Dictionaries by Value
May I suggest rejecting PEP 265.
As of Py2.4, its use case is easily solved with:
>>> sorted(d.iteritems(), key=itemgetter(1), reverse=True)
[('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]
Further, Py2.5 offers a parallel solution to the more likely use case
of wanting to access only the largest counts:
>>> nlargest(2, d.iteritems(), itemgetter(1))
[('b', 23), ('d', 17)]
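[Editorial note: a runnable rendering of both idioms, in Python 3 spelling where iteritems() has become items(); the sample dict is reconstructed to match the outputs shown above:]

```python
from operator import itemgetter
from heapq import nlargest

d = {'a': 2, 'b': 23, 'c': 5, 'd': 17, 'e': 1}

# Sort all items by value, largest first.
by_value = sorted(d.items(), key=itemgetter(1), reverse=True)
print(by_value)  # [('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]

# Take just the two largest without sorting everything.
top_two = nlargest(2, d.items(), key=itemgetter(1))
print(top_two)   # [('b', 23), ('d', 17)]
```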
Raymond
[Python-Dev] Propose to reject PEP 281 -- Loop Counter Iteration with range and xrange
The need for the indices() proposal was mostly met by PEP 279's
enumerate() builtin. Commenting on 279 before it was accepted for
Py2.3, PEP 281's author, Magnus Lie Hetland, wrote, "I'm quite happy to
have it make PEP 281 obsolete."

Raymond
[Python-Dev] PEP 304 "Controlling Generation of Bytecode Files" - patch updated
I updated the patch that supports PEP 304, "Controlling Generation of
Bytecode Files", to apply cleanly against current CVS. I've tested it
on Mac OS X (straight Unix build only). I'd appreciate it if some
Linux, Windows and Mac framework folks could apply the patch, rebuild,
then run the tests (there is a "testbcb" target in the Makefile that
should give Windows people an idea what to do). The patch is attached
to

http://python.org/sf/677103

Now that I think about it, there is probably a file in the Windows
build tree (the equivalent of pyconfig.h?) that still needs to be
updated.

The text of the PEP has not been updated in a while. I will try to
look at that in the next couple of days.

I'd appreciate some critical review by people with Windows filesystem
experience. There was a comment ages ago about problems with this
scheme due to Windows' multi-rooted directory tree that I can no longer
find (and failed to record in the PEP at the time). I'd like to see if
the problem can be resurrected and then addressed.

Thanks,
Skip
Re: [Python-Dev] Propose to reject PEP 265 -- Sorting Dictionaries by Value
Raymond Hettinger wrote:
> May I suggest rejecting PEP 265.
>
> As of Py2.4, its use case is easily solved with:
>
> >>> sorted(d.iteritems(), key=itemgetter(1), reverse=True)
> [('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]
+1.
I find that usually when I want something like this, I use:
    sorted(d, key=d.__getitem__, reverse=True)
because it doesn't require the operator module and most of the time I
just need the keys anyway.
py> sorted(d, key=d.__getitem__, reverse=True)
['b', 'd', 'c', 'a', 'e']
Steve
--
You can wordify anything if you just verb it.
--- Bucky Katt, Get Fuzzy
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
At 08:24 PM 6/16/2005 -0400, Raymond Hettinger wrote:
>Looking back at the history of 288, generator attributes surfaced only
>in later drafts. In the earlier drafts, the idea for passing arguments
>to and from running generators used an argument to next() and a return
>value for yield. If this sounds familiar, it is because it is not much
>different from the new PEP 342 proposal. However, generator argument
>passing via next() was shot down early-on. The insurmountable concept
>flaw was an off-by-one issue. The very first call to next() does not
>correspond to a yield statement; instead, it corresponds to the first
>lines of a generator (those run *before* the first yield). All of the
>proposed use cases needed to have the data passed in earlier.

Huh? I don't see why this is a problem. PEP 342 says:

"""When the *initial* call to __next__() receives an argument that is
not None, TypeError is raised; this is likely caused by some logic
error."""

>With the death of that idea, generator attributes were born as a way of
>being able to pass in data before the first yield was encountered and
>to receive data after the yield. This was workable and satisfied the
>use cases. Coroutine simulations such as those in Dr Mertz's articles
>were easily expressed with generator attributes. As a further benefit,
>using attributes was a natural approach because that same technique has
>long been used with classes (so no new syntax was needed and the
>learning curve was zero).

Ugh. Having actually emulated co-routines using generators, I have to
tell you that I don't find generator attributes natural for this at
all; returning a value or error (via PEP 343's throw()) from a yield
expression as in PEP 342 is just what I've been wanting.

>In contrast to PEP 288's low impact approach, PEP 342 changes the
>implementation of the for-loop, alters the semantics of "continue",
>introduces new and old-style iterators, and creates a new magic method.

I could definitely go for dropping __next__ and the next() builtin from
PEP 342, as they don't do anything extra. I also personally don't care
about the new continue feature, so I could do without for-loop
alteration too. I'd be perfectly happy passing arguments to next()
explicitly; I just want yield expressions.

>Meanwhile, it hasn't promised any advantages over the dead PEP 288
>proposals.

Reading the comments in PEP 288's revision history, it sounds like the
argument was to postpone implementation of next(arg) and yield
expressions to a later version of Python, after more community
experience with generators. We've had that experience now.

>IOW, I don't follow how 342 got this far, how 342 intends to overcome
>the off-by-one issue,

It explicitly addresses it already.

>how it addresses all of the other objections
>leveled at the now dead PEP 288

Arguments for waiting aren't the same thing as arguments for never
doing. I interpret the comments in 288's history as ranging from -0 to
+0 on the yield expr/next(arg) issue, and didn't see any -1's except on
the generator attribute concept.

>and why no one appears concerned about
>introducing yet another new-style/old-style issue that will live in
>perpetuity.

I believe it has been brought up before, and I also believe I pointed
out once or twice that __next__ wasn't needed. I think Guido even
mentioned something to that effect himself, but everybody was busy with
PEP 340-inspired ideas at the time. 342 was split off in part to avoid
losing the ideas that were in it.
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
[Phillip]
> I could definitely go for dropping __next__ and the next() builtin
> from PEP 342, as they don't do anything extra. I also personally
> don't care about the new continue feature, so I could do without
> for-loop alteration too. I'd be perfectly happy passing arguments to
> next() explicitly; I just want yield expressions.

That's progress! Please do what you can to get the non-essential
changes out of 342.

> >Meanwhile, it hasn't promised any advantages over the dead PEP 288
> >proposals.
>
> Reading the comments in PEP 288's revision history, it sounds like the
> argument was to postpone implementation of next(arg) and yield
> expressions to a later version of Python, after more community
> experience with generators. We've had that experience now.

288 was brought out of retirement a few months ago. Guido hated every
variation of argument passing and frequently quipped that data passing
was trivially accomplished through mutable arguments to a generator,
through class based iterators, or via a global variable. I believe all
of those comments were made recently and they all apply equally to 342.

Raymond
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
At 10:26 PM 6/16/2005 -0400, Raymond Hettinger wrote:
>288 was brought out of retirement a few months ago. Guido hated every
>variation of argument passing and frequently quipped that data passing
>was trivially accomplished through mutable arguments to a generator,
>through class based iterators, or via a global variable. I believe all
>of those comments were made recently and they all apply equally to 342.

Clearly, then, he's since learned the error of his ways. :)

More seriously, I would say that data passing is not the same thing as
coroutine suspension, and that PEP 340 probably gave Guido a much
better look at at least one use case for the latter.

In the meantime, I applaud your foresight in having invented
significant portions of PEP 343 years ahead of time. Now give Guido
back his time machine, please. :) If you hadn't borrowed it to write
the earlier PEP, he could have seen for himself that all this would
happen, and neatly avoided it by just approving PEP 288 to start
with. :)
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
On 6/16/05, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> [Phillip]
> > I could definitely go for dropping __next__ and the next() builtin
> > from PEP 342, as they don't do anything extra. I also personally
> > don't care about the new continue feature, so I could do without
> > for-loop alteration too. I'd be perfectly happy passing arguments
> > to next() explicitly; I just want yield expressions.
>
> That's progress! Please do what you can to get the non-essential
> changes out of 342.

Here's my current position: instead of g.__next__(arg) I'd like to use
g.next(arg). The next() builtin then isn't needed. I do like
"continue EXPR" but I have to admit I haven't even tried to come up
with examples -- it may be unnecessary.

As Phillip says, yield expressions and g.next(EXPR) are the core -- and
also incidentally look like they will cause the most implementation
nightmares. (If someone wants to start implementing these two now, go
right ahead!)

> > >Meanwhile, it hasn't promised any advantages over the dead PEP 288
> > >proposals.
> >
> > Reading the comments in PEP 288's revision history, it sounds like
> > the argument was to postpone implementation of next(arg) and yield
> > expressions to a later version of Python, after more community
> > experience with generators. We've had that experience now.
>
> 288 was brought out of retirement a few months ago. Guido hated
> every variation of argument passing and frequently quipped that data
> passing was trivially accomplished through mutable arguments to a
> generator, through class based iterators, or via a global variable.
> I believe all of those comments were made recently and they all apply
> equally to 342.

That was all before I (re-)discovered yield-expressions (in Ruby!), and
mostly in response to the most recent version of PEP 288, with its
problem of accessing the generator instance. I now strongly feel that
g.next(EXPR) and yield-expressions are the way to go.

Making g.next(EXPR) an error when this is the *initial* resumption of
the frame was also a (minor) breakthrough. Any data needed by the
generator at this point can be passed in as an argument to the
generator.

Someone should really come up with some realistic coroutine examples
written using PEP 342 (with or without "continue EXPR").

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
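[Editorial note with a runnable sketch: PEP 342 was ultimately implemented in Python 2.5 as g.send(value) rather than g.next(arg), and the initial-resumption rule described above survives there: the first resumption must not carry a value. The generator here is invented for illustration.]

```python
def running_total(start):
    # Data needed before the first yield is passed as a plain argument.
    total = start
    while True:
        # A yield expression: suspends here, then evaluates to
        # whatever the caller passes back in via send().
        value = yield total
        total += value

g = running_total(100)
print(next(g))     # 100: the first resumption runs up to the first yield
print(g.send(10))  # 110
print(g.send(5))   # 115

# Sending a value before the first yield is the off-by-one error
# discussed in this thread; it raises TypeError.
try:
    running_total(0).send(1)
except TypeError as e:
    print("refused:", e)
```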
Re: [Python-Dev] Propose to reject PEP 265 -- Sorting Dictionaries by Value
Agreed. I don't want to add sorting abilities (with all its infinite
variants) to every data structure -- or even one or two common data
structures. You want something sorted that's not already a list? Use
the sorted() method.
On 6/16/05, Steven Bethard <[EMAIL PROTECTED]> wrote:
> Raymond Hettinger wrote:
> > May I suggest rejecting PEP 265.
> >
> > As of Py2.4, its use case is easily solved with:
> >
> > >>> sorted(d.iteritems(), key=itemgetter(1), reverse=True)
> > [('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]
>
> +1.
>
> I find that usually when I want something like this, I use:
>     sorted(d, key=d.__getitem__, reverse=True)
> because it doesn't require the operator module and most of the time I
> just need the keys anyway.
>
> py> sorted(d, key=d.__getitem__, reverse=True)
> ['b', 'd', 'c', 'a', 'e']
>
> Steve
> --
> You can wordify anything if you just verb it.
> --- Bucky Katt, Get Fuzzy
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Propose to reject PEP 265 -- Sorting Dictionaries by Value
On 6/16/05, Guido van Rossum <[EMAIL PROTECTED]> wrote:
> Agreed. I don't want to add sorting abilities (with all its infinite
> variants) to every data structure -- or even one or two common data
> structures. You want something sorted that's not already a list? Use
> the sorted() method.

I meant the sorted() function, of course. Java on my mind. :-)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
At 08:03 PM 6/16/2005 -0700, Guido van Rossum wrote:
>Someone should really come up with some realistic coroutine examples
>written using PEP 342 (with or without "continue EXPR").
How's this?
def echo(sock):
    while True:
        try:
            data = yield nonblocking_read(sock)
            yield nonblocking_write(sock, data)
        except ConnectionLost:
            pass

def run_server(sock, handler):
    while True:
        connected_socket = yield nonblocking_accept(sock)
        schedule_coroutine(handler(connected_socket))

schedule_coroutine(
    run_server(
        setup_listening_socket("localhost", "echo"),
        echo
    )
)
Of course, I'm handwaving a lot here, but this is a much clearer example
than anything I tried to pull out of the coroutines I've written for actual
production use. That is, I originally started this email with a real
routine from a complex multiprocess application doing lots of IPC, and
quickly got bogged down in explaining all the details of things like
yielding to semaphores and whatnot. But I can give you that example too,
if you like.
Anyway, the handwaving above is only in explanation of details, not in
their implementability. It would be pretty straightforward to use
Twisted's callback facilities to trigger next() or throw() calls to resume
the coroutine in progress. In fact, schedule_coroutine is probably
implementable as something like this in Twisted:
def schedule_coroutine(geniter, *arg):
    def resume():
        value = geniter.next(*arg)
        if value is not None:
            schedule_coroutine(value)
    reactor.callLater(0, resume)
This assumes, of course, that you only yield between coroutines. A better
implementation would need to be more like the events.Task class in
peak.events, which can handle yielding to Twisted's "Deferreds" and various
other kinds of things that can provide callbacks. But this snippet is
enough to show that yield expressions let you write event-driven code
without going crazy writing callback functions.
And of course, you can do this without yield expressions today, with a
suitably magic function, but it doesn't read as well:
yield nonblocking_accept(sock); connected_socket = events.resume()
This is how I actually do this stuff today. 'events.resume()' is a magic
function that uses sys._getframe() to peek at the argument passed to the
equivalent of 'next()' on the Task that wraps the
generator. events.resume() can also raise an error if the equivalent of
'throw()' was called instead.
With yield expressions, the code in those Task methods would just do
next(arg) or throw(*sys.exc_info()) on the generator-iterator, and
'events.resume()' and its stack hackery could go away.
Re: [Python-Dev] Propose to reject PEP 281 -- Loop Counter Iteration with range and xrange
On 6/16/05, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> The need for the indices() proposal was mostly met by PEP 279's
> enumerate() builtin.
>
> Commenting on 279 before it was accepted for Py2.3, PEP 281's author,
> Magnus Lie Hetland, wrote, "I'm quite happy to have it make PEP 281
> obsolete."
Yes please. These examples are especially jarring:
>>> range(range(5), range(10), range(2))
[5, 7, 9]
(etc.)
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
At 12:07 AM 6/17/2005 -0400, Phillip J. Eby wrote:
> def schedule_coroutine(geniter, *arg):
>     def resume():
>         value = geniter.next(*arg)
>         if value is not None:
>             schedule_coroutine(value)
>     reactor.callLater(0, resume)

Oops. I just realized that this is missing a way to return a value back
to a calling coroutine, and that I also forgot to handle exceptions:

def schedule_coroutine(coroutine, stack=(), *args):
    def resume():
        try:
            if len(args) == 3:
                value = coroutine.throw(*args)
            else:
                value = coroutine.next(*args)
        except:
            if stack:
                # send the error back to the "calling" coroutine
                schedule_coroutine(stack[0], stack[1], *sys.exc_info())
                return
            else:
                # Nothing left in this pseudothread, let the
                # event loop handle it
                raise
        if isinstance(value, types.GeneratorType):
            # Yielded to a specific coroutine, push the current
            # one on the stack, and call the new one with no args
            schedule_coroutine(value, (coroutine, stack))
        elif stack:
            # Yielded a result, pop the stack and send the
            # value to the caller
            schedule_coroutine(stack[0], stack[1], value)
        # else: this pseudothread has ended
    reactor.callLater(0, resume)

There, that's better. Now, if a coroutine yields a coroutine, the
yielding coroutine is pushed on a stack. If a coroutine yields a
non-coroutine value, the stack is popped and the value returned to the
previously-suspended coroutine. If a coroutine raises an exception, the
stack is popped and the exception is thrown to the previously-suspended
coroutine.

This little routine basically replaces a whole bunch of code in
peak.events that manages a similar coroutine stack right now, but is
complicated by the absence of throw() and next(arg); the generators have
to be wrapped by objects that add equivalent functionality, and the
whole thing gets a lot more complicated as a result.

Note that we could add a version of the above to the standard library
without using Twisted. A simple loop class could have a deque of
"callbacks to invoke", and the reactor.callLater() could be replaced by
appending the 'resume' closure to the deque. A main loop function would
then just peel items off the deque and call them, looping until an
unhandled exception (such as SystemExit) occurs, or some other way of
indicating an exit occurs.
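The deque-based loop Phillip sketches in his last paragraph might look like this in today's Python, where PEP 342 was ultimately accepted so generators really do have send() and throw(); the Trampoline class and the exact value-passing details below are illustrative, not his implementation:

```python
from collections import deque
import types

class Trampoline:
    """A deque of 'callbacks to invoke'; run() peels them off in order."""
    def __init__(self):
        self.queue = deque()

    def schedule(self, coroutine, stack=(), value=None, exc=None):
        def resume():
            try:
                if exc is not None:
                    result = coroutine.throw(exc)
                else:
                    result = coroutine.send(value)
            except StopIteration as stop:
                if stack:
                    # Generator finished: pop the stack, hand back its value
                    self.schedule(stack[0], stack[1], stop.value)
                return
            except BaseException as e:
                if stack:
                    # Send the error back to the "calling" coroutine
                    self.schedule(stack[0], stack[1], exc=e)
                    return
                raise
            if isinstance(result, types.GeneratorType):
                # Yielded a sub-coroutine: push the caller, start the callee
                self.schedule(result, (coroutine, stack))
            elif stack:
                # Yielded a plain value: return it to the caller
                self.schedule(stack[0], stack[1], result)
            # else: this pseudothread has ended
        self.queue.append(resume)

    def run(self):
        while self.queue:
            self.queue.popleft()()

def double(x):
    yield x * 2           # a "result" yielded back to the caller

def main(out):
    v = yield double(21)  # call a sub-coroutine, receive its result
    out.append(v)

results = []
loop = Trampoline()
loop.schedule(main(results))
loop.run()
# results == [42]
```

The run() loop here simply drains the deque; a real version would also want a way to block on external events, which is exactly the part Twisted's reactor provides.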
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
[Phillip]
> > I also personally don't care about the new continue feature,
> > so I could do without for-loop alteration too.

[Guido]
> I do like "continue EXPR" but I have to admit I haven't even tried to
> come up with examples -- it may be unnecessary. As Phillip says, yield
> expressions and g.next(EXPR) are the core -- and also incidentally
> look like they will cause the most implementation nightmares.

Let me go on record as a strong -1 for "continue EXPR". The for-loop is
our most basic construct and is easily understood in its present form.
The same can be said for "continue" and "break" which have the added
advantage of a near zero learning curve for people migrating from other
languages.

Any urge to complicate these basic statements should be seriously
scrutinized and held to high standards of clarity, explainability,
obviousness, usefulness, and necessity. IMO, it fails most of those
tests.

I would not look forward to explaining "continue EXPR" in the tutorial
and think it would stand out as an anti-feature.

Raymond
[Python-Dev] Propose to reject PEP 276 -- Simple iterator for ints
The principal use case was largely met by enumerate(). From PEP 276's
rationale section:

"""
A common programming idiom is to take a collection of objects and apply
some operation to each item in the collection in some established
sequential order. Python provides the "for in" looping control
structure for handling this common idiom. Cases arise, however, where
it is necessary (or more convenient) to access each item in an "indexed"
collection by iterating through each index and accessing each item in
the collection using the corresponding index.
"""

Also, while some nice examples are provided, the proposed syntax allows
and encourages some horrid examples as well:

>>> for i in 3: print i
0
1
2

The backwards compatibility section lists another problematic
consequence; the following would stop being a syntax error and would
become valid:

x, = 1

The proposal adds iterability to all integers but silently does nothing
for negative values.

A minor additional concern is that floats are not given an equivalent
capability (for obvious reasons) but this breaks symmetry with
range/xrange which still accept float args.

Raymond
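For reference, the "indexed collection" idiom that PEP 276 targeted is what enumerate() already covers, shown here in today's Python:

```python
colors = ["red", "green", "blue"]

# Instead of iterating over a bare int (PEP 276) or over range(len(colors)),
# enumerate() pairs each index with its item directly:
indexed = [(i, color) for i, color in enumerate(colors)]
# [(0, 'red'), (1, 'green'), (2, 'blue')]
```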
Re: [Python-Dev] Propose to reject PEP 276 -- Simple iterator for ints
I've never liked that idea. Down with it!

On 6/16/05, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> The principal use case was largely met by enumerate(). From PEP 276's
> rationale section:
> [...]
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
[Python-Dev] Propose to reject PEP 313 -- Adding Roman Numeral Literals to Python
While the majority of Python users deem this to be a nice-to-have
feature, the community has been unable to reach a consensus on the
proper syntax after more than two years of intensive debate (the PEP was
introduced in early April 2003).

Most agree that there should be only-one-way-to-do-it; however, the
proponents are evenly split into two camps, with the modernists
preferring IX for nine and the classicists preferring VIIII, which was
the most likely spelling in ancient Rome.

The classicists not only rely on set-in-stone tradition, they point to
pragmatic issues such as avoidance of subtraction, ease of coding,
easier mental parsing (much less error prone), and ease of teaching to
beginners. They assert that the modernists have introduced unnecessary
algorithmic complexity just to save two keystrokes.

The modernists point to compatible Java implementations and current
grade school textbooks. They believe that users from other languages
will expect the IX form. Note however, not all the modernists agree on
whether MXM would be a well-formed spelling of 1990; most, but not all,
prefer MCMXC despite its likelihood of being mis-parsed on a first
reading.

There is also a small but vocal user group demanding that lowercase
forms be allowed. Their use cases fall into four categories: (i)
academia, (ii) the legal profession, (iii) research paper writing, and
(iv) powerpoint slideshows. Reportedly, this is also a common convention
among Perl programmers.

Links:
http://hrsbstaff.ednet.ns.ca/waymac/History%20A/A%20Term%201/1.%20Rome/Roman_Numerals.htm
http://www.sizes.com/numbers/roman_numerals.htm

Raymond
Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342
[Raymond]
> Let me go on record as a strong -1 for "continue EXPR". The for-loop is
> our most basic construct and is easily understood in its present form.
> The same can be said for "continue" and "break" which have the added
> advantage of a near zero learning curve for people migrating from other
> languages.
>
> Any urge to complicate these basic statements should be seriously
> scrutinized and held to high standards of clarity, explainability,
> obviousness, usefulness, and necessity. IMO, it fails most of those
> tests.
>
> I would not look forward to explaining "continue EXPR" in the tutorial
> and think it would stand out as an anti-feature.
You sometimes seem to compound a rational argument with too much rhetoric.
The correct argument against "continue EXPR" is that there are no use
cases yet; if there were a good use case, the explanation would follow
easily.
The original use case (though not presented in PEP 340) was to serve
as the equivalent to "return EXPR" in a Ruby block. In Ruby you have
something like this (I probably get the syntax wrong):
a.foreach() { |x| ...some code... }
This executes the block for each item in a, with x (a formal parameter
to the block) set to each consecutive item. In Python we would write
it like this of course:
for x in a:
...some code...
In Ruby, the block is an anonymous procedure (a thunk) and foreach() a
method that receives a margic (anonymous) parameter which is the
thunk. Inside foreach(), you write "yield EXPR" which calls the block
with x set to EXPR. When the block contains a return statement, the
return value is delivered to the foreach() method as the return value
of yield, which can be assigned like this:
VAR = yield EXPR
Note that Ruby's yield is just a magic call syntax that calls the thunk!
But this means that the thunks can be used for other purposes as well.
One common use is to have the block act as a Boolean function that
selects items from a list; this way you could write filter() with an
inline selection, for example (making this up):
a1 = a.filter() { |x| return x > 0 }
might set a1 to the list of a's elements that are > 0. (Not saying
that this is a built-in array method in Ruby, but I think you could
write one.)
This particular example doesn't translate well into Python because a
for-loop doesn't have a return value. Maybe that would be a future
possibility if yield-expressions become accepted (just kidding:-).
However, I can see other uses for looping over a sequence using a
generator and telling the generator something interesting about each
of the sequence's items, e.g. whether they are green, or should be
printed, or which dollar value they represent if any (to make up a
non-Boolean example).
Anyway, "continue EXPR" was born as I was thinking of a way to do this
kind of thing in Python, since I didn't want to give up return as a
way of breaking out of a loop (or several!) and returning from a
function.
But I'm the first to admit that the use case is still very much
hypothetical -- unlike that for g.next(EXPR) and VAR = yield.
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
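The two-way conversation Guido describes did land: PEP 342 was accepted with gen.send(EXPR) as the spelling of next(EXPR). The Ruby-style "block as selection function" example can be sketched in today's Python like this; selected() and run_filter() are made-up names for illustration:

```python
def selected(items):
    """Yield each item; receive a verdict back via send(); keep approved items."""
    kept = []
    for x in items:
        verdict = yield x        # VAR = yield EXPR: the driver answers via send()
        if verdict:
            kept.append(x)
    return kept                  # delivered to the driver via StopIteration.value

def run_filter(items, predicate):
    gen = selected(items)
    try:
        x = next(gen)                   # advance to the first yield
        while True:
            x = gen.send(predicate(x))  # answer for x, receive the next item
    except StopIteration as stop:
        return stop.value

result = run_filter([3, -1, 4, -5], lambda x: x > 0)
# result == [3, 4]
```

This tells the generator "something interesting about each item" (here, a Boolean verdict) exactly in the spirit of the hypothetical use case above, without needing "continue EXPR".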
[Python-Dev] Propose to reject PEP 336 -- Make None Callable
After nine months, no support has grown beyond the original poster. The
PEP did however generate some negative responses when brought up on
comp.lang.python (it made some people's stomachs churn).
The PEP fails the tests of obviousness and necessity. The PEP's switch
example is easily handled by replacing the callable None with a simple,
null lambda:
def __call__(self, input):
    return {1: self.a,
            2: self.b,
            3: self.c
            }.get(input, lambda *args: 0)(input)
The PEP does not address conflicts with other uses of None (such as
default arguments or indicating values that are not applicable).
Mysteriously, the PEP consumes only *args but not **kwargs.
It also fails with respect to clarity and explicitness. Defining a
short, explicit default function is a hands-down winner in these two
important measures of merit.
Raymond
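The dispatch-dict-with-an-explicit-default pattern Raymond recommends, shown standalone; the handler names here are made up for illustration:

```python
def handle_one(x):
    return "one:%d" % x

def handle_two(x):
    return "two:%d" % x

def dispatch(input):
    # An explicit default function replaces any need for a callable None:
    # unknown inputs fall through to the null lambda instead of raising.
    return {1: handle_one,
            2: handle_two}.get(input, lambda *args: 0)(input)
```

So dispatch(1) calls handle_one, while dispatch(99) quietly returns 0 from the default, with the fallback behavior spelled out at the call site.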
