Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Josiah Carlson

Martin v. Löwis [EMAIL PROTECTED] wrote:
 
 [EMAIL PROTECTED] schrieb:
  Asyncore *only* implements asynchronous IO -- any tasks performed in
  its context are the direct result of an IO operation, so it's hard to
  say it implements cooperative multitasking (and Josiah can correct me if
  I'm wrong, but I don't think it intends to).
 
 I'm trying to correct you: By your definition, asyncore implements
 cooperative multi-tasking. You didn't define 'task', but I understand
 it as 'separate activity'. With asyncore, you can, for example, run
 a web server and an IRC server in a single thread, as separate tasks,
 and asyncore deals with the scheduling between these tasks.
 In your terminology, it is based on continuations: the chunk you specify
 is the event handler.
 
 Indeed, asyncore's doc string starts with
 
 # There are only two ways to have a program on a single
 # processor do more than one thing at a time.
 
 and goes on suggesting that asyncore provides one of them.

Well, when asyncore was written, Python probably didn't have coroutines,
generator trampolines, etc.: what we would consider today, in this
particular context, cooperative multithreading.

What that documentation /should/ have said is...

# There are at least two ways to have a program on a single
# processor do more than one thing at a time.

Then it should go on to describe threads and a 'polling for events' approach
like asyncore, and leave the rest for someone else to add later.  I'll add
it as my first patch to asyncore.

 - Josiah



Re: [Python-Dev] New syntax for 'dynamic' attribute access

2007-02-15 Thread Oleg Broytmann
On Thu, Feb 15, 2007 at 12:48:48AM +, Steve Holden wrote:
 Given that they say a camel is a horse designed by a committee

   Metaphors can go that far but no farther. And, BTW, camels are very well
suited to their environments...
   I am not afraid of committees for large tasks. Well, it has to be a
small committee ruled by its cleverest member.

 require a 
 single individual with good language design skills and the ability to 
 veto features that were out of line with the design requirements. A lot 
 like a BDFL, really.

   Of course, but I don't know if the CP4E idea is still on his agenda, or
with what priority.

Oleg.
-- 
 Oleg Broytmann            http://phd.pp.ru/            [EMAIL PROTECTED]
   Programmers don't die, they just GOSUB without RETURN.


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Andrew Dalke
I was the one on the Stackless list who last September or so
proposed the idea of monkeypatching and I'm including that
idea in my presentation for PyCon.  See my early rough draft
at http://www.stackless.com/pipermail/stackless/2007-February/002212.html
which contains many details about using Stackless, though
none on the Stackless implementation. (A lot on how to tie things together.)

So people know, I am an applications programmer and not a
systems programmer.  Things like OS-specific event mechanisms
annoy and frustrate me.  If I could do away with hardware and
still write useful programs I would.

I have tried 3 times to learn Twisted.  The first time I found
and reported various problems and successes.  See emails at
  http://www.twistedmatrix.com/pipermail/twisted-python/2003-June/thread.html
The second time was to investigate a way to report upload
progress: http://twistedmatrix.com/trac/ticket/288
and the third was to compare Allegra and Twisted
  
http://www.dalkescientific.com/writings/diary/archive/2006/08/28/levels_of_abstraction.html

In all three cases I've found it hard to use Twisted because
the code didn't do what I expected it to do, and when something
went wrong I got results which were hard to interpret.  I
believe others have similar problems, and this is one reason Twisted
is considered to be a big, complicated, inseparable hairy mess.



I find the Stackless code also hard to understand.  Eg,
I don't know where the watchdog code is for the run()
command.  It uses several layers of macros and I haven't
been able to get it straight in my head.  However, so far
I've not run into the kind of strange errors in Stackless that I
have in Twisted.

I find the normal Python code relatively easy to understand.


Stackless only provides threadlets.  It does no I/O.
Richard Tew developed a stacklesssocket module which emulates
the API for the stdlib socket module.  I tweaked it a
bit and showed that by doing the monkeypatch

  import stacklesssocket
  import sys
  sys.modules["socket"] = stacklesssocket

then code like urllib.urlopen became Stackless compatible.
Eg, in my PyCon talk draft I show something like


import slib
# must monkeypatch before any other module imports socket
slib.use_monkeypatch()

import urllib2
import time
import hashlib

def fetch_and_reverse(host):
    t1 = time.time()
    s = urllib2.urlopen("http://" + host + "/").read()[::-1]
    dt = time.time() - t1
    digest = hashlib.md5(s).hexdigest()
    print "hash of %r/ = %s in %.2f s" % (host, digest, dt)

slib.main_tasklet(fetch_and_reverse)("www.python.org")
slib.main_tasklet(fetch_and_reverse)("docs.python.org")
slib.main_tasklet(fetch_and_reverse)("planet.python.org")
slib.run_all()

where the three fetches occur in parallel.

The choice of asyncore was made, I think, because 1) it
avoids needing an external dependency, 2) asyncore is
smaller and easier to understand than Twisted, and
3) it was for demo/proof-of-concept purposes.  While it is
tempting to improve that module, I know that Twisted
has already gone through all the platform-specific crap
and I don't want to go through it again myself.  I don't
want to write a reactor to deal with GTK, and one for
OS X, and one for ...


Another reason I think Twisted is considered tangled-up
Deep Magic, only for Wizards Of The Highest Order is because
it's infused with event-based processing.  I've done a lot
of SAX processing and I can say that few people think that
way or want to go through the process of learning how.

Compare, for example, the following

  f = urllib2.urlopen("http://example.com/")
  for i, line in enumerate(f):
      print ("%06d" % i), repr(line)

with the normal equivalent in Twisted or another
async-based system.

Yet by using the Stackless socket monkeypatch, this
same code works in an async framework.  And the underlying
libraries have a much larger developer base than Twisted.
Want NNTP?  import nntplib  Want POP3?  import poplib
Plenty of documentation about them too.
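
For contrast, the callback-style Twisted equivalent of the little urllib2
loop above might look roughly like this (written from memory against
twisted.web.client.getPage; treat the details as illustrative, not exact):

from twisted.internet import reactor
from twisted.web.client import getPage

def printLines(body):
    for i, line in enumerate(body.splitlines(True)):
        print ("%06d" % i), repr(line)
    reactor.stop()

def printError(failure):
    print failure
    reactor.stop()

getPage("http://example.com/").addCallbacks(printLines, printError)
reactor.run()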

On the Stackless mailing list I have proposed someone work
on a talk for EuroPython titled 'Stackless and Twisted'.
Andrew Francis has been looking into how to do that.

All the earlier quotes were lifted from glyph.  Here's another:
  When you boil it down, Twisted's event loop is just a
  notification for 'a connection was made', 'some data was
  received on a connection', 'a connection was closed', and
  a few APIs to listen or initiate different kinds of
  connections, start timed calls, and communicate with threads.
  All of the platform details of how data is delivered to the
  connections are abstracted away.  How do you propose we
  would make a less specific event mechanism?

What would I need to do to extract this Twisted core so
I could replace asyncore?  I know at minimum I need
twisted.internet and twisted.python (the latter for
logging) and twisted.persisted for styles.Ephemeral.

But I say this hesitantly recalling the frustrations
I had in dealing with a connection error in Twisted,
described in the aforementioned 

Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Bob Ippolito
On 2/14/07, Greg Ewing [EMAIL PROTECTED] wrote:
 Thomas Wouters wrote:

   *I* don't like the idea of something in the Python installation
   deciding which reactor to use.

 I wouldn't mind if some way were provided of changing
 the reactor if you want. I'd just like to see a long
 term goal of making it unnecessary as far as possible.

  In any case, your idea requires a lot of changes in external, non-Python
  code -- PyGTK simply exposes the GTK mainloop, which couldn't care less
  about Python's idea of a perfect event reactor model.

 On unix at least, I don't think it should be necessary
 to change gtk, only pygtk. If it can find out the file
 descriptor of the connection to the X server, it can
 plug that into the reactor, and then call
 gtk_main_iteration_do() whenever something comes in
 on it.

 A similar strategy ought to work for any X11-based
 toolkit that exposes a function to perform one
 iteration of its main loop.

 Mileage on other platforms may vary.

   The PerfectReactor can be added later, all current reactors
   aliased to it, and no one would have to change a single line
   of code.

 Sure.

 The other side to all this is the client side, i.e. the
 code that installs event callbacks. At the moment there's
 no clear direction to take, so everyone makes their own
 choice -- some use asyncore, some use Twisted, some use
 the gtk event loop, some roll their own, etc.

There is no single PerfectReactor. There are several use cases where
you need to wait on more than one event system, which guarantees at
least two OS threads (and two event loops). In general it's nice to
have a single Python event loop (the reactor) to act on said threads
(e.g. something just sitting on a mutex waiting for messages) but
waiting for IO to occur should *probably* happen on one or more
ancillary threads -- one per event system (e.g. select, GTK,
WaitForMultipleEvents, etc.)

-bob


Re: [Python-Dev] Trial balloon: microthreads library in stdlib

2007-02-15 Thread Andrew Dalke
On 2/14/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote
in response to [EMAIL PROTECTED]:
 As far as I can tell, you still haven't even clearly expressed what your
 needs are, let alone whether or not Twisted is suitable.  In the reply
you're citing, you said that this sounded like 'something low level that
twisted would be written on top of' - but the 'this' you were talking
about, based on your previous messages, sounded like monkeypatching the
 socket and asyncore modules to provide asynchronous file I/O based on the
 platform-specific IOCP API for Windows.

I don't know Richard's needs nor requirements.  I do know the mechanism
of the monkeypatch he's talking about.  I describe it a bit in my draft
for my Stackless talk at PyCon
  http://www.stackless.com/pipermail/stackless/2007-February/002212.html

It uses asyncore for the I/O and a scheduler which can figure out if
there are other running tasklets and how long is needed until the next
tasklet needs to wake up.

Yes, this is another reactor.  Yes, it's not widely cross-platform.
Yes, it doesn't work with gtk and other GUI frameworks.  Yes,
as written it doesn't handle threads.

But also yes, it's a different style of writing reactors because of
how Stackless changes the control flow.

I and others would like to see something like the stacklesssocket
implemented on top of Twisted. Andrew Francis is looking in to it
but I don't know to what degree of success he's had.  Err, looking
through the email archive, he has had some difficulties in doing
a Twisted/Stackless integration.  I don't think the Twisted people
understand Stackless well enough (nor, obviously, does he understand
Twisted) to see what he's trying to do.

 It is a large dependency and it is a secondary framework.

 Has it occurred to you that it is a large dependency not because we like
 making bloated and redundant code, but because it is doing something that is
 actually complex and involved?

Things like Twisted's support for NNTP, POP3, etc. aren't needed
with the monkeypatch approach because the standard Python
libraries will work, with Stackless and the underlying async library
collaborating under the covers.  So those parts of Twisted aren't
needed or relevant.

Twisted is, after all, many things.


 I thought that I provided several reasons before as well, but let me state
 them as clearly as I can here.  Twisted is a large and mature framework with
 several years of development and an active user community.  The pluggable
 event loop it exposes is solving a hard problem, and its implementation
 encodes a lot of knowledge about how such a thing might be done.  It's also
 tested on a lot of different platforms.

 Writing all this from scratch - even a small subset of it - is a lot of
 work, and is unlikely to result in something robust enough for use by all
 Python programs without quite a lot of effort.  It would be a lot easier to
 get the Twisted team involved in standardizing event-driven communications
 in Python.  Every other effort I'm aware of is either much smaller, entirely
 proprietary, or both.  Again, I would love to be corrected here, and shown
 some other large event-driven framework in Python that I was heretofore
 unaware of.

Sorry for the long quote; wasn't sure how to trim it.

I made this point elsewhere and above but feel it's important
enough to emphasize once more.

Stackless lets people write code which looks like blocking code
but which is not.  The blocking functions are forwarded to the
reactor, which does whatever it's supposed to do, and the results
are returned.
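
A minimal sketch of that forwarding, assuming Stackless and an entirely
made-up reactor interface (none of these names come from stacklesssocket
itself):

import stackless

class ForwardedSocket(object):
    def __init__(self, reactor):
        self.reactor = reactor
        self.channel = stackless.channel()

    def recv(self, nbytes):
        # Hand the real work to the reactor, then block only this tasklet;
        # other tasklets keep running while the IO is in flight.
        self.reactor.register_read(self, nbytes)
        return self.channel.receive()

    def _data_ready(self, data):
        # Called by the reactor once the underlying recv() has completed.
        self.channel.send(data)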

Because most Python networking code will work unchanged
(assuming changes to the underlying APIs to work with
Stackless, as we hack now through monkeypatches), a
microthreads solution almost automatically and trivially gains
a large working code base, documentation, and active developer
base.

There does not need to be some other large event-driven
framework in Python because all that's needed is the 13kLOC
of reactor code from twisted.internet and not 140+kLOC in
all of Twisted.

  Standardization is much easier to achieve when you have
 multiple interested parties with different interests to accommodate.  As
 Yitzhak Rabin used to say, you don't engage in API standardization with
 your friends, you engage in API standardization with your enemies - or...
 something like that.

I thought you made contracts even with -- or especially with -- your
friends regarding important matters so that both sides know what
they are getting into and so you don't become enemies in the future.

 You say that you weren't proposing an alternate implementation of an event
 loop core, so I may be reacting to something you didn't say at all.
  However, again, at least Tristan thought the same thing, so I'm not the
 only one.

For demonstration (and, in my case, for pedagogical reasons) we have
a core event loop.  I would rather pull out the parts I need from
Twisted.  I don't know how.  I don't need to know how right now.

 I think that *that* 

Re: [Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-15 Thread Steve Holden
Greg Ewing wrote:
 Steve Holden wrote:
 
 A further data point is that modern machines seem to give timing 
 variabilities due to CPU temperature variations even if you always eat 
 exactly the same thing.
 
 Oh, great. Now we're going to have to run our
 benchmarks in a temperature-controlled oven...
 
... with the fan running at constant speed :)
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Blog of Note:  http://holdenweb.blogspot.com
See you at PyCon? http://us.pycon.org/TX2007



Re: [Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-15 Thread Anthony Baxter
On Thursday 15 February 2007 21:48, Steve Holden wrote:
 Greg Ewing wrote:
  Steve Holden wrote:
  A further data point is that modern machines seem to give
  timing variabilities due to CPU temperature variations even if
  you always eat exactly the same thing.
 
  Oh, great. Now we're going to have to run our
  benchmarks in a temperature-controlled oven...

 ... with the fan running at constant speed :)

Unless the fans are perfectly balanced, small changes in gravity are 
going to affect the rate at which they spin. So I guess the 
position of the moon will affect it :-)


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Joachim König-Baltes
[EMAIL PROTECTED] wrote:

[...]
 microthreading:
   Exploiting language features to use cooperative multitasking in tasks
   that read like they are single-threaded.

 asynchronous IO:
   Performing IO to/from an application in such a way that the
   application does not wait for any IO operations to complete, but
   rather polls for or is notified of the readiness of any IO operations.

   
[...]
 Asyncore *only* implements asynchronous IO -- any tasks performed in
 its context are the direct result of an IO operation, so it's hard to
 say it implements cooperative multitasking (and Josiah can correct me if
 I'm wrong, but I don't think it intends to).

 Much of the discussion here has been about creating a single, unified
 asynchronous IO mechanism that would support *any* kind of cooperative
 multitasking library.  I have opinions on this ($0.02 each, bulk
 discounts available), but I'll keep them to myself for now.
   
Talking only about async I/O in order to write cooperative tasks that
smell single-threaded is too restricted, IMO.

If there are a number of cooperative tasks that read single-threaded
(or sequential), then the goal is to avoid a _blocking operation_ in any
of them, because the other tasks could do useful things in the meantime.

But there are a number of different blocking operations, not only async 
IO (which is easily
handled by select()) but also:

- waiting for a child process to exit
- waiting for a posix thread to join()
- waiting for a signal/timer
- ...

Kevent (kernel event) on BSD, for example, tries to provide a common
infrastructure: a file descriptor onto which one can push conditions
and then select() until one of the conditions is met.  Unfortunately,
thread joining is not covered by it, so one cannot wait (without some
form of busy looping) until one of the conditions is true if thread
joining is one of them; but for all the other cases it would be possible.

There are many other similar approaches (libevent, notify, to name a few).

So in order to avoid blocking in a task, I'd prefer that the task:

- declaratively specifies what kind of conditions (events) it wants to 
wait for. (API)

If that declaration is a function call, then this function could
implicitly yield if the underlying implementation is Stackless or
greenlet based.

Kevent on BSD systems already has a usable API for defining the 
conditions by structures and there is
also a python module for it.

The important point IMO is to have an agreed API for declaring the 
conditions a task wants to wait for.
The underlying implementation in a scheduler would be free to use 
whatever event library it wants to
use.

E.g. a wait(events=[], timeout=-1) call would be sufficient for most
cases, where an event would specify

- resource type (file, process, timer, signal, ...)
- resource id (fd, process id, timer id, signal number, ...)
- filter/flags (when to fire, e.g. writable, readable exception for fd, ...)
- ...

the result could be a list of events that have fired, more or less 
similar to the events in the argument list,
but with added information on the exact condition.

The task would return from wait(events) when at least one of the
conditions is met. The task then knows, e.g., that an fd is readable and
can then do the read() on its own, in the way it likes to do it, without
being forced to let some uber framework do the low-level IO. Just the
waiting for conditions, without blocking the application, is what matters.
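
To make that concrete, a sketch of what such an API could look like (the
names Event, wait() and the constants below are purely illustrative, not
an existing module):

READ, WRITE, PROC_EXIT, TIMER, SIGNAL = range(5)

class Event(object):
    def __init__(self, kind, ident, flags=0):
        self.kind = kind      # resource type: READ, PROC_EXIT, TIMER, ...
        self.ident = ident    # resource id: fd, pid, timer id, signal number
        self.flags = flags    # when to fire: readable, writable, exception, ...

def wait(events, timeout=-1):
    """Yield to the scheduler until at least one event fires; return the
    fired events, annotated with details about the exact condition.  The
    scheduler is free to implement this with kqueue, epoll, select,
    libevent or anything else."""
    raise NotImplementedError   # lives in the scheduler, not in the task

# Inside a task:
#   fired = wait([Event(READ, sock.fileno())], timeout=5.0)
#   if fired:
#       data = sock.recv(4096)   # guaranteed not to block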

I have implemented something like the above, based on greenlets.

In addition to the event types specified by BSD kevent(2) I've added a 
TASK and CHANNEL resource type
for the events, so that I can wait for tasks to complete or send/receive 
messages to/from other tasks without
blocking the application.

But the implementation is not the important thing, the API is, and then 
we can start writing competing implementations.

Joachim


Re: [Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-15 Thread Barry Warsaw

On Feb 15, 2007, at 6:27 AM, Anthony Baxter wrote:

 On Thursday 15 February 2007 21:48, Steve Holden wrote:
 Greg Ewing wrote:
 Steve Holden wrote:
 A further data point is that modern machines seem to give
 timing variabilities due to CPU temperature variations even if
 you always eat exactly the same thing.

 Oh, great. Now we're going to have to run our
 benchmarks in a temperature-controlled oven...

 ... with the fan running at constant speed :)

 Unless the fans are perfectly balanced, small changes in gravity are
 going to affect the rate at which they spin. So I guess the
 position of the moon will affect it :-)

Except on Tuesdays.
-Barry



Re: [Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-15 Thread Steve Holden
Barry Warsaw wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On Feb 15, 2007, at 6:27 AM, Anthony Baxter wrote:
 
 On Thursday 15 February 2007 21:48, Steve Holden wrote:
 Greg Ewing wrote:
 Steve Holden wrote:
 A further data point is that modern machines seem to give
 timing variabilities due to CPU temperature variations even if
 you always eat exactly the same thing.

 Oh, great. Now we're going to have to run our
 benchmarks in a temperature-controlled oven...

 ... with the fan running at constant speed :)

 Unless the fans are perfectly balanced, small changes in gravity are
 going to affect the rate at which they spin. So I guess the
 position of the moon will affect it :-)
 
 Except on Tuesdays.
[off-list, because this is getting silly ...]

Anthony's antipodean antecedents (alliteration, all right?) remind me we 
will also have to factor Coriolis effects in.

regards
  Steve
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Blog of Note:  http://holdenweb.blogspot.com
See you at PyCon? http://us.pycon.org/TX2007


Re: [Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-15 Thread Steve Holden
Steve Holden wrote:
 Barry Warsaw wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On Feb 15, 2007, at 6:27 AM, Anthony Baxter wrote:

 On Thursday 15 February 2007 21:48, Steve Holden wrote:
 Greg Ewing wrote:
 Steve Holden wrote:
 A further data point is that modern machines seem to give
 timing variabilities due to CPU temperature variations even if
 you always eat exactly the same thing.
 Oh, great. Now we're going to have to run our
 benchmarks in a temperature-controlled oven...
 ... with the fan running at constant speed :)
 Unless the fans are perfectly balanced, small changes in gravity are
 going to affect the rate at which they spin. So I guess the
 position of the moon will affect it :-)
 Except on Tuesdays.
 [off-list, because this is getting silly ...]
Apparently the off-list part was a desire that didn't become reality. 
Sorry, I'll shut up now.

regards
  Steve
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Blog of Note:  http://holdenweb.blogspot.com
See you at PyCon? http://us.pycon.org/TX2007



Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Adam Olsen
On 2/15/07, Joachim König-Baltes [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 E.g. a wait(events=[], timeout=-1) call would be sufficient
 for most cases, where an event would specify

I agree with everything except this.  A simple function call would
have O(n) cost, thus being unacceptable for servers with many open
connections.  Instead you need it to maintain a set of events and let
you add or remove from that set as needed.


 I have implemented something like the above, based on greenlets.

I assume greenlets would be an internal implementation detail, not
exposed to the interface?

-- 
Adam Olsen, aka Rhamphoryncus


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Joachim König-Baltes
Adam Olsen wrote:
 I agree with everything except this.  A simple function call would
 have O(n) cost, thus being unacceptable for servers with many open
 connections.  Instead you need it to maintain a set of events and let
 you add or remove from that set as needed.
We can learn from kevent here; it already has EV_ADD,
EV_DELETE, EV_ENABLE, EV_DISABLE and EV_ONESHOT
flags. So the event-conditions would stay in the scheduler (per task),
so that they can fire multiple times without needing to be handed
over again and again.
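
A tiny sketch of keeping the conditions registered in the scheduler (the
flag values and names below are made up; only the semantics follow kevent):

EV_ADD, EV_DELETE, EV_ONESHOT = 0x1, 0x2, 0x10

class EventRegistry(object):
    def __init__(self):
        self._events = {}                 # (kind, ident) -> flags

    def control(self, kind, ident, flags):
        key = (kind, ident)
        if flags & EV_DELETE:
            self._events.pop(key, None)
        elif flags & EV_ADD:
            self._events[key] = flags     # stays registered across fires

    def fired(self, kind, ident):
        # Called when the underlying mechanism reports the condition.
        key = (kind, ident)
        flags = self._events.get(key)
        if flags is not None and flags & EV_ONESHOT:
            del self._events[key]         # one-shot events remove themselves
        return flags is not None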

Thanks, that's exactly the discussion I'd like to see: a discussion
about a simple API.

 I have implemented something like the above, based on greenlets.

 I assume greenlets would be an internal implementation detail, not
 exposed to the interface?
Yes, you could use Stackless, perhaps even Twisted,
but I'm not sure that would work, because the requirement for the
'reads single-threaded' style is the simple wait(...) function call that
does a yield (over multiple stack levels, down to the function that
created the task), something that to my knowledge is only provided by
greenlet and Stackless.

Joachim



Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Adam Olsen
On 2/15/07, Joachim König-Baltes [EMAIL PROTECTED] wrote:
 Adam Olsen wrote:
  I have implemented something like the above, based on greenlets.
 
  I assume greenlets would be an internal implementation detail, not
  exposed to the interface?
 Yes, you could use Stackless, perhaps even Twisted,
 but I'm not sure that would work, because the requirement for the
 'reads single-threaded' style is the simple wait(...) function call that
 does a yield (over multiple stack levels, down to the function that
 created the task), something that to my knowledge is only provided by
 greenlet and Stackless.

I don't think we're on the same page then.  The way I see it you want
a single async IO implementation shared by everything while having a
collection of event loops that cooperate just enough.  The async IO
itself would likely end up being done in C.

-- 
Adam Olsen, aka Rhamphoryncus


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread A.M. Kuchling
On Thu, Feb 15, 2007 at 09:19:30AM -0500, Jean-Paul Calderone wrote:
 That feels like 6 layers too many, given that
  _logrun(selectable, _drdw, selectable, method, dict)
  return context.call({ILogContext: newCtx}, func, *args, **kw)
  return self.currentContext().callWithContext(ctx, func, *args, **kw)
  return func(*args, **kw)
  getattr(selectable, method())
  klass(number, string)
 
 are all generic calls.
 
 I know function calls are expensive in Python, and method calls even more
 so... but I still don't understand this issue.  Twisted's call stack is too
 deep?  It is fair to say it is deep, I guess, but I don't see how that is a
 problem.  If it is, I don't see how it is specific to this discussion.

It's hard to debug the resulting problem.  Which of the *12*
levels in the stack trace is responsible for a bug?  Which of the *6*
generic calls is calling the wrong thing because a handler was set up
incorrectly or the wrong object provided?  The code is so 'meta' that
it becomes effectively undebuggable.

--amk


Re: [Python-Dev] generic async io (was: microthreading vs. async io)

2007-02-15 Thread dustin
On Thu, Feb 15, 2007 at 04:28:17PM +0100, Joachim König-Baltes wrote:
 No, I'd like to have:
 
 - An interface for a task to specify the events it's interested in, and 
   waiting for at least one of the events (with a timeout).
 - an interface for creating a task (similar to creating a thread)
 - an interface for a scheduler to manage the tasks

I think this discussion would be facilitated by teasing the first
bullet-point from the latter two: the first deals with async IO, while
the latter two deal with cooperative multitasking.

It's easy to write a single package that does both, but it's much harder
to write *two* fairly generic packages with a clean API between them,
given the varied platform support for async IO and the varied syntax and
structures (continuations vs. microthreads, in my terminology) for
multitasking.  Yet I think that division is exactly what's needed.

Since you asked (I'll assume the check for $0.02 is in the mail), I
think a strictly-async-IO library would offer the following:

 - a sleep queue object to which callables can be added
 - wrappers for all/most of the stdlib blocking IO operations which
   add the operation to the list of outstanding operations and return
   a sleep queue object
   - some relatively easy method of extending that for new IO operations
 - a poll() function (for multitasking libraries) and a serve_forever()
   loop (for asyncore-like uses, where all the action is IO-driven)
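
Roughly, I imagine the surface looking something like this (a select()-based
sketch, reads only; every name below is made up):

import select

class SleepQueue(object):
    """Returned by a wrapped IO operation; callables added to it run once
    the operation completes."""
    def __init__(self):
        self._callbacks = []
        self.done, self.result = False, None
    def add(self, func):
        self._callbacks.append(func)
    def _fire(self, result):
        self.done, self.result = True, result
        for func in self._callbacks:
            func(result)

_pending = {}       # fd -> (sock, nbytes, SleepQueue)

def async_read(sock, nbytes):
    """Wrapper for a blocking recv(): register it and return a SleepQueue."""
    q = SleepQueue()
    _pending[sock.fileno()] = (sock, nbytes, q)
    return q

def poll(timeout=0):
    """One step of readiness checking, for multitasking libraries."""
    if not _pending:
        return
    ready, _, _ = select.select(list(_pending), [], [], timeout)
    for fd in ready:
        sock, nbytes, q = _pending.pop(fd)
        q._fire(sock.recv(nbytes))

def serve_forever():
    """For asyncore-like uses, where all the action is IO-driven."""
    while True:
        poll(timeout=0.1)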

The mechanisms for accomplishing all of that on the chosen platform
would be an implementation detail, possibly with some initialization
hinting from the application.  The library would also need to expose
its platform-based limitations (can't wait on thd.join(), can only wait
on 64 fd's, etc.) to the application for compatibility-checking
purposes.

Thoughts?

Dustin


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Joachim König-Baltes
Joachim König-Baltes wrote:
 The problem solved by this approach is to allow a number of cooperating
 threads to wait for an event without the need to busy loop or block by 
 delegating
 the waiting to a central instance, the scheduler. How this efficient 
 waiting is
implemented is the responsibility of the scheduler, but the scheduler would
not do the (possibly blocking) IO operation; it would only guarantee to
 continue a task, when it can do an IO operation without blocking.

   
From the point of view of the task, it only has to sprinkle a number of
wait(...) calls before doing blocking calls, so there is no need to use
callbacks or to write the inherently sequential code upside down. That is
the gain I'm interested in.

The style used in asyncore (inheriting from a class, returning from a
method, and being called back later at a different location, i.e. a
different method) just interrupts the sequential flow of operations and
makes it harder to understand. The same is true for all other strategies
using callbacks or similar mechanisms.

All this can be achieved with a multilevel yield() that is hidden in a 
function call.
So the task does a small step down (wait) in order to jump up (yield) to 
the scheduler
without disturbing the eye of the beholder.

Joachim




Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Jean-Paul Calderone
On Thu, 15 Feb 2007 10:46:05 -0500, A.M. Kuchling [EMAIL PROTECTED] wrote:
On Thu, Feb 15, 2007 at 09:19:30AM -0500, Jean-Paul Calderone wrote:
 That feels like 6 layers too many, given that
  _logrun(selectable, _drdw, selectable, method, dict)
  return context.call({ILogContext: newCtx}, func, *args, **kw)
  return self.currentContext().callWithContext(ctx, func, *args, **kw)
  return func(*args, **kw)
  getattr(selectable, method())
  klass(number, string)
 
 are all generic calls.

 I know function calls are expensive in Python, and method calls even more
 so... but I still don't understand this issue.  Twisted's call stack is too
 deep?  It is fair to say it is deep, I guess, but I don't see how that is a
 problem.  If it is, I don't see how it is specific to this discussion.

It's hard to debug the resulting problem.  Which of the *12*
levels in the stack trace is responsible for a bug?  Which of the *6*
generic calls is calling the wrong thing because a handler was set up
incorrectly or the wrong object provided?  The code is so 'meta' that
it becomes effectively undebuggable.

I've debugged plenty of Twisted applications.  So it's not undebuggable. :)

Application code tends to reside at the bottom of the call stack, so Python's
traceback order puts it right where you're looking, which makes it easy to
find.  For any bug which causes something to be set up incorrectly and only
later manifests as a traceback, I would posit that whether there is 1 frame or
12, you aren't going to get anything useful out of the traceback.  Standard
practice here is just to make exception text informative, I think, but this is
another general problem with Python programs and event loops, not one specific
to either Twisted itself or the particular APIs Twisted exposes.

As a personal anecdote, I've never once had to chase a bug through any of the
6 generic calls singled out.  I can't think of a case where I've helped any
one else who had to do this, either.  That part of Twisted is very old, it is
_very_ close to bug-free, and application code doesn't have very much control
over it at all.  Perhaps in order to avoid scaring people, there should be a
way to elide frames from a traceback (I don't much like this myself, I worry
about it going wrong and chopping out too much information, but I have heard
other people ask for it)?
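
For what it's worth, eliding could be as simple as filtering the extracted
traceback entries by file path before formatting them; a rough sketch (the
helper name and the 'twisted' filter below are made up):

import sys, traceback

def print_elided(hide=('twisted',)):
    etype, value, tb = sys.exc_info()
    entries = traceback.extract_tb(tb)          # (filename, lineno, name, line)
    kept = [e for e in entries
            if not any(h in (e[0] or '') for h in hide)]
    sys.stderr.write(''.join(traceback.format_list(kept) +
                             traceback.format_exception_only(etype, value)))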

Jean-Paul


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Larry Hastings
Bob Ippolito wrote:
 There is no single PerfectReactor. There are several use cases where
 you need to wait on more than one event system, which guarantees at
 least two OS threads (and two event loops). In general it's nice to
 have a single Python event loop (the reactor) to act on said threads
 (e.g. something just sitting on a mutex waiting for messages) but
 waiting for IO to occur should *probably* happen on one or more
 ancillary threads -- one per event system (e.g. select, GTK,
 WaitForMultipleEvents, etc.)
Why couldn't PerfectReactor be a reactor for other reactors?  A sort of 
concentrator for these multiple event systems and multiple threads.

You ask to listen to sockets, so it instantiates a singleton 
PerfectReactor which instantiates a select() reactor and listens to it 
directly in a single-threaded manner.  If you then ask to listen to 
Win32 messages, the PerfectReactor instantiates a GetMessage() reactor.  
Then, realizing it has two event systems, it spawns a thread for each 
child reactor with a listener that serializes the incoming events into 
the PerfectReactor's queue.  Bingo, your application doesn't need to be 
written thread-safe, PerfectReactor is platform-agnostic, and you don't 
have to know in advance all the event types you might ever listen to.
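
In other words, a hand-wavy sketch (every name below is invented; Queue is
the Python 2 stdlib module):

import threading, Queue

class Concentrator(object):
    def __init__(self):
        self.queue = Queue.Queue()
        self.handlers = {}

    def add_reactor(self, name, run_child):
        # run_child(emit) loops in its own thread, calling emit(event) for
        # every event its event system produces.
        emit = lambda event, name=name: self.queue.put((name, event))
        t = threading.Thread(target=run_child, args=(emit,))
        t.setDaemon(True)
        t.start()

    def register(self, name, handler):
        self.handlers[name] = handler

    def run(self):
        # All application callbacks run here, single-threaded.
        while True:
            name, event = self.queue.get()
            self.handlers[name](event)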

Sorry if this is a dumb question (or if I'm mangling the terminology),


/larry/


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Phillip J. Eby
At 11:00 PM 2/14/2007 -0600, [EMAIL PROTECTED] wrote:
Instead, I would like to concentrate on producing a small, clean,
consistent, generator-based microthreading library.  I've seen several
such libraries (including the one in PEP 342, which is fairly skeletal),
and they all work *almost* the same way, but differ in, for example, the
kinds of values that can be yielded, their handling of nested calls, and
the names for the various special values one can yield.

Which is one reason that any standard coroutine library would need to be 
extensible with respect to the handling of yielded values, e.g. by using a 
generic function to implement the yielding.  See the 'yield_to' function in 
the example I posted.

Actually, the example I posted would work fine as a microthreads core by 
adding a thread-local variable that points to some kind of scheduler to 
replace the Twisted scheduling functions I used.  It would need to be a 
variable, because applications would have to be able to replace it, e.g. 
with the Twisted reactor.

In other words, the code I posted isn't really depending on Twisted for 
anything but reactor.callLater() and the corresponding .isActive() and 
.cancel() methods of the objects it returns.  If you added a default
implementation of those features that could be replaced with the Twisted
reactor, and dropped the code that deals with Deferreds and TimeoutErrors,
you'd have a nice standalone library.
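
A bare-bones sketch of what that default implementation might look like
(heap-based; the class and method names are invented, mimicking just the
callLater()/isActive()/cancel() surface mentioned above):

import heapq, time

class DelayedCall(object):
    def __init__(self, when, func, args, kwargs):
        self.when, self.func = when, func
        self.args, self.kwargs = args, kwargs
        self.cancelled = self.called = False
    def cancel(self):
        self.cancelled = True
    def isActive(self):
        return not (self.cancelled or self.called)

class Scheduler(object):
    def __init__(self):
        self._calls = []
    def callLater(self, delay, func, *args, **kwargs):
        call = DelayedCall(time.time() + delay, func, args, kwargs)
        heapq.heappush(self._calls, (call.when, call))
        return call
    def run_due(self):
        # Run every call whose time has come; cancelled calls are skipped.
        now = time.time()
        while self._calls and self._calls[0][0] <= now:
            _, call = heapq.heappop(self._calls)
            if not call.cancelled:
                call.called = True
                call.func(*call.args, **call.kwargs)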



Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Jean-Paul Calderone
On Thu, 15 Feb 2007 10:36:21 -0600, [EMAIL PROTECTED] wrote:
 [snip]

 def fetchSequence(...):
   fetcher = Fetcher()
   yield fetcher.fetchHomepage()
   firstData = yield fetcher.fetchPage('http://...')
   if someCondition(firstData):
     while True:
       secondData = yield fetcher.fetchPage('http://...')
       # ...
       if someOtherCondition(secondData): break
   else:
     # ...

Ahem:

from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twisted.web.client import getPage

@inlineCallbacks
def fetchSequence(...):
homepage = yield getPage(homepage)
firstData = yield getPage(anotherPage)
if someCondition(firstData):
        while True:
secondData = yield getPage(wherever)
if someOtherCondition(secondData):
break
else:
...

So as I pointed out in another message in this thread, for several years it
has been possible to do this with Twisted.  Since Python 2.5, you can do it
exactly as I have written above, which looks exactly the same as your example
code.

Is the only problem here that this style of development hasn't been made
visible enough?

Jean-Paul


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Phillip J. Eby
At 11:47 AM 2/15/2007 -0500, Jean-Paul Calderone wrote:
Is the only problem here that this style of development hasn't been made
visible enough?

You betcha.  I sure as heck wouldn't have bothered writing the module I 
did, if I'd known it already existed.  Or at least I would only have written
whatever parts that module doesn't do.  The Twisted used by the current version
of Chandler doesn't include this feature yet, though, AFAICT.

But this is excellent; it means people will be able to write plugins that 
do network I/O without needing to grok CPS.  They'll still need to be able 
to grok some basics (like not blocking the reactor), but this is good to 
know about.  Now I won't have to actually test that module I wrote.  ;-)

You guys should be trumpeting this - it's real news and in fact a motivator 
for people to upgrade to Python 2.5 and whatever version of Twisted 
supports this.  You just lifted a *major* barrier to using Twisted for 
client-oriented tasks, although I imagine there probably needs to be even 
more of it.



Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread Adam Olsen
On 2/15/07, Jean-Paul Calderone [EMAIL PROTECTED] wrote:
 Is the only problem here that this style of development hasn't been made
 visible enough?

Perhaps not the only problem, but definitely a big part of it.  I
looked for such a thing in twisted after python 2.5 came out and was
unable to find it.  If I had, I might not have bothered to update my own
microthreads to use python 2.5 (my proof-of-concept was based on real
threads).

-- 
Adam Olsen, aka Rhamphoryncus


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Larry Hastings
Martin v. Löwis wrote:
 Now, for these generalized event loops, it may not be possible anymore 
 to combine all event sources into a single blocking call.

Right, that's why my proposal assumed that each disparate event source 
would need its own thread.


 Ah, threads :-( It turns out that you need to invoke GetMessage in the
 context of the thread in which the window was created. In a different
 thread, you won't get any messages.

Oof!  I'm embarrassed to have forgotten that.  But that's not a fatal 
problem.  It means that on Windows the PerfectReactor must service the 
blocking GetMessage loop, and these other threads notify the 
PerfectReactor of new events by sending a message.  (Either that, or it 
could poll GetMessage and its incoming event queue without ever 
blocking.  But that is obviously suboptimal.)  I think I've done this 
sort of thing before, in fact.

Of course, in the absence of any windows, the Windows PerfectReactor 
could fall back to a mutex.  Abstract this inside PerfectReactor and its 
event sources wouldn't notice the difference.

 Integrating with threads might be a solution in some cases, and a 
 problem in others. You can't assume it is a universal solution.

Universal?  Yeah, I doubt it too.  But perhaps it would be good enough 
for nearly all cases.  In the cases where it wasn't, it could throw an 
I can't listen to that type of event right now exception, forcing you 
to fall back to preconfiguring your central reactor by hand.

Anyway, like many folks I'm hoping this whole conversation results in 
establishing basic standard duck-typed conventions--what other languages 
might call interfaces--for event senders and receivers.  In which case 
this non-universal threaded approach would simply be one of several to 
choose from.

I'd be interested to hear about other situations where threading would 
cause a problem.  My suspicion is that Windows is the hard one, and as 
I've shown that one is solvable.


Thanks for the thoughtful reply,


/larry/


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread dustin
On Thu, Feb 15, 2007 at 11:47:27AM -0500, Jean-Paul Calderone wrote:
 Is the only problem here that this style of development hasn't been made
 visible enough?

Yep -- I looked pretty hard about two years ago, and although I haven't
been looking for that specifically since, I haven't heard anything about
it.

The API docs don't provide a good way to find things like this, and the
Twisted example tutorial didn't mention it at my last check.

So if we have an in-the-field implementation of this style of
programming (call it what you will), is it worth *standardizing* that
style so that it's the same in Twisted, my library, and anyone else's
library that cares to follow the standard?  

Dustin


Re: [Python-Dev] generic async io (was: microthreading vs. async io)

2007-02-15 Thread Nick Maclaren
[EMAIL PROTECTED] wrote:

 I think this discussion would be facilitated by teasing the first
 bullet-point from the latter two: the first deals with async IO, while
 the latter two deal with cooperative multitasking.
 
 It's easy to write a single package that does both, but it's much harder
 to write *two* fairly generic packages with a clean API between them,
 given the varied platform support for async IO and the varied syntax and
 structures (continuations vs. microthreads, in my terminology) for
 multitasking.  Yet I think that division is exactly what's needed.

Hmm.  Now, please, people, don't take offence, but I don't know how
to phrase this tactfully :-(

The 'threading' approach to asynchronous I/O was found to be a BAD
IDEA back in the 1970s, was abandoned in favour of separating
asynchronous I/O from threading, and God alone knows why it was
reinvented - except that most of the people with prior experience
had died or retired :-(

Let's go back to the days when asynchronous I/O was the norm, and
I/O performance critical applications drove the devices directly.
In those days, yes, that approach did make sense.  But it rapidly
ceased to do so with the advent of 'semi-intelligent' devices and
the virtualisation of I/O by the operating system.  That was in
the mid-1970s.  Nowadays, ALL devices are semi-intelligent and no
system since Unix has allowed applications direct access to devices,
except for specialised HPC and graphics.

We used to get 90% of theoretical peak performance on mainframes
using asynchronous I/O from clean, portable applications, but it
was NOT done by treating the I/O as threads and controlling their
synchronisation by hand.  In fact, quite the converse!  It was done
by realising that asynchronous I/O and explicit threading are best
separated ENTIRELY.  There were two main models:

Streaming, as in most languages (Fortran, C, Python, but NOT in
POSIX).  The key properties here are that the transfer boundaries
have no significance, only heavyweight synchronisation primitives
(open, close etc.) provide any constraints on when data are actually
transferred and (for very high performance) buffers are unavailable
from when a transfer is started to when it is checked.  If copying
is acceptable, the last constraint can be dropped.

In the simple case, this allows the library/system to reblock and
perform transfers asynchronously.  In the more advanced case, the
application has to use multiple buffering (at least double), but
can get full performance without any form of threading.  IBM MVT
applications used to get up to 90% without hassle in parallel with
computation and using only a single thread (well, there was only a
single CPU, anyway).

The other model is transactions.  This has the property that there
is a global commit primitive, and the order of transfers is undefined
between commits.  Inter alia, it means that overlapping transfers
are undefined behaviour, whether in a single thread or in multiple
threads.  BSP uses this model.

The MPI-2 design team included a lot of ex-mainframe people and
specifies both models.  While it is designed for parallel applications,
the I/O per se is not controlled like threads.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  [EMAIL PROTECTED]
Tel.:  +44 1223 334761Fax:  +44 1223 334679


Re: [Python-Dev] generic async io (was: microthreading vs. async io)

2007-02-15 Thread dustin
On Thu, Feb 15, 2007 at 07:46:59PM +, Nick Maclaren wrote:
 [EMAIL PROTECTED] wrote:
 
  I think this discussion would be facilitated by teasing the first
  bullet-point from the latter two: the first deals with async IO, while
  the latter two deal with cooperative multitasking.
  
  It's easy to write a single package that does both, but it's much harder
  to write *two* fairly generic packages with a clean API between them,
  given the varied platform support for async IO and the varied syntax and
  structures (continuations vs. microthreads, in my terminology) for
  multitasking.  Yet I think that division is exactly what's needed.
 
 The 'threading' approach to asynchronous I/O was found to be a BAD
 IDEA back in the 1970s, was abandoned in favour of separating
 asynchronous I/O from threading, and God alone knows why it was
 reinvented - except that most of the people with prior experience
 had died or retired :-(
snip

Knowing the history of something like this is very helpful, but I'm not
sure what you mean by this first paragraph.  I think I'm most unclear
about the meaning of the 'threading' approach to asynchronous I/O.
Its opposite (separating asynchronous I/O from threading) doesn't
illuminate it much more.  Could you elaborate?

Dustin


[Python-Dev] Py2.6 ideas

2007-02-15 Thread Raymond Hettinger
* Teach vars() to work with classes defining __slots__.  Essentially, __slots__ 
are just an implementation detail designed to make instances a bit more compact.

* Make the docstring writable for staticmethods, classmethods, and properties. 
We did this for function objects and it worked-out well.

* Have staticmethods and classmethods automatically fill-in a docstring from
the wrapped function.  An editor's tooltips would benefit nicely.

* Add a pure python named_tuple class to the collections module.  I've been 
using the class for about a year and found that it greatly improves the 
usability of tuples as records (see the short usage sketch after this list). 
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/500261

* Give builtin types a __name__ attribute so that we have a uniform way of 
accessing type names.

* Enhance doctest with a method that computes updated doctest results.  If the 
only change I make to a matrix suite is to add commas to long numbers in the 
matrix repr(), then it would be nice to have an automated way to update the 
matrix output on all the other test cases in the module.

* add an optional position argument to heapq.heapreplace to allow an arbitrary 
element to be updated in-place and then have the heap condition restored.  I've 
now encountered three separate parties trying to figure-out how to do this.
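
A short usage sketch for the named_tuple item above, written against the
API that later shipped as collections.namedtuple in 2.6 (the cookbook
recipe's spelling may differ slightly):

from collections import namedtuple    # 2.6+; the recipe provides the same idea

Point = namedtuple('Point', 'x y')
p = Point(11, y=22)
print p.x + p.y        # fields accessible by name: 33
x, y = p               # still unpacks like a plain tuple
print p                # Point(x=11, y=22) -- a readable repr for records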




Re: [Python-Dev] generic async io (was: microthreading vs. async io)

2007-02-15 Thread Nick Maclaren
[EMAIL PROTECTED] wrote:

 Knowing the history of something like this is very helpful, but I'm not
 sure what you mean by this first paragraph.  I think I'm most unclear
 about the meaning of The 'threading' approach to asynchronous I/O?
 Its opposite (separating asynchronous I/O from threading) doesn't
 illuminate it much more.  Could you elaborate?

I'll try.  Sorry about being unclear - it is one of my failings.
Here is an example draft of some interfaces:

Threading
-

An I/O operation passes a buffer, length, file and action and receives a
token back.  This token can be queried for completion, waited on and so
on, and is cancelled by waiting on it and getting a status back.  I.e.
it is a thread-like object.  This is the POSIX-style operation, and is
what I say cannot be made to work effectively.

Streaming
-

An I/O operation either writes some data to a stream or reads some data
from it; such actions are sequenced within a thread, but not between
threads (even if the threads coordinate their I/O).  Data written goes
into limbo until it is read, and there is no way for a reader to find
the block boundaries it was written with or whether data HAS been
written.  A non-blocking read merely tests if data are ready for
reading, which is not the same.

There are no positioning operations, and only open, close and POSSIBLY a
heavyweight synchronise or rewind (both equivalent to close+open) force
written data to be transferred.  Think of Fortran sequential I/O without
BACKSPACE or C I/O without ungetc/ungetchar/fseek/fsetpos.

Transactions


An I/O operation either writes some data to a file or reads some data
from it.  There is no synchronisation of any form until a commit.  If
two transfers between a pair of commits overlap (including file length
changes), the behaviour is undefined.  All I/O includes its own
positioning, and no positioning is relative.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  [EMAIL PROTECTED]
Tel.:  +44 1223 334761Fax:  +44 1223 334679


Re: [Python-Dev] generic async io

2007-02-15 Thread Joachim Koenig-Baltes
[EMAIL PROTECTED] wrote:
 I think this discussion would be facilitated by teasing the first
 bullet-point from the latter two: the first deals with async IO, while
 the latter two deal with cooperative multitasking.
 
 It's easy to write a single package that does both, but it's much harder
 to write *two* fairly generic packages with a clean API between them,
 given the varied platform support for async IO and the varied syntax and
 structures (continuations vs. microthreads, in my terminology) for
 multitasking.  Yet I think that division is exactly what's needed.
 
 Since you asked (I'll assume the check for $0.02 is in the mail), I
 think a strictly-async-IO library would offer the following:
 
  - a sleep queue object to which callables can be added
  - wrappers for all/most of the stdlib blocking IO operations which
add the operation to the list of outstanding operations and return
a sleep queue object
- some relatively easy method of extending that for new IO operations
  - a poll() function (for multitasking libraries) and a serve_forever()
loop (for asyncore-like uses, where all the action is IO-driven)

A centralized approach of wrapping all blocking IO operations in the stdlib
could only work in pure Python applications. What about extensions that
integrate, e.g., gtk2, gstreamer and other useful libraries that come
with their own low-level IO? Python is not the right place to solve this
problem, and plenty of C libraries have already tried, e.g. GNU Pth
tries to implement pthreads on a single-threaded OS.

But none of these approaches is perfect. E.g. if you want to read 5
bytes from an fd, you can use FIONREAD on a socket to get the number
of bytes available from the OS, so you can be sure not to block; but
FIONREAD on a normal file fd (e.g. on an NFS mount) will not tell you
how many bytes the OS has prefetched, so you might block even if you
are reading only 1 byte.
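
For the socket half of that, the check is a one-liner around the FIONREAD
ioctl; a rough, POSIX-only sketch (the helper name is made up):

    # Ask the kernel how many bytes a socket can deliver without blocking.
    import fcntl
    import struct
    import termios   # FIONREAD is exposed here on most Unixes

    def bytes_available(sock):
        packed = struct.pack('I', 0)
        packed = fcntl.ioctl(sock.fileno(), termios.FIONREAD, packed)
        return struct.unpack('I', packed)[0]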

I think it's best to decide how to do the low-level IO for each case
in the task itself. It knows what it's doing and how to avoid blocking.

Therefore I propose to decouple the waiting for a condition/event
from the actual blocking operation. And to avoid the blocking there
is no need to reinvent the wheel: the socket module already provides
ways to avoid it for network IO, and a lot of C libraries exist to do
it in a portable way, though none of them is perfect.

And based on these events it's much easier to design a scheduler
than to write one that also has to do the non-blocking IO operations
itself in order to give the tasks the illusion of a blocking operation.
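
A toy sketch of that split (everything here is invented for illustration):
tasks are generators that yield the fd they want to read from, the scheduler
only waits, and each task does its own non-blocking read when it is resumed:

    import select

    def run(tasks):
        waiting = {}                  # fd -> suspended generator task
        runnable = list(tasks)
        while runnable or waiting:
            for task in runnable:
                try:
                    fd = next(task)   # next() is 2.6+; use task.next() before
                except StopIteration:
                    continue
                waiting[fd] = task
            runnable = []
            if waiting:
                ready, _, _ = select.select(list(waiting), [], [])
                runnable = [waiting.pop(fd) for fd in ready]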

The BSD kevent is the most powerful event-waiting mechanism with kernel
support (it unifies waiting on different events on different
resources: fds, processes, timers, signals), but its API can be emulated
to a certain degree with other event mechanisms such as inotify on Linux
or Niels Provos' libevent.

The real showstopper for making local event waiting easy is the lack
of coroutines, or at least of a form of non-local goto like
setjmp/longjmp in C (that's what greenlets provide). Remember that
yield only suspends the current function, so every function on the
stack must be prepared to handle the yield, even if it is not
interested in it (hiding this fact with decorators does not make it
better, IMO).

Joachim

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Baptiste Carvello
 Ah, threads :-( It turns out that you need to invoke GetMessage in the
 context of the thread in which the window was created. In a different
 thread, you won't get any messages.
 
 I'd be interested to hear about other situations where threading would 
 cause a problem.  My suspicion is that Windows is the hard one, and as 
 I've shown that one is solvable.
 
 
I've tried something similar on Linux, with gtk and wx.

You can run the gtk main loop in its own thread, but because gtk is not thread
safe, you have to grab a mutex every time you run gtk code outside the thread
the main loop is running in. So you have to surround your calls to the gtk API
with calls to gtk.threads_enter and gtk.threads_leave. Except for callbacks, of
course, because they are executed in the main thread... Doable, but not fun.
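
For reference, the pattern looks roughly like this with the pygtk API (the
window and label are only there to have something to poke at):

    import threading
    import gtk

    gtk.gdk.threads_init()

    window = gtk.Window()
    label = gtk.Label('waiting...')
    window.add(label)
    window.show_all()

    def run_gtk():
        # The thread running the main loop holds the gdk lock while it runs.
        gtk.gdk.threads_enter()
        gtk.main()
        gtk.gdk.threads_leave()

    threading.Thread(target=run_gtk).start()

    # From any thread other than the one running gtk.main():
    gtk.gdk.threads_enter()
    try:
        label.set_text('updated from another thread')
    finally:
        gtk.gdk.threads_leave()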

The same goes for wx. Then all hell breaks loose when you try to use both gtk
and wx at the same time. That's because on Linux, the wx main loop calls the
gtk main loop behind the scenes. As far as I know, that problem cannot be
solved from Python.

So yes, that strategy can work, but it's no silver bullet.

Cheers,
Baptiste

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Bob Ippolito
On 2/15/07, Baptiste Carvello [EMAIL PROTECTED] wrote:
  Ah, threads :-( It turns out that you need to invoke GetMessage in the
  context of the thread in which the window was created. In a different
  thread, you won't get any messages.
 
  I'd be interested to hear about other situations where threading would
  cause a problem.  My suspicion is that Windows is the hard one, and as
  I've shown that one is solvable.
 
 
 I've tried something similar on Linux, with gtk an wx.

 You can run the gtk main loop in its own thread, but because gtk is not thread
 safe, you have to grab a mutex everytime you run gtk code outside the thread 
 the
 mainloop is running in. So you have to surround your calls to the gtk api with
 calls to gtk.threads_enter and gtk.threads_leave. Except for callbacks of
 course, because they are executed in the main thread... Doable, but not fun.

 The same goes for wx. Then all hell breaks loose when you try to use both gtk
 and wx at the same time. That's because on Linux, the wx main loop calls the 
 gtk
 mainloop behind the scenes. As far as I know, that problem can not be solved
 from python.

 So yes that strategy can work, but it's no silver bullet.

And it's worse on Windows and Mac OS X where some GUI API calls *must*
happen on a particular thread or they simply don't work.

-bob
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Josiah Carlson

Jean-Paul Calderone [EMAIL PROTECTED] wrote:
 On Thu, 15 Feb 2007 02:36:22 -0700, Andrew Dalke [EMAIL PROTECTED] wrote:
[snip]
 2) asyncore is
 smaller and easier to understand than Twisted,
 
 While I hear this a lot, applications written with Twisted _are_ shorter and
 contain less irrelevant noise in the form of boilerplate than the equivalent
 asyncore programs.  This may not mean that Twisted programs are easier to
 understand, but it is at least an objectively measurable metric.

In my experience, the boilerplate is generally incoming and outgoing
buffers.  If both had better (optional default) implementations, and
perhaps a way of saying use the default implementations of handle_close,
etc., then much of the boilerplate would vanish.  People would likely
implement a found_terminator method and be happy.
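
asynchat already points in that direction: with its default buffering, a
line-oriented handler really is little more than found_terminator.  A rough
sketch (a trivial line echoer):

    import asynchat

    class LineEcho(asynchat.async_chat):
        def __init__(self, sock):
            asynchat.async_chat.__init__(self, sock)
            self.set_terminator('\r\n')       # found_terminator at each CRLF
            self.ibuffer = []

        def collect_incoming_data(self, data):
            self.ibuffer.append(data)         # incoming buffering

        def found_terminator(self):
            line = ''.join(self.ibuffer)
            self.ibuffer = []
            self.push('echo: %s\r\n' % line)  # outgoing buffering via push()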


 and
 3) it was for demo/proof of concept purposes.
 While
 tempting to improve that module I know that Twisted
 has already gone though all the platform-specific crap
 and I don't want to go through it again myself.  I don't
 want to write a reactor to deal with GTK, and one for
 OS X, and one for ...
 
 Now if we can only figure out a way for everyone to benefit from this without
 tying too many brains up in knots. :)

Whenever I need to deal with these kinds of things (in wxPython
specifically), I usually set up a wxTimer to signal
asyncore.poll(timeout=0), but I'm lazy, and rarely need significant
throughput in my GUI applications.
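
Roughly like this (the frame class and the 50 ms interval are mine, not a
fixed recipe):

    import asyncore
    import wx

    class PollingFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, title='asyncore + wx')
            self.timer = wx.Timer(self)
            self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)
            self.timer.Start(50)            # poll every 50 ms

        def on_timer(self, event):
            asyncore.poll(timeout=0)        # service ready sockets, never block

    app = wx.App(False)
    PollingFrame().Show()
    app.MainLoop()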

[snip]
 Yet by using the Stackless socket monkeypatch, this
 same code works in an async framework.  And the underlying
 libraries have a much larger developer base than Twisted.
 Want NNTP?  import nntplib  Want POP3?  import poplib
 Plenty of documentation about them too.
 
 This is going to come out pretty harshly, for which I can only apologize in
 advance, but it bears mention.  The quality of protocol implementations in the
 standard library is bad.  As in not good.  Twisted's NNTP support is better
 (even if I do say so myself - despite only having been working on by myself,
 when I knew almost nothing about Twisted, and having essentially never been
 touched since).  Twisted's POP3 support is fantastically awesome.  Next to
 imaplib, twisted.mail.imap4 is a sparkling diamond.  And each of these
 implements the server end of the protocol as well: you won't find that in the
 standard library for almost any protocol.

Protocol support is hit and miss.  NNTP in Python could be better, but
that's not an asyncore issue (being that nntplib isn't implemented using
asyncore), that's an "NNTP in Python could be done better" issue.  Is it
worth someone's time to patch it, or should they just use Twisted?  Well,
if we start abandoning stdlib modules because "they can always use
Twisted", then we may as well just ship Twisted with Python.


 - Josiah

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-15 Thread Ben North
[EMAIL PROTECTED] wrote:
 I really, really wish that every feature proposal for Python had to meet
 some burden of proof

Ben North wrote:
   This is what I understood the initial posting to python-ideas to be
   about.

[EMAIL PROTECTED] wrote:
  I'm suggesting that the standards of the community in _evaluating_ to
  the proposals should be clearer

Perhaps I didn't need to take your initial comments personally then,
sorry :-)

I do see what you're pointing out: the later part of the dynamic
attribute discussion was where the question of whether Python really
needs new syntax for this was addressed, and the outcome made the
earlier discussion of x.[y] vs x.(y) vs x.{y} vs x->y
etc. irrelevant.

Ben.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread Guido van Rossum
On 2/15/07, Raymond Hettinger [EMAIL PROTECTED] wrote:
 * Add a pure python named_tuple class to the collections module.  I've been
 using the class for about a year and found that it greatly improves the
 usability of tuples as records.
 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/500261

Hm, but why would they still have to be tuples? Why not just have a
generic 'record' class?

 * Give builtin types a __name__ attribute so that we have a uniform way of
 accessing type names.

This already exists, unless I misunderstand you:

Python 2.2.3+ (#94, Jun  4 2003, 08:24:18)
[GCC 2.96 2731 (Red Hat Linux 7.3 2.96-113)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> int.__name__
'int'


 * Enhance doctest with a method that computes updated doctest results.  If the
 only change I make to a matrix suite is to add commas to long numbers in the
 matrix repr(), then it would be nice to have an automated way to update the
 matrix output on all the other test cases in the module.

What I could have used for Py3k (specifically for the 2to3
transformer): mods to the DocTestParser class that allow you to
exactly reproduce the input string from the return value of parse().

Unrelated, I'd like the tokenize module to be more easily extensible.
E.g. I'd like to add new tokens, and I'd like to be able to change its
whitespace handling.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread Delaney, Timothy (Tim)
Guido van Rossum wrote:

 On 2/15/07, Raymond Hettinger [EMAIL PROTECTED] wrote:
 * Add a pure python named_tuple class to the collections module. 
 I've been using the class for about a year and found that it greatly
 improves the usability of tuples as records.
 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/500261
 
 Hm, but why would they still have to be tuples? Why not just have a
 generic 'record' class?

Hmm - possibilities. record definitely has greater connotations of
heterogeneous elements than tuple, which would put paid to the
constant arguments that a tuple is really just an immutable list.

list - primarily intended for homogeneous elements
record - primarily intended for heterogeneous elements, elements are
(optionally?) named

and have mutable and immutable versions of each. Maybe the current list
syntax would then continue to create a mutable list, and the current
tuple syntax would create an immutable record (with no element names)
i.e. the current tuple.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Greg Ewing
Martin v. Löwis wrote:

 That is insufficient. The gtk main loop has more input
 sources than just the connection to X:
 - timers
 - idle handlers
 - child handlers
 - additional file descriptors
 - a generalized 'event source'

When gtk is not the central event mechanism, there's no
need to use the gtk event loop for these things -- you
can just use the central event mechanism directly.

The pygtk APIs for setting these up can redirect them
to the appropriate place, to accommodate existing code
that uses the gtk event loop for them.

--
Greg

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread skip

 Hm, but why would they still have to be tuples? Why not just have a
 generic 'record' class?

Tim Hmm - possibilities. record definitely has greater connotations
Tim of heterogeneous elements than tuple, which would put paid to the
Tim constant arguments that a tuple is really just an immutable list.

(What do you mean by "... put paid ..."?  It doesn't parse for me.)  Based
on posts in the current thread in c.l.py with the improbable subject "f---ing
typechecking", lots of people refuse to believe tuples are anything other
than immutable lists.

Skip
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New syntax for 'dynamic' attribute access

2007-02-15 Thread Greg Ewing
Oleg Broytmann wrote:

  Given that they say a camel is a horse designed by a committee
 
 BTW, camels are very suited for their environments...

The quote is actually "a camel is a *racehorse* designed by a committee".
Camels are very good at surviving in the desert, but not so good at
winning a horse race (not a camel race), which is the point of the saying.

--
Greg
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread Delaney, Timothy (Tim)
[EMAIL PROTECTED] wrote:

  Hm, but why would they still have to be tuples? Why not just
 have a  generic 'record' class?
 
 Tim Hmm - possibilities. record definitely has greater connotations
 Tim of heterogeneous elements than tuple, which would put paid to the
 Tim constant arguments that a tuple is really just an immutable list.
 
 (What do you mean by ... put paid ...?  It doesn't parse for me.) 
 Based on posts the current thread in c.l.py with the improbable
 subject f---ing typechecking, lots of people refuse to believe
 tuples are anything other than immutable lists.

Sorry - "put paid to" means "to finish" ...
http://www.phrases.org.uk/meanings/293200.html

That thread is a perfect example of why I think a record type should be
standard in Python, and tuple should be deprecated (and removed in 3.0).

Instead, have mutable and immutable lists, and mutable and immutable
records. You could add a mutable list and an immutable list (resulting
always in a new mutable list I think). You could *not* add two records
together (even if neither had named elements).

Cheers,

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New syntax for 'dynamic' attribute access

2007-02-15 Thread Delaney, Timothy (Tim)
Greg Ewing wrote:

 Oleg Broytmann wrote:
 
 Given that they say a camel is a horse designed by a committee
 
 BTW, camels are very suited for their environments...
 
 The quote is actually a camel is a *racehorse* designed by a
 committee. Camels are very good at surviving in the desert, but not
 so good at winning a horse race (not camel race). Which is the point
 of the saying. 

Speaking of which, have you ever seen a camel race? Those things go
*fast* ...

I think we're getting way too off-topic now ;)

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Summary: rejection of 'dynamic attribute' syntax

2007-02-15 Thread Greg Ewing
Anthony Baxter wrote:

 Unless the fans are perfectly balanced, small changes in gravity are 
 going to affect the rate at which they spin. So I guess the 
 position of the moon will affect it :-)

A standard gravitational field could also be important
to eliminate relativistic effects.

So we need to standardise latitude/longitude/altitude,
time of year and phase of moon. Better check atmospheric
pressure and humidity, too, just to be on the safe
side.

--
Greg
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Wrapping up 'dynamic attribute' discussion

2007-02-15 Thread Anthony Baxter
On Friday 16 February 2007 09:08, Ben North wrote:
 Instead of new syntax, a new wrapper class was proposed,
 with the following specification / conceptual implementation
 suggested by Martin Loewis:

 ...snip...
 
 This was considered a cleaner and more elegant solution to
 the original problem.  The decision was made that the present PEP
 did not meet the burden of proof for the introduction of new
 syntax, a view which had been put forward by some from the
 beginning of the discussion.  The wrapper class idea was left
 open as a possibility for a future PEP.

A good first step would be to contribute something like this to the 
Python Cookbook, if it isn't already there.


-- 
Anthony Baxter [EMAIL PROTECTED]
It's never too late to have a happy childhood.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New syntax for 'dynamic' attribute access

2007-02-15 Thread Steve Holden
Greg Ewing wrote:
 Oleg Broytmann wrote:
 
 Given that they say a camel is a horse designed by a committee
 BTW, camels are very suited for their environments...
 
 The quote is actually a camel is a *racehorse* designed by a committee.
 Camels are very good at surviving in the desert, but not so good at
 winning a horse race (not camel race). Which is the point of the saying.
 
As far as I know, Sir Alec Issigonis, the inventor of the Mini (the car,
not the Mac Mini), said this, and he used "horse", not "racehorse".

The point of the saying is that a camel has properties that are 
completely unnecessary in a horse, such as the ability to travel many 
days without water. He was saying that committees tend to over-specify 
and add redundant features rather than designing strictly for purpose.

A bit like Python 3.0 ;-)

regards
  Steve
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Blog of Note:  http://holdenweb.blogspot.com
See you at PyCon? http://us.pycon.org/TX2007

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread A.M. Kuchling
On Thu, Feb 15, 2007 at 05:41:51PM -0600, [EMAIL PROTECTED] wrote:
 Tim Hmm - possibilities. record definitely has greater connotations
 Tim of heterogeneous elements than tuple, which would put paid to the
 Tim constant arguments that a tuple is really just an immutable list.
 
 (What do you mean by ... put paid ...?  It doesn't parse for me.)  

"Put paid" usually means "to finish off"; Tim is saying this would
finish the constant arguments that a tuple is really just an immutable list.

--amk
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread Raymond Hettinger
[Raymond Hettinger]
 * Add a pure python named_tuple class to the collections module.
 I've been using the class for about a year and found that it greatly
 improves the usability of tuples as records.
 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/500261

[Delaney, Timothy]
 Hmm - possibilities. record definitely has greater connotations of
 heterogeneous elements than tuple, which would put paid to the
 constant arguments that a tuple is really just an immutable list.

No need to go so widely off-track.  The idea is to have an efficient type that
is directly substitutable for tuples but is a bit more self-descriptive.  I like
to have the doctest result cast as NamedTuple('TestResults failed attempted').
The repr of that result looks like TestResults(failed=0, attempted=15), but it
is still accessible as a tuple and passes easily into other functions that
expect a tuple.  This sort of thing would be handy for things like os.stat().
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/500261
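
A rough illustration of that usage, spelled here with the collections.namedtuple
API that this recipe later evolved into (the recipe's own signature may differ
slightly):

    from collections import namedtuple     # 2.6+; earlier, use the recipe

    TestResults = namedtuple('TestResults', 'failed attempted')

    r = TestResults(failed=0, attempted=15)
    assert r == (0, 15)                    # still a plain tuple to callers
    assert r.failed == 0                   # ...but the fields have names
    failed, attempted = r                  # unpacking keeps working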


Raymond 
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread Giovanni Bajo
On 15/02/2007 20.59, Raymond Hettinger wrote:

 * Add a pure python named_tuple class to the collections module.  I've been 
 using the class for about a year and found that it greatly improves the 
 usability of tuples as records. 
 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/500261

+1 from me too; I've been using a class with the same name and semantics
(though an inferior implementation) for almost two years now, with great
benefits.

As suggested in the cookbook comment, please consider changing the semantics
of the generated constructor so that it accepts a single iterable positional
argument (or keyword arguments). This matches tuple() (and other containers)
in behaviour, and makes it easier to substitute existing uses with named
tuples.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Jean-Paul Calderone
On Thu, 15 Feb 2007 13:55:31 -0800, Josiah Carlson [EMAIL PROTECTED] wrote:

Jean-Paul Calderone [EMAIL PROTECTED] wrote:
 [snip]

 Now if we can only figure out a way for everyone to benefit from this without
 tying too many brains up in knots. :)

Whenever I need to deal with these kinds of things (in wxPython
specifically), I usually set up a wxTimer to signal
asyncore.poll(timeout=0), but I'm lazy, and rarely need significant
throughput in my GUI applications.

And I guess you also don't mind that on OS X this is often noticeably broken?
:)

 [snip]

Protocol support is hit and miss.  NNTP in Python could be better, but
that's not an asyncore issue (being that nntplib isn't implemented using
asyncore), that's an NNTP in Python could be done better issue.  Is it
worth someone's time to patch it, or should they just use Twisted?  Well,
if we start abandoning stdlib modules, because they can always use
Twisted, then we may as well just ship Twisted with Python.


We could always replace the stdlib modules with thin compatibility layers
based on the Twisted protocol implementations.  It's trivial to turn an
asynchronous API into a synchronous one.  I think you are correct in marking
this an unrelated issue, though.

Jean-Paul
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Wrapping up 'dynamic attribute' discussion

2007-02-15 Thread Greg Falcon
On 2/15/07, Anthony Baxter [EMAIL PROTECTED] wrote:
 On Friday 16 February 2007 09:08, Ben North wrote:
  The wrapper class idea was left
  open as a possibility for a future PEP.

 A good first step would be to contribute something like this to the
 Python Cookbook, if it isn't already there.

I could not find such a class in the cookbook.  (That's not to say
there's not one there that I missed.)

Because I think attrview() should happen, I submitted a recipe to the
Python Cookbook.  While it awaits editor approval, I have it posted at
http://www.verylowsodium.com/attrview.py .

One possibly controversial design choice here: since there is no
guaranteed way to enumerate attributes in the face of __getattr__ and
friends, my version of attrview() does not provide iteration or any
other operation that assumes object attributes can be enumerated.
Therefore, it technically does not implement a mapping.
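
For readers who don't want to chase the link, the core of the idea is just a
thin indexing wrapper around getattr and friends.  This is a fresh sketch,
not the submitted recipe:

    class attrview(object):
        """d = attrview(obj); d['name'] works like obj.name."""

        def __init__(self, obj):
            self._obj = obj

        def __getitem__(self, name):
            return getattr(self._obj, name)

        def __setitem__(self, name, value):
            setattr(self._obj, name, value)

        def __delitem__(self, name):
            delattr(self._obj, name)

        def __contains__(self, name):
            return hasattr(self._obj, name)

    # Deliberately no __iter__/keys(): attributes cannot be enumerated
    # reliably in the presence of __getattr__.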

Once Python grows a __dir__ special method, it's possible I could be
convinced this is the wrong choice, but I'm not sure of that.

Greg F
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)

2007-02-15 Thread Josiah Carlson

Jean-Paul Calderone [EMAIL PROTECTED] wrote:
 
 On Thu, 15 Feb 2007 13:55:31 -0800, Josiah Carlson [EMAIL PROTECTED] wrote:
 
 Jean-Paul Calderone [EMAIL PROTECTED] wrote:
  [snip]
 
  Now if we can only figure out a way for everyone to benefit from this 
  without
  tying too many brains up in knots. :)
 
 Whenever I need to deal with these kinds of things (in wxPython
 specifically), I usually set up a wxTimer to signal
 asyncore.poll(timeout=0), but I'm lazy, and rarely need significant
 throughput in my GUI applications.
 
 And I guess you also don't mind that on OS X this is often noticably broken?
 :)

I don't own a Mac, and so far, of the perhaps dozen or so Mac users of
the software that does this, I've heard no reports of it being broken.

From what I understand, wxTimers work on all supported platforms (which
includes OS X), and if asyncore.poll() is broken on Macs, then someone
should file a bug report.  If it's asyncore's fault, assign it to me;
otherwise someone with Mac experience needs to dig into it.


  [snip]
 Protocol support is hit and miss.  NNTP in Python could be better, but
 that's not an asyncore issue (being that nntplib isn't implemented using
 asyncore), that's an NNTP in Python could be done better issue.  Is it
 worth someone's time to patch it, or should they just use Twisted?  Well,
 if we start abandoning stdlib modules, because they can always use
 Twisted, then we may as well just ship Twisted with Python.
 
 We could always replace the stdlib modules with thin compatibility layers
 based on the Twisted protocol implementations.  It's trivial to turn an
 asynchronous API into a synchronous one.  I think you are correct in marking
 this an unrelated issue, though.

If the twisted folks (or anyone else) want to implement a shim that
pretends to be nntplib, it's their business whether calling
twisted.internet.monkeypatch.nntplib() does what the name suggests. :)

That is to say, I don't believe anyone would be terribly distraught if
there was an easy way to use Twisted without drinking the kool-aid.

Then again, I do believe that it makes sense to patch the standard
library whenever possible - if Twisted has better parsing of nntp, smtp,
pop3, imap4, etc. responses, then perhaps we should get the original
authors to sign a PSF contributor agreement, and we could translate
whatever is better.


 - Josiah

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com