A different implementation (e.g. one using Windows IOCP) can do timeouts 
without using select (and must, since select does not work with IOCP).  The 
same goes for a gevent-based implementation: it will time out the accept() on 
each socket individually, rather than by calling select() on each of them.
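For illustration, here is a minimal sketch of the per-socket approach (the
address and timeout values are made up, and this is plain BSD-socket Python,
not the actual gevent or IOCP code):

    import socket

    # The timeout lives on the socket itself, so accept() simply
    # raises socket.timeout when nothing arrives in time.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 8000))   # placeholder address
    listener.listen(5)
    listener.settimeout(1.0)             # applies to accept() below

    try:
        conn, addr = listener.accept()
    except socket.timeout:
        pass    # nothing within 1s; do housekeeping, loop again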

The reason I'm fretting is latency.  There is only one thread accepting 
connections, and if it has to do an extra event-loop dance for every socket 
it accepts, that adds to the delay in getting a response from the server.  
accept() is indeed critical for socket server performance.
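For contrast, the select-based pattern (roughly what SocketServer's
handle_request does; a simplification, not the actual stdlib code):

    import select

    def accept_with_select(listener, timeout):
        # The extra dance: first wait in select() for the listening
        # socket to become readable, then call accept() on it.
        ready, _, _ = select.select([listener], [], [], timeout)
        if not ready:
            return None          # timed out, no connection
        return listener.accept()

Every accepted connection pays for both the select() call and the accept()
call here, which is exactly the extra latency I'm worried about.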

Maybe this is all just nonsense; still, it seems odd to jump through extra 
hoops to emulate functionality that is already part of the socket spec and 
can be done in the most appropriate way for each implementation.

K

-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames....@python.org 
[mailto:python-dev-bounces+kristjan=ccpgames....@python.org] On Behalf Of 
Antoine Pitrou
Sent: 14 March 2012 10:23
To: python-dev@python.org
Subject: Re: [Python-Dev] SocketServer issues

On Wed, 14 Mar 2012 16:59:47 +0000
Kristján Valur Jónsson <krist...@ccpgames.com> wrote:
> 
> It just seems odd to me that it was designed to use the "select" api 
> to do timeouts, where timeouts are already part of the socket protocol and 
> can be implemented more efficiently there.

How is it more efficient if it uses the exact same system calls?
And why are you worrying exactly? I don't understand why accept() would be 
critical for performance.

Thanks

Antoine.


