Bubba <nickn...@banelli.biz.invalid> wrote:
<snip>
> import asyncore
> import socket
> import string
> import MySQLdb
> import sys
<snip>
>     def __init__(self, host, port):
>         asyncore.dispatcher.__init__(self)
>         self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
>         self.set_reuse_addr()
>         self.bind((host, port))
>         self.listen(5)
>
>     def handle_accept(self):
>         pair = self.accept()
>         if pair is None:
>             pass
>         else:
>             sock, addr = pair
>             print 'Incoming connection from %s' % repr(addr)
>             handler = Handler(sock)
>
> server = Server('', 2020)
> asyncore.loop()
>
<snip>
> I do, however, have some more questions (thus crosspost to
> comp.lang.python) - how many connections can this server handle without
> a problem? I'm using the asyncore module, as can be seen.
> Is it necessary, due to the fact that it should serve more than a
> thousand devices that send data every 10 seconds, to do threading (I
> believe that is already done with asyncore for sockets, but what about
> SQL?)

The MySQL C library is not asynchronous; each request does blocking
I/O. If the MySQLdb module is a wrapper for the MySQL C library--and it
seems it is--then you will want to use threads (not coroutines or
generators). For all I know your accept handler is threaded already.

MySQL itself almost certainly can't handle tens of thousands of
simultaneous requests. Its backend is thread-per-connection as well,
which means for every client connection you're talking 2 * "tens of
thousands" threads. I didn't read over your code much, but the only way
to get around this would be to handle your socket I/O asynchronously;
I don't know enough about Python to get the mixed behavior you'd want,
though. I think there's an asynchronous all-Python MySQL library, but
I'm not sure. Maybe one day I can open-source my asynchronous MySQL C
library.

(I always recommend that people use PostgreSQL, though, which is
superior in almost every way, especially the C client library and the
wire protocol.)
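To make the "threads for the blocking SQL, async loop for the sockets"
suggestion concrete, here is a minimal sketch of the usual hand-off
pattern: the I/O loop enqueues query jobs, and a small pool of worker
threads performs the blocking database calls. `run_query`, the queue
names, and the SQL are hypothetical stand-ins of mine, not from the
original post; with MySQLdb, `run_query` would be a
`cursor.execute()`/`fetchall()` round trip, with one connection per
worker thread (MySQLdb connections should not be shared across threads).

```python
import queue
import threading

NUM_WORKERS = 4
jobs = queue.Queue()  # (sql, params, result_queue) tuples from the I/O loop


def run_query(sql, params):
    # Hypothetical blocking call; with MySQLdb this would be
    # cursor.execute(sql, params) followed by cursor.fetchall().
    return ("ok", sql, params)


def worker():
    # Each worker would own its own DB connection.
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut this worker down
            break
        sql, params, result_q = job
        result_q.put(run_query(sql, params))


threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

# A socket handler enqueues work instead of calling MySQL directly,
# so the event loop is never stalled by a slow query:
results = queue.Queue()
jobs.put(("INSERT INTO readings VALUES (%s, %s)", ("dev-1", 42), results))
reply = results.get()            # handler code would poll, not block

for _ in threads:                # shut the pool down
    jobs.put(None)
for t in threads:
    t.join()
```

Capping the pool at a few workers also caps the number of MySQL
connections, which sidesteps the 2 * "tens of thousands" thread problem
described above.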