In article <mailman.2717.1343634778.4697.python-l...@python.org>, Chris Angelico <ros...@gmail.com> wrote:
> Python's an excellent glue language, but it's also fine for huge
> applications. Yes, it can't multithread across cores if you use
> CPython and are CPU-bound. That's actually a pretty specific
> limitation, and taking out any component of that eliminates the GIL as
> a serious problem.

These days, I'm working on a fairly large web application (songza.com).
The business/application logic is written entirely in Python (mostly as
two Django apps). That's what we spend 80% of our developer time writing.

As for scale, we're currently running on 80 cores' worth of AWS servers
for the front end, plus another 50 or so cores for the database and
other backend functions. Yesterday (a Sunday, so a slow day), we served
27 million HTTP requests. We're not Facebook-sized, but it's not some
little toy application either.

Every time we look at performance, we can hardly measure the time it
takes to run the Python code. Overall, we spend (way) more time waiting
on network I/O than on anything else. Other than I/O, our biggest
performance issue is slow database queries, and making more queries than
we really need to. The engineering work to improve performance involves
restructuring our data representation in the database, caching (at
multiple levels), or eliminating marginal features which cost more than
they're worth. None of this would be any different if we used C++,
except that we'd spend so much time writing and debugging code that we'd
have no time left to think about the really important stuff.

As far as the GIL is concerned, it's just not an issue for us. We run
lots of server processes. Perhaps not as elegant as running fewer
multi-threaded processes, but it works just fine, is easy to implement,
and we never have to worry about all the horrors of getting memory
management right in a multi-threaded C++ application.
--
http://mail.python.org/mailman/listinfo/python-list
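[Editor's note: a minimal sketch of the multi-process idea described
above. This is not Songza's actual setup; the function names and worker
count are made up for illustration. Each worker process gets its own
CPython interpreter and therefore its own GIL, so CPU-bound work runs in
parallel across cores even though threads within one process would not.]

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    # Toy CPU-bound task standing in for request handling that crunches
    # numbers; within a single process, threads running this would
    # serialize on the GIL.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four independent processes, four GILs: the work genuinely runs on
    # up to four cores at once.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(cpu_bound, [100_000] * 4))
    print(results)
```

A web deployment does the same thing one level up: a pre-forking server
runs many single-threaded Python processes behind a load balancer, which
is exactly the "lots of server processes" arrangement the post describes.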