My problem is as follows: 

I'm developing an online game that needs to be able to handle thousands of
requests per second.

The frontend consists of one or more web servers exposing a REST API. These web
servers in turn communicate with a game server over TCP. When a message arrives
at the game server, the client handler inserts it into a shared message queue
and then waits for the result from the game loop. Once the game loop informs
the waiting handler that a result is ready, the handler returns the result to
the client.

Things to take note of:

1) The main game loop runs in a separate process, and the intention is to use a
Queue from the multiprocessing library (a rough sketch of the round trip I have
in mind follows this list).

2) The network layer of the game server runs in a separate process as well, and
my intention is to use gevent or tornado
(http://nichol.as/asynchronous-servers-in-python).

3) The game server has a player limit of 50,000. My requirement/desire is to be
able to serve 50k requests per second (without any caching layer, although the
game server itself will cache data), so people don't get a poor user experience
during peak load.

4) The game is not real-time; it is geared towards the web.
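
To make the flow concrete, here is a minimal, untested sketch of the round trip
I have in mind (function and variable names are just illustrations; a single
blocking handler stands in for the per-connection handlers):

import multiprocessing

def game_loop(request_q, result_q):
    # Separate process: pull client messages off the shared queue,
    # run the game logic, and hand back a result tagged with the
    # request id so the right handler can pick it up.
    while True:
        req_id, payload = request_q.get()
        result_q.put((req_id, payload.upper()))  # stand-in for real game logic

def handle_request(request_q, result_q, req_id, payload):
    # The real server has one handler per connection plus a dispatcher
    # that routes results by req_id; one blocking handler is enough to
    # show the round trip.
    request_q.put((req_id, payload))
    got_id, result = result_q.get()
    assert got_id == req_id
    return result

if __name__ == '__main__':
    request_q = multiprocessing.Queue()
    result_q = multiprocessing.Queue()
    multiprocessing.Process(target=game_loop, args=(request_q, result_q),
                            daemon=True).start()
    print(handle_request(request_q, result_q, 1, 'attack goblin'))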

And now to my little problem. All of the high-performance async TCP servers use
greenlets (or similar lightweight threads), but these do not seem to be
compatible with the multiprocessing library. From what I've read, Python
(largely due to the GIL) does not seem suited for this type of task, compared
to other languages where threading and IPC are not an issue.
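
For what it's worth, this is roughly the kind of glue I was wondering about:
run the blocking multiprocessing get() on gevent's native thread pool so it
doesn't freeze the event loop, and route results back to the waiting greenlets
by request id. Again, the names are my own and I haven't load-tested any of
this, so I may well be missing a gotcha:

import itertools
import multiprocessing

import gevent
from gevent.event import AsyncResult
from gevent.hub import get_hub
from gevent.server import StreamServer

request_q = multiprocessing.Queue()  # network process -> game-loop process
result_q = multiprocessing.Queue()   # game-loop process -> network process
pending = {}                         # req_id -> AsyncResult (network side only)
req_ids = itertools.count(1)

def game_loop(req_q, res_q):
    # Separate process holding the game state.
    while True:
        req_id, payload = req_q.get()
        res_q.put((req_id, payload.upper()))  # stand-in for real game logic

def result_dispatcher():
    # Single greenlet that drains the result queue. The blocking
    # multiprocessing get() runs on gevent's thread pool so it does
    # not block the hub.
    pool = get_hub().threadpool
    while True:
        req_id, result = pool.apply(result_q.get)
        waiter = pending.pop(req_id, None)
        if waiter is not None:
            waiter.set(result)

def handle_client(sock, addr):
    # One greenlet per TCP connection.
    data = sock.recv(4096)
    if not data:
        return
    req_id = next(req_ids)
    waiter = AsyncResult()
    pending[req_id] = waiter
    request_q.put((req_id, data.decode()))
    result = waiter.get(timeout=5)  # parks only this greenlet
    sock.sendall(result.encode())

if __name__ == '__main__':
    multiprocessing.Process(target=game_loop, args=(request_q, result_q),
                            daemon=True).start()
    gevent.spawn(result_dispatcher)
    StreamServer(('127.0.0.1', 9000), handle_client).serve_forever()

Is this a reasonable pattern, or does the thread-pool hop kill the throughput
I'm after?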

Because of this, I have developed the initial server using Netty in Java. I
would, however, rather develop the server in Python if possible. But if these
limitations really do exist, then I'll go with Java for the game server and use
Python for the frontend.

Has anyone developed something similar? If so, could you point me in the right
direction, or have I perhaps missed something along the way?

Even if the IPC issue can be solved, the high-performance servers benchmarked
on that page, http://nichol.as/asynchronous-servers-in-python, seem to handle
only roughly 8k requests per second before performance degrades. Does anyone
know how Python's high-performance TCP servers compare with those written in
other languages?

Thanks for all replies!