Hi
I'm working on a project where I need to keep response times low, but the
dataset keeps growing and we receive thousands of requests per second. For
each request I need to process this data and respond quickly. Currently the
data isn't large enough to slow down the response time very much, but I
foresee that becoming a problem as time goes on. We've been using nginx and
uwsgi so far, and it has been working well, but I need some way to process
the data faster. Unfortunately, none of the computations can really be done
ahead of time, so I was thinking of breaking the data down into sections
and processing them in parallel. Fortunately, the dataset lends itself well
to this, and recombining the results after each part is finished wouldn't
be difficult.
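To make the split/process/recombine idea concrete, here's a minimal sketch of that pattern using only Python's standard library; `process_chunk` is a hypothetical stand-in for the real per-section computation, and the final `sum` stands in for whatever recombination step the data actually needs:

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Hypothetical stand-in for the real per-section computation.
    return sum(x * x for x in chunk)

def split(data, n):
    # Divide the dataset into n roughly equal sections.
    k, m = divmod(len(data), n)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n)]

def process_parallel(data, workers=4):
    # Fan the sections out to worker processes, then recombine
    # the partial results once every section is finished.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(process_chunk, split(data, workers))
    return sum(partials)
```

Whether processes, threads, or separate servers do the work, the shape stays the same: split, compute each section independently, recombine.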

After reading the docs about how uwsgi works with mongrel2, I had the idea
that I could run several uwsgi "worker" servers and put a mongrel2 server
in front of them. The mongrel2 server would tell each worker (over zeromq)
which piece to process, put the pieces back together as the responses come
in from the uwsgi servers, and finally return the combined response to the
client.
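The front server's job in that design is essentially scatter-gather: push an indexed piece to each worker, pull the replies (which may arrive out of order), and reassemble by index. Here's a rough sketch of that message flow, using stdlib multiprocessing queues as a stand-in for the zeromq transport; the doubling computation is just a placeholder:

```python
import multiprocessing as mp

def worker(task_q, result_q):
    # Each "worker server" pulls an (index, chunk) message, processes it,
    # and pushes (index, result) back. A None message means shut down.
    for index, chunk in iter(task_q.get, None):
        result_q.put((index, [x * 2 for x in chunk]))  # placeholder computation

def scatter_gather(chunks, n_workers=2):
    task_q, result_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(task_q, result_q))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    # Scatter: send each piece tagged with its position.
    for msg in enumerate(chunks):
        task_q.put(msg)
    # Gather: replies can arrive in any order, so reassemble by index.
    results = dict(result_q.get() for _ in chunks)
    for _ in procs:
        task_q.put(None)  # sentinel: tell each worker to exit
    for p in procs:
        p.join()
    return [r for _, r in sorted(results.items())]
```

The key detail is tagging each piece with its index before sending, since with independent workers there's no guarantee replies come back in request order.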

I'm not sure whether this is the best approach or whether I'm thinking
about it correctly, but any feedback would be appreciated, whether it's
telling me I'm going in the wrong direction and suggesting a better option,
or giving me more specifics and guidance on this one.

Thanks for your patience, I know it's a long one.

Tony
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
