On Wed, Aug 8, 2012 at 5:44 PM, Waleed Abdulla <[email protected]> wrote:
> Thanks, Jeff. Is it possible to repeat the test with qps < 10 to rule out
> the limit that Johan pointed out? In other words, how big is the performance
> difference if you had fewer requests that do more work?

You must mean a concurrency of less than 10?

I'm not really certain how concurrency relates to this.  All the tests
I ran (Node.js, Twisted, Tornado, Simple) were nonblocking servers
with a concurrency of 1.  Maybe - just maybe - it would be possible to
increase throughput by using multiple system threads, up to the number
of cores available... but then you would lose performance to
synchronization, probably significantly.  Optimal hardware
utilization is one isolated, single-threaded, nonblocking server per
core.
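To be concrete about what "nonblocking with a concurrency of 1" means: one
thread multiplexing many connections through an event loop. Here's a minimal
sketch using Python's selectors module - this is just an illustration of the
pattern, not the actual code of any of the servers I tested:

```python
import selectors
import socket

def serve_once(sel):
    """One pass of the event loop: handle every ready socket
    without ever blocking on a single connection."""
    events = sel.select(timeout=1)
    for key, _ in events:
        conn = key.fileobj
        data = conn.recv(4096)
        if data:
            conn.sendall(data.upper())  # stand-in for real request work
        else:
            sel.unregister(conn)
            conn.close()

# Demo: two clients served by one single-threaded loop.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(2)]
for server_side, _ in pairs:
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)

for _, client_side in pairs:
    client_side.sendall(b"ping")

serve_once(sel)

replies = [client.recv(4096) for _, client in pairs]
print(replies)  # both clients answered, zero threads spawned
```

Run one such process per core and you get full hardware utilization with
no lock contention between them.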

I really don't know why backends are slow.  Maybe it has something to
do with the request queueing system?  Throughput sucks even when
backends are doing noops.  Maybe "increased concurrency" would allow
more requests to travel through the queueing system at once... but
it's hard to imagine this helping out the actual server process at
all.  More timeslicing and synchronization on a CPU- and memory-bound
problem will reduce performance, not improve it.
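A rough demonstration of that last point (plain CPython, nothing App
Engine-specific): splitting a CPU-bound loop across threads adds
scheduling and lock overhead without adding parallelism, since only one
thread runs Python bytecode at a time under the GIL:

```python
import threading
import time

def spin(n):
    # CPU-bound busy work: no I/O, so nothing to overlap
    total = 0
    for i in range(n):
        total += i * i
    return total

N, PARTS = 200_000, 4

# One thread doing all the work.
t0 = time.perf_counter()
single_result = sum(spin(N) for _ in range(PARTS))
single_time = time.perf_counter() - t0

# Same work split across four threads: the GIL serializes them,
# so we pay for timeslicing and synchronization and typically
# get no extra throughput on CPython.
results = []
lock = threading.Lock()

def worker():
    r = spin(N)
    with lock:
        results.append(r)

t1 = time.perf_counter()
threads = [threading.Thread(target=worker) for _ in range(PARTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_time = time.perf_counter() - t1

print(f"single: {single_time:.3f}s  threaded: {threaded_time:.3f}s")
# Both versions compute the same answer; the threaded one is usually
# no faster, and often slower.
```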

Jeff

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.