I haven't tried Java or Go instances yet.  If I get some time next
week, I'll run the same test.

BTW, random note: with appstats turned on (for the no-op request),
throughput dropped roughly in half.  F1 frontends peaked around 70 qps
and B1 backends under 30 qps.  I presume this is caused by the
memcache put appstats does to record each request.
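For anyone curious where that cost comes from: appstats hooks in as WSGI middleware and, at the end of each recorded request, saves the profile to memcache. A sketch of the standard wiring in appengine_config.py (appstats_RECORD_FRACTION is a real knob if you only want a sample of requests to pay for the memcache put):

```python
# appengine_config.py -- how appstats gets wired in (sketch).
# Each recorded request ends with appstats writing its profile
# record to memcache, which would account for the extra latency.

def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)

# Record only a fraction of requests instead of all of them:
# appstats_RECORD_FRACTION = 0.1
```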

Jeff

On Thu, Jul 12, 2012 at 9:14 AM, Michael Hermus
<[email protected]> wrote:
> Jeff: that does sound pretty awful. Is this issue limited to Python, or have 
> you seen similar results with Java instances?
>
> On Wednesday, July 11, 2012 8:10:14 PM UTC-4, Jeff Schnitzer wrote:
>> I've been doing some load testing on Python27 frontends and backends
>> and getting some fairly awful results.
>>
>> My test is a simple no-op that returns a 4-letter constant string.  I
>> hit frontend (F1) and backend (B1) versions with ab -c 100.
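The handler itself isn't shown in this thread, but a no-op of this kind is essentially a bare WSGI app that returns a constant body; a minimal sketch (the 'PONG' payload is made up):

```python
# Minimal WSGI no-op of the kind being benchmarked (sketch).
# It does no I/O and returns a 4-letter constant string.

def app(environ, start_response):
    body = b'PONG'  # any 4-letter constant
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

Benchmarked with something like `ab -c 100 -n 10000 http://<app>.appspot.com/`.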
>>
>> The frontend peaks at about 140 requests/sec per instance.  I've set
>> the min latency to 15s to keep the # of instances to a minimum, which
>> seems to work.  It never goes above 2 instances and one of them makes
>> the 140 mark.  The admin instances page shows avg latency of 10-15ms.
>> However, app logs show varying latency #s ranging from 30ms to 250ms.
>>
>> The backend (single instance) peaks under 80 requests/sec.  The
>> admin/instances page shows avg latency of 100ms.  App logs show
>> latencies of 3000-4000ms.
>>
>> Questions and observations:
>>
>> 1) What does the avg latency # on admin/instances mean?  Presumably
>> this is time spent executing my code and not time spent in the pending
>> queue.  Except that no-ops don't take 100ms to execute.  What else is
>> part of that number?
>>
>> 2) Request time in the app logs includes time spent waiting in the
>> pending queue, right?  It's the actual wall-clock time between when
>> the request enters Google and leaves Google?
>>
>> 3) Why are backends so abysmally slow?  If a backend is supposed to be
>> *at all* useful for maintaining in-memory game state for more than
>> five players, it needs to be able to process a high QPS rate.  That's
>> fine, a couple synchronized operations on an in-memory data structure
>> are lightning fast - a couple ms.  But where does this mysterious
>> 100ms come from?  It destroys throughput.
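To put numbers on "lightning fast": a lock-guarded update on an in-memory dict costs microseconds, so thousands of them fit in a couple of ms. A plain-Python illustration (GameState and the player names are made up, nothing App Engine-specific):

```python
import threading
import time

class GameState(object):
    """Toy in-memory game state guarded by a single lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._scores = {}

    def add_score(self, player, points):
        # The synchronized operation in question: a couple of
        # dict accesses under a lock.
        with self._lock:
            self._scores[player] = self._scores.get(player, 0) + points
            return self._scores[player]

state = GameState()
start = time.time()
for i in range(10000):
    state.add_score('p%d' % (i % 8), 1)
elapsed_ms = (time.time() - start) * 1000.0
```

10,000 synchronized updates typically finish in single-digit milliseconds total, nowhere near a 100ms-per-request overhead.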
>>
>> 4) Try this exercise:  Deploy a frontend handler that does nothing but
>> urlfetch to a backend no-op.  Run it.  It usually takes *hundreds* of
>> milliseconds.  I've verified this with appstats.  Huh?  I can make
>> urlfetches to Europe faster.
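A minimal version of that exercise, with the fetch timed explicitly. The timing helper is plain Python; the App Engine part is shown in a comment since urlfetch only runs on App Engine, and the backend URL is a placeholder:

```python
import time

def timed_ms(fn, *args, **kwargs):
    """Call fn and return (result, wall-clock milliseconds)."""
    start = time.time()
    result = fn(*args, **kwargs)
    return result, (time.time() - start) * 1000.0

# Inside a frontend handler on App Engine it would look roughly like:
#
#   from google.appengine.api import urlfetch
#   resp, ms = timed_ms(urlfetch.fetch,
#                       'http://backend.yourapp.appspot.com/noop',
#                       deadline=10)
#   # ms routinely comes out in the hundreds, per appstats
```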
>>
>> Jeff
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine" group.
> To view this discussion on the web visit 
> https://groups.google.com/d/msg/google-appengine/-/bvaCIDFy0IcJ.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to 
> [email protected].
> For more options, visit this group at 
> http://groups.google.com/group/google-appengine?hl=en.
>
