I never really cared much about instances, as I don't have much influence on 
how they are created and torn down anyway. But maybe someone can shed some 
light on how the scheduler works as of today. Let's take a look at the 
following example that I just took from my app's admin console.

QPS*    Latency*  Requests  Errors  Age      Memory        Availability
1.300   29.4 ms   575       0       0:07:20  66.7 MBytes   Dynamic
2.667   28.7 ms   5238      0       0:49:13  108.9 MBytes  Dynamic
0.617   31.9 ms   4988      0       0:49:11  109.9 MBytes  Dynamic
0.333   57.6 ms   4342      0       0:46:08  109.1 MBytes  Dynamic
0.000   0.0 ms    25        0       0:07:26  65.5 MBytes   Dynamic
0.000   0.0 ms    2         0       0:06:39  65.1 MBytes   Dynamic

My Python application serves a lot of very low-latency requests that 
usually involve just 2-3 datastore or memcache read operations. If I 
understand the numbers correctly, it takes about 30-60 ms to create a 
response for each request, yet each instance serves only about 0.3-2.7 QPS. 
How does this make any sense? 
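To make the mismatch concrete, here is a rough back-of-the-envelope calculation using the numbers from the table above (this assumes a single-threaded instance that handles one request at a time, so theoretical capacity is simply 1 / latency):

```python
# Per-instance figures taken from the admin console table above.
latencies_ms = [29.4, 28.7, 31.9, 57.6]
observed_qps = [1.300, 2.667, 0.617, 0.333]

for latency, qps in zip(latencies_ms, observed_qps):
    # Theoretical max QPS if the instance did nothing but serve requests.
    capacity = 1000.0 / latency
    utilization = qps / capacity
    print("latency %5.1f ms -> capacity ~%5.1f QPS, "
          "observed %5.3f QPS (%.1f%% utilized)"
          % (latency, capacity, qps, utilization * 100))
```

Even the busiest instance above sits under 10% of its theoretical capacity, which is exactly what puzzles me about the scheduler's spin-up behavior.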

If Google is really going to "fix" the scheduler, I'm wondering why this 
didn't happen before the new pricing model was announced, so that Google 
itself would have had some hard numbers first for setting the new pricing 
parameters. 

Also, how can Google make sure that unresponsive instances are due to bad 
coding practices rather than just broken nodes? I mean, I don't care that 
much if some datastore operations fail for whatever reason, as they can 
simply be re-executed from another task. But getting billed for instances 
that might be slow due to infrastructure issues, that's not acceptable.
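The re-execution pattern I have in mind is roughly the following (a generic sketch only; `TransientError` and `flaky_read` are placeholders, not App Engine API calls, and on App Engine the retry would typically be dispatched as a task queue task rather than an in-process loop):

```python
import time

class TransientError(Exception):
    """Stands in for a transient datastore/infrastructure failure."""

def with_retries(operation, attempts=3, backoff_s=0.0):
    """Re-execute `operation` until it succeeds or attempts run out."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(backoff_s * (2 ** attempt))  # optional backoff

# Example: an operation that fails once, then succeeds on the retry.
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TransientError("temporary glitch")
    return "value"

print(with_retries(flaky_read))  # prints "value" after one retry
```

The point is that a failed operation is cheap to recover from; an instance that is billed while crawling along on a broken node is not.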



-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.