On Jul 18, 5:11 pm, Michael Hermus <[email protected]> wrote:
> If that is true, I suppose I would be quite surprised, for the following
> reasons:
>
> a) Google's entire infrastructure is designed for EVERYTHING to scale
> massively and still work well.
"Massively" being the key word there. Every Google service is massive. Even abandoned or low-use Google services (e.g., Wave, Buzz) are going to use the equivalent of at least 10,000 F4 instances. The GAE scheduler schedules inefficiently at fewer than 50 instances; it works wonderfully once you hit scale at 100+ machines (correct me if I'm wrong, but most of the people seeing problems here are probably running fewer than 50 instances).

> b) By waiting for instances to warm up first, I don't think you would
> really increase the maximum depth of the pending queue by a whole lot.

I would have to disagree with you on this. My experience with big websites tells me that requests can pile up very quickly if you're not handling them expeditiously.

> c) I don't think the pending queue is 'hosted' on a single machine; I am
> pretty sure it relies on a resilient queue infrastructure designed to
> tolerate failures and scale well.

My analogy, like all analogies, breaks down if you apply it literally. Even if you (hypothetically) built a datacenter with 100,000 machines dedicated solely to hosting a single request queue, that datacenter could still go down (earthquake, power failure, hurricane, etc.). Far better to simply dump requests into instance-level queues and be done with it.

Just to be clear, I agree with you that the GAE scheduler needs work; it is currently optimized for high-scale apps, not for apps running double-digit or fewer instances.

--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
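P.S. The pile-up point above is easy to see with a toy back-of-the-envelope simulation. This is purely illustrative (made-up arrival/service rates, not anything about GAE internals): while instances are warming up and capacity is effectively zero, backlog grows linearly with the arrival rate, and it only drains afterward at the rate by which capacity exceeds arrivals.

```python
# Toy illustration (NOT GAE internals): requests arriving while capacity is
# still warming up accumulate in the pending queue, then drain slowly.

def pending_backlog(arrival_rate, service_rate, warmup_seconds, seconds):
    """Track queue depth per second; capacity is 0 until warm-up completes."""
    backlog = 0
    depths = []
    for t in range(seconds):
        capacity = service_rate if t >= warmup_seconds else 0
        backlog = max(0, backlog + arrival_rate - capacity)
        depths.append(backlog)
    return depths

# 100 req/s arriving, 120 req/s of eventual capacity, 15 s of warm-up:
depths = pending_backlog(100, 120, 15, 60)
print(max(depths))  # 1500 requests piled up by the time warm-up ends
```

Note the asymmetry: 15 seconds of warm-up builds a 1,500-request backlog, but with only 20 req/s of surplus capacity it takes 75 seconds to drain it back to zero. That is exactly why holding requests for warming instances hurts more than it looks like it should.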
