A similar idea is being considered.

As far as reserved instances go, once you are at the point where you need more
instances, only a small portion of your requests would hit loading requests.
The reason loading requests are such a pain today is that they make up the
vast majority of requests for low-traffic applications.
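To make the warm-up-hook idea from the thread concrete, here is a rough sketch of what the application side could look like. Everything here is hypothetical (the class name, the ready flag, the cache stand-in); the 60-second budget is the figure from Maxim's message, and none of this is a real GAE API:

```java
// Hypothetical sketch of the proposed "InitServlet"-style warm-up hook.
// WarmupHook and its members are illustrative names, not a GAE API.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WarmupHook {
    // Budget GAE might grant between the warm-up call and first real request
    // (the ~60 s figure suggested in the thread).
    static final long WARMUP_BUDGET_MS = 60_000;

    // Stand-in for MemCache entries / static DataStore maps preloaded at startup.
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static volatile boolean ready = false;

    // Called once per new JVM instance, before any user request is routed to it.
    static void warmUp() {
        long deadline = System.currentTimeMillis() + WARMUP_BUDGET_MS;
        cache.put("config/greeting", "Hello World"); // e.g. preload static data
        if (System.currentTimeMillis() <= deadline) {
            ready = true; // signal the scheduler: safe to start routing requests
        }
    }

    public static void main(String[] args) {
        warmUp();
        System.out.println("ready=" + ready + " cached=" + cache.size());
    }
}
```

The point is only that the instance flags itself ready after the preload finishes, so the scheduler could hold real traffic back until then.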

On Thu, Sep 16, 2010 at 1:31 PM, andy stevko <[email protected]> wrote:

> I was just thinking about this issue during an interview this morning.
>
> I like your initServlet idea, and I think it could also be used during
> take-down to clean up any external resources (like batched sequence
> counters).
>
> An alternative for the one-server-to-many startup problem is to not route
> the triggering request to the newly started service, but rather to keep the
> startup event as the trigger and route that request to an already-warm
> service. Extra points for not routing any requests to the new instance
> until it has stabilized.
>
> On Thu, Sep 16, 2010 at 10:42 AM, Maxim Veksler <[email protected]> wrote:
>
>> Hey Guys,
>>
>> Just a quick 2.5cent idea.
>>
>> The current situation for applications running on App Engine is that a
>> request that causes an instance spin-up (be it the first request, or an
>> unlucky one under high load) is usually cancelled by the 30-second
>> timeout. I'm observing this even for "virgin" applications that do nothing
>> more than return "Hello World".
>>
>> Now, what if application developers could define a special API call (much
>> like mail delivery to the application is handled by GAE) that would be
>> invoked when GAE decides to spin up yet another instance? This could solve
>> the problem of requests being "cancelled" under high load while the new
>> JVM instance is loading. GAE could allocate a finite amount of time from
>> the "warm up" call to the start of request delivery to the new JVM
>> (something like 60 seconds?). Application developers could take advantage
>> of this call (which would go to the "InitServlet") to do start-up logic
>> (populating MemCache, loading static maps from the DataStore, and so on).
>>
>> This could be a poor man's solution for applications that *should* not
>> lose requests, even if it's 1/100000.
>>
>> I know about the planned "reserved instances" feature, but I don't really
>> see the logic in it, as that approach doesn't seem to scale well. Say I'm
>> holding 5 reserved instances. What happens if my requests hit a peak...
>> a truly massive Slashdot peak which in theory requires another (just for
>> example) 100 instances? If GAE launches them and instantly starts
>> forwarding requests to them then, assuming each instance takes 15 sec to
>> load, roughly (15 sec * 95 instances * <amount of traffic coming in 1
>> sec>) * 0.40 requests are timed out. That's bad.
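To put rough numbers on the estimate above: the 15 s startup, the 95 loading instances, and the 0.40 timeout factor come from Maxim's example; the 200 req/s peak rate and the even routing split are assumptions added here just to make the arithmetic runnable, and this is one reading of his formula, not his exact expression:

```java
// Back-of-envelope check of the "slashdot peak" scenario above.
// Only the 15 s startup, 95 cold instances, and 0.40 factor come from the
// message; the 200 req/s rate and even routing split are added assumptions.
public class PeakLossEstimate {
    public static void main(String[] args) {
        double startupSec = 15.0;      // time for a new JVM instance to load
        int warmInstances = 5;         // reserved instances already running
        int coldInstances = 95;        // instances still loading during the peak
        double reqPerSec = 200.0;      // assumed traffic during the peak
        double timeoutFraction = 0.40; // share of cold-routed requests that time out

        // If requests are spread evenly across all 100 instances while the
        // cold ones load, the cold share of traffic over the startup window is:
        double total = coldInstances + warmInstances;
        double coldRouted = startupSec * reqPerSec * (coldInstances / total);
        long lost = Math.round(coldRouted * timeoutFraction);
        System.out.println("estimated timed-out requests: " + lost);
    }
}
```

Even under this milder reading (instances load in parallel, so only the first 15 s window is affected), on the order of a thousand requests would time out during a single spike.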
>>
>>
>> Comments?
>>
>> Maxim.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine for Java" group.
>> To post to this group, send email to
>> [email protected].
>> To unsubscribe from this group, send email to
>> [email protected].
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine-java?hl=en.
>>
>

