Hey, so I've noticed several users complaining about *frequent* restarts, not long restarts. I created this issue a while back, but no one has starred it. My impression from the Google developers I've talked to is that this isn't common enough to become a priority. If you disagree, star the issue:
http://code.google.com/p/googleappengine/issues/detail?id=2931

If it truly isn't common, then I'll probably just end up creating another account and redeploying, since it has been established that my application isn't the cause.

Jake

On Mar 17, 11:35 pm, James Koch <[email protected]> wrote:
> As a follow-up, today (3/17) from 1-3 PM PST I received several instances of
> "Request was aborted after waiting too long to attempt to service your
> request." This is on my app with zero users, just 3 requests/minute of a
> blank page as a test load.
>
> On Thu, Mar 11, 2010 at 1:50 PM, Don Schwarz <[email protected]> wrote:
> > Can you respond privately with your app id?
> >
> > On Thu, Mar 11, 2010 at 10:10 AM, James <[email protected]> wrote:
> > > I set up some pings of my app a few minutes ago, and I'm still seeing
> > > recycling :(
> > >
> > > My ping setup can't go lower than 60-second intervals, so I have two
> > > running concurrently. Here's a sample of 20 log entries over 10 minutes.
> > > Three recyclings occur (the entries marked *), and each happens less
> > > than 10 seconds after a previous request. Really Google, you're killing
> > > my JVM after TEN SECONDS? And I get to pay you for the ton of CPU each
> > > startup uses? Sounds like the more recycling, the more profitable App
> > > Engine becomes.
> > > -
> > > * 03-11 08:02AM 38.506 /?Pragma=no-cache 200 2158ms 2235cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 08:02AM 23.144 /?Pragma=no-cache 200 53ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 08:01AM 06.134 /?Pragma=no-cache 200 75ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 08:00AM 51.707 /?Pragma=no-cache 200 49ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 08:00AM 05.823 /?Pragma=no-cache 200 49ms 58cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:59AM 51.499 /?Pragma=no-cache 200 56ms 38cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:59AM 05.584 /?Pragma=no-cache 200 47ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:58AM 51.274 /?Pragma=no-cache 200 61ms 38cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:58AM 05.371 /?Pragma=no-cache 200 64ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:57AM 51.025 /?Pragma=no-cache 200 74ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > * 03-11 07:56AM 57.327 /?Pragma=no-cache 200 7835ms 2119cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:56AM 50.784 /?Pragma=no-cache 200 75ms 58cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:55AM 57.008 /?Pragma=no-cache 200 50ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > * 03-11 07:55AM 46.384 /?Pragma=no-cache 200 4250ms 2060cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:54AM 56.782 /?Pragma=no-cache 200 70ms 38cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:54AM 46.157 /?Pragma=no-cache 200 54ms 38cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:53AM 56.586 /?Pragma=no-cache 200 52ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:53AM 45.934 /?Pragma=no-cache 200 51ms 38cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:52AM 56.240 /?Pragma=no-cache 200 62ms 38cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > 03-11 07:52AM 45.718 /?Pragma=no-cache 200 57ms 19cpu_ms 0kb Site 24 X 7 RPT-HTTPClient/0.3-3E,gzip(gfe)
> > > -
> > >
> > > On Jan 30, 11:02 pm, Alyxandor <[email protected]> wrote:
> > > > If you are experiencing failed requests on your long-running /requests,
> > > > consider performing some kind of "pre-warming" procedure of your own.
> > > > If you are getting timeout errors, ping a do-nothing URL and wait for
> > > > it to return before running the big job. If it's a big job, users
> > > > should expect to wait anyway (and you should tell them they are
> > > > waiting!), so the ping ensures (almost) that a warm JVM is running in
> > > > the server nearest said users, and then the big /request can (usually)
> > > > avoid getting killed by extra spin-up time. Very unlucky users would
> > > > get a /ping on an old JVM and a /request on a new one, but...
> > > > technology isn't perfect... YET!
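Alyxandor's "pre-warming" trick is straightforward to sketch in code. The class below is a hypothetical client-side helper, not anything from the App Engine SDK: the `/ping` and `/bigjob` paths are made-up names, and the fetch function is injected so the logic can be shown (and exercised) without a live network; a real client would issue an HTTP GET and return the status code.

```java
import java.util.function.ToIntFunction;

// Sketch of the pre-warming idea: before issuing a long-running request,
// hit a cheap do-nothing URL and wait for it to come back, so that a warm
// JVM instance is (usually) already serving when the expensive request
// arrives. The fetch function maps a URL to an HTTP status code.
class PreWarmClient {

    private final ToIntFunction<String> fetch; // URL -> HTTP status code

    PreWarmClient(ToIntFunction<String> fetch) {
        this.fetch = fetch;
    }

    // Ping first; only send the big request once the ping has returned
    // successfully, i.e. once an instance is presumably warm.
    int runBigJob(String pingUrl, String jobUrl) {
        int pingStatus = fetch.applyAsInt(pingUrl); // cheap warm-up hit
        if (pingStatus != 200) {
            throw new IllegalStateException("warm-up ping failed: " + pingStatus);
        }
        return fetch.applyAsInt(jobUrl); // the long-running request
    }
}
```

As the quoted post itself concedes, this only makes a warm instance likely, not guaranteed: the ping and the big request can still land on different JVMs.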
--
You received this message because you are subscribed to the Google Groups "Google App Engine for Java" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/google-appengine-java?hl=en.
