Due to the way App Engine is designed, it is possible for an application to work fine when datastore/memcache performance is good but then fail miserably when datastore/memcache performance is bad (i.e., the last two days).
In my case I was mostly able to design workarounds for the bad performance so that my app still returns something from every request (albeit in a degraded mode), but I didn't know which handlers were going to fail with timeouts, 502 errors, and so on until the bad performance actually happened. If we had a way to simulate worst-case datastore/memcache performance for our apps, we could design them to fail gracefully ahead of time and avert some of the pain of events like yesterday's.

If Google would clearly define "maximum acceptable latencies" for all the relevant parameters (it doesn't have to be called a service level agreement, but it would be nice) and then let us test our applications at those latencies, we could write more robust apps and still return something useful to our visitors in the event of unexpected performance degradation.
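To sketch what I mean by simulating worst-case performance: one rough way to do it locally is to wrap your own data-access functions in a decorator that injects an artificial delay and raises a timeout, then check that each handler falls back to a degraded response. Everything here (inject_latency, the fetch function, the numbers) is made up for illustration; it is not an App Engine API.

```python
import functools
import time


def inject_latency(delay_s, timeout_s):
    """Simulate a slow backend: sleep for delay_s, then raise
    TimeoutError if the simulated delay exceeded the deadline."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            time.sleep(delay_s)
            if delay_s > timeout_s:
                raise TimeoutError(
                    "simulated backend latency %.3fs > deadline %.3fs"
                    % (delay_s, timeout_s))
            return func(*args, **kwargs)
        return wrapper
    return decorator


# Hypothetical data-access function, with worst-case latency injected
# (0.05s delay against a 0.01s deadline, so it always times out here).
@inject_latency(delay_s=0.05, timeout_s=0.01)
def fetch_greeting():
    return "hello from datastore"


def handler():
    """Handler that degrades gracefully instead of returning a 502."""
    try:
        return fetch_greeting()
    except TimeoutError:
        return "degraded: cached default"
```

Running handler() with the injected worst-case latency exercises the fallback path before a real outage does, instead of discovering it during one.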
