about the iOS API - we have RESTful requests that are answered with JSON, and a couple of views rendered as HTML. much more JSON though.
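as a minimal sketch of what a JSON-answering endpoint produces (the envelope shape and field names here are my own illustration, not the actual starmaker API):

```python
import json

def api_response(status, payload):
    """Wrap endpoint data in a JSON envelope for the iOS client.

    The "status"/"data" envelope is a hypothetical example shape,
    not the real API's contract."""
    return json.dumps({"status": status, "data": payload})

# example: what an iOS client might receive for a profile request
body = api_response("ok", {"user_id": 42, "name": "demo"})
```

in web2py the controller would just return that string (or a dict with a generic JSON view); the iOS app does all the presentation.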

GAE just spins up more instances as requests increase. they will happily run your bill up to the maximum daily limit that you set. i have never had a problem with spikes.

i run the GAE dev server locally, and yes, i have multiple GAE applications - one for production and at least one for test (i have several clients running web2py on GAE, each with somewhat different test setups).

Johnathan is correct - you can use non-default versions for testing, so long as that does not cause data corruption, since all versions share the same datastore. i use separate applications for testing because i change the schema in incompatible ways more often than i'd like.
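for reference, a version is just a field in app.yaml - deploying with a different value creates a non-default version reachable at its own URL but backed by the same datastore (app id and version name below are made up):

```yaml
# app.yaml - hypothetical app id and version name
application: myapp
version: test          # deploys alongside "production" version, same datastore
runtime: python27
api_version: 1
threadsafe: true
```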

starmakerstudios.com (and the app) is the big project....with indexes GAE tells me i'm over 300GB of data. for me it's "fast enough"; for the iOS developers it's never "fast enough". ;) my average response to an iOS API call is under 500ms for about 95% of the calls; the others have more complex data query/update requirements, so they are understandably slower.

hope that helps.

cfh

On 9/3/12 7:42 , David Marko wrote:
Great!! The first set of questions ... some are maybe too private, you see
...

API for iOS
### does it mean that you don't use views, just rendering data to JSON, and
data representation is done in the iOS app?

30-40 requests per second average sustained 24 hours a day.
### have you tested how high (in terms of req/sec) you can go on GAE? Do
you have peaks that are still served well?

we use google app engine.
### how do you evaluate the entire dev process using web2py? Do you have a
procedure for deployment, like developing locally using sqlite, then
deploying to a GAE demo account, then to production?
### what's your long-term experience with GAE in terms of stability, speed,
etc.?
### how much data do you store in the GAE datastore, and is it fast enough?


Thanks!


On Monday, September 3, 2012 15:07:09 UTC+2, howesc wrote:

yes, i manage a (seemingly to me) large application.  30-40 requests per
second average sustained 24 hours a day.  that app is the data access API
for an iOS app plus an accompanying website.  some thoughts:
  - we use google app engine.  on the upside it serves all my requests; on
the downside we pay money in hosting to make up for bad programming.
  - we are using a class-based models approach.  i'm interested in trying
the new lazy tables feature and perhaps switching to that.
  - we use memcache when possible. (it is possible to use it more; we need
to work on that)
  - we are starting to use the google edge cache for pages/API responses
that are not user-specific.  we could use more of this, but i believe
requests served by that cache are still counted in our request numbers.
  - some % of our API requests return somewhat static JSON - in this case
we generate the JSON when it changes (a few times a week), upload it to
amazon S3, and use a piece of router middleware we wrote to redirect the
request before web2py is even invoked....so we have some "creative" things
in there to reach high request numbers without quite hitting web2py itself.
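the memcache point above is the classic cache-aside pattern. a minimal sketch (the dict below stands in for `google.appengine.api.memcache` so it runs anywhere; on GAE you would call `memcache.get`/`memcache.set` instead, and the query/key names are hypothetical):

```python
import time

# stand-in for google.appengine.api.memcache so this sketch runs anywhere
_cache = {}

def cache_get(key):
    entry = _cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]
    return None

def cache_set(key, value, ttl=300):
    _cache[key] = (value, time.time() + ttl)

def expensive_query(user_id):
    # placeholder for a datastore query
    return {"user_id": user_id, "name": "demo"}

def get_user(user_id):
    """cache-aside: check the cache first, fall back to the datastore,
    then populate the cache for subsequent requests."""
    key = "user:%s" % user_id
    cached = cache_get(key)
    if cached is not None:
        return cached
    value = expensive_query(user_id)
    cache_set(key, value)
    return value
```

the TTL is the usual knob: short enough that stale data is tolerable, long enough to absorb the request rate.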
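the S3 trick above can be sketched as plain WSGI middleware wrapped around the framework's application object - the path set and bucket URL below are made-up illustrations, not the actual routes:

```python
def s3_redirect_middleware(app, static_paths, bucket_url):
    """WSGI middleware: redirect known static-JSON paths to S3 with a
    302 before the wrapped framework (web2py here) is ever invoked."""
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path in static_paths:
            start_response("302 Found",
                           [("Location", bucket_url + path)])
            return [b""]
        return app(environ, start_response)
    return middleware

def inner_app(environ, start_response):
    # stand-in for the real web2py WSGI application
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"handled by web2py"]

app = s3_redirect_middleware(
    inner_app,
    static_paths={"/api/leaderboard.json"},
    bucket_url="https://example-bucket.s3.amazonaws.com")
```

requests for the listed paths never reach web2py at all, which is what keeps them off the instance count.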

i'm happy to talk more about specific experiences if there are more
specific questions.

On Saturday, September 1, 2012 11:58:46 AM UTC-7, David Marko wrote:

Hi all, i'm also curious about this. Can howesc share his experience ... or
others?  We are planning a project estimated at 1 million views per working
hour (12 hours a day). I know there are many aspects, but generally it would
be encouraging to hear real-life data with architecture info. How many
servers do you use, do you use some round-robin proxy, etc. ....



