I don't know what I want instances to do. I am finishing up a complete
rewrite of my code so that if I ever have to move to Java or Go, the Python
code is clean enough that I can pay for a port, not a rewrite. On the app
I'm baking the code on to make sure I didn't blow anything up, I'm now
cruising at 16 instances while using 0.025 CPU-seconds per second. Why I
need 16 instances when I'm doing only 1/40th of a CPU's worth of work is
beyond me; that would imply I have 640x as much horsepower as I need.

 

Also, there are a lot of things you can do much better than the example
code would imply. I took request-to-first-byte from 330 ms down to 120 ms,
and I think I can get that down to 70 ms.

 

I could probably be more aggressive if I didn't have error handling. Who
needs that? I never make errors.

 

I tried several things with regard to how the .py files were loaded and the
CGI handler; I couldn't measure any difference.

 

If anyone cares: zipping data makes it faster out of the datastore, is a
tie out of memcache, wins big on large payloads (~400k) if the compression
is good, and loses on anything under 20k, assuming 25% average compression.
I've considered zipping based on size (and may do that).
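
A minimal sketch of that size-based decision; the 20k threshold comes from
the numbers above, but the store-a-flag convention is my own illustration,
not anything App Engine provides:

```python
import zlib

# Illustrative threshold from the observation above: compression only
# pays off for payloads larger than ~20k (assuming ~25% average compression).
ZIP_THRESHOLD = 20 * 1024

def pack(data):
    """Compress only when the payload is big enough to be worth it.

    Returns (payload, compressed_flag); store the flag alongside the
    entity so you know whether to decompress on the way back out.
    """
    if len(data) > ZIP_THRESHOLD:
        packed = zlib.compress(data)
        if len(packed) < len(data):
            return packed, True
    return data, False

def unpack(payload, compressed):
    return zlib.decompress(payload) if compressed else payload
```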

 

Anything worth writing to memcache is probably worth writing to a
"scratchpad" datastore entity that you access by key name. The writes are
cheap compared to a cache miss on memcache; otherwise you wouldn't have
memcached it in the first place, would you? YMMV
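
A rough sketch of that scratchpad pattern. Plain dicts stand in here for
memcache and the datastore so the idea is self-contained; on App Engine
these would be google.appengine.api.memcache and a model fetched by
key_name:

```python
# Two-tier lookup: anything worth memcaching also gets written to a keyed
# "scratchpad" record, so a memcache eviction costs one cheap get-by-key
# instead of a full recomputation.
memcache_stub = {}     # fast tier, may be evicted at any time
scratchpad_stub = {}   # durable fallback, fetched by key name

def put_cached(key, value):
    memcache_stub[key] = value
    scratchpad_stub[key] = value

def get_cached(key, compute):
    value = memcache_stub.get(key)
    if value is None:
        value = scratchpad_stub.get(key)   # scratchpad fallback
        if value is None:
            value = compute()              # true miss: recompute
            scratchpad_stub[key] = value
        memcache_stub[key] = value         # repopulate the fast tier
    return value
```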

From: [email protected]
[mailto:[email protected]] On Behalf Of Tim
Sent: Saturday, May 14, 2011 1:42 AM
To: [email protected]
Subject: Re: [google-appengine] Re: 1.5 improvements Make me less scared of
Pricing

 

I suspected that might be the case, but if I have a few minutes to spare
sometime I might try it out.

 

My startup and servicing costs are pretty minimal: I've got a
one-page webapp and most of my calls are just AJAX calls to load and update
data, so it's typically just object to JSON and back again. So given the way
the API calls are counted and how my traffic works, I might look at getting
rid of the datastore fields and just storing what was a collection of items
as a single larger JSON text blob (and look at moving to the blobstore),
and tune the caching in the client in terms of writing back changes.
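
For what it's worth, the single-blob idea can be as simple as this; the
item fields here are made up purely for illustration:

```python
import json

# Collapse a collection of small entities into one JSON text blob, so a
# load is a single datastore get-by-key instead of a query over many
# entities (and a save is one put).
items = [
    {"id": 1, "name": "first", "done": False},
    {"id": 2, "name": "second", "done": True},
]

blob = json.dumps(items)     # store this in a single text property
restored = json.loads(blob)  # one read brings back the whole collection
```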

 

I suppose a dynamic API for the scheduler might make it a bit tricky for GAE
to plan (at the small scale) how to schedule work on machines, but if it were
more like a declaration of hints at startup (how long I'd prefer to hang
around when idle before being killed, and so on), even a declaration in
app.yaml, then I think it might go some way toward allaying some of the
concerns being voiced.
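
Something like this, purely hypothetical since no such stanza exists in
app.yaml today; it just sketches the kind of declaration being suggested:

```yaml
# Hypothetical scheduler hints: not a real app.yaml feature, only an
# illustration of the proposal above.
scheduler_hints:
  min_idle_seconds: 60   # prefer to keep an idle instance around this long
```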

 

--

T

 

-- 
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/google-appengine?hl=en.
