Brandon, thanks for the detailed analysis.

I had almost exactly the same thoughts regarding CPU vs. instance
billing. Let's wait for the new scheduler to go live before finally deciding
on the path ahead.
I only hope that once the scheduler stabilizes, we don't go through the
entire cycle again if and when Google decides to change things in the
future, or at least that the transition won't be as drastic!

Sam

On Fri, May 13, 2011 at 10:51 AM, Brandon Wirtz <[email protected]> wrote:

> I know that my use case for GAE is atypical, and as I mentioned before,
> I’ll likely have to port my code to a different language under the new
> pricing, but I wanted to share how my use case works and how yours may
> differ.
>
>
>
> When I moved to 1.5, it seems the sites in my app getting the biggest
> improvement are the ones with the highest memcache hit ratio; the
> improvement is likely a function of something related to that. I also
> noticed that request time went down the most on those sites, so I'm
> probably just seeing the effect of requests taking roughly 30% less time,
> letting each instance serve correspondingly more people.
>
>
>
> On one of my apps I saw nearly zero improvement, and it is the one that
> worries me the most pricing-wise.
>
>
>
> It cruises along at 60-75 instances 24/7 but only uses 10 CPU hours a day:
>
>
>
> CPU Time ($0.10/CPU hour): 8% used, 9.95 of 127.50 CPU hours, $0.35 / $12.10
>
>
>
> Because of the free quota, that is 35 cents a day for CPU. But if there
> were no free quota, or we were big enough that the free quota didn't
> factor in, that would be 10 × $0.10, or $1 a day.
>
>
>
> But 60 instances for 24 hours at 5 cents each is $72, so 72× the price
> (before any scheduler efficiencies). This is where the 70× figure I quoted
> before comes from.
>
>
>
>
>
> I have another app that runs at 9 instances (after moving to 1.5) and uses
> 5 CPU hours a day:
>
> CPU Time ($0.10/CPU hour): 1% used, 5.00 of 406.50 CPU hours, $0.00 / $40.00
>
>
>
> 9 × 24 × $0.05 = $10.80 vs. 50 cents on the old model: roughly 20×
> before any scheduler efficiencies.
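The arithmetic for both apps can be checked in a few lines of Python (a sketch; the $0.10 CPU-hour and $0.05 reserved-instance rates are the ones quoted in this thread, and free quota is ignored):

```python
# Daily cost of an app under the old (CPU-hour) and new (instance-hour)
# billing models, ignoring free quota.
CPU_RATE = 0.10       # $ per CPU hour (old model)
INSTANCE_RATE = 0.05  # $ per reserved instance hour (new model)

def daily_costs(instances, cpu_hours_per_day):
    """Return (old_cost, new_cost, ratio) for one day of steady running."""
    old = cpu_hours_per_day * CPU_RATE
    new = instances * 24 * INSTANCE_RATE
    return old, new, new / old

# App 1: 60 instances around the clock, ~10 CPU hours/day -> about 72x
print(daily_costs(60, 10))
# App 2: 9 instances, 5 CPU hours/day -> about 21.6x, the "20x" above
print(daily_costs(9, 5))
```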
>
>
>
>
>
> I believe my usage looks the way it does because the really short version
> of my code's logic goes like this:
>
>
>
> User: I need a file
>
> App: I have that in memcache it’s popular right now. Sending
>
> ….
>
> ….
>
> 50ms later…
>
> User: Thanks!
>
> (Instance time 75ms CPU time 25ms)
>
>
>
> OR….
>
>
>
> User: I need a file
>
> APP: I don’t have that in memcache let me go get that for you
>
> …
>
> 110ms later…
>
> DataStore: Here is that file you asked for.
>
> App: Great I’ll pass it to the user
>
> …
>
> …
>
> 50ms later
>
> User: Thanks.
>
> (Instance time 200ms CPU time 40ms)
>
>
>
>
>
> OR… (and this one kills me)
>
> User: I need a big file
>
> APP: I don’t have that in memcache let me go get that for you
>
> …
>
> 110ms later…
>
> DataStore: Here is that file you asked for; it's big.
>
> App: Yeah, I know, and this user is on dial-up. They suck.
>
> …
>
> …
>
> …
>
> …
>
> …
>
> 1500ms later
>
> User: Thanks. What took so long?
>
> (Instance time 2000ms CPU time 40ms)
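The three dialogues above are the classic cache-aside pattern. A minimal sketch in plain Python (a dict stands in for memcache and another for the datastore; the names and timings are illustrative, not the App Engine API):

```python
cache = {}                                  # stand-in for memcache
datastore = {"big.file": b"lots of bytes"}  # stand-in for the datastore

def fetch(key):
    """Serve from cache when possible; fall back to the datastore on a miss."""
    value = cache.get(key)
    if value is not None:
        return value          # hit: the fast ~75 ms path above
    value = datastore[key]    # miss: pay the extra ~110 ms round trip
    cache[key] = value        # popular files become hits next time
    return value
```

Either way, most of the wall-clock (instance) time goes to shipping bytes to the user, which is exactly the gap between instance time and CPU time in all three scenarios.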
>
>
>
>
>
> And in all three of these scenarios, what I'm really paying for if I get
> billed by instances is the waiting to send stuff to the user, and possibly
> to read it out of GQL. None of this uses the “Python” bits for anything
> useful.
>
>
>
> If, on the other hand, I were folding proteins or searching for
> extraterrestrials, my instance hours and my CPU hours would be far less
> disparate. In fact, because instance hours are cheaper than CPU hours, if
> you can keep your instance at better than 50% load at all times, you will
> actually save money.
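That 50% break-even falls straight out of the two rates: an instance hour ($0.05) costs half a CPU hour ($0.10), so the two models cost the same exactly when an instance burns CPU half the time it is up. A quick sketch:

```python
CPU_RATE = 0.10       # $ per CPU hour (old model)
INSTANCE_RATE = 0.05  # $ per instance hour (new model)

def cheaper_model(utilization):
    """utilization: CPU hours burned per instance hour, from 0.0 to 1.0."""
    old_per_hour = utilization * CPU_RATE  # billed by CPU time actually used
    new_per_hour = INSTANCE_RATE           # billed flat per instance hour
    if new_per_hour < old_per_hour:
        return "new"
    if new_per_hour > old_per_hour:
        return "old"
    return "even"
```

A mostly-idle instance (low utilization, like the file-serving apps above) is far cheaper under CPU billing; a compute-bound one comes out ahead under instance billing.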
>
>
>
> *So…* check this out: the old price per CPU hour was 10 cents, but a
> reserved instance is 5 cents. So if you can figure out how not to wait on
> the slow user and his 56k connection, and not to sit idle while waiting on
> some value to be returned from the depths of wherever Google stores your
> data (and all the lost socks from my laundry), then you can actually save
> money on the new pricing. It's my horribly inefficient use of the time
> spent waiting on the user that gets me to 70×. If this were PHP, I'd have
> a do-while that folded proteins while I waited for my user to receive
> their bits, but I can't do that.
>
>
>
> Google, on the other hand, could likely make my apps much more efficient
> if there were a request handler that said “This dude is asking for a
> thing,” and I'd say “Put them on hold; I'll get you the file. Got it, here
> it is,” and it would then say “What do you want me to do with this? Read
> it to them?” and I'd say “Yep.” “You suck; why do I have to do that?”
> “Because you are the edge cache, and that is your job.”
>
>
>
> And I think some days this works (so do cache-control headers and edge
> caching, such that when two people ask for the exact same thing only one
> request hits my app), and other days it doesn't work so well. And I don't
> really know why, because I can't see the moving parts, only part of what
> is on each side of a row of magic black boxes.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>

