The reason you don't think about this cost in a VPS is because you have
*already* paid for the CPU; you're just using idle CPU cycles and disk
access.

--
Ikai Lan
Developer Programs Engineer, Google App Engine
Blogger: http://googleappengine.blogspot.com
Reddit: http://www.reddit.com/r/appengine
Twitter: http://twitter.com/app_engine



On Wed, Feb 2, 2011 at 8:35 AM, Stephen Johnson <[email protected]> wrote:

> Well, as for the time it takes to add a million entries, once you use App
> Engine for a while you'll get used to using the map/reduce library to perform
> these types of tasks. As for the cost, assuming your 12 entries per second,
> you end up with:
>
> Cost ($) / Hour (CPU) = 0.10
> Seconds / Hour = 3600
> So, cost per second is $0.0000278
>
> Entries / Second = 12
> So, cost per entry is $0.00000231
>
> Total Entries = 1,000,000
> Thus, total cost is $2.31
>
> Of course, this doesn't include the CPU overhead of running the map/reduce
> jobs themselves, but basically the cost is less than the Venti Americano I'm
> drinking right now. Also, costs for the HR datastore would be greater, as I
> assume this is Master/Slave. You'll have to decide whether that cost is too
> expensive for you, or whether you'd rather purchase and host your own servers,
> software (database, OS, etc.), load balancers, networking, etc.
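
Stephen's arithmetic above can be sketched in a few lines of Python. The
$0.10/CPU-hour rate and the 12 entries/second throughput are the figures
quoted in this thread, not official pricing:

```python
# Back-of-the-envelope datastore write cost, using the figures from the thread.
CPU_HOUR_COST = 0.10       # dollars per CPU-hour (rate assumed in the thread)
ENTRIES_PER_SECOND = 12    # throughput measured by the test app quoted below
TOTAL_ENTRIES = 1000000

cost_per_second = CPU_HOUR_COST / 3600.0
cost_per_entry = cost_per_second / ENTRIES_PER_SECOND
total_cost = cost_per_entry * TOTAL_ENTRIES

print("cost per second: $%.8f" % cost_per_second)   # $0.00002778
print("cost per entry:  $%.8f" % cost_per_entry)    # $0.00000231
print("total cost:      $%.2f" % total_cost)        # $2.31
```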
>
>
>
> On Wed, Feb 2, 2011 at 8:52 AM, rmflow <[email protected]> wrote:
>
>> All right, here is a test app (written in python, imports stripped):
>>
>> class TheTest(db.Model):
>>     id = db.IntegerProperty()
>>     name = db.StringProperty()
>>
>> class TestHandler(webapp.RequestHandler):
>>     def get(self):
>>         start_api = quota.get_request_api_cpu_usage()
>>         start = quota.get_request_cpu_usage()
>>         bulk = []
>>         for i in range(100):
>>             test = TheTest()
>>             test.id = random.randint(100, 1000000)
>>             test.name = "abcdefgh"
>>             bulk.append(test)
>>         db.put(bulk)
>>         end_api = quota.get_request_api_cpu_usage()
>>         end = quota.get_request_cpu_usage()
>>         self.response.out.write("cost: %d : %d megacycles." % (end -
>> start, end_api - start_api))
>>
>> The result of the request is: cost: 204 : 9800 megacycles.
>> And the logs show: /test 200 383ms 8353cpu_ms 8166api_cpu_ms
>>
>> It is around 12 entries per CPU second. That means one can forget about
>> putting a million entries into the datastore (a 20 MB XML file can hold
>> that much data, but it would take ages to parse it).
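
For what it's worth, the quoted test already batches its 100 writes into a
single db.put() call; scaling that up just means chunking. db.put() accepts up
to 500 entities per call, so a plain chunking helper (pure Python here, with
the actual db.put() only sketched in a comment) splits a million entries into
batches, each of which would be handed to a task queue task:

```python
def chunks(items, size=500):
    """Yield successive batches of at most `size` items.

    On App Engine, each batch would go to a single db.put() call
    (which accepts up to 500 entities), ideally enqueued as a task
    so no single request has to write all million entries.
    """
    for i in range(0, len(items), size):
        yield items[i:i + size]

# A million entries split into 2,000 batches of 500 each.
batches = list(chunks(range(1000000)))
print(len(batches))       # 2000
print(len(batches[0]))    # 500
```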
>>
>>
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
