We write to the datastore only when a cached request has expired (the file
hasn't been accessed in X seconds) or when we see a new request we haven't
cached yet.
 
While on average about 99% of requests from users are served without touching
the datastore, when Googlebot indexes a site it often hits pages users never
do.
 
JeffProbst.com has 35 total pages and 500 total assets. We don't have to touch
the datastore (or memcache) for 99.998% of requests.
 
XYHD.tv has 4800 total pages and 8400 total assets. In a 24-hour period, 350
unique pages and roughly 800 assets receive user traffic. When Googlebot comes
through, it reads about 4200 of those pages. On XYHD.tv, pages expire from the
cache every 3 minutes, so we get about a 97% cache hit rate for users but only
10% for Googlebot.
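A back-of-the-envelope way to see why the same 3-minute expiry produces such different hit rates (the model and the per-window request counts are my rough illustration, not measured figures): within one expiry window, only the first request for a page misses, so a page's hit rate depends entirely on how often it's requested per window.

```python
TTL = 180  # cache expiry window in seconds (3 minutes)

def hit_rate(requests_per_window):
    """Of the requests landing in one TTL window, only the first misses."""
    n = requests_per_window
    return 0.0 if n < 1 else (n - 1) / n

# A popular user-facing page hit ~33 times per window -> ~97% cache hits.
# A long-tail page Googlebot touches about once per crawl -> ~0% hits.
```

Users concentrate on a few hundred hot pages that stay warm in the cache; the bot walks thousands of cold pages one at a time, so nearly every bot request is a miss.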
 
 
 
From: [email protected]
[mailto:[email protected]] On Behalf Of Niklas Rosencrantz
Sent: Saturday, January 21, 2012 3:45 PM
To: [email protected]
Subject: [google-appengine] Re: ROFLMFAO DynamoDB From Amazon
 
Thanks for this thread. I'm curious why you need so many writes. Do you
write to the datastore just because there is a request? Isn't that
inefficient? I have many handlers that don't do writes at all.
Best regards,
Nick
-- 
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To view this discussion on the web visit
https://groups.google.com/d/msg/google-appengine/-/XkvAJNYnYUQJ.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/google-appengine?hl=en.

