On Mon, Aug 20, 2012 at 5:56 AM, Richard Watson <[email protected]> wrote:
> On Friday, August 17, 2012 5:52:33 PM UTC+2, Jeff Schnitzer wrote:
>>
>> It won't fit into the free quota (just to rebuild the download once an
>> hour will be hundreds of thousands of read ops per day)
>
> Group articles and bundle those together as mini-blobs before creating the
> large downloadable blob. Recalculate only the mini-blobs that have changed,
> once every e.g. 10 minutes. Recalc the bigger one every hour. Not sure how
> naturally grouped the data is, but even an artificial grouping of e.g. 500
> articles per blob grouped by id should work. If there's a way to put
> oft-changed articles together you'll save more.
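For concreteness, Richard's grouping idea might look roughly like this. This is a plain-Python sketch, not App Engine code: the dicts stand in for the datastore and blobstore, and names like GROUP_SIZE, rebuild_mini_blobs, and build_big_blob are illustrative, not from any API.

```python
import json

GROUP_SIZE = 500  # artificial grouping of ~500 articles per mini-blob


def group_key(article_id):
    # Articles 0-499 land in group 0, 500-999 in group 1, and so on.
    return article_id // GROUP_SIZE


def rebuild_mini_blobs(articles, mini_blobs, dirty_groups):
    # ~10-minute job: re-serialize only the groups whose articles changed.
    for g in dirty_groups:
        members = sorted(
            (a for a in articles.values() if group_key(a["id"]) == g),
            key=lambda a: a["id"],
        )
        mini_blobs[g] = json.dumps(members)
    return mini_blobs


def build_big_blob(mini_blobs):
    # Hourly job: concatenate every mini-blob into one downloadable JSON array,
    # without re-reading any individual article entities.
    parts = [json.loads(blob) for _, blob in sorted(mini_blobs.items())]
    return json.dumps([a for part in parts for a in part])
```

The key property is that the hourly rebuild only reads the handful of mini-blobs, not the underlying articles, so the read-op cost is proportional to the number of groups rather than the number of articles.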
You could also do it by only loading articles that have changed (query on
timestamp) and "hot-replacing" them in the blob (read the blob into RAM,
munge it, save it again). You'd want to do this in a dynamic backend with
extra RAM, depending on how large the blob gets. But at some point you have
to ask whether it's really worth the extra engineering to maybe save a few
bucks a month.

Also, not having billing enabled means that if you ever get a surge in
users, the app goes down. Unless this is a hobby, he probably wants to
enable billing. That 50k daily limit on read ops goes *fast* when users are
doing queries for tags and whatnot.

Jeff

--
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/google-appengine?hl=en.
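As an addendum, the hot-replace variant sketched above might look like this in plain Python. The list comprehension stands in for a datastore query filtered on an `updated` timestamp property, and the in-memory string stands in for the blob; on App Engine the read/write would go through the blob storage APIs on a backend with enough RAM. Function names are illustrative.

```python
import json


def changed_since(articles, last_build_ts):
    # Stand-in for a datastore query: filter on an `updated` timestamp
    # so only articles changed since the last build are loaded.
    return [a for a in articles if a["updated"] > last_build_ts]


def hot_replace(blob_text, changed):
    # Read the whole blob into RAM, splice in the changed articles by id,
    # and serialize it again. Peak memory is roughly 2x the blob size.
    index = {a["id"]: a for a in json.loads(blob_text)}
    for a in changed:
        index[a["id"]] = a
    return json.dumps(sorted(index.values(), key=lambda a: a["id"]))
```

This keeps the per-run read cost down to the changed articles plus one blob read, at the price of holding the whole blob in memory while munging it.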
