My apologies if this is an FAQ I just haven't noticed, or something that's mentioned right at the beginning of one of the [many] references I haven't managed to dig through yet (especially if it's something so basic it's covered in Brett Slatkin's Google I/O session--I've had that page open in a tab for close to 6 months now, and I never seem to have an hour I can justify taking to watch it).
Anyway, the most important basic point of all the <s>limitations</s> opportunities we embrace is scalability, right? I just recommended to someone that he upload a few thousand records to Google's datastore and then run the queries against them that he expects to be the most common/slowest. My understanding is that the size of the result set might affect his query performance, but the total number of records in the datastore should not--so if a few thousand records work okay, then so should a few million.

In the middle of writing that, though, I realized my assumption was pretty much unjustified. It's the impression I've gathered from everything I've read, but I don't think I've seen any official statement to that effect. I don't expect an iron-clad "your query will run in O(1) time regardless of the size of the datastore" guarantee, or anything like that. But isn't that more or less the goal? Or should I also start thinking about ways to archive, data-warehouse, etc., the way I would if I were running an RDBMS?

Thanks,
James
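P.S. In case it helps, here is roughly the kind of test I suggested, just as a sketch -- the "Record" kind and "category" property are placeholders for whatever his real model looks like, and it assumes the standard Python google.appengine.ext.db API:

    import time
    from google.appengine.ext import db

    class Record(db.Model):
        # Placeholder model; substitute the real kind and properties.
        category = db.StringProperty()

    def load_sample(n=5000):
        # Upload a few thousand records, in batches of 500 per put.
        batch = []
        for i in range(n):
            batch.append(Record(category='cat%d' % (i % 10)))
            if len(batch) == 500:
                db.put(batch)
                batch = []
        if batch:
            db.put(batch)

    def time_common_query(limit=100):
        # Time the query he expects to be the most common/slowest.
        start = time.time()
        results = Record.all().filter('category =', 'cat3').fetch(limit)
        return len(results), time.time() - start

The idea is simply to time the same fetch with a few thousand records loaded, then again after loading many more, and see whether the elapsed time tracks the result set size or the total size of the datastore.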
