Hi

A couple of questions:

1.  Is the set of entities changing constantly - i.e. are you adding or
deleting entities - or are you just updating a fixed set of entities?
2.  How up to date must your view be?
3.  Is it feasible to refetch/repopulate your cache via a background
task (if you write 20-30 times per read)?

Also, I assume you do a fetch() on the query rather than iterating over
the result (fetch is much quicker).
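To make the difference concrete, here's a rough sketch. The StubQuery
class below is just a stand-in I made up to illustrate the batching
behaviour; real code would call MyModel.all().fetch(n), and the batch
size of 20 is the illustrative point, not a guaranteed SDK constant.

```python
# Illustrative stub: iterating a query pulls results in small batches
# (many round trips), while fetch() grabs them in one go.
class StubQuery:
    """Stand-in for an App Engine query; real code uses MyModel.all()."""
    def __init__(self, rows):
        self.rows = rows
        self.round_trips = 0

    def __iter__(self):
        # Iteration fetches results a small batch at a time, so a big
        # result set costs many datastore round trips.
        for i in range(0, len(self.rows), 20):
            self.round_trips += 1
            for row in self.rows[i:i + 20]:
                yield row

    def fetch(self, limit):
        # fetch() retrieves up to `limit` results in a single round trip.
        self.round_trips += 1
        return self.rows[:limit]

q = StubQuery(list(range(500)))
batch = q.fetch(1000)      # all 500 rows, one round trip
assert q.round_trips == 1

q2 = StubQuery(list(range(500)))
everything = list(q2)      # same rows via iteration: 25 batches of 20
assert q2.round_trips == 25
```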

I think if you are adding/removing keys, you could keep the list of
keys in memcache as a single entry, then fetch that list and do a
db.get(list_of_keys), assuming you have fewer than 1000 entities, as
you are still limited to retrieving 1000 items.
You would need to add/remove keys from the list in memcache on each
write, or rebuild the list with a keys-only query via a task.
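A minimal sketch of that pattern, with memcache and the datastore
simulated by plain dicts so it runs standalone; the names KEY_LIST,
put_entity, etc. are my own placeholders, and real code would use
google.appengine.api.memcache plus db.get() on the cached key list.

```python
memcache = {}   # stands in for google.appengine.api.memcache
datastore = {}  # stands in for the datastore

KEY_LIST = 'mymodel:all_keys'  # single memcache entry holding all keys

def put_entity(key, entity):
    datastore[key] = entity
    # Keep the cached key list in sync on every write.
    keys = memcache.get(KEY_LIST, [])
    if key not in keys:
        keys.append(key)
        memcache[KEY_LIST] = keys

def delete_entity(key):
    datastore.pop(key, None)
    keys = memcache.get(KEY_LIST, [])
    if key in keys:
        keys.remove(key)
        memcache[KEY_LIST] = keys

def get_all():
    # One memcache read for the key list, then one batch get; the
    # real equivalent is db.get(list_of_keys), capped at 1000 keys.
    keys = memcache.get(KEY_LIST, [])
    return [datastore[k] for k in keys]

for i in range(5):
    put_entity('k%d' % i, {'n': i})
delete_entity('k2')
entities = get_all()
assert len(entities) == 4
```

The win is that a read never runs the expensive filtered/ordered query:
it is one memcache hit plus one batch get, and writes only touch the
small key list.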

Hope this gives you some ideas.

T



On May 9, 4:29 am, Adrian Holovaty <[email protected]> wrote:
> Hi all,
>
> I've got an App Engine application in which I'm selecting hundreds of
> entities from the same model, possibly even 1000 or 2000 of them --
> and I need all of them at the same time, for display in the browser.
> (There is no way around this requirement; I need all of them at the
> same time.) The Python code looks something like this, with model/
> property names changed:
>
>     MyModel.all().filter('somekey =', x).order('the_order')
>
> This becomes unbearably slow when selecting hundreds or thousands of
> records. I've read through the docs and various articles explaining
> *why* that happens, but I haven't found any good advice on how to
> solve it, aside from "put it in memcache" or "don't do that." I am
> indeed using memcache in the app's view code, like so...
>
>     if in_the_cache:
>         return cached
>     else:
>         result = MyModel.all().filter('somekey =',
> x).order('the_order')
>         put_in_cache(result)
>         return result
>
> ...and I invalidate the cache whenever a write is made, but the values
> change often enough that the caching is essentially useless (this is a
> write-heavy app).
>
> One option I'm considering is to recalculate and cache the result
> whenever a write happens, but that could result in a lot of
> unnecessary CPU usage, because, again, this is a write-heavy app, and
> I only do about one read for every 20-30 writes.
>
> Ideally there would be a way to optimize the way I model the data or
> query the data, such that I could get the hundreds of objects in a
> shorter period of time without sacrificing CPU usage. Are there any
> fancy data modeling tricks I can use, like sharding them over multiple
> models (which doesn't seem like it would make any difference), or
> using ListProperty (in which case I might run into the limit of items
> in a ListProperty, because each MyModel record has four attributes I
> need to include), or something else...?
>
> Adrian
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to 
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
