That is a good link Ernesto.

nischalshetty - I think the best answer is probably "it depends". What
do you want to do with unread status? The key point is that unread is
a per-user concern. You might be able to keep some kind of bitmap or
other compressed data structure to hold read/unread info, but whether
that works will depend on the structure of your models and your use
cases.
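To make the bitmap idea concrete, here is a minimal sketch: pack per-user read/unread flags into a bytearray, one bit per message, indexed by a message's sequence number. On App Engine you could store the packed bytes in a single blob field on a per-user entity; the class name and layout here are illustrative assumptions, not a specific App Engine API.

```python
class ReadBitmap:
    """Per-user read/unread flags, one bit per message (sketch)."""

    def __init__(self, num_messages):
        # One bit per message, rounded up to whole bytes.
        self.bits = bytearray((num_messages + 7) // 8)

    def mark_read(self, index):
        self.bits[index // 8] |= 1 << (index % 8)

    def is_read(self, index):
        return bool(self.bits[index // 8] & (1 << (index % 8)))

    def mark_all_read(self):
        # Marking 5000 messages read is one pass over ~625 bytes,
        # and a single write to persist the whole bitmap.
        for i in range(len(self.bits)):
            self.bits[i] = 0xFF
```

The payoff is that "mark all read" becomes one small write instead of 5000 entity updates, at the cost of needing a stable mapping from messages to bit positions.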

I would look closely at the use case / story. Updating 5000 articles
as read sounds like a 'catch up' kind of operation. You might be able
to maintain an 'unread since' concept similar to the idea Nick
proposed in the post Ernesto referenced. If you can do something along
those lines, you can just move a 'pointer' to 'now'.
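A sketch of the "move a pointer to 'now'" idea: keep one per-user timestamp plus a small exception set for messages read out of order. Catching up then updates a single value rather than 5000 entities. The names (read_up_to, catch_up) are illustrative assumptions, not from the referenced post.

```python
class ReadState:
    """Per-user read state as a timestamp pointer plus exceptions (sketch)."""

    def __init__(self):
        self.read_up_to = 0.0         # everything at or before this is read
        self.read_exceptions = set()  # newer messages read individually

    def is_read(self, message_time, message_id):
        return (message_time <= self.read_up_to
                or message_id in self.read_exceptions)

    def mark_read(self, message_time, message_id):
        # Only messages newer than the pointer need individual tracking.
        if message_time > self.read_up_to:
            self.read_exceptions.add(message_id)

    def catch_up(self, now):
        # One write: advance the pointer and drop the exception set.
        self.read_up_to = now
        self.read_exceptions.clear()
```

The trade-off is that "unread" is now computed at read time from the pointer and the exception set, instead of being stored on each message entity.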

If you do need to update 5000 'somethings', try to do it in batches if
at all possible (does the API support > 1000 per call now?). And if
you can defer the operation by placing it on a task queue, even
better.
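If you do end up touching all 5000 entities, the shape of the batching is roughly this: split the keys into chunks sized for one batch write and hand each chunk to a task queue. Here enqueue stands in for whatever datastore/task-queue call you use (e.g. a deferred task that does a batch put on App Engine), and BATCH_SIZE is an assumption — check the current limits.

```python
BATCH_SIZE = 500  # assumed per-call limit; verify against current docs

def chunks(items, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def mark_read_in_batches(entities, enqueue):
    # Each chunk becomes one deferred task doing one batch write,
    # keeping any single request small and retryable on its own.
    for batch in chunks(entities, BATCH_SIZE):
        enqueue(batch)
```

Deferring each chunk also means a failure only retries one batch, not the whole 5000-entity update.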

-- Jay

On Jan 9, 11:04 am, Ernesto Karim Oltra <[email protected]>
wrote:
> Maybe you need to re-think your database models. This message may
> help: http://groups.google.com/group/google-appengine/browse_thread/thread/...
>
> On 9 ene, 17:57, nischalshetty <[email protected]> wrote:
>
> > Say a user selects 5000 unread messages (each message is an entity)
> > and wants to mark all of them as read. It would be a massive update.
> > Can anyone help me on what the best way to do such a large update is?
>
> > Is there a different way to do this apart from updating each of the
> > 5000 entities by marking their status as read? Even if there is no
> > escaping the large update, how do you update so many entities on
> > appengine?

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
