The datastore does not delete entities right away. It marks them as deleted
and waits for a compaction to actually remove the data. If a lot of data has
been deleted at the start of a query's range, the datastore has to skip over
all the deleted rows before it finds the first live entity. That skipping is
what is causing your timeouts.
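As a rough illustration (plain Python, not the datastore API; the row layout
and counts are made up), here is why the first result can be expensive when
the scan begins in a region of deleted rows:

```python
# Sketch: rows are marked deleted rather than removed, so a scan must
# step over every deleted row before it can return the first live entity.
rows = [{"id": i, "deleted": i < 900} for i in range(1000)]  # 900 tombstoned

def first_live(rows):
    skipped = 0
    for row in rows:
        if row["deleted"]:
            skipped += 1   # work paid before any result is returned
            continue
        return row, skipped
    return None, skipped

row, skipped = first_live(rows)
print(row["id"], skipped)  # the scan skipped 900 rows to reach id 900
```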

You can avoid this problem by using cursors to continue the work across
subsequent requests. A cursor makes the query resume just past the point the
previous request reached, instead of rescanning (and re-skipping all the
deleted rows) from the start of the query. Another good alternative is the
Map framework, which also bookmarks its progress for continuation between
requests.
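The pattern looks roughly like this (plain Python standing in for the
datastore API; in the real Python API the calls are along the lines of
Query.with_cursor() and Query.cursor(), and the batch size here is invented):

```python
# Sketch: each "request" processes one batch and saves a cursor, so the
# next request resumes where the last one stopped instead of starting over.
rows = list(range(100))

def process_batch(rows, cursor, batch_size=25):
    """Do one request's worth of work; return the batch and the new cursor."""
    end = min(cursor + batch_size, len(rows))
    batch = rows[cursor:end]   # only this slice is touched this request
    return batch, end          # 'end' is the cursor saved for the next request

cursor = 0
batches = []
while cursor < len(rows):      # in practice, one iteration per request/task
    batch, cursor = process_batch(rows, cursor)
    batches.append(batch)

print(len(batches), cursor)    # 4 requests, cursor ends at 100
```

In the real thing you would persist the cursor (e.g. in the task payload or
memcache) between requests rather than looping in one process.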

On Sun, Sep 4, 2011 at 1:52 AM, Volker Schönefeld <
[email protected]> wrote:

> Oh, I meant the dataset has 100 million entities, so it's far from being
> done. There is still around 850 GiB worth of data that wants to be deleted.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To view this discussion on the web visit
> https://groups.google.com/d/msg/google-appengine/-/WPeRpAqa5j8J.
>
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
