This is a condition that may sometimes occur in the MapReduce used to
perform the backup. The log message is expected to be improved in future
releases. However, we would not expect this to cause any significant
increase in the number of datastore operations, as it should only cause a
few extra datastore reads, which are negligible compared to the rest of
the job.
You may be able to decrease the probability of seeing these by increasing
the retry delay parameters for the task queue you are using to perform the
backup, for example:
retry_parameters:
  min_backoff_seconds: 1
If you are concerned about overall job latency, a lower min_backoff_seconds
value, such as 0.2, may work just as well.
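For reference, a complete queue definition in queue.yaml might look like the
following sketch. Only the retry_parameters setting comes from the suggestion
above; the queue name and rate are illustrative assumptions, so substitute the
values for the queue your backup actually uses:

```yaml
# queue.yaml -- illustrative sketch; queue name and rate are assumptions.
queue:
- name: backup-queue          # hypothetical name of the queue running the backup
  rate: 10/s                  # illustrative processing rate
  retry_parameters:
    min_backoff_seconds: 1    # longer initial backoff makes these retry messages less likely
```

A larger min_backoff_seconds trades a little extra job latency for fewer of
these retry warnings.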
If you believe this is having a non-negligible impact on the number of
datastore operations, please file an issue on the tracker and include your
appId so we can investigate your particular case deeper.
On Mon, May 6, 2013 at 9:49 AM, Jason Collins <[email protected]> wrote:
> We are seeing failures within the Datastore Admin Scheduled Backup Tool
> across a number of our applications - perhaps coinciding with the 1.8.0
> push? Our cost increase started midway through May 3.
>
> It looks like our backups are continuing forever, leading to pretty
> substantial increases in costs on datastore reads.
>
> Stacktrace below. Is anyone else seeing this? You have to look in to your
> ah-builtin-python-bundle to see them.
>
>
> W 2013-05-06 10:13:52.068 Task 15812012600857BDDF6B8-0-14 is ahead of
> ShardState 13. Waiting for it to catch up.
>
>
> E 2013-05-06 10:13:52.068 Raise an error to trigger retry.
> Traceback (most recent call last):
>   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 716, in __call__
>     handler.post(*groups)
>   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/base_handler.py", line 83, in post
>     self.handle()
>   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/handlers.py", line 324, in handle
>     if not self._try_acquire_lease(shard_state, tstate):
>   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/handlers.py", line 183, in _try_acquire_lease
>     raise errors.RetrySliceError("Raise an error to trigger retry.")
> RetrySliceError: Raise an error to trigger retry.
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/google-appengine?hl=en.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>
>