I think that's entirely dependent on your models and on any pre/post
processing you're doing. I'd experiment with various batch sizes and
benchmark how they perform. For writes, you could use transactions to
guarantee each batch commits atomically; if you hit the request
deadline, return a standard error code and define a "next step" for
your users to follow. For reads, you could have the client retry, or
retry with a smaller batch size.
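For the write side, here's a minimal sketch of the retry idea in plain
Java. The BatchWriter hook, the AdaptiveBatchUploader name, and the
halving-on-failure policy are all my own invention, not anything App
Engine prescribes; wire the hook to your remote_api-backed datastore
put and tune the policy to whatever your benchmarks tell you.

import java.util.List;

/** Hypothetical hook: wire this to your remote_api-backed put. */
interface BatchWriter<T> {
    /** Writes one batch; throws on a deadline or size-limit error. */
    void put(List<T> batch) throws Exception;
}

public class AdaptiveBatchUploader {

    private static final int MIN_BATCH = 1;

    /**
     * Uploads entities in batches, halving the batch size whenever a
     * batch fails (request too large, deadline exceeded, ...) and
     * retrying the same entities at the smaller size. Datastore puts
     * with complete keys are idempotent, so re-sending a partially
     * committed batch is safe.
     */
    public static <T> void uploadAll(List<T> entities,
                                     BatchWriter<T> writer,
                                     int initialBatchSize) throws Exception {
        int batchSize = initialBatchSize;
        int start = 0;
        while (start < entities.size()) {
            int end = Math.min(start + batchSize, entities.size());
            try {
                writer.put(entities.subList(start, end));
                start = end;               // batch committed, advance
            } catch (Exception e) {
                if (batchSize <= MIN_BATCH) {
                    throw e;               // a single entity still fails
                }
                batchSize = Math.max(MIN_BATCH, batchSize / 2);
                // loop retries the same range at the smaller size
            }
        }
    }
}

The read side is symmetric: fetch with some batch size and, on a
deadline error, halve it and re-issue the query from your last cursor.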

Again, I don't think there's a "magic" number; you just need to
benchmark what fits your application.

On Aug 25, 11:51 am, fredrossperry <[email protected]> wrote:
> I've created a backup/restore capability for our customer data using
> Java on the client side and remote_api on the server.
>
> By default the SDK fetches in batches of 20.  When I put data back to
> the server, I am also doing batches of 20.
>
> I worry about hitting per-request limits on data size and CPU time
> doing it this way.  Is there a way of estimating the "best" batch size
> at run time?
>
> -Fred
