Why not use allocate_ids to generate the ids?  That might simplify the
process a bit.
  
http://code.google.com/appengine/docs/python/datastore/functions.html#allocate_ids
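Just to sketch the idea: since db.allocate_ids is only callable inside App Engine, the snippet below takes the allocation call as a parameter; the block size of 10 and all the names are mine, not anything from your code.

```python
def make_id_allocator(allocate):
    """Hand out ids one at a time from blocks reserved up front.

    `allocate(n)` stands in for db.allocate_ids(key, n) and must
    return (first, last) of a reserved, never-reused id range.
    Fetching a block of ids per call amortizes the datastore trip.
    """
    state = {'next': None, 'last': -1}

    def next_id():
        # Refill when the current block is exhausted (or on first use).
        if state['next'] is None or state['next'] > state['last']:
            state['next'], state['last'] = allocate(10)
        nid = state['next']
        state['next'] += 1
        return nid

    return next_id
```

Inside App Engine you would pass something like lambda n: db.allocate_ids(db.Key.from_path('Record', 1), n), and the datastore guarantees ids reserved that way are never auto-assigned to another entity of that kind.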

I've been using a similar process for batch updates for quite some
time.  It works well, but in my case there is no user involved.  It
is an automated sync process to another system's database, so I have
a unique id to use for lookups to avoid duplicates.
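For reference, the dedup in my sync boils down to an idempotent upsert keyed by the other system's unique id, roughly like this (a dict stands in for the datastore, and the names are made up; Model.get_or_insert is the datastore equivalent of the lookup):

```python
def sync_record(store, external_id, data):
    """Idempotent upsert: the external system's unique id is the key,
    so re-running the sync (or retrying a failed batch) can never
    create a duplicate; it just overwrites with the same data.
    `store` is a dict standing in for the datastore.
    """
    existing = store.get(external_id)
    if existing == data:
        return False           # already synced, nothing to do
    store[external_id] = data  # create, or update in place
    return True
```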

What happens if the client does not get the response in step 4?

Also, I assume that if you get a failure and resend the entity,
you'll reuse the previous id?
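That's the property that would make the retry safe: since the key is allocated once and reused, a resent put is an overwrite rather than a new entity, even if an earlier attempt actually succeeded but its response was lost (your step-4 case). A rough sketch, with put() standing in for the POST to the handler (names and the retry count are mine):

```python
def post_with_retry(put, key, data, attempts=3):
    """Retry the POST with the *same* pre-allocated key each time.

    `put(key, data)` is a stand-in for the handler call and may raise
    on failure; because the key is fixed, a retry that follows a
    silently-successful attempt just rewrites the same entity.
    """
    for _ in range(attempts):
        try:
            put(key, data)
            return True
        except IOError:
            continue  # transient failure: resend with the same key
    return False
```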



Robert



On Tue, Nov 9, 2010 at 15:07, stevep <[email protected]> wrote:
> I would like some feedback about pluses / minuses for handling new records.
> Currently I need to optimize how the client request handler processes new
> entity put()s. Several custom indices for the model are used, so puts run
> too close to the 1,000 ms limit (they were running over the limit prior to
> the Nov. 6th maintenance – thanks Google).
> The entities are written with unique integer key values. Integers are
> generated using Google’s recommended sharded process. Client currently POSTs
> a new record to the GAE handler. If handler does not send back a successful
> response, client will retry POST “n” times (at least twice, but possibly
> more). After “n” continued failures, the client prompts the user that the
> record could not be created, saves the data locally, and asks the user to
> try again later.
> Planned new process will use Task Queue.
> 1) Client POSTs new entity data to the handler. At this point, user sees a
> dialog box saying record is being written.
> 2) Handler will use the shards to generate the next integer value for the
> key.
> 3) Handler sets up a task queue with the new key value and record data, and
> responds back to the client with the key value.
> 4) Client receives key value back from handler, and changes to inform user
> that record write is being confirmed on the server (or as before retries
> entire POST if response is an error code).
> 5) Client waits a second or two (for task queue to finish), then issues a
> GET to the handler to read the new record using the key value.
> 6) Handler does a simple key value read of the new record. Responds back to
> client either with found or not found status.
> 7) If client gets found response, then we are done. If not found, or error
> response client will wait a few seconds, and issue another GET.
> 8) If after “n” tries, no GET yields a successful read, then the client
> informs
> user that record could not be written, and “please try again in a few
> minutes” (saving new record data locally).
> I know this is not ideal, but believe it is valid, given GAE’s
> limitations, as an approach to minimizing lost writes. Would very much
> appreciate feedback. I should note that the imposition of a few seconds
> delay while writing the record should not be an issue given it is a single
> transaction at the end of a previous creative process which has engaged user
> for several minutes.  Also, our logic can tolerate gaps (missing integer
> values) in the model's key values.
> TIA,
> stevep
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
