Thanks for the response.

We are not very write intensive, so new records occur infrequently. As
I read the allocate_ids documentation, it appears more oriented toward
batch processes. Also, Google's recommended sharded approach to
generating unique, sequential integers is a flexible, fast bit of
kit that we like.
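For anyone following along, here is a simplified, in-memory sketch of the sharded idea: each shard hands out an interleaved slice of the integers, so ids are unique and roughly sequential while writes are spread across shards. The class name and structure are mine, the locks just stand in for datastore transactions, and this is not a drop-in for the real GAE pattern:

```python
import random
import threading

NUM_SHARDS = 5

class ShardedIdAllocator:
    """Shard i hands out ids i, i + N, i + 2N, ... so ids are unique
    across shards and roughly sequential overall, while the write load
    is spread over N counters instead of one hot entity."""

    def __init__(self, num_shards=NUM_SHARDS):
        self.num_shards = num_shards
        self.counts = [0] * num_shards
        self.locks = [threading.Lock() for _ in range(num_shards)]

    def next_id(self):
        shard = random.randrange(self.num_shards)
        with self.locks[shard]:  # stands in for a datastore transaction
            next_id = shard + self.counts[shard] * self.num_shards
            self.counts[shard] += 1
            return next_id
```

The trade-off is the one stevep mentions: ids are only roughly ordered, and any abandoned id becomes a hole in the sequence.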

We use a client-side async URL call for the new-record post, which
allows us to set a time limit on the server response. If we do not get
a response within a few seconds, the call itself raises an error,
which we handle the same as a server error response. Any response that
arrives after that point is ignored.
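The deadline handling above can be sketched like this (a Python stand-in for the client-side call; `post_fn` is a hypothetical callable representing the actual request, and the deadline value is illustrative):

```python
import concurrent.futures

POST_DEADLINE_SECS = 3.0  # illustrative "few seconds" limit

def post_with_deadline(post_fn, payload, deadline=POST_DEADLINE_SECS):
    """Run post_fn(payload) but give up after `deadline` seconds.
    A timeout is routed down the same path as a server error; a
    response that arrives after the deadline is simply dropped."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(post_fn, payload)
    try:
        result = ("ok", future.result(timeout=deadline))
    except concurrent.futures.TimeoutError:
        result = ("error", "deadline exceeded")  # same handling as a 5xx
    except Exception as exc:
        result = ("error", str(exc))
    pool.shutdown(wait=False)  # any late response is discarded
    return result
```

The key design point is that the client, not the server, decides when the attempt has failed, so a slow GAE response and a dead GAE response look identical to the retry logic.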

If we do end up having to save the user data locally, the id will be
saved with it. The resend logic will first check whether the record
eventually did get posted by the Task Queue; if not, it will resend
it. The client will always check the local store for un-posted records
each time the user opens the web page. However, we certainly cannot
expect every user presented with a "try again later" message to
return, so holes in the key number sequence are certain.
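The resend-on-page-open logic above might look roughly like this sketch. `server_has_record` and `post_record` are hypothetical callables standing in for the app's lookup and post APIs:

```python
def resend_unposted(local_store, server_has_record, post_record):
    """On page open: for each locally saved record, first ask the
    server whether the Task Queue eventually stored it (lookup by the
    saved id); only resend the ones that never made it. Records that
    still fail stay in the local store for the next visit."""
    resent = []
    for record in list(local_store):          # copy: we mutate the store
        if server_has_record(record["id"]):
            local_store.remove(record)        # already there: just clean up
        elif post_record(record):
            local_store.remove(record)        # resend succeeded
            resent.append(record["id"])
        # else: keep it locally and retry on a later visit
    return resent
```

Because the id was allocated and saved on the first attempt, the server-side check makes the resend idempotent: a "too late" Task Queue post never produces a duplicate record.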

This "too late" task queue may be a more common error than a total
failure, so it may make sense to add something like email follow-up.
However, the overall risk (based on recent GAE performance) is a
situation where an app suddenly starts throwing Deadline Exceeded
errors due to GAE infrastructure issues rather than developer-
contolled code issues. In that case, the ability to post the record
for email follow-up will likely fail also.

Down the road, I've thought about running an AWS / MySQL server as a
backup. If a specific "high need" GAE post fails, it would be
relatively simple to redirect to AWS. It's a good bit of redundancy
work at this point, though (and it only works for a specific type of
record), so we will put that off until we get out into the real world
and see some volume**. The odds of both cloud sources having internal
infrastructure issues at the same time will hopefully be very low.

Again, thanks for your response.
stevep

**  GAE clearly wins against AWS for low-volume apps because there is
no hourly charge. However, as an app's usage increases, the constant
kill/start of instances after n transactions (~10K right now) looks to
me like an effective hourly charge based on CPU-cycle "overhead". I
need a lot more data before I understand this, though.

On Nov 9, 11:39 pm, Robert Kluin <[email protected]> wrote:
> Why not use allocate_ids to generate the ids?  That might simplify the
> process a bit.
>  http://code.google.com/appengine/docs/python/datastore/functions.html...
>
> I've been using a similar process for batch updates for quite some
> time.  Works well for my case, but in my case there is not a user
> involved.  It is an automated sync process to another system's
> database, so I have a unique id to use for lookups to avoid
> duplicates.
>
> What happens if the client does not get the response in step 4.
>
> Also, I assume if you get a failure, and resend the entity you'll use
> the previous id?
>
> Robert
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.