Hey Richard,
  I think your idea sounds good.  Inserting a named task if you can't
update the auction/item ought to do the trick.  Once your task gets
into the queue it will eventually run; you can also adjust the retry
parameters if you want finer control over how tasks retry.  Re-adding
a task with the same name will raise TaskAlreadyExistsError (while the
original is still in the queue) or TombstonedTaskError (for a week or
so after it runs).
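Roughly, the dedup works like this (the FakeQueue below is just a toy
model of the name check; on App Engine you'd call taskqueue.add(name=...)
and catch taskqueue.TaskAlreadyExistsError / taskqueue.TombstonedTaskError
instead):

```python
class TaskAlreadyExistsError(Exception):
    """Stand-in for taskqueue.TaskAlreadyExistsError."""
    pass

class FakeQueue:
    # Toy model of named-task dedup: each name is accepted exactly once,
    # whether the task is still queued or already tombstoned.
    def __init__(self):
        self._names = set()

    def add(self, name, payload):
        if name in self._names:
            raise TaskAlreadyExistsError(name)
        self._names.add(name)
        return payload

q = FakeQueue()
q.add('auction1-itemY', {'bidder': 'X', 'price': 100})
try:
    q.add('auction1-itemY', {'bidder': 'Z', 'price': 120})
    sold_twice = True
except TaskAlreadyExistsError:
    sold_twice = False  # second sale of itemY was rejected
```

So the first add wins and any later attempt to sell the same item hits
the exception, which is exactly the once-only behavior you want.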

  You might also use memcache to mark items that have been sold.  The
taskqueue is fast, but the taskqueue API has a much lower quota of
allowed calls than memcache.
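For example (FakeMemcache is just a dict standing in for
google.appengine.api.memcache; the real memcache.add() likewise returns
False when the key already exists, but entries can be evicted at any
time, so treat the flag as a hint, not the record of truth):

```python
class FakeMemcache:
    # Dict-backed stand-in for memcache: add() succeeds only if the
    # key is absent, so the first writer wins.
    def __init__(self):
        self._data = {}

    def add(self, key, value):
        if key in self._data:
            return False
        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)

cache = FakeMemcache()
first = cache.add('sold:auction1-itemY', 'bidderX')   # marks the item sold
second = cache.add('sold:auction1-itemY', 'bidderZ')  # rejected, already marked
```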



Robert
On Thu, Mar 3, 2011 at 02:52, Richard Arrano <[email protected]> wrote:
> Hello Robert,
> I don't necessarily even need to do it by tasks - what I'm doing is
> something along the lines of a live ebay type auction, where I need to
> guarantee that when user X wins the bid for product Y, I will a)
> record this bid, b) be able to access it for the next item so I can
> know that product Y is no longer available, and c) let users see
> which items have been previously sold (i.e. some sort of history). I
> can do this in the normal way with the
> datastore, but my concern is when the datastore has issues due to
> latency and put() fails. I can't have this, and it's a very low rate
> of writing with a low amount of data that needs to be guaranteed.
> That's why I thought the task and encoding it in the payload might
> work, but if I can't access what I previously wrote to the payload
> then it won't.
>
> What about modifying it so that rather than attempting to access the
> payload data, I give the task a name in the format 'auctionId-itemId'
> and put the bidder and the price in the payload. That way the
> item cannot be sold twice, because we can't have two tasks with the
> same name, and it will eventually get written if datastore writing is
> down because the task will eventually be executed. What do you think?
> I'm also a bit confused about whether a task eventually executes -
> is it only transactional tasks that must eventually be executed and
> don't get tombstones? In that case this won't work because I can't
> name them.
>
> Thanks,
> Richard
>
> On Mar 2, 9:04 am, Robert Kluin <[email protected]> wrote:
>> Hi Richard,
>>   Data in the task queue is not "accessible in some fashion when the
>> user inputs it."  There is currently no way to query / retrieve tasks
>> from a queue; it's a one-way street, so to speak.
>>
>>   It sounds like you're implementing an autosave type feature?  If you
>> use memcache, remember it is just a cache and a given key (or the
>> whole thing) could be flushed at any time.  If you use the taskqueue
>> (or deferred), remember that the tasks may not be run in the order
>> they were inserted (particularly true if a queue backs up at all). If
>> possible, you'll want to keep some type of revision count so you don't
>> overwrite new with old.
>>
>>   If you provide more info someone can probably offer additional pointers.
>>
>> Robert
>>
>> On Wed, Mar 2, 2011 at 07:28, Richard Arrano <[email protected]> wrote:
>> > Hello,
>> > I was reading the thread regarding wanting to guarantee a put()
>> > (http://groups.google.com/group/google-appengine/browse_thread/thread/8280d73d09dc64ee/1cf8c5539155371a?lnk=raot&pli=1)
>> > and I've found
>> > myself desiring to do the same. It seems to me that using the deferred
>> > task queue with < 10k of data will allow us to guarantee the data to
>> > be committed at some point in time, regardless of datastore latency/
>> > availability. The scenario that interests me is when I have some data
>> > I'd like to make sure gets committed at some later time (when exactly
>> > doesn't matter), but it must be recorded and accessible in some
>> > fashion when the user inputs it. I was thinking about using the
>> > deferred task queue, but the problem is that although it's < 10k of
>> > data, it will grow as the user inputs more data (they won't be able to
>> > input everything at once). Could this be solved by retrieving the task
>> > from the deferred queue and editing its payload? Is this possible to
>> > do? Is there another solution that will fit what I'm looking to do?
>>
>> > Thanks,
>> > Richard
>>
>> > --
>> > You received this message because you are subscribed to the Google Groups 
>> > "Google App Engine" group.
>> > To post to this group, send email to [email protected].
>> > To unsubscribe from this group, send email to 
>> > [email protected].
>> > For more options, visit this group
>> > at http://groups.google.com/group/google-appengine?hl=en.
>>
>>
>

