By 'at a time' I mean per call to taskqueue.Queue.add, not per HTTP
request. You can call that function twice in a row with two different
batches of tasks.

For example, in the Python API you can do this:
queue = taskqueue.Queue('default')
tasks = [taskqueue.Task(params={'id': i}) for i in xrange(100)]
queue.add(tasks)

If you have more than 100 tasks to insert, you will need to split them
into groups of at most 100 and call queue.add once per group.

For example:
queue = taskqueue.Queue('default')
tasks = [taskqueue.Task(params={'id': i}) for i in xrange(1000)]

def _yield_in_groups(l, group_size):
  # Yield successive slices of l, each at most group_size items long.
  for start_index in xrange(0, len(l), group_size):
    yield l[start_index:start_index + group_size]

for task_group in _yield_in_groups(tasks, 100):
  queue.add(task_group)
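(For what it's worth, the grouping helper is plain Python, so you can
sanity-check the batching logic outside App Engine. This sketch swaps
xrange for range so it also runs on Python 3; queue.add is left out
since it needs the App Engine runtime:)

```python
# Standalone check of the batching logic above; no App Engine APIs
# involved. Uses range() so it runs on Python 3 as well as Python 2.
def yield_in_groups(l, group_size):
    # Yield successive slices of l, each at most group_size items long.
    for start_index in range(0, len(l), group_size):
        yield l[start_index:start_index + group_size]

# 1000 tasks split into batches of 100 gives exactly 10 full batches.
groups = list(yield_in_groups(list(range(1000)), 100))
print(len(groups))      # 10 batches
print(len(groups[0]))   # each of length 100
```

Note that when the total isn't a multiple of 100, the final batch is
simply smaller, which queue.add accepts fine.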



On 12 May 2011 19:11, Jeff Schnitzer <[email protected]> wrote:
> Would you please clarify that?  When you say "at a time", do you mean
> "within the scope of a single HTTP request"?  The docs say that 100
> tasks is "the maximum number of tasks that can be added in a batch"
> but they don't specify what "batch" means.
>
> If this is indeed the limit, is there any talk of eliminating it for
> long-running Backend requests?  It seems to seriously limit the
> utility of Backends for long-running data processing that involves
> task creation - sure, I no longer need to break my work down into 30s
> (now 10m) increments, but now I need to respawn a new task anyways
> every time I hit 100 tasks.
>
> Here's the specific issue I have right now:
>
> I have a button in one of my apps that sends push notifications to a
> few million iPhone app users.  A "map" task iterates through the
> users, creating a new "reduce" task that sends the relevant data for a
> single push to a REST service wired up to the APNS.  The map task has
> to restart every 100 reduce tasks it creates.
>
> For technical reasons related to filtering specific users to push to,
> it would be a lot easier if the "map" task didn't have to restart.
> I'd like to fire up a Backend to run through the entire set in one go.
>  But this would require millions of tasks to be created in the context
> of that long-running request.
>
> Is this just not an option?
>
> Thanks,
> Jeff
>
> On Thu, May 12, 2011 at 5:01 PM, Greg Darke (Google)
> <[email protected]> wrote:
>> Yes.
>>
>> The taskqueue.add function may only add 100 tasks at a time.
>>
>> On 12 May 2011 15:35, Jeff Schnitzer <[email protected]> wrote:
>>> With appengine frontends, each request is limited to 100 inserts into
>>> the task queue.
>>>
>>> Do long-running backend requests have the same limitation?
>>>
>>> Jeff
>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups 
>>> "Google App Engine" group.
>>> To post to this group, send email to [email protected].
>>> To unsubscribe from this group, send email to 
>>> [email protected].
>>> For more options, visit this group at 
>>> http://groups.google.com/group/google-appengine?hl=en.
>>>
>>>
>>
>
