Niphlod can probably provide better advice on managing scheduler workers and system resources, but yes: you should be able to have a few workers running, sitting idle and ready to handle incoming tasks.
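The idea of workers started ahead of time and blocking until tasks arrive can be sketched with the standard library alone (the real web2py scheduler persists tasks in the database rather than an in-memory queue; the doubling "task" below is just a stand-in for real file processing):

```python
# Minimal stdlib sketch of the idle-worker pattern: workers block on an
# empty queue and wake up as soon as tasks are queued by user requests.
import queue
import threading

task_queue = queue.Queue()
results = []

def worker():
    while True:
        task = task_queue.get()      # blocks (idles) until a task is queued
        if task is None:             # sentinel value: shut this worker down
            task_queue.task_done()
            break
        results.append(task * 2)     # stand-in for real file processing
        task_queue.task_done()

# Start a few workers before any tasks exist; they simply wait.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

# Later, incoming requests queue tasks; the idle workers pick them up.
for n in range(5):
    task_queue.put(n)
task_queue.join()                    # wait until every task is processed

for _ in workers:                    # one sentinel per worker to stop them
    task_queue.put(None)
for w in workers:
    w.join()

print(sorted(results))               # -> [0, 2, 4, 6, 8]
```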
On Wednesday, September 16, 2015 at 6:11:08 PM UTC-4, Phillip wrote:
> Are you saying that if I start the workers before any tasks are queued (by an arbitrary number of users), the workers will be idle, waiting for queued tasks?
>
> If the answer is no:
>
> Here is the setup: there is a grid of files from which a user can generate 'offspring' files in all possible combinations. A previous inquiry has led me to think the scheduler could be used for multiprocessing here (instead of the ostensibly problematic multiprocessing module). The queued tasks are derived from the Python script that processes user-selected files, called in the controller when these file ids are passed via AJAX.
>
> Otherwise:
>
> Is there a brick wall here? If, for instance, the app were on Google App Engine, could a large number of idle workers simply be started to handle spikes in user requests?
>
> Thank you for the response
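For reference, here is a sketch of how that setup might look with the web2py scheduler; the task name `process_files`, the controller action, and the request variable are illustrative assumptions, not taken from the original thread:

```python
# models/scheduler.py -- sketch only; 'process_files' and the variable
# names are assumptions for illustration.
from gluon.scheduler import Scheduler

def process_files(file_ids):
    # stand-in for the script that generates the 'offspring' files
    # from the user-selected combinations
    return len(file_ids)

scheduler = Scheduler(db)

# controllers/default.py -- queue one task per AJAX request
def run():
    ids = request.vars.ids
    scheduler.queue_task('process_files', pvars=dict(file_ids=ids))
    return 'queued'
```

Worker processes are then started separately (e.g. `python web2py.py -K appname`); each one polls the database for queued tasks and otherwise sits idle, which is what makes pre-starting a few of them cheap.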

