I was wrong about SQLite - I got it working locally (thanks to the debug flag).

I've had some issues getting MySQL to 'refresh' when accessing it from 
different AMIs. Is there something I can do to force the workers to get an 
updated database connection?
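For context, a minimal model-file sketch of pointing several workers at one shared MySQL database (the hostname, credentials, task name, and heartbeat value below are placeholders, not details from this thread; pool_size=0 is one commonly suggested way to avoid a worker reusing a stale pooled connection):

```python
# models/scheduler.py -- a sketch, assuming a shared RDS MySQL instance.
# In a web2py model file, DAL is already injected into the environment;
# standalone you would import it (e.g. from gluon.dal import DAL in v2.4.x).
from gluon.scheduler import Scheduler

# Every worker must connect to the same database so they can see each
# other's heartbeats in scheduler_worker. pool_size=0 disables pooling,
# so each worker process opens its own fresh connection.
db = DAL('mysql://user:password@rds-endpoint/appdb', pool_size=0)

def long_job():
    # placeholder for the actual long-running work
    return 'done'

# heartbeat (in seconds) controls how often each worker checks in and
# how quickly the ticker notices other ACTIVE workers.
scheduler = Scheduler(db, dict(long_job=long_job), heartbeat=3)
```

Each machine would then run `python web2py.py -K appname` against this same model, as in the thread.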


On Tuesday, April 2, 2013 1:48:41 PM UTC-7, Niphlod wrote:
>
> are you sure your settings don't prevent a concurrent run?
> As soon as one of the workers sees 2 workers and 2 tasks, it should assign 
> one task to each of them.
> Try to run the workers with 
> python web2py.py -K appname -D 0
> to see the "debug" logging, one worker should print something like
> TICKER: I'm a ticker
> TICKER: workers are 2
> TICKER: tasks are 2
>
> On Tuesday, April 2, 2013 10:29:38 PM UTC+2, Eric S wrote:
>>
>>
>> I'm trying to run multiple Scheduler workers on different machines 
>> (AMIs), but can't get two workers to work at the same time. Although each 
>> worker is capable of processing jobs, only one will work at any one time.
>>
>> I see two workers in the db table 'scheduler_worker', both with status 
>> 'ACTIVE'.
>> I see two tasks in 'scheduler_tasks', one with status 'RUNNING', one with 
>> status 'QUEUED'.
>>
>> I'm running each worker with: python web2py.py -K appName
>> I'm using a shared MySQL database (on RDS), though I get the same results 
>> locally with SQLite.
>> The jobs I'm scheduling are long-running jobs so I need multiple 
>> concurrent workers. Using web2py v2.4.5.
>>
>> Any ideas?
>>
>> Thanks,
>> Eric
>>
>>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.