As I said before, if the tasks are ready to be processed, then as soon as the 
"ticker" sees n workers it assigns all the tasks (~50 per worker), splitting 
them evenly.

The only thing I can think of is a transaction isolation problem on MySQL's 
end (yes: a) MySQL sucks at this, b) the Python driver doesn't raise a 
"virtual finger" to help patch it, and c) I'd like to erase MySQL from the 
world for it). I'm planning to remove from the current scheduler the parts 
that take care of patching its problems, and instead recommend changing the 
default transaction isolation level when the scheduler is needed, so anyone 
can pick their poison ^_^
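For anyone who wants to pick their poison right away: MySQL's default level is 
REPEATABLE READ, which is what keeps a worker from seeing rows other processes 
have committed until it commits (or rolls back) its own transaction. The usual 
workaround is READ COMMITTED; as a sketch (double-check the option name against 
your MySQL version, and on RDS this goes in the instance's parameter group 
rather than my.cnf):

    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

or server-wide in my.cnf:

    [mysqld]
    transaction-isolation = READ-COMMITTED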

I don't have an AMI to test with, but I guess a normal MySQL setup under 
Ubuntu carries the same settings, and on my machine tasks are split evenly.

On Wednesday, April 3, 2013 12:14:06 AM UTC+2, Eric S wrote:
>
>
> After running more jobs, I see each worker picking up jobs. However, there 
> is frequently only one worker working while the other is waiting inactive, 
> seemingly unaware that there are waiting jobs to process. The jobs also 
> seem to be allocated unequally, where one worker (which has 'is_ticker' = 
> T) is processing most jobs. The workers are both instances of the same AMI. 
>
>
>
> On Tuesday, April 2, 2013 2:12:51 PM UTC-7, Niphlod wrote:
>>
>> if you're on 2.4.5 there shouldn't be problems with MySQL (it has a 
>> transaction isolation "problem" that differs from every other db 
>> engine.... if you don't commit(), you can't read what other processes 
>> have already committed)
>>
>> Anyway, on my test rig this problem was superseded some time ago: are you 
>> sure there aren't problems connecting to that instance? 
>>
>> On Tuesday, April 2, 2013 11:00:04 PM UTC+2, Eric S wrote:
>>>
>>>
>>> I was wrong about SQLite - I got it working locally (thanks to the debug 
>>> flag).
>>>
>>> I've had some issues getting MySQL to 'refresh' when accessing it from 
>>> different AMIs. Is there something I can do to force the workers to get an 
>>> updated database connection?
>>>
>>>
>>> On Tuesday, April 2, 2013 1:48:41 PM UTC-7, Niphlod wrote:
>>>>
>>>> are you sure your settings don't prevent a concurrent run?
>>>> As soon as one of the workers sees 2 workers and 2 tasks, it should 
>>>> assign one task to each of them.
>>>> Try to run the workers with 
>>>> python web2py.py -K appname -D 0
>>>> to see the "debug" logging, one worker should print something like
>>>> TICKER: I'm a ticker
>>>> TICKER: workers are 2
>>>> TICKER: tasks are 2
>>>>
>>>> On Tuesday, April 2, 2013 10:29:38 PM UTC+2, Eric S wrote:
>>>>>
>>>>>
>>>>> I'm trying to run multiple Scheduler workers on different machines 
>>>>> (AMIs), but can't get two workers to work at the same time. Although each 
>>>>> worker is capable of processing jobs, only one will work at any one time.
>>>>>
>>>>> I see two workers in the db table 'scheduler_worker', both with status 
>>>>> 'ACTIVE'.
>>>>> I see two tasks in 'scheduler_tasks', one with status 'RUNNING', one 
>>>>> with status 'QUEUED'.
>>>>>
>>>>> I'm running each worker with: python web2py.py -K appName
>>>>> I'm using a shared MySQL database (on RDS), though I get the same 
>>>>> results locally with SQLite.
>>>>> The jobs I'm scheduling are long-running jobs so I need multiple 
>>>>> concurrent workers. Using web2py v2.4.5.
>>>>>
>>>>> Any ideas?
>>>>>
>>>>> Thanks,
>>>>> Eric
>>>>>
>>>>>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.

