Thank you very much, Daniel, for your detailed reply. However, due to a
backward compatibility issue, we are stuck with Mojolicious 7.48 at the
moment.

But since we are using a MySQL database, I guess the Mojolicious version
probably doesn't matter much, and the idea of using a table update as a
semaphore mechanism should still work regardless of the version.
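To make sure I've understood the pattern, here is a minimal sketch of the idea, written in Python with SQLite purely for illustration (we'd use MySQL in practice). The table and column names (laststamp, ltime, pid) are borrowed from your example; the pid values are made up.

```python
# Sketch of the "atomic UPDATE as semaphore" pattern: several workers race
# to claim the next 10-second slot, and only the one whose UPDATE actually
# matches a row wins and does the work for that interval.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE laststamp (ltime INTEGER, pid INTEGER)")
conn.execute("INSERT INTO laststamp (ltime, pid) VALUES (0, 0)")
conn.commit()

def try_claim(conn, pid, now):
    """Atomically advance ltime by 10 seconds.

    Returns True only for the process whose UPDATE matched the row;
    everyone else sees ltime already in the future and matches nothing.
    """
    cur = conn.execute(
        "UPDATE laststamp SET ltime = ?, pid = ? WHERE ltime <= ?",
        (now + 10, pid, now),
    )
    conn.commit()
    return cur.rowcount == 1

now = int(time.time())
# Three hypothetical workers contend for the same slot.
winners = [pid for pid in (101, 102, 103) if try_claim(conn, pid, now)]
print(winners)  # exactly one pid claims the slot
```

The same WHERE-guarded UPDATE works on MySQL, since a single UPDATE statement is atomic there as well.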

Thanks again.

On Thu, Nov 23, 2017 at 10:42 AM, Daniel Mantovani <[email protected]> wrote:

> As a (probably not very efficient) option, you could consider coordinating
> the different processes through a database, something like:
>
> use Mojolicious::Lite;
> use Mojo::IOLoop;
> use Mojo::Pg;
>
> helper pg => sub { state $pg = Mojo::Pg->new("postgresql:///test") };
> app->pg->migrations->from_data->migrate;
> app->log->path('ten_seconds.log');
>
> Mojo::IOLoop->recurring(
>   1 => sub {
>     my $db    = app->pg->db;
>     my $ctime = int(time);
>     $db->update_p(
>       'laststamp' => {ltime => $ctime + 10, pid => $$},
>       {ltime => {'<=', $ctime}}, {returning => 'ltime'}
>     )->then(
>       sub {
>         my $res = shift->hashes;
>         if ($res->first) {
>
>           # do some actual work here
>           app->log->info(sprintf "Modified by: pid %i, time up to: %i",
>             $$, $res->first->{ltime});
>         }
>       }
>     )->catch(
>       sub {
>         my $err = shift;
>         app->log->error(sprintf "Error: %s, pid: %i, time: %i",
>           $err, $$, time);
>       }
>     );
>   }
> );
>
> get '/' => sub {
>   my $c  = shift;
>   my $db = $c->pg->db;
>   $db->select_p(laststamp => undef)->then(
>     sub {
>       my $res = shift->hashes;
>       $c->render(
>         text => sprintf('Last modified by pid %i, up to %i',
>           @{$res->first}{qw(pid ltime)})
>       );
>     }
>   );
> };
>
> app->start;
>
> __DATA__
>
> @@ migrations
> -- 1 up
> create table laststamp (ltime integer, pid integer);
> insert into laststamp (ltime, pid) values (0, 0);
> -- 1 down
> drop table laststamp;
>
>
> It works because the SQL update acts as a semaphore among the processes:
> only the process whose update actually matches the row atomically advances
> the ltime field by 10 seconds, so only that process does the work for that
> interval.
> Please note that Mojolicious and Mojo::Pg should be at their latest versions
> (7.57 and 4.03), or at least versions recent enough to support promises.
> Hope it helps,
> BR,
> Daniel
>
> On Wednesday, November 22, 2017 at 0:30:27 (UTC-3), Dan Book wrote:
>>
>> If you add a recurring timer in startup it will execute in each worker
>> process that is forked. You might consider having a separate process
>> (possibly with a command for your application to spawn it) that just does
>> this recurring work.
>>
>> -Dan
>>
>> On Tue, Nov 21, 2017 at 9:40 PM, Nancy Li <[email protected]> wrote:
>>
>>> OK, I managed to replicate the issue in dev. It was actually caused
>>> by using hypnotoad in production: I was using morbo in dev, which probably
>>> runs only one worker, while production likely runs four.
>>>
>>> I'm open to suggestions on how to tackle this issue. This is my first
>>> time using IOLoop, and any advice is highly appreciated.
>>>
>>>
>>> On Wednesday, November 22, 2017 at 12:49:37 PM UTC+11, Nancy Li wrote:
>>>>
>>>> * Mojolicious version: 7.39
>>>> * Perl version: v5.24.9
>>>> * Operating system: Linux AND 2.6.32-504.23.4.el6.x86_64
>>>>
>>>> ### Steps to reproduce the behavior
>>>> I'm using Mojo::IOLoop->recurring to schedule an HTTP GET request every
>>>> 10s. However, from the log file, it seems the IOLoop is starting more
>>>> than one job at a time. Sometimes there are 2, 3, or 4 jobs running
>>>> simultaneously.
>>>>
>>>> Code:
>>>> ```
>>>> Mojo::IOLoop->recurring(
>>>>     10 => sub {
>>>>         my $task    = 'pull_reservation';
>>>>         my $backend = $self->minion->backend;
>>>>
>>>>         my $inactive_batch = $backend->list_jobs( 0, -1,
>>>>             { state => 'inactive', task => $task, queue => 'default' } );
>>>>         my $active_batch = $backend->list_jobs( 0, -1,
>>>>             { state => 'active', task => $task, queue => 'default' } );
>>>>
>>>>         if ( !@$inactive_batch && !@$active_batch ) {
>>>>             $self->minion->enqueue( $task => [], { priority => 1 } );
>>>>         }
>>>>     }
>>>> );
>>>> ```
>>>>
>>>> ### Expected behavior
>>>> IOLoop should only be starting one job at a time.
>>>>
>>>> ### Actual behavior
>>>> IOLoop sometimes starts more than one job. From the log file, it is
>>>> starting 2-4 jobs.
>>>>
>>>> ### Content from log file:
>>>> ```
>>>> [Wed Nov 22 11:02:35 2017] [info]  job completed with 0 reservations
>>>> with job_id 244828 .
>>>> [Wed Nov 22 11:02:35 2017] [info]  job completed with 0 reservations
>>>> with job_id 244827 .
>>>> [Wed Nov 22 11:02:35 2017] [info]  job completed with 0 reservations
>>>> with job_id 244829 .
>>>> [Wed Nov 22 11:02:42 2017] [info]  job completed with 0 reservations
>>>> with job_id 244833 .
>>>> [Wed Nov 22 11:02:42 2017] [info]  job completed with 0 reservations
>>>> with job_id 244832 .
>>>> [Wed Nov 22 11:02:42 2017] [info]  job completed with 0 reservations
>>>> with job_id 244831 .
>>>> [Wed Nov 22 11:02:42 2017] [info]  job completed with 0 reservations
>>>> with job_id 244830 .
>>>> [Wed Nov 22 11:02:54 2017] [info]  job completed with 0 reservations
>>>> with job_id 244836 .
>>>> [Wed Nov 22 11:02:54 2017] [info]  job completed with 0 reservations
>>>> with job_id 244834 .
>>>> ```
>>>>
>>>> I've also checked the details of each parallel job; they have exactly
>>>> the same creation time.
>>>>
>>>> ```
>>>>  ./script/app.pl minion job  285950
>>>> {
>>>>   "args" => [],
>>>>   "attempts" => 1,
>>>>   "children" => [],
>>>>   "created" => "2017-11-22T01:25:39Z",
>>>>   "delayed" => "2017-11-22T01:25:39Z",
>>>>   "finished" => "2017-11-22T01:25:44Z",
>>>>   "id" => 285950,
>>>>   "notes" => {},
>>>>   "parents" => [],
>>>>   "priority" => 1,
>>>>   "queue" => "default",
>>>>   "result" => {
>>>>     "uuid" => 246109
>>>>   },
>>>>   "retried" => undef,
>>>>   "retries" => 0,
>>>>   "started" => "2017-11-22T01:25:43Z",
>>>>   "state" => "finished",
>>>>   "task" => "pull_reservation",
>>>>   "worker" => 157
>>>> }
>>>>  ./script/app.pl minion job  285951
>>>> {
>>>>   "args" => [],
>>>>   "attempts" => 1,
>>>>   "children" => [],
>>>>   "created" => "2017-11-22T01:25:39Z",
>>>>   "delayed" => "2017-11-22T01:25:39Z",
>>>>   "finished" => "2017-11-22T01:25:44Z",
>>>>   "id" => 285951,
>>>>   "notes" => {},
>>>>   "parents" => [],
>>>>   "priority" => 1,
>>>>   "queue" => "default",
>>>>   "result" => {
>>>>     "uuid" => 246112
>>>>   },
>>>>   "retried" => undef,
>>>>   "retries" => 0,
>>>>   "started" => "2017-11-22T01:25:43Z",
>>>>   "state" => "finished",
>>>>   "task" => "pull_reservation",
>>>>   "worker" => 157
>>>> }
>>>>
>>>> ```
>>>> While troubleshooting, I started using IOLoop->singleton, but it
>>>> didn't solve this problem :(
>>>>
>>>> Any pointers are highly appreciated.
>>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Mojolicious" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> To post to this group, send email to [email protected].
>>> Visit this group at https://groups.google.com/group/mojolicious.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
