Finished my first attempt to implement a Redis backend.
My benchmark results:
Clean start with 10000 jobs
Enqueued 10000 jobs in 9.98849105834961 seconds (1001.152/s)
29165 has started 4 workers
4 workers finished 1000 jobs each in 155.357006072998 seconds (25.747/s)
29165 has started 4 workers
4 workers finished 1000 jobs each in 121.667289972305 seconds (32.877/s)
Requesting job info 100 times
Received job info 100 times in 0.120177030563354 seconds (832.106/s)
Requesting stats 100 times
Received stats 100 times in 0.540313005447388 seconds (185.078/s)
Repairing 100 times
Repaired 100 times in 9.60826873779297e-05 seconds (1040770.223/s)
And Dan's benchmark results in my environment:
Clean start with 10000 jobs
Enqueued 10000 jobs in 228.140287876129 seconds (43.833/s)
29268 has started 4 workers
4 workers finished 1000 jobs each in 295.22328209877 seconds (13.549/s)
29268 has started 4 workers
4 workers finished 1000 jobs each in 224.983679056168 seconds (17.779/s)
Requesting job info 100 times
Received job info 100 times in 3.12703800201416 seconds (31.979/s)
Requesting stats 100 times
Received stats 100 times in 0.699573993682861 seconds (142.944/s)
Repairing 100 times
Repaired 100 times in 1.10982012748718 seconds (90.105/s)
Some explanations:
I used the Redis expire feature to automatically delete jobs and workers, so
the repair results here are irrelevant; I just wrote an empty function.
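The expire-based cleanup could look roughly like this (a minimal sketch, run via EVAL; the key names, field names, and TTL argument are my assumptions, not the actual implementation):

```lua
-- Hypothetical enqueue fragment: store the job hash and let Redis expire it
-- on its own, instead of relying on a periodic repair pass.
-- KEYS[1] = job key, ARGV[1] = serialized job, ARGV[2] = TTL in seconds
redis.call('HSET', KEYS[1], 'data', ARGV[1])
redis.call('EXPIRE', KEYS[1], tonumber(ARGV[2]))
return redis.status_reply('OK')
```

With a TTL on every job and worker key, stale records simply disappear, which is why `repair` can be a no-op.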
Next, I used MessagePack instead of JSON (with the XS implementation on the
Perl side) and Lua scripts instead of transactions.
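The appeal of a Lua script over a MULTI/EXEC transaction is that the whole read-check-write sequence runs atomically on the server in a single round trip, and Redis's Lua environment ships with a built-in `cmsgpack` library, so MessagePack blobs written from Perl can be decoded inside the script. A hedged sketch of what a dequeue script might look like (the `minion:job:*` key layout and field names are hypothetical):

```lua
-- Hypothetical dequeue, EVALed as one atomic script: pop a job id from the
-- pending list, decode its MessagePack blob server-side, mark it active.
-- KEYS[1] = pending list, KEYS[2] = active set
local id = redis.call('LPOP', KEYS[1])
if not id then return nil end
local blob = redis.call('HGET', 'minion:job:' .. id, 'data')
local job = cmsgpack.unpack(blob)   -- no extra round trip to the client
job.state = 'active'
redis.call('HSET', 'minion:job:' .. id, 'data', cmsgpack.pack(job))
redis.call('SADD', KEYS[2], id)
return id
```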
My next step is to profile the Lua scripts (I want to try this:
https://stackoverflow.com/questions/16370333/can-i-profile-lua-scripts-running-in-redis).
So far, the only reason to store jobs as hashes is the ability to check the
parents and delayed fields. I think I can find other ways to store jobs,
which could significantly improve performance.
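The parents/delayed check the hash layout enables might be sketched like this (an illustration only; it assumes both fields are always present, that parents are stored as a MessagePack array, and that a finished parent's key has expired, which combines with the TTL cleanup above):

```lua
-- Hypothetical readiness check: a job is dequeueable only if its delay has
-- passed and all of its parent job keys are gone (finished and expired).
-- KEYS[1] = job key, ARGV[1] = current epoch time
local delayed = tonumber(redis.call('HGET', KEYS[1], 'delayed') or 0)
if delayed > tonumber(ARGV[1]) then return 0 end
for _, parent in ipairs(cmsgpack.unpack(redis.call('HGET', KEYS[1], 'parents'))) do
  if redis.call('EXISTS', 'minion:job:' .. parent) == 1 then return 0 end
end
return 1
```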
So far, it looks like it's worth the effort to continue. Enqueue is already a
bit faster than the current Pg backend (700 j/s for enqueue and 170 j/s for
dequeue), so I want to believe there is a light at the end of the tunnel and
it's not the train.
On 31.12.2017 21:10, Dan Book wrote:
Please, I welcome any attempts at making it work! My development of
the Redis backend is stalled because I have been unable to find a way
to make the dequeue process efficient enough. I have copied the minion
benchmark script with the modification to use Redis here:
https://github.com/Grinnz/Minion-Backend-Redis/blob/master/examples/minion_bench.pl
-Dan
On Sun, Dec 31, 2017 at 7:08 AM, Илья Рассадин <[email protected]> wrote:
Hi!
I recently started to work on my own implementation of
Minion::Backend::Redis.
While reading the code of Minion::Backend::Pg, some questions came to
mind and I wrote directly to Sebastian. That's how I found out about
Dan Book's Redis backend:
https://github.com/Grinnz/Minion-Backend-Redis
First of all, I think it would be great to link to this mailing list
in the Minion documentation. Right now the doc says "You can use
Minion (https://metacpan.org/pod/Minion) as a standalone job queue",
so it's not very clear that this mailing list is the right place to
discuss Minion.
And second, as far as I know, the current Redis backend is still under
construction, and I want to help make it as solid as the Pg backend,
but faster.
Now I'm trying to implement my own backend, based not on Dan's code
but on the Pg backend. I'm making heavy use of Lua scripting and the
Data::MessagePack module. Next I want to benchmark it against Dan's
current version, compare speed and performance, and determine the
maximum task rate per second. The tricky thing with Redis Lua
scripting is that it is fast, but can be a voracious CPU eater...
If I'm satisfied with the result, I'll make a PR to Dan's Redis
backend repo.
But if the Mojo team already has another plan for the Redis backend
and needs a couple of hardworking hands to help, I'm very motivated,
and now you know how to find me. So feel free to contact me.
PS: Happy New Year, and great thanks to the whole Mojo team for your
great work.
--
You received this message because you are subscribed to the Google
Groups "Mojolicious" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/mojolicious.
For more options, visit https://groups.google.com/d/optout.