[ https://issues.apache.org/jira/browse/QPID-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291379#comment-13291379 ]

Ken Giusti commented on QPID-4046:
----------------------------------

Performance of a queue fill followed by a queue drain, with multiple clients, 
prior to the fix:

[kgiusti@xxx Test1.5]$ ./Test1-Setup.sh; ./Test1-Sender.sh; ./Test1-Receiver.sh
+ qpid-config -b 127.0.0.1:8888 add queue inQ1 --max-queue-size=12000000000 
--max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
+ numactl --cpunodebind 6 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 5 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 4 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ wait
+ numactl --cpunodebind 3 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
tp(m/s)
37135
tp(m/s)
37044
tp(m/s)
37010
tp(m/s)
36945
+ numactl --cpunodebind 5 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 4 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 3 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 6 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ wait
tp(m/s) l-min   l-max   l-avg
36776
tp(m/s) l-min   l-max   l-avg
35813
tp(m/s) l-min   l-max   l-avg
34623
tp(m/s) l-min   l-max   l-avg
34519


Same test, with patch to rate limit queue cleanup:


[kgiusti@xxxx Test1.5]$ ./Test1-Setup.sh; ./Test1-Sender.sh; ./Test1-Receiver.sh
+ qpid-config -b 127.0.0.1:8888 add queue inQ1 --max-queue-size=12000000000 
--max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
+ numactl --cpunodebind 6 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 5 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 4 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ wait
+ numactl --cpunodebind 3 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
tp(m/s)
37590
tp(m/s)
37585
tp(m/s)
37453
tp(m/s)
37318
+ numactl --cpunodebind 5 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 4 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 3 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 6 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ wait
tp(m/s) l-min   l-max   l-avg
41857
tp(m/s) l-min   l-max   l-avg
41839
tp(m/s) l-min   l-max   l-avg
41824
tp(m/s) l-min   l-max   l-avg
41062
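
Comparing the averages of the four drain results: pre-fix (36776 + 35813 + 
34623 + 34519)/4 ~= 35433 msgs/sec; with the patch (41857 + 41839 + 41824 + 
41062)/4 ~= 41646 msgs/sec, roughly a 17% improvement in drain throughput.  
Fill throughput is essentially unchanged (~37K msgs/sec in both runs).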


Re-run the test, but with receivers and senders running simultaneously.  First, 
pre-patch:

[kgiusti@xxxx Test1.5]$ ./Test1-Receiver.sh &
[1] 7886
[kgiusti@xxxx Test1.5]$ + numactl --cpunodebind 5 qpid-receive -b 
127.0.0.1:8888 -a inQ1 -f -m 1000000 --capacity 2000 --ack-frequency 1000 
--print-content no --report-total
+ numactl --cpunodebind 4 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 3 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 6 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ wait
./Test1-Sender.sh
+ numactl --cpunodebind 6 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 5 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 4 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ wait
+ numactl --cpunodebind 3 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
tp(m/s)
23990
tp(m/s)
23920
tp(m/s)
23786
tp(m/s)
23783
[kgiusti@xxxx Test1.5]$ tp(m/s)       l-min   l-max   l-avg
21190
tp(m/s) l-min   l-max   l-avg
21031
tp(m/s) l-min   l-max   l-avg
21008
tp(m/s) l-min   l-max   l-avg
20988

[1]+  Done                    ./Test1-Receiver.sh


Repeat, with patch:


[kgiusti@xxxx Test1.5]$ + numactl --cpunodebind 5 qpid-receive -b 
127.0.0.1:8888 -a inQ1 -f -m 1000000 --capacity 2000 --ack-frequency 1000 
--print-content no --report-total
+ numactl --cpunodebind 4 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 3 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ numactl --cpunodebind 6 qpid-receive -b 127.0.0.1:8888 -a inQ1 -f -m 1000000 
--capacity 2000 --ack-frequency 1000 --print-content no --report-total
+ wait
./Test1-Sender.sh
+ numactl --cpunodebind 6 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 5 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ numactl --cpunodebind 4 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
+ wait
+ numactl --cpunodebind 3 qpid-send -b 127.0.0.1:8888 -a inQ1 -m 1000000 
--content-size 300 --capacity 2000 --report-total --sequence no --timestamp no
tp(m/s)
25981
tp(m/s)
25900
tp(m/s)
25814
tp(m/s)
25790
[kgiusti@xxxx Test1.5]$ tp(m/s)       l-min   l-max   l-avg
22760
tp(m/s) l-min   l-max   l-avg
22733
tp(m/s) l-min   l-max   l-avg
22637
tp(m/s) l-min   l-max   l-avg
22592

[1]+  Done                    ./Test1-Receiver.sh
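
Same comparison for the simultaneous case: average send throughput rises from 
(23990 + 23920 + 23786 + 23783)/4 ~= 23870 to (25981 + 25900 + 25814 + 
25790)/4 ~= 25871 msgs/sec, and average receive throughput from ~21054 to 
~22681 msgs/sec, roughly an 8% gain on both sides.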



                
> Improve broker's performance by rate limiting queue cleanup.
> ------------------------------------------------------------
>
>                 Key: QPID-4046
>                 URL: https://issues.apache.org/jira/browse/QPID-4046
>             Project: Qpid
>          Issue Type: Improvement
>          Components: C++ Broker
>    Affects Versions: 0.16
>            Reporter: Ken Giusti
>            Assignee: Ken Giusti
>            Priority: Trivial
>             Fix For: 0.17
>
>
> The cleanup of dequeued messages is done on the receive path with the Queue's 
> messageLock held (it must be held).  When multiple consumers with large ack 
> windows share the queue, a large backlog of dequeued messages can build up.  
> Consumer performance can be improved by rate limiting this queue cleanup.
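
For reference, a minimal sketch of the rate-limiting idea in C++.  This is 
hypothetical code, not the actual patch: the class shape, the cleanupLimit 
constant, and the std:: primitives (the broker itself uses qpid::sys types) 
are all assumptions.

#include <deque>
#include <mutex>

struct Message { /* payload omitted */ };

class Queue {
    std::mutex messageLock;                 // guards the dequeued list
    std::deque<Message> dequeued;           // acked messages awaiting cleanup
    static const size_t cleanupLimit = 10;  // assumed per-pass rate limit

public:
    // Called on the receive path.  Reclaims at most cleanupLimit messages per
    // call, so a large backlog never stalls a consumer behind messageLock;
    // the remainder is reclaimed incrementally on later passes.
    void cleanDequeued() {
        std::lock_guard<std::mutex> guard(messageLock);
        for (size_t n = 0; n < cleanupLimit && !dequeued.empty(); ++n)
            dequeued.pop_front();           // release one message's resources
    }
};

The point is that the cleanup cost per receive call is bounded: the patch 
trades a slightly longer-lived backlog for a shorter messageLock hold time on 
the hot path.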
