[
https://issues.apache.org/jira/browse/QPID-3890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13225218#comment-13225218
]
[email protected] commented on QPID-3890:
-----------------------------------------------------
bq. On 2012-03-07 21:30:28, Andrew Stitcher wrote:
bq. > This is great work.
bq. >
bq. > The most important thing on my mind, though, is how do we ensure that
bq. > work like this maintains correctness?
bq. >
bq. > Do we have some rules/guidelines for the use of the Queue structs that
bq. > we can use to inspect the code to satisfy its correctness? A simple
bq. > rule like this would be "take the blah lock before capturing or
bq. > changing the foo state". It looks like we might have had rules like
bq. > that, but these changes are so broad it's difficult for me to inspect
bq. > the code and reconstruct what rules now apply (if any).
bq. >
bq. > On another point - I see you've added the use of an RWLock - it's
bq. > always good to benchmark with a plain mutex before using one of these,
bq. > as they are generally higher overhead; unless the use is "read-lock
bq. > mostly, write-lock seldom", plain mutexes can win.
bq.
bq. Kenneth Giusti wrote:
bq. Re: locking rules - good point, Andrew. Most (all?) of these changes I
bq. made simply by manually reviewing the existing code's locking behavior
bq. and trying to identify what can be safely moved outside the locks. I'll
bq. update this patch with comments that explicitly call out those data
bq. members & actions that must hold the lock.
bq.
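A minimal sketch of that comment convention (hypothetical names; std::mutex
stands in for qpid::sys::Mutex - this is not the actual Queue class):

    #include <deque>
    #include <mutex>

    class Queue {
        std::mutex messageLock;

        // -- guarded by messageLock --
        std::deque<int> messages;   // the message container
        size_t enqueueCount;        // touch only while holding messageLock

        // -- not guarded --
        // (statistics that tolerate races could be called out here)

    public:
        Queue() : enqueueCount(0) {}

        void push(int msg) {
            // Rule: take messageLock before capturing or changing the
            // message container state.
            std::lock_guard<std::mutex> l(messageLock);
            messages.push_back(msg);
            ++enqueueCount;
        }
    };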
bq. The RWLock - hmm... that may be an unintentional change: at first I had
bq. a separate lock for the queue observer container, which was rarely
bq. written to. I abandoned that change and put the queue observers under
bq. the queue lock, since otherwise the order of events (as seen by the
bq. observers) could be re-arranged.
Doh - sorry, replied with way too low a Caffeine/Blood ratio. You're
referring to the lock in the listener - yes. Gordon points out that this may
be unsafe, so I need to think about it a bit more.

- Kenneth
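For context, the listener pattern at issue looks roughly like the sketch
below (hypothetical code, not the actual QueueListeners class): the listener
set is copied under its own lock and the callbacks run outside it. The
suspect window is between the copy and the callback - shared ownership keeps
a concurrently-removed listener alive, but it can still be notified after
its removal. Andrew's benchmarking point applies here as well: this
container sees regular writes, so a plain mutex is a reasonable default over
an RWLock.

    #include <algorithm>
    #include <memory>
    #include <mutex>
    #include <vector>

    struct Listener {
        virtual void notify() = 0;
        virtual ~Listener() {}
    };

    class QueueListeners {
        std::mutex lock;
        std::vector<std::shared_ptr<Listener> > listeners;
    public:
        void add(const std::shared_ptr<Listener>& l) {
            std::lock_guard<std::mutex> g(lock);
            listeners.push_back(l);
        }
        void remove(const std::shared_ptr<Listener>& l) {
            std::lock_guard<std::mutex> g(lock);
            listeners.erase(
                std::remove(listeners.begin(), listeners.end(), l),
                listeners.end());
        }
        void notifyAll() {
            std::vector<std::shared_ptr<Listener> > snapshot;
            {
                std::lock_guard<std::mutex> g(lock);
                snapshot = listeners;   // copy taken under the lock
            }
            // Outside the lock: a listener removed after the copy was taken
            // stays alive via shared_ptr but still receives this notify.
            for (size_t i = 0; i < snapshot.size(); ++i)
                snapshot[i]->notify();
        }
    };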
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/4232/#review5691
-----------------------------------------------------------
On 2012-03-07 19:23:36, Kenneth Giusti wrote:
bq.
bq. -----------------------------------------------------------
bq. This is an automatically generated e-mail. To reply, visit:
bq. https://reviews.apache.org/r/4232/
bq. -----------------------------------------------------------
bq.
bq. (Updated 2012-03-07 19:23:36)
bq.
bq.
bq. Review request for qpid, Andrew Stitcher, Alan Conway, Gordon Sim, and
bq. Ted Ross.
bq.
bq.
bq. Summary
bq. -------
bq.
bq. In order to improve our parallelism across threads, this patch reduces
bq. the scope over which the Queue's message lock is held.
bq.
bq. I've still got more testing to do, but the simple test cases below show
bq. pretty good performance improvements on my multi-core box. I'd like to
bq. get some early feedback, as there are a lot of little changes to the
bq. queue locking that will need vetting.
bq.
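(The core technique, reduced to a sketch - hypothetical code, not the actual
diff: hold the message lock only for the container mutation itself, and move
statistics updates outside the critical section.)

    #include <atomic>
    #include <deque>
    #include <mutex>

    class Queue {
        std::mutex messageLock;
        std::deque<int> messages;           // guarded by messageLock
        std::atomic<unsigned long> depth;   // statistic; no lock required
    public:
        Queue() : depth(0) {}

        // Before: statistics updated while the lock is held.
        void pushWide(int m) {
            std::lock_guard<std::mutex> l(messageLock);
            messages.push_back(m);
            depth.fetch_add(1);   // needlessly serialized with the push
        }

        // After: the lock covers only the container mutation.
        void pushNarrow(int m) {
            {
                std::lock_guard<std::mutex> l(messageLock);
                messages.push_back(m);
            }
            depth.fetch_add(1);   // runs outside the critical section
        }
    };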
bq. As far as performance is concerned, I've run the following tests on an
bq. 8-core Xeon X5550 2.67 GHz box. I used qpid-send and qpid-receive to
bq. generate traffic; all traffic was generated in parallel.
bq. All numbers are in messages/second, sorted by best overall throughput.
bq.
bq. Test 1: Funnel Test (shared queue, 2 producers, 1 consumer)
bq.
bq. test setup and execute info:
bq. qpidd --auth no -p 8888 --no-data-dir --worker-threads=3
bq. qpid-config -b $SRC_HOST add queue srcQ1 --max-queue-size=1200000000 \
bq.     --max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
bq. qpid-receive -b $SRC_HOST -a srcQ1 -f -m 4000000 --capacity 2000 \
bq.     --ack-frequency 1000 --print-content no --report-total &
bq. qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 \
bq.     --capacity 2000 --report-total --sequence no --timestamp no &
bq. qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 \
bq.     --capacity 2000 --report-total --sequence no --timestamp no &
bq.
bq.     trunk:                      patched:
bq.     tp(m/s)                     tp(m/s)
bq.
bq.     S1    :: S2    :: R1        S1    :: S2    :: R1
bq.     73144 :: 72524 :: 112914    91120 :: 83278 :: 133730
bq.     70299 :: 68878 :: 110905    91079 :: 87418 :: 133649
bq.     70002 :: 69882 :: 110241    83957 :: 83600 :: 131617
bq.     70523 :: 68681 :: 108372    83911 :: 82845 :: 128513
bq.     68135 :: 68022 :: 107949    85999 :: 81863 :: 125575
bq.
bq.
bq. Test 2: Quad Client Shared Test (1 queue, 2 senders, 2 receivers)
bq.
bq. test setup and execute info:
bq. qpidd --auth no -p 8888 --no-data-dir --worker-threads=4
bq. qpid-config -b $SRC_HOST add queue srcQ1 --max-queue-size=1200000000 \
bq.     --max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
bq. qpid-receive -b $SRC_HOST -a srcQ1 -f -m 2000000 --capacity 2000 \
bq.     --ack-frequency 1000 --print-content no --report-total &
bq. qpid-receive -b $SRC_HOST -a srcQ1 -f -m 2000000 --capacity 2000 \
bq.     --ack-frequency 1000 --print-content no --report-total &
bq. qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 \
bq.     --capacity 2000 --report-total --sequence no --timestamp no &
bq. qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 \
bq.     --capacity 2000 --report-total --sequence no --timestamp no &
bq.
bq.     trunk:                              patched:
bq.     tp(m/s)                             tp(m/s)
bq.
bq.     S1    :: S2    :: R1    :: R2       S1    :: S2    :: R1    :: R2
bq.     52826 :: 52364 :: 52386 :: 52275    64826 :: 64780 :: 64811 :: 64638
bq.     51457 :: 51269 :: 51399 :: 51213    64630 :: 64157 :: 64562 :: 64157
bq.     50557 :: 50432 :: 50468 :: 50366    64023 :: 63832 :: 64092 :: 63748
bq.
bq. **Note: pre-patch, qpidd ran steadily at about 270% CPU during this
bq. test. With the patch, qpidd's CPU utilization was steady at 300%.
bq.
bq.
bq. Test 3: Federation Funnel (two federated brokers, 2 producers upstream,
bq. 1 consumer downstream)
bq.
bq. test setup and execute info:
bq. qpidd --auth no -p 8888 --no-data-dir --worker-threads=3 -d
bq. qpidd --auth no -p 9999 --no-data-dir --worker-threads=2 -d
bq. qpid-config -b $SRC_HOST add queue srcQ1 --max-queue-size=600000000 \
bq.     --max-queue-count=2000000 --flow-stop-size=0 --flow-stop-count=0
bq. qpid-config -b $SRC_HOST add queue srcQ2 --max-queue-size=600000000 \
bq.     --max-queue-count=2000000 --flow-stop-size=0 --flow-stop-count=0
bq. qpid-config -b $DST_HOST add queue dstQ1 --max-queue-size=1200000000 \
bq.     --max-queue-count=4000000 --flow-stop-size=0 --flow-stop-count=0
bq. qpid-config -b $DST_HOST add exchange fanout dstX1
bq. qpid-config -b $DST_HOST bind dstX1 dstQ1
bq. qpid-route --ack=1000 queue add $DST_HOST $SRC_HOST dstX1 srcQ1
bq. qpid-route --ack=1000 queue add $DST_HOST $SRC_HOST dstX1 srcQ2
bq. qpid-receive -b $DST_HOST -a dstQ1 -f -m 4000000 --capacity 2000 \
bq.     --ack-frequency 1000 --print-content no --report-total &
bq. qpid-send -b $SRC_HOST -a srcQ1 -m 2000000 --content-size 300 \
bq.     --capacity 2000 --report-total --sequence no --timestamp no &
bq. qpid-send -b $SRC_HOST -a srcQ2 -m 2000000 --content-size 300 \
bq.     --capacity 2000 --report-total --sequence no --timestamp no &
bq.
bq.
bq.     trunk:                      patched:
bq.     tp(m/s)                     tp(m/s)
bq.
bq.     S1    :: S2    :: R1        S1    :: S2    :: R1
bq.     86150 :: 84146 :: 87665     90877 :: 88534 :: 108418
bq.     89188 :: 88319 :: 82469     87014 :: 86753 :: 107386
bq.     88221 :: 85298 :: 82455     89790 :: 88573 :: 104119
bq.
bq. Still TBD:
bq. - fix message group errors
bq. - verify/fix cluster
bq.
bq.
bq. This addresses bug QPID-3890.
bq. https://issues.apache.org/jira/browse/QPID-3890
bq.
bq.
bq. Diffs
bq. -----
bq.
bq. /trunk/qpid/cpp/src/qpid/broker/Queue.h 1297185
bq. /trunk/qpid/cpp/src/qpid/broker/Queue.cpp 1297185
bq. /trunk/qpid/cpp/src/qpid/broker/QueueListeners.h 1297185
bq. /trunk/qpid/cpp/src/qpid/broker/QueueListeners.cpp 1297185
bq.
bq. Diff: https://reviews.apache.org/r/4232/diff
bq.
bq.
bq. Testing
bq. -------
bq.
bq. unit test (non-cluster)
bq. performance tests as described above.
bq.
bq.
bq. Thanks,
bq.
bq. Kenneth
bq.
bq.
> Improve performance by reducing the use of the Queue's message lock
> -------------------------------------------------------------------
>
> Key: QPID-3890
> URL: https://issues.apache.org/jira/browse/QPID-3890
> Project: Qpid
> Issue Type: Improvement
> Components: C++ Broker
> Affects Versions: 0.16
> Reporter: Ken Giusti
> Assignee: Ken Giusti
> Priority: Minor
> Fix For: Future
>
>
> The mutex that protects the Queue's message container is held longer than
> it really should be (e.g. across statistics updates).
> Reducing the amount of code executed while the lock is held will improve
> the performance of the broker in a multi-core environment.