On 07/28/2010 05:43 PM, Brian Crowell wrote:
On Wed, Jul 28, 2010 at 5:15 AM, Gordon Sim <[email protected]> wrote:
Are you running the same test scenario as described in 3.3 of that document?
I.e. Simulating "60 AMQP clients talking to the AMQP broker with 10 shared
queues". (That is not what you get for perftest with 'default settings'
which is why I ask what may be a stupid question).

I'm not. I wanted to start by trying to get a baseline stat for one
queue (that is, how fast can Qpid serve one session?). The actual
scenario I'm working with is closer to:

   ./perftest --mode topic --count 100000 --npubs 4 --size 100 --pub-confirm no -s

I had installed it on a Windows box, but I'm comparing it to a Linux box
now, and the Linux box is trouncing it. From the comments here, it
sounds like I'll need to go Linux if I ever hope to get any
performance out of it. I had hoped there was something simple I wasn't
doing or some build option I could change, but it sounds like the OS
is a major factor.

On the above test:

Windows 2003 (8-core)
Pubs: 5671 msg/sec
Subs: 8672 msg/sec
Total: 17262 msg/sec
Throughput: 1.64 MiB/s

Debian Linux (1-core?)
Pubs: 5892 msg/sec
Subs: 11343 msg/sec
Total: 22687 msg/sec
Throughput: 2.16 MiB/s

The other thing I'm worried about is that I can push so much faster
*into* Qpid than I can pull out of it.

If you have four connections pumping messages into a queue as fast as they can
and only one connection pulling them out, then the queue will indeed back up.
I am very keen to get producer flow control implemented, which would
automatically slow the producers down to a rate that matches the consumer(s)
in the steady state.

Messages pile up in the queue
and I run out of memory fast. I suppose for that, I'll have to devote
one queue to each publisher.

Or perhaps build some feedback/throttling into the application(?).
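
To illustrate the sort of feedback I mean, here is a minimal sketch in plain
C++ (not the Qpid client API; sendToBroker is a hypothetical stand-in for
whatever publish call the application uses). The publisher takes a credit
before each send and blocks once too many messages are unconfirmed; each
confirm/ack returns a credit, so the producer is naturally held to the
consumer's rate:

// Sketch of application-level throttling: the publisher may only have a
// bounded number of unconfirmed messages in flight. sendToBroker and the
// confirm loop below are hypothetical stand-ins, not part of any Qpid API.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

class CreditThrottle {
public:
    explicit CreditThrottle(unsigned maxInFlight) : credits(maxInFlight) {}

    // Called by the publisher before each send; blocks while the in-flight
    // window is full, which is what slows the producer down.
    void acquire() {
        std::unique_lock<std::mutex> lock(mutex);
        cv.wait(lock, [this] { return credits > 0; });
        --credits;
    }

    // Called when the broker confirms a message (or the consumer acks it).
    void release() {
        {
            std::lock_guard<std::mutex> lock(mutex);
            ++credits;
        }
        cv.notify_one();
    }

private:
    std::mutex mutex;
    std::condition_variable cv;
    unsigned credits;
};

int main() {
    CreditThrottle throttle(1000);   // at most 1000 unconfirmed messages

    // Publisher loop (hypothetical send call).
    std::thread publisher([&] {
        for (int i = 0; i < 100000; ++i) {
            throttle.acquire();
            // sendToBroker(message);   // real publish would go here
        }
    });

    // Confirmation handler; in a real client this would run when a
    // confirm/ack arrives from the broker. The sleep simulates a consumer
    // that is slower than the publisher.
    std::thread confirms([&] {
        for (int i = 0; i < 100000; ++i) {
            std::this_thread::sleep_for(std::chrono::microseconds(10));
            throttle.release();
        }
    });

    publisher.join();
    confirms.join();
    std::cout << "done" << std::endl;
    return 0;
}

In a real client the release() would be driven by publisher confirms (the
--pub-confirm path) or by acknowledgements from the consumer, and the window
size would be tuned to how much queue depth you can afford to hold in memory.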
