This was an Intel Xeon 5570 box running the RT kernel, set up with the Messaging install and optimizations recommended in Red Hat's MRG docs: http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/
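
For anyone trying to reproduce this, the broker-side settings those docs cover come down to a handful of qpidd flags. The line below is only an illustration of that kind of invocation, not the exact command used here; check qpidd --help on your build for the supported options and defaults.

    # Illustrative qpidd start for benchmarking: auth and management off,
    # Nagle disabled, worker threads set to roughly one per core.
    # All values here are assumptions, not the tested configuration.
    qpidd --port 40000 --auth no --mgmt-enable no --tcp-nodelay --worker-threads 8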
I dug up the numbers I saw. Running:

    ./perftest --port 40000 --username guest --password guest --tcp-nodelay --size 256 --npubs 10 --pub-confirm no --mode shared --async-commit yes -s

Summary output:

                                              pubs/sec   subs/sec   transfers/sec   Mbytes/sec
    (rh qpid rpm and tcp-nodelay on)          28924.2    49230.9    98464.3         24.0391
    (rh qpid rpm and tcp-nodelay off)         12227      20663.9    41341.9         10.0932
    (compiled qpidc-0.5 and tcp-nodelay on)   13523      21557      43122           10.5278

-----Original Message-----
From: Clark O'Brien [mailto:[email protected]]
Sent: Monday, July 26, 2010 8:08 PM
To: [email protected]; Donohue, Matt; [email protected]
Subject: Re: QPID message throughput - Red Hat numbers

A couple of interesting comments from the Red Hat doc:

The Intel® Xeon® 5482 based system increases throughput by ~48% over the
Intel® Xeon® 5365 based system. [On average, 344K messages/sec for the
Intel® Xeon® 5365 based system versus 505K messages/sec for the Intel®
Xeon® 5482 based system.]

The optimized memory allocator increased throughput by ~49.7%. [On
average, 558K messages/sec for the Intel® Xeon® 5365 based system versus
762K messages/sec for the Intel® Xeon® 5482 based system; the Intel®
Xeon® 5482 based system increased throughput by 36.6%.]

--- On Mon, 7/26/10, Ian.Kinkade <[email protected]> wrote:

> From: Ian.Kinkade <[email protected]>
> Subject: Re: QPID message throughput - Red Hat numbers
> To: [email protected], [email protected], [email protected]
> Date: Monday, July 26, 2010, 6:30 PM
>
> Hi Matt & Brian,
>
> It is my understanding that the Red Hat tests were conducted using a
> real-time version of RHEL (MRG) and that it was specifically tuned for
> MRG-M and its test applications.
>
> You might want to try using the tuning application from the MRG install
> before you run the tests.
>
> I hope this was helpful.
>
> Best Regards .................... Ian
>
> Ian Kinkade
> CEO
> Information Design, Inc.
> 145 Durham Road, Suite 11
> Madison, CT 06443 USA
> URL: www.idi-middleware.com
> Email: [email protected]
>
> Work: 203-245-0772 Ext: 6212
> Fax: 203-245-1885
> Cell: 203-589-1192
>
>
> On 7/26/2010 7:54 PM, Donohue, Matt wrote:
> > The last project I worked on was the same for me: not close to the MRG
> > throughput numbers with the same test, and this was on an otherwise
> > optimized trading box. The MRG qpid rpm was faster than an Intel C++
> > compiled version, though.
> >
> > Regards,
> > Matt
> >
> > -----Original Message-----
> > From: Brian Crowell [mailto:[email protected]]
> > Sent: Monday, July 26, 2010 3:18 PM
> > To: [email protected]
> > Subject: QPID message throughput - Red Hat numbers
> >
> > Red Hat claims to be able to get hundreds of thousands of messages
> > through on an eight-core machine
> > (http://www.redhat.com/mrg/messaging/features/ or
> > http://www.redhat.com/f/pdf/mrg/Reference_Architecture_MRG_Messaging_Throughput.pdf).
> > I'm working with an eight-core machine, and I'm only getting about
> > 11,000/sec in (about 6,500/sec out). This is with perftest, default
> > settings.
> >
> > What kinds of things do I need to be doing to get better throughput?
> >
> > --Brian
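
Two of the knobs called out above are cheap to experiment with: Nagle's algorithm (--tcp-nodelay on the perftest command line, with an equivalent option on the broker) and the memory allocator. The sketch below assumes the Red Hat paper's "optimized memory allocator" behaves like a preloaded tcmalloc; that, and the library path, are assumptions rather than anything the doc confirms.

    # Broker with an alternative allocator preloaded and TCP_NODELAY enabled.
    # The tcmalloc library name/path is an assumption -- adjust for your install.
    LD_PRELOAD=/usr/lib64/libtcmalloc.so.4 qpidd --auth no --tcp-nodelay

    # Matching client run with Nagle disabled (same flags as the run above).
    ./perftest --tcp-nodelay --size 256 --npubs 10 --mode shared -s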

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project: http://qpid.apache.org
Use/Interact: mailto:[email protected]
