Tests from the MRG RPMs are done as follows:

perftest results are transfers per second, as that is what perftest reports;
latency test results are messages per second, as that is what the latency test reports.


Note that the 33% increase figure is from an old paper, pre-RHEL 5.4. RHEL 5.4
now has an updated memory allocator based on this work and now beats the
optimized-memory-allocator numbers from that paper.

You can also set thread affinity on the allocator in RHEL 5.4, which in some
cases gives you an increase on NUMA machines. Note that RHEL 6 now does
this by default for NUMA, and the numbers Mark Wagner and I showed at
Red Hat Summit indicate a ~10% gain from RHEL 5.x to RHEL 6 beta 2, snap7.
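
As a hedged illustration of the NUMA-affinity point above (not from the original tests; numactl, the node numbers, and the qpidd flags are assumptions for the sketch):

```shell
# Hypothetical sketch: pin the broker to one NUMA node so the
# allocator's memory stays local to the CPUs running it.
# Assumes numactl and the qpidd broker are installed; node and
# thread counts here are examples only.

# Show the NUMA topology first.
numactl --hardware

# Bind qpidd's CPUs and memory allocations to node 0.
numactl --cpunodebind=0 --membind=0 qpidd --worker-threads 4
```

On RHEL 6 the allocator does per-node placement by default, so this kind of explicit binding matters mostly on RHEL 5.x.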


Tests are done on RT-certified machines,
see https://hardware.redhat.com/list.cgi?version=5&field0-0-0=cf_fixed_in&type0-0-0=substring&value0-0-0=MRG
or on whiteboxes.

All tests are done with SMIs turned off.
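
One hedged way to verify that claim on a given box (hwlatdetect ships with the rt-tests package; the duration and threshold here are illustrative, not values from the original tests):

```shell
# hwlatdetect busy-polls the CPU and reports any gaps caused by
# hardware interruptions such as SMIs; a clean 60-second run is a
# reasonable sign that SMIs are off or not firing.
hwlatdetect --duration=60 --threshold=10
```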

The boxes are then tuned according to the tuning guide, see
http://www.redhat.com/mrg/resources/

Note that none of the tests use the onboard NIC; onboard NICs are usually
built to save money and never yield the best results. Most of the slides I
have used list the full machine config, and most of the MRG papers and
presentations include the tunings and options used in the tests, for
reproduction, in the appendix or last few slides of the deck.

If you are using the RPMs and not getting comparable numbers, the cause is usually the --worker-threads setting, SMIs, memory speed (or overpopulated/unbalanced DIMMs), the NIC, or tuning -- and in that order.
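
A minimal sketch of the first knob on that list, assuming the C++ broker (qpidd) and that sizing the I/O thread pool to the core count suits the box; the exact values are illustrative, not the tunings used in the published runs:

```shell
# Size the broker's worker-thread pool to the number of cores and
# disable Nagle's algorithm -- two of the tunings named above.
qpidd --worker-threads "$(nproc)" --tcp-nodelay --auth no
```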


What has been great about the published numbers is that people try to reproduce them, and in many cases they do. When they don't, questions get asked, and that serves as a great tool for debugging the environment/network. It has been amazing what has been found this way. In every case I've been involved in, the numbers ended up being
reproduced within a percent or so.

Hope that helps,
Carl.



On 07/26/2010 10:06 PM, Clark O'Brien wrote:
Is it possible to replicate this environment in some cloud that supports 
pay-per-use, like Amazon? I would love to test this in an environment that I did 
not have to tear down.



--- On Mon, 7/26/10, Donohue, Matt<[email protected]>  wrote:

From: Donohue, Matt<[email protected]>
Subject: RE: QPID message throughput - Red Hat numbers
To: "Clark O'Brien"<[email protected]>, "[email protected]"<[email protected]>, 
"[email protected]"<[email protected]>
Date: Monday, July 26, 2010, 7:49 PM

This was an Intel Xeon 5570 box with the RT kernel and
following the Messaging install and optimizations
recommended by RH's MRG docs.
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/

I dug up the numbers I saw-

Running:
./perftest --port 40000 --username guest --password guest \
    --tcp-nodelay --size 256 --npubs 10 --pub-confirm no \
    --mode shared --async-commit yes -s

Summary output:
pubs/sec   subs/sec   transfers/sec   Mbytes/sec

(rh qpid rpm and tcp-nodelay on)
28924.2    49230.9    98464.3         24.0391

(rh qpid rpm and tcp-nodelay off)
12227      20663.9    41341.9         10.0932

(compiled qpidc-0.5 and tcp-nodelay on)
13523      21557      43122           10.5278
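
The Mbytes/sec column is consistent with transfers/sec multiplied by the 256-byte message size; a quick check of the first row:

```shell
# 98464.3 transfers/sec * 256 bytes per message, in binary megabytes.
awk 'BEGIN { printf "%.4f\n", 98464.3 * 256 / (1024 * 1024) }'
# prints 24.0391, matching the reported Mbytes/sec
```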


-----Original Message-----
From: Clark O'Brien [mailto:[email protected]]

Sent: Monday, July 26, 2010 8:08 PM
To: [email protected];
Donohue, Matt; [email protected]
Subject: Re: QPID message throughput - Red Hat numbers

A couple of interesting comments from the red hat doc.


The Intel® Xeon® 5482 based system increases throughput
by ~ 48% over the Intel® Xeon® 5365 based system. [On
average, 344K messages/sec for the Intel® Xeon® 5365 based
system versus 505K messages/sec for the Intel® Xeon® 5482
based system.]

The optimized memory allocator increased throughput by ~
49.7%. [On average, 558K messages/sec for the Intel® Xeon®
5365 based system versus 762K messages/sec for the Intel®
Xeon® 5482 based system. The Intel® Xeon® 5482 based system
increased throughput by 36.6%.]
--- On Mon, 7/26/10, Ian.Kinkade<[email protected]>
wrote:

From: Ian.Kinkade<[email protected]>
Subject: Re: QPID message throughput - Red Hat
numbers
To: [email protected],
[email protected],
[email protected]
Date: Monday, July 26, 2010, 6:30 PM
Hi Matt & Brian,

It is my understanding that the Red Hat tests were conducted using a
real-time version of RHEL (MRG) and that it was specifically tuned for
MRG-M and its test applications.

You might want to try using the tuning application from the MRG install
before you run the tests.

I hope this was helpful.

Best Regards .................... Ian

Ian Kinkade
CEO
Information Design, Inc.
145 Durham Road, Suite 11
Madison, CT  06443 USA
URL:   www.idi-middleware.com
Email: [email protected]

Work:  203-245-0772 Ext: 6212
Fax:   203-245-1885
Cell:  203-589-1192


On 7/26/2010 7:54 PM, Donohue, Matt wrote:
The last project I worked on was the same for me. Not close to the MRG
throughput numbers with the same test, and this was on an otherwise
optimized trading box.

The MRG qpid rpm was faster than an Intel C++ compiled version, though.
Regards,
Matt

-----Original Message-----
From: Brian Crowell [mailto:[email protected]]
Sent: Monday, July 26, 2010 3:18 PM
To: [email protected]
Subject: QPID message throughput - Red Hat
numbers
Red Hat claims to be able to get hundreds of thousands of messages
through on an eight-core machine
(http://www.redhat.com/mrg/messaging/features/ or
http://www.redhat.com/f/pdf/mrg/Reference_Architecture_MRG_Messaging_Throughput.pdf).

I'm working with an eight-core machine, and I'm only getting about
11,000/sec (in; about 6,500/sec out). This is with perftest, default
settings.

What kinds of things do I need to be doing to get better throughput?

--Brian


---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:[email protected]


