On Thu, Jan 31, 2013 at 9:41 AM, Ken Giusti <kgiu...@redhat.com> wrote:

> Hi Folks,
>
> I'd like to solicit some ideas regarding $SUBJECT.
>
> I'm thinking we could take an approach similar to what is done on the C++
> broker tests now.  That is we should develop a set of "native" send and
> receive programs that can be used to profile various performance
> characteristics (msgs/sec with varying size, header content encode/decode
> etc).  By "native" I mean implementations in Java and C.
>
> I've hacked our C "send" and "recv" examples to provide a rough swag at
> measuring msgs/sec performance.  I use these to double check that any
> changes I make to the proton C codebase do not have an unexpected impact on
> performance.  This really belongs somewhere in our source tree, but for now
> you can grab the source here:  https://github.com/kgiusti/proton-tools.git
>
> We do something similar for the QPID broker - simple native clients
> (qpid-send, qpid-receive) that do the performance-sensitive message
> generation/consumption.  We've written python scripts that drive these
> clients for various test cases.
>
> If we follow that approach, not only could we create a canned set of basic
> benchmarks that we could distribute, but we could also build interop
> tests by running one native client against the other, e.g. a C sender vs.
> a Java receiver.  That could be a useful addition to the current "unit"
> test framework - I don't believe we do any canned interop testing yet.
>
> Thoughts?
>

This is a good start at performance measurements for messenger; however, I
think it's too indirect when it comes to measuring engine performance. An
end-to-end measure like this is going to be significantly influenced by
both the driver and aspects of the messenger implementation. That could be
a problem because people directly embedding the engine might not be using
the driver at all, and may be using the engine quite differently than
messenger does.

I think it would be good to include some performance metrics that isolate
the various components of proton. For example, a metric that simply
encodes/decodes a message in a tight loop would be quite useful for
isolating the message implementation. Setting up two engines in memory and
using them to blast zero-sized messages back and forth as fast as possible
would tell us how much protocol overhead the engine is adding. Using the
codec directly to encode/decode data would also be a useful measure. Each
of these would probably want multiple profiles: different message content,
different acknowledgement/flow control patterns, and different kinds of
data.
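
To make the first of those concrete, here is a minimal sketch of what an
isolated message encode/decode metric might look like against the proton-c
API. The iteration count, payload size, and use of clock() are arbitrary
choices for illustration, not something taken from the existing tests:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <proton/message.h>
    #include <proton/codec.h>

    /* Sketch only: repeatedly encode/decode the same message to isolate
     * the pn_message_t implementation from driver/messenger overhead. */
    int main(void)
    {
        const int iterations = 1000000;   /* made-up knob */
        char body[64];                    /* made-up payload size */
        char buffer[1024];                /* encode target */
        memset(body, 'x', sizeof(body));

        pn_message_t *msg = pn_message();
        pn_data_put_string(pn_message_body(msg),
                           pn_bytes(sizeof(body), body));

        clock_t start = clock();
        for (int i = 0; i < iterations; i++) {
            size_t size = sizeof(buffer);
            if (pn_message_encode(msg, buffer, &size)) return 1;
            if (pn_message_decode(msg, buffer, size)) return 1;
        }
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%.0f encode/decode pairs per second\n", iterations / secs);

        pn_message_free(msg);
        return 0;
    }

The codec-only variant would presumably be the same loop built around
pn_data_encode()/pn_data_decode() on a pn_data_t instead of a
pn_message_t, and the engine variant would pump the two in-memory
transports against each other in a similar timed loop.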

I think breaking out the different dimensions of the implementation as
above would provide a very useful tool to run before/after any
performance-sensitive changes to detect and isolate regressions, or to
test potential improvements.

--Rafael
