[
https://issues.apache.org/jira/browse/PROTON-220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Justin Ross updated PROTON-220:
-------------------------------
Fix Version/s: 0.17.0
> Create a set of "glass box" tests to quantify the performance of the proton
> codebase.
> -------------------------------------------------------------------------------------
>
> Key: PROTON-220
> URL: https://issues.apache.org/jira/browse/PROTON-220
> Project: Qpid Proton
> Issue Type: Test
> Components: proton-c, proton-j
> Reporter: Ken Giusti
> Assignee: michael goulish
> Labels: performance, test
> Fix For: 0.17.0
>
>
> The goal of these tests would be to detect any performance degradation
> inadvertently introduced during development. These tests would not be
> intended to provide any metrics regarding the "real world" behavior of
> proton-based applications. Rather, these tests are targeted for use by the
> proton developers to help gauge the effect their code changes may have on
> performance.
> These tests should require no special configuration or setup in order to run.
> It should be easy to run these tests as part of the development process. The
> intent would be to have developers run the tests prior to making any code
> changes, and record the metrics for comparison against the results obtained
> after making changes to the code base.
> As described by Rafi:
> "I think it would be good to include some performance metrics that isolate
> the various components of proton. For example having a metric that simply
> repeatedly encodes/decodes a message would be quite useful in isolating the
> message implementation. Setting up two engines in memory and using them to
> blast zero sized messages back and forth as fast as possible would tell us
> how much protocol overhead the engine is adding. Using the codec directly
> to encode/decode data would also be a useful measure. Each of these would
> probably want to have multiple profiles, different message content,
> different acknowledgement/flow control patterns, and different kinds of
> data.
> I think breaking out the different dimensions of the implementation as
> above would provide a very useful tool to run before/after any performance
> sensitive changes to detect and isolate regressions, or to test potential
> improvements."
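As a rough illustration of the kind of glass-box harness described above, the sketch below times a repeated encode/decode round trip and reports a throughput figure. The `benchmark` helper and the `struct`-based round trip are hypothetical stand-ins; a real test would drive the proton codec or engine (e.g. via its Python binding) instead.

```python
import struct
import time

def benchmark(fn, iterations=10_000):
    """Call fn `iterations` times and return the observed calls/second."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

def encode_decode_roundtrip():
    # Stand-in for a proton message encode/decode cycle: pack a small
    # payload to bytes, then unpack and verify it.
    payload = struct.pack(">I5s", 42, b"hello")
    value, text = struct.unpack(">I5s", payload)
    assert value == 42 and text == b"hello"

rate = benchmark(encode_decode_roundtrip)
print(f"{rate:,.0f} round trips/sec")
```

Recording the printed rate before and after a change gives the simple before/after comparison the description calls for; separate `fn` callables could cover the other profiles (message content, acknowledgement/flow-control patterns, data types) mentioned above.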
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)