[ 
https://issues.apache.org/jira/browse/AVRO-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689164#comment-16689164
 ] 

Raymie Stata commented on AVRO-2269:
------------------------------------

I took a look at JMH.  I think it'd be great to convert `Perf.java` over to 
JMH.  I didn't pursue it because I couldn't find good enough documentation on 
JMH to feel comfortable using it myself.

The forthcoming patch I have for AVRO-2269 makes changes that are orthogonal to 
what JMH does.  JMH does things like warming up the JIT and various caches, and 
it runs tests a dynamic number of times in order to "seek" stable statistics on 
performance metrics.  The current `Perf.main` does some of this already (I 
didn't touch any of that code), but JMH seems to do a much more professional 
job of it.  Thus, again, it'd be great to convert `Perf.java` to JMH.
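
For concreteness, here is a rough sketch of what one of `Perf.java`'s primitive 
write tests might look like if ported to JMH. This isn't part of any patch; the 
iteration counts, payload size, and class name are arbitrary placeholders:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5)        // JMH handles JIT/cache warm-up for us
@Measurement(iterations = 10)  // and runs enough iterations for stable stats
@Fork(1)
@State(Scope.Thread)
public class DoubleWriteBenchmark {
  private static final int COUNT = 1000;

  private double[] values;
  private ByteArrayOutputStream out;

  @Setup(Level.Trial)
  public void setup() {
    values = new double[COUNT];
    for (int i = 0; i < COUNT; i++) {
      values[i] = Math.random();
    }
    out = new ByteArrayOutputStream(COUNT * 10);
  }

  @Benchmark
  public ByteArrayOutputStream writeDoubles() throws IOException {
    out.reset();
    BinaryEncoder encoder = EncoderFactory.get().directBinaryEncoder(out, null);
    for (double v : values) {
      encoder.writeDouble(v);
    }
    encoder.flush();
    return out;  // returning the stream keeps JMH from dead-code-eliminating the loop
  }
}
{code}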

That said, while JMH might do a pretty good job of finding the "true" running 
time of a high-variance piece of code, it doesn't turn a high-variance piece 
of code into a low-variance one.  The forthcoming patch for AVRO-2269 does the 
latter: it tries to reduce the inherent variance of the tests (for example, by 
reducing the allocations done for `FooBarSpecificRecord` tests).  JMH together 
with this forthcoming patch would be a great combination.
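
To illustrate the allocation-reduction idea, here is a minimal sketch of a read 
loop that hands the same "reuse" record back to the reader instead of allocating 
a fresh record per read. It assumes the generated `FooBarSpecificRecord` test 
class is on the classpath (the import path below is a guess), and it is only 
meant to show the pattern, not code from the patch:

{code:java}
import java.io.IOException;

// Generated Avro test record; the package below is assumed for illustration.
import org.apache.avro.FooBarSpecificRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;

public class ReuseReadLoop {
  public static void readAll(byte[] encoded, int count) throws IOException {
    SpecificDatumReader<FooBarSpecificRecord> reader =
        new SpecificDatumReader<>(FooBarSpecificRecord.class);
    BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(encoded, null);

    // Passing the same instance back in as "reuse" lets the reader fill it in
    // place instead of allocating a fresh record (and fresh fields) per read,
    // which keeps GC activity, and therefore timing noise, down.
    FooBarSpecificRecord reuse = new FooBarSpecificRecord();
    for (int i = 0; i < count; i++) {
      reuse = reader.read(reuse, decoder);
    }
  }
}
{code}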

I just submitted a pull request for AVRO-2268 containing a little bug fix that 
I want to depend upon, but which is pretty independent of the changes I have 
for AVRO-2269.  If someone could pull AVRO-2268, I'd like to rebase onto that 
change before submitting the AVRO-2269 patch.

> Improve variances seen across Perf.java runs
> --------------------------------------------
>
>                 Key: AVRO-2269
>                 URL: https://issues.apache.org/jira/browse/AVRO-2269
>             Project: Apache Avro
>          Issue Type: Test
>          Components: java
>            Reporter: Raymie Stata
>            Assignee: Raymie Stata
>            Priority: Major
>
> In attempting to use Perf.java to show that proposed performance changes 
> actually improved performance, different runs of Perf.java using the exact 
> same code base resulted in variances of 5% or greater – and often 10% or 
> greater – for about half the test cases. With variance this high within a 
> code base,
> it's impossible to tell if a proposed "improved" code base indeed improves 
> performance. I will post to the wiki and elsewhere some documents and scripts 
> I developed to reduce this variance. This JIRA is for changes to Perf.java 
> that reduce the variance. Specifically:
>  * Accessed the {{reader}} and {{writer}} instance variables directly in the 
> inner loop for {{SpecificTest}}, and switched to a "reuse" object for reading 
> records rather than constructing a fresh object for each read. Both changes 
> helped to significantly reduce variance for 
> {{FooBarSpecificRecordTestWrite}}, a major target of recent 
> performance-improvement efforts.
>  * Switched to {{DirectBinaryEncoder}} instead of {{BufferedBinaryEncoder}} 
> for write tests. Although this slowed the write tests a bit, it reduced 
> variance a lot, especially for performance tests of primitives like booleans, 
> making it a better choice for measuring the performance impact of code 
> changes (see the encoder sketch below this description).
>  * Started each test's timer after the test's encoder/decoder is 
> constructed, rather than before. This helps a little.
>  * Added the ability to output the _minimum_ runtime of a test case across 
> multiple cycles (vs the total runtime across all cycles). This was inspired 
> by JVMSpec, which used to use a minimum.  I was able to reduce the variance 
> of total runtime enough to obviate the need for this metric, but since it's 
> helpful diagnostically, I left it in.
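
For reference, the encoder switch described in the second bullet above boils 
down to which `EncoderFactory` method the write tests call. The snippet below 
is just an illustration of the two choices, not code from the patch:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class EncoderChoice {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();

    // Buffered encoder: fastest in absolute terms, but its internal buffer
    // management adds run-to-run jitter for tiny payloads such as booleans.
    BinaryEncoder buffered = EncoderFactory.get().binaryEncoder(out, null);
    buffered.writeBoolean(true);
    buffered.flush();

    // Direct encoder: writes straight through to the underlying stream; a bit
    // slower, but its per-write cost is more uniform, which is what matters
    // when comparing two code bases against each other.
    BinaryEncoder direct = EncoderFactory.get().directBinaryEncoder(out, null);
    direct.writeBoolean(true);
    direct.flush();
  }
}
{code}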



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
