[ 
https://issues.apache.org/jira/browse/OAK-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435127#comment-16435127
 ] 

Francesco Mari commented on OAK-6209:
-------------------------------------

[~maksim_kviatkouski], I just finished having a look at your patch. I have a 
couple of comments that I hope we can discuss together. Forgive my verbosity!

The work around {{BenchmarkOutputStrategy}} makes a lot of sense. I would like 
to propose the following changes. I noticed that {{AbstractTest}} uses both an 
{{outputStrategy}} and an optional {{csvStrategy}}, which is set only when a 
CSV file must be generated. What do you think about implementing a 
{{CompositeBenchmarkOutputStrategy}}, in the same spirit as 
{{o.a.j.o.spi.commit.CompositeHook}}? This way you can avoid chaining the calls 
to {{outputStrategy}} and {{csvStrategy}} and having to remember to always wrap 
the latter in an if statement. In fact, you could get rid of the if statement 
altogether by creating a dummy implementation of {{BenchmarkOutputStrategy}} 
whose methods do nothing. This way, if no CSV file is needed, you can simply 
pass the dummy implementation instead of handling special cases because of 
null. You probably already guessed that I took inspiration from 
{{o.a.j.o.spi.commit.EmptyHook}}.
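To make the idea concrete, here is a minimal sketch of the composite and the dummy implementation. The {{report}} method and class names are assumptions for illustration, not the actual shape of {{BenchmarkOutputStrategy}} in your patch:

```java
// Minimal sketch; the method name on BenchmarkOutputStrategy is an
// assumption, not the actual API from the patch.
interface BenchmarkOutputStrategy {
    void report(String line);
}

// No-op implementation, in the spirit of o.a.j.o.spi.commit.EmptyHook:
// pass this instead of null whenever no CSV output is wanted.
final class EmptyBenchmarkOutputStrategy implements BenchmarkOutputStrategy {

    static final BenchmarkOutputStrategy INSTANCE = new EmptyBenchmarkOutputStrategy();

    private EmptyBenchmarkOutputStrategy() {}

    @Override
    public void report(String line) {
        // intentionally does nothing
    }
}

// Composite implementation, in the spirit of o.a.j.o.spi.commit.CompositeHook:
// fans every call out to the wrapped strategies, so callers never have to
// chain outputStrategy and csvStrategy or guard the latter with an if.
final class CompositeBenchmarkOutputStrategy implements BenchmarkOutputStrategy {

    private final BenchmarkOutputStrategy[] strategies;

    CompositeBenchmarkOutputStrategy(BenchmarkOutputStrategy... strategies) {
        this.strategies = strategies;
    }

    @Override
    public void report(String line) {
        for (BenchmarkOutputStrategy strategy : strategies) {
            strategy.report(line);
        }
    }
}
```

With this in place, {{AbstractTest}} always talks to exactly one strategy, and the runner decides up front whether that strategy is a composite, a single concrete implementation, or the empty one.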

In order for {{BenchmarkOutputStrategy}} to be reusable across test 
invocations, what do you think about having its methods accept an 
{{AbstractTest}} parameter? This way you could instantiate the strategy (or 
strategies) once during the setup of the benchmark runner, and use it for every 
following benchmark run. The code in {{AbstractTest}} can simply call the 
strategy as {{strategy.doSomething(this)}}.
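The wiring could look like the following sketch. All names and signatures here are hypothetical, chosen only to illustrate the pattern:

```java
// Hypothetical sketch: names and signatures are assumptions, not Oak's API.
// The strategy method takes the test as a parameter, so the runner can create
// one strategy instance up front and reuse it for every benchmark run.
interface BenchmarkOutputStrategy {
    void doSomething(AbstractTest test);
}

abstract class AbstractTest {

    private final BenchmarkOutputStrategy strategy;

    AbstractTest(BenchmarkOutputStrategy strategy) {
        this.strategy = strategy; // injected once by the benchmark runner
    }

    abstract String getName();

    void reportResults() {
        strategy.doSomething(this); // the test hands itself to the strategy
    }
}
```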

While I understand your needs, I was a bit daunted by the amount of changes 
necessary to get rid of the usages of {{System.out}}. What about redirecting 
the standard output stream by using {{System#setOut}}? During the benchmark 
runner setup, if the {{report}} option is not specified, you can simply create 
a dummy {{OutputStream}} that discards all the output, create a new 
{{PrintStream}} based on that {{OutputStream}}, and pass that {{PrintStream}} 
to {{System#setOut}}. This way you don't need {{FilterPrinter}} and avoid 
tedious changes in every benchmark.
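The redirection itself is just plain JDK calls; only the place where it runs (the runner setup, guarded by the {{report}} option) is an assumption:

```java
import java.io.OutputStream;
import java.io.PrintStream;

// During benchmark runner setup: when no report option is given, swap
// System.out for a PrintStream that discards everything, so individual
// benchmarks can keep printing without any code changes.
public class QuietRunnerSetup {

    public static PrintStream discardStandardOutput() {
        PrintStream original = System.out;
        OutputStream devNull = new OutputStream() {
            @Override
            public void write(int b) {
                // discard all output
            }
        };
        System.setOut(new PrintStream(devNull));
        return original; // keep a handle for output that must survive
    }
}
```

Keeping the original {{PrintStream}} around lets the runner still emit the machine-friendly report on the real standard output while everything else is silenced.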

> The benchmark runner should produce machine-friendly output
> -----------------------------------------------------------
>
>                 Key: OAK-6209
>                 URL: https://issues.apache.org/jira/browse/OAK-6209
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: benchmarks
>            Reporter: Francesco Mari
>            Assignee: Francesco Mari
>            Priority: Major
>         Attachments: oak-6209.patch, sample-machine-readable-output.txt
>
>
> The benchmark runner currently produces output in the following format.
> {noformat}
> Apache Jackrabbit Oak 1.8-SNAPSHOT
> # LoginTest                        C     min     10%     50%     90%     max       N
> Oak-Segment-Tar                    1     472     494     522     552     631     115
> # LoginLogoutTest                  C     min     10%     50%     90%     max       N
> Oak-Segment-Tar                    1     472     479     513     543     568     118
> {noformat}
> While this format is easy for humans to read, it's a pain to process with 
> standard command line utilities. The benchmark runner should offer the 
> option to produce machine-friendly output, like the following.
> {noformat}
> LoginTest,Oak-Segment-Tar,1,472,494,522,552,631,115
> LoginLogoutTest,Oak-Segment-Tar,1,472,479,513,543,568,118
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
