[
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15860371#comment-15860371
]
Mark Miller commented on SOLR-10032:
------------------------------------
I suppose it depends on the cost of integrating and maintaining it versus how
much the information it provides is actually consumed or useful. I think of
things like Clover, which a lot of effort and time was spent on in the past,
but who is looking at Clover coverage now? I'd still prefer to have test
coverage reports, but again, usually I'll just generate those on demand. Often
I won't even look at the logs for beast fails initially. I will move to the
latest code, beast out the latest fails against it, etc. Usually beast fails
are very reproducible (beast longer!), and running 8-12 in parallel lets you
blast out a few hundred runs on a test in no time (one test, no time; 900
tests, not so much). For example, a couple of tests hung in my first report and
a couple had RAM issues. I didn't dig into the data much at first, though; I
just reproduced with YourKit attached to see what was up and addressed it.
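
As a concrete illustration of the workflow above, here is a minimal sketch of
"beasting" a single test: run it many times, 8-12 at a time, and tally the
failures. The actual Lucene/Solr beasting goes through the ant build, so the
command, test name, and run counts below are placeholder assumptions to be
swapped for the real invocation (and in practice each parallel run would need
its own working directory and ports).

#!/usr/bin/env python3
"""Sketch: run one test many times in parallel and tally failures."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder command; substitute the project's real single-test invocation.
TEST_CMD = ["ant", "test", "-Dtestcase=TestExample"]  # hypothetical target/flags
RUNS = 100       # "a few hundred runs ... in no time"
PARALLEL = 10    # the comment mentions running 8-12 at a time

def run_once(_: int) -> bool:
    """Return True if one run of the test passed (exit code 0)."""
    result = subprocess.run(TEST_CMD, capture_output=True)
    return result.returncode == 0

def main() -> None:
    with ThreadPoolExecutor(max_workers=PARALLEL) as pool:
        outcomes = list(pool.map(run_once, range(RUNS)))
    failures = outcomes.count(False)
    print(f"{RUNS} runs, {failures} failures "
          f"({100.0 * failures / RUNS:.1f}% failure rate)")

if __name__ == "__main__":
    main()

The point of the sketch is only the shape of the loop: one test beasted in
isolation finishes quickly, which is why doing this for all ~900 tests is the
expensive part.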
> Create report to assess Solr test quality at a commit point.
> ------------------------------------------------------------
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
> Issue Type: Task
> Security Level: Public(Default Security Level. Issues are Public)
> Components: Tests
> Reporter: Mark Miller
> Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results
> 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30
> iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults
> 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30
> iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults
> 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30
> iterations, 10 at a time, 8 cores.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some Policeman,
> and I and others have or have had our own, and the email trail proves the
> power of the Jenkins cluster to find test fails.
> However, I still have a very hard time answering some basic questions:
> which tests are flakey right now? Which test fails actually affect devs most?
> Did I break it? Was that test already flakey? Is that test still flakey? What
> are our worst tests right now? Is that test getting better or worse?
> We really need a way to see exactly which tests are the problem, not because
> of OS or environmental issues, but because of more basic test quality issues:
> which tests are flakey and how flakey they are at any point in time (a rough
> sketch of such a per-test tally follows the report links below).
> Reports:
> 01/24/2017 -
> https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
> 02/01/2017 -
> https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
> 02/08/2017 -
> https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing
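
Following up on the "which tests are flakey and how flakey" question above,
here is a rough sketch of how the per-test numbers behind a report like the
spreadsheets linked above could be tallied. It assumes JUnit-style XML result
files collected from the beast runs (the usual <testsuite>/<testcase> layout
with <failure>/<error> children); the directory layout and file pattern are
assumptions, not the actual report tooling.

#!/usr/bin/env python3
"""Sketch: aggregate beast-run JUnit XML results into a flakiness report."""
import glob
import xml.etree.ElementTree as ET
from collections import defaultdict

def collect(results_glob: str) -> dict:
    """Map test class -> [runs, failures] across all result files."""
    stats = defaultdict(lambda: [0, 0])
    for path in glob.glob(results_glob, recursive=True):
        for case in ET.parse(path).getroot().iter("testcase"):
            key = case.get("classname", "unknown")
            stats[key][0] += 1
            if case.find("failure") is not None or case.find("error") is not None:
                stats[key][1] += 1
    return stats

def report(stats: dict) -> None:
    """Print failing tests, worst failure rate first."""
    rows = sorted(stats.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    for name, (runs, fails) in rows:
        if fails:
            print(f"{100.0 * fails / runs:5.1f}%  {fails}/{runs}  {name}")

if __name__ == "__main__":
    # Example path; point this at wherever the beast runs dropped their XML.
    report(collect("beast-results/**/TEST-*.xml"))

Re-running the same tally at each commit point is what would make the
"getting better or worse?" question answerable over time.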