[
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16070191#comment-16070191
]
Mark Miller commented on SOLR-10032:
------------------------------------
I actually bailed on spending more time with report generation for a bit. The
process was still a bit too cumbersome - it really needs to be one command to
kick off, with everything else automatic, so that it can be run regularly with
ease. I finally came up with a plan to automate everything from test running
through table generation and publishing, so I have spent the time finishing up
that full automation instead. I still have a bit to do, but I'll be done soon
and will then run reports on a regular schedule.
I've also spent a little time planning out using Docker for parallel test runs
instead of the built-in project support. This will make it possible to support
other projects, and there are some nice benefits to truly isolating parallel
test runs.
> Create report to assess Solr test quality at a commit point.
> ------------------------------------------------------------
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
> Issue Type: Task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Tests
> Reporter: Mark Miller
> Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results
> 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30
> iterations, 12 at a time.pdf, Lucene-Solr Master Test Beast Results
> 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30
> iterations, 10 at a time.pdf, Lucene-Solr Master Test Beast Results
> 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30
> iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beast Results
> 02-14-2017-a1f114f70f3800292c25be08213edf39b3e37f6a Level Medium+ Running 30
> iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beast Results
> 02/17/2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running
> 100 iterations, 12 at a time, 8 cores.pdf
>
>
> We have many Jenkins instances blasting tests - some official, some
> Policeman's, and I and others have or have had our own - and the email trail
> proves the power of the Jenkins clusters to find test failures.
> However, I still have a very hard time answering some basic questions:
> What tests are flaky right now? Which test failures actually affect devs
> most? Did I break it? Was that test already flaky? Is that test still flaky?
> What are our worst tests right now? Is that test getting better or worse?
> We really need a way to see exactly which tests are the problem - not because
> of OS or environmental issues, but due to more basic test quality issues:
> which tests are flaky, and how flaky are they, at any point in time?
> Reports:
> https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing
> 01/24/2017 -
> https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
> 02/01/2017 -
> https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
> 02/08/2017 -
> https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing
> 02/14/2017 -
> https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing
> 02/17/2017 -
> https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)