I'd like to see some sort of dashboard that would make clear at a
glance when something goes from failing rarely to failing frequently.
Anyone know how to get this reporting out of Jenkins?
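
Something like the sketch below is what I have in mind - rough and untested,
against the standard Jenkins JSON API (the server URL and job name are
placeholders): count how often each test failed over the last N builds and
sort by that, so a test drifting from "rare" to "frequent" stands out.

import json
import urllib.error
import urllib.request
from collections import Counter

JENKINS = "http://localhost:8080"   # placeholder Jenkins URL
JOB = "Lucene-Solr-Tests-trunk"     # placeholder job name
WINDOW = 50                         # how many recent builds to scan

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Recent build numbers for the job.
builds = fetch("%s/job/%s/api/json?tree=builds[number]" % (JENKINS, JOB))["builds"][:WINDOW]

failures = Counter()
for build in builds:
    try:
        report = fetch("%s/job/%s/%d/testReport/api/json" % (JENKINS, JOB, build["number"]))
    except urllib.error.HTTPError:
        continue  # this build has no test report
    for suite in report.get("suites", []):
        for case in suite.get("cases", []):
            if case["status"] in ("FAILED", "REGRESSION"):
                failures[case["className"] + "." + case["name"]] += 1

# Tests ordered by how often they failed in the window - the "getting flakier" view.
for test, count in failures.most_common(20):
    print("%3d/%d  %s" % (count, WINDOW, test))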

-Yonik
http://heliosearch.org - native off-heap filters and fieldcache for solr


On Tue, Feb 18, 2014 at 1:16 PM, Steve Molloy <[email protected]> wrote:
> I agree ignoring tests isn't a good idea, but someone from outside should be 
> able to determine if a failing test is critical or not. Maybe the solution 
> would be to keep running them, but have the failure message specify the Jira 
> entry associated with it. Then whoever runs the tests, to build an RC or for
> another reason, can see it is a known issue that will eventually get addressed
> and, from the Jira entry, should be able to assess whether or not it's
> acceptable for the context.
>
> My 2 cents. :)
>
> Steve
> ________________________________________
> From: Mark Miller [[email protected]]
> Sent: February 18, 2014 11:48 AM
> To: Lucene/Solr dev
> Subject: Re: Building an RC for Lucene / Solr 4.7
>
> Depends on your situation. For me, I can run the tests and have them pass 6
> times in a row. If it were otherwise, I would fix the issue, as I have for
> years now.
>
> When I see a test failing commonly for another dev, I'll also often jump in 
> and help fix the issue. As I have for years now.
>
> I'll also work on tests in general pretty much off and on all the time. Not a
> lot of other people are doing it, so sometimes I'm further ahead or further
> behind than at other times.
>
> Should we ignore them? I don't think so. We should keep fixing them. Removing
> them would remove critical coverage. The fails, in general, are known -
> tracked in JIRA or logged by Jenkins. I know if it's something that's likely
> test-related, and I know if it's something new or something existing. If one
> of them annoys people, please jump in! I don't know half the code or features
> I end up jumping in to help with.
>
> It's also not necessarily the same tests or the same issues. I've fixed
> hundreds of issues. The code keeps changing, and the environments, the number
> of test threads used, and the number of contributors keep changing.
>
> The rationale is that I have a *very* close, *very* informed view of the Solr
> tests. I read every fail, I run my own Jenkins server, and I'm not some third
> party who wonders what these fails mean. Someone else might not be so
> informed. But unless they are helping to work on the tests, filing JIRA 
> issues, adding to existing JIRA issues, or emailing the list for help, those 
> people are pretty much on their own.
>
> They will have to wait until I can make every test run in any env, with any
> amount of computing power, without fail, even as tests are added and the code
> is changed for a very large distributed system.
>
>
> - Mark
>
> http://about.me/markrmiller
>
> On Feb 18, 2014, at 8:41 AM, Simon Willnauer <[email protected]> 
> wrote:
>
>> Hmm, I am not sure I understand that rationale. Anyway, wouldn't it be
>> better to @Ignore the tests and re-enable them once they are fixed? I just
>> wonder how I can tell if I broke something in solr while working on
>> lucene if I am supposed to ignore failing tests?
>>
>> simon
>>
>> On Tue, Feb 18, 2014 at 2:36 PM, Mark Miller <[email protected]> wrote:
>>> Because we run those tests locally, see the results on Jenkins, and have
>>> an understanding of what the issues are. Perhaps you don't, but the Solr
>>> people do. That's how we can release.
>>>
>>> That script shouldn't run the solr tests.
>>>
>>> - Mark
>>>
>>>> On Feb 18, 2014, at 8:28 AM, Simon Willnauer <[email protected]> 
>>>> wrote:
>>>>
>>>> it is not the smoke test - I ran this:
>>>>
>>>> python3.2 -u buildAndPushRelease.py -prepare -push simonw -sign
>>>> ECA39416 /home/simon/work/projects/lucene/lucene_solr_4_7/ 4.7.0 0
>>>>
>>>> compared to this:
>>>>
>>>> python3.2 -u buildAndPushRelease.py -prepare -push simonw -sign
>>>> ECA39416 -smoke /tmp/lucene_solr_4_7_smoke
>>>> /home/simon/work/projects/lucene/lucene_solr_4_7/ 4.7.0 0
>>>>
>>>> The first cmd runs the tests before it builds the release. I disabled
>>>> the tests by passing -smoke, which skips the test run. This is still
>>>> freaking odd - how can I publish a release if the tests don't pass a
>>>> single time out of 6 runs?
>>>>
>>>> simon
>>>>
>>>>
>>>>> On Tue, Feb 18, 2014 at 1:59 PM, Mark Miller <[email protected]> 
>>>>> wrote:
>>>>> Weird. The smoke script has always had the solr tests disabled. Who
>>>>> enabled them? Those fails in general have JIRA issues, as far as I remember.
>>>>>
>>>>> - Mark
>>>>>
>>>>>> On Feb 18, 2014, at 7:24 AM, Simon Willnauer <[email protected]> 
>>>>>> wrote:
>>>>>>
>>>>>> hey folks,
>>>>>>
>>>>>> I'm trying to build an RC to check that everything goes alright, and I've
>>>>>> now spent 4 hours on it without luck. The release script runs the solr
>>>>>> tests but they never pass. I tried it 6 times now, and each time a
>>>>>> different test breaks. I am going to disable the solr test run in the
>>>>>> release script for now to actually build an RC, but this is very
>>>>>> concerning IMO. I tried to reproduce the failures each time, but they
>>>>>> don't reproduce. It's mainly:
>>>>>>
>>>>>> org.apache.solr.cloud.OverseerTest.testShardAssignmentBigger
>>>>>> org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch
>>>>>> org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
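>>>>>>
>>>>>> (By "reproduce" I mean re-running the test with the seed the test
>>>>>> framework prints on failure - roughly along these lines, with the seed
>>>>>> and locale/timezone values elided:
>>>>>>
>>>>>> ant test -Dtestcase=OverseerTest -Dtests.method=testShardAssignmentBigger \
>>>>>>     -Dtests.seed=<seed from the failure> -Dtests.locale=<locale> -Dtests.timezone=<tz>
>>>>>>
>>>>>> and those runs pass.)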
>>>>>>
>>>>>> any ideas?
>>>>>>
>>>>>> I mean, looking at the CI builds, those failures are no news, are they?
>>>>>>
>>>>>> simon
>>>>>>
