Hey Santhosh,

For keeping track of which component fails, it's generally better to let Jenkins 
figure it out. Since we use nosetests (I think), we can store all the test 
reports and Jenkins can determine which component failed in which test run, 
without having to disable tests. Especially if enabling/disabling tests is a 
manual action, we can be sure that sooner or later we will start forgetting 
tests, or keep tests disabled because they fail often.
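
For illustration, a rough sketch of what that aggregation could look like, 
assuming we generate xunit-style XML reports (nosetests --with-xunit) and 
collect them per run; the paths and names here are made up:

    import glob
    import xml.etree.ElementTree as ET

    def summarise_reports(pattern="reports/nosetests-*.xml"):
        """Print pass/fail counts per xunit report file."""
        for path in sorted(glob.glob(pattern)):
            suite = ET.parse(path).getroot()  # xunit root element is <testsuite>
            tests = int(suite.get("tests", "0"))
            failed = int(suite.get("failures", "0")) + int(suite.get("errors", "0"))
            print("%s: %d tests, %d failed" % (path, tests, failed))

    summarise_reports()

Jenkins' JUnit report publisher does essentially this for us, including the 
pass/fail history from build to build.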

Do we have a job that aggregates all the test reports at the moment? 

Cheers,

Hugo

On 21 Jul 2014, at 12:21, Santhosh Edukulla <santhosh.eduku...@citrix.com> 
wrote:

> All,
> 
> Alex wanted to disable test cases between CI (continuous integration) runs, 
> for the reason given below. I only provided a way to achieve this using tags, 
> so that it serves a dual purpose: it does not affect the community, and it 
> can be used in CI as well. It has no effect if somebody wants to run all test 
> cases irrespective of tags.
> 
> Reason: In CI, automation kick-starts automatically every 3 hours 
> (configurable), picks up the delta changes, and runs a few checks, including 
> sanity. The idea was to keep a baseline of test cases that always passes. 
> Between two CI runs, say T1 and T2, if "new" failures are introduced, they 
> are automatically detected against the new git changes, and bugs are logged 
> automatically against those check-ins. 
> 
> Until those bugs get fixed, the affected test cases are disabled, keeping the 
> baseline as always-pass. The window to fix those failures (whether product or 
> test case), through triage, was kept nearly constant, and fixes needed to be 
> done soon; once fixed, test cases are enabled again and are picked up in the 
> next available CI run. The point was to isolate the failures introduced 
> between T1 and T2: as long as the baseline is always clean and passing, CI 
> runs don't accumulate failures, which would otherwise make it confusing which 
> commits introduced which failures. 
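> 
> Conceptually, the detection step is just a set difference between the 
> failures of the two runs; a minimal sketch in Python (the names and data 
> here are illustrative, not our actual tooling):
> 
>     def new_failures(failed_in_t1, failed_in_t2):
>         """Return the test failures introduced between CI runs T1 and T2."""
>         return failed_in_t2 - failed_in_t1
> 
>     t1 = {"test_deploy_vm"}                        # baseline failures at T1
>     t2 = {"test_deploy_vm", "test_attach_volume"}  # failures at T2
>     print(new_failures(t1, t2))                    # {'test_attach_volume'}
> 
> Anything in that difference is then matched against the commits that landed 
> between the two runs, and a bug is logged against those check-ins.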
> 
> But it's not a hard-and-fast rule; we can discuss a better way as well. This 
> was followed for the 4.4 release in phase 1 of CI. For phase 2 (WIP), if we 
> agree on some other, better solution, then it should definitely be adopted. 
>    
> 
> Santhosh
> ________________________________________
> From: Gaurav Aradhye [gaurav.arad...@clogeny.com]
> Sent: Monday, July 21, 2014 5:40 AM
> To: Stephen Turner; Hugo Trippaers; dev@cloudstack.apache.org; Santhosh 
> Edukulla
> Cc: Girish Shilamkar
> Subject: Re: Disabling failed test cases (was RE: Review Request 23605: 
> CLOUDSTACK-7107: Disabling failed test cases)
> 
> Hugo, Stephen,
> 
> We have been following this practice as part of the Continuous Integration 
> changes defined in doc [1]. I personally think that tagging a test case with 
> a BugId is a good idea for mapping test cases to bugs, but the test case 
> should not be skipped when tagged. We can discuss this and change the process 
> if the majority agrees.
> 
> Adding Santhosh.
> 
> [1]: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+-+Continuous+Integration
> 
> Regards,
> Gaurav
> 
> 
> On Mon, Jul 21, 2014 at 2:37 PM, Stephen Turner 
> <stephen.tur...@citrix.com> wrote:
> In the case that it's a product bug, wouldn't it be better to keep running 
> the test even if you know it's going to fail? That way, you get a consistent 
> view of the overall pass rate from build to build. If you disable all the 
> tests that are failing, you're going to get a 100% pass rate, but you can't 
> see whether your quality is going up or down.
> 
> --
> Stephen Turner
> 
> 
> -----Original Message-----
> From: Gaurav Aradhye [mailto:nore...@reviews.apache.org] On Behalf Of Gaurav 
> Aradhye
> Sent: 21 July 2014 09:58
> To: Girish Shilamkar
> Cc: Gaurav Aradhye; Hugo Trippaers; cloudstack
> Subject: Re: Review Request 23605: CLOUDSTACK-7107: Disabling failed test 
> cases
> 
> 
> 
>> On July 21, 2014, 1:03 p.m., Hugo Trippaers wrote:
>>> Why would we want to disable test cases that fail? Doesn't this mean we 
>>> need to fix something else so they don't fail anymore?
> 
> Hi Hugo,
> 
> Whenever we find a test case failing, we create a bug for it, whether it's a 
> test script issue or a product bug, so that the test case gets associated 
> with a particular bug and it's easy to track in future why it is failing.
> 
> Adding this BugId decorator to a test case skips the test in the run.
> 
> Whenever the bug gets fixed, the person who fixed the bug removes the BugId 
> decorator from the test case so that it gets picked up again in the next run.
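> 
> For anyone who hasn't seen it, a minimal sketch of how such a decorator can 
> work (this is an illustration, not the actual implementation in our test 
> framework):
> 
>     import functools
>     import unittest
> 
>     def BugId(bug_id):
>         """Skip the decorated test until the referenced bug is fixed."""
>         def decorator(test_func):
>             @functools.wraps(test_func)
>             def wrapper(*args, **kwargs):
>                 raise unittest.SkipTest("Skipped, tracked by %s" % bug_id)
>             return wrapper
>         return decorator
> 
>     class TestVmLifeCycle(unittest.TestCase):
>         @BugId("CLOUDSTACK-7107")
>         def test_deploy_vm(self):
>             pass  # skipped until CLOUDSTACK-7107 is fixed and the tag removed
> 
> The skip still shows up as such in the xunit report, so the run stays green 
> without the test silently disappearing.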
> 
> 
> - Gaurav
> 
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23605/#review48204
> -----------------------------------------------------------
> 
> 
> On July 17, 2014, 1:17 p.m., Gaurav Aradhye wrote:
>> 
>> -----------------------------------------------------------
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/23605/
>> -----------------------------------------------------------
>> 
>> (Updated July 17, 2014, 1:17 p.m.)
>> 
>> 
>> Review request for cloudstack and Girish Shilamkar.
>> 
>> 
>> Bugs: CLOUDSTACK-7074 and CLOUDSTACK-7107
>>    https://issues.apache.org/jira/browse/CLOUDSTACK-7074
>>    https://issues.apache.org/jira/browse/CLOUDSTACK-7107
>> 
>> 
>> Repository: cloudstack-git
>> 
>> 
>> Description
>> -------
>> 
>> Disabling failed test cases on master.
>> 
>> 
>> Diffs
>> -----
>> 
>>  test/integration/smoke/test_primary_storage.py 66aec59
>>  test/integration/smoke/test_vm_life_cycle.py 240ab68
>> 
>> Diff: https://reviews.apache.org/r/23605/diff/
>> 
>> 
>> Testing
>> -------
>> 
>> 
>> Thanks,
>> 
>> Gaurav Aradhye
>> 
>> 
> 
> 
