The idea would be that the whole integration fails if even a single test fails.
But for that the monkey has to be at 100% ;) and we're at 95% or so, and we'd
have to integrate a single issue at a time and then rerun all validations, but
that's future talk.

For now, I think the only reasonable thing is to look at the build each Friday
and make it green, since classifying tests is not that easy and the Jenkins
setup is already close to being unmaintainable.
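
That said, one mechanism we already have for Sven's classification idea is
SUnit's expectedFailures hook (assuming I remember its shape correctly): a test
class lists the selectors it knows are broken, and the runner reports them as
expected failures instead of regressions. A rough sketch, reusing one of the
tests from the report below purely as an illustration, not as a proposal to
actually mark it:

	ReleaseTest >> expectedFailures
		"Selectors listed here are reported as expected failures rather
		 than regressions, so the build can stay green while the todo
		 stays visible."
		^ #( testLocalMethodsOfTheClassShouldNotBeRepeatedInItsTraits )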

On 2013-10-21, at 23:24, Sven Van Caekenberghe <[email protected]> wrote:
> Tests that fail indicate that something is broken. They are a kind of todo.
> But sometimes they are hard to fix and remain open for a long time. I
> understand that that conflicts with the notion of being 'all green all the
> time'. Maybe we should classify them somehow?
> 
> On 21 Oct 2013, at 23:08, Camillo Bruni <[email protected]> wrote:
> 
>> sad but true :(
>> 
>> On 2013-10-21, at 08:37, Marcus Denker <[email protected]> wrote:
>> 
>>> I have turned off these emails for now, they are too many and thus get 
>>> ignored
>>> (I have never seen anyone but me filing an issue tracker entry for a newly 
>>> failing test…)
>>> 
>>>     Marcus
>>> 
>>> On Oct 21, 2013, at 8:32 AM, [email protected] wrote:
>>> 
>>>> https://ci.inria.fr/pharo/job/Pharo-3.0-Update-Step-2.1-Validation-M-Z/label=win/21/
>>>> 
>>>> 2 regressions found.
>>>> Tests.Release.ReleaseTest.testLocalMethodsOfTheClassShouldNotBeRepeatedInItsTraits
>>>> Zinc.Tests.ZnServerTests.testEntityTooLarge
