I do not think LAVA can or will replace all manual test efforts. One
of the benefits of LAVA is providing regression tests together with
continuous integration. But if no one monitors and reacts to the output
of the regression tests, we lose that benefit completely.

One thing we must make sure of is that the regression tests always pass
and run successfully on good builds, which is not the case today. Then,
when the tests do fail, it is much easier for LAVA users to focus on
investigating why the tests / LAVA failed.

BR

/Chi Thu

On 2 April 2012 05:07, Andy Doan <[email protected]> wrote:
> On 04/01/2012 08:26 PM, Zach Pfeffer wrote:
>>>
>>> In other words, are we really submitting LAVA jobs and not caring about
>>> the results?
>>
>> Since LAVA:
>>
>> 1. Can't reliably boot all the builds in all configurations
>> 2. Doesn't use linaro-android-media-create (which we tell users to use)
>> 3. Doesn't use the right bootloaders
>>
>> We've always hand tested our builds to ensure they work. Until LAVA:
>>
>> 1. Can program a build in the same manner we tell users to
>> 2. Doesn't assume anything about the target, like it even booting
>>
>> We have to keep hand testing.
>
>
> I think even if LAVA were perfect, hand testing is still required. And I
> won't (in this thread) debate the limitations you're bringing up.
>
> In my case, LAVA has been working pretty reliably for Panda for about 4
> months now (at least for my benchmark jobs). When I saw it broken, I pushed
> the issue and the team found a fix pretty quickly. So shouldn't we have
> someone paying attention to at least the Panda builds and raising an issue
> when they trend from mostly working to completely broken?

