On Wed, Dec 04, 2013 at 02:54:07PM -0500, Chris Evich wrote:
> All,
>
> Currently virt-tests attempts to detect when the current virtualization
> environment can be recycled for use by the next test. Measurements show
> this optimization saves a significant amount of testing time. However,
> I believe the practical cost is significant additional maintenance
> burdens, and (perhaps worse) greater-than-zero questionable test results.
>
> On the maintenance-front, environment-cleanliness detection complexity
> increases in proportion to additional hardware (and configuration)
> support for both the harness and tests. This leads to the harness
> requiring a lot of "magic" (complicated and distributed logic) code to
> support cleanliness detection. I'm sure most seasoned developers here
> have encountered failures in this area on more than a few occasions, and
> have been exposed to pressure for complicated or messy fixes and
> workarounds.
>
> On the results-front, for all but the simplest tests using the
> default/automatic environment, PASS/FAIL trustworthiness is tied
> directly to:
>
> * Trust that the harness has managed expectations precisely for an
> unknown number of preceding tests.
>
> * The assumption that the harness usually does the right thing over
> the long term; otherwise, tests must force an environment
> reset/rebuild when cleanliness is critical.
>
> After a lengthy discussion with lmr on the topic, we are questioning
> the practical benefits of the time savings versus the maintenance cost
> and the importance of long-term result-trust and reproducibility.
>
> I believe we can significantly increase result-trust and reduce
> maintenance burden (including "magic" code) if the harness takes an
> "environment is always dirty" stance. In other words, take steps to
> rebuild a known "pristine" environment between every test and
> remove/reduce most of the cleanliness detection. This places more of
> the setup burden on the tests, which are closer to the
> state-requirements anyway.
>
> However, we feel it's important to also get the community's input on
> this topic. Are most of you already using a combination of
> '--restore-image-between-tests' and 'kill_vm = yes' anyway? Or do you
> see large benefits from the harness doing cleanliness detection despite
> the costs? What is your opinion or feedback on this topic?
Hi there.
This is a very nice topic to discuss; I'm glad you brought it up.
As a wise man once said, "(premature) optimization is the root of
all evil". I hold the opinion that the *default* should be
'--restore-image-between-tests' and 'kill_vm = yes'.
Users who are concerned about testing time and know what they're
doing (or the risk they're taking) would enable the optimizations
in their test environments.
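For concreteness, a minimal sketch of what those conservative defaults
might look like in practice (only the option names come from this
thread; the invocation and the config-file location are my
assumptions and may differ in your setup):

```shell
# Always restore pristine disk images between tests
# ('--restore-image-between-tests' is the flag discussed in this thread;
# the './run -t qemu' invocation is an illustrative assumption):
./run -t qemu --restore-image-between-tests

# And in the test configuration (file location is an assumption),
# always tear VMs down after each test:
#   kill_vm = yes
```

Users chasing shorter runs could then explicitly opt out of these
settings, knowing the risk they take on.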
Do you have any numbers on the amount of time saved on a typical
run of virt-test with/without these optimizations? In other
words, what's the magnitude of the problem we're solving by
turning these optimizations on by default?
Thanks.
- Ademar
--
Ademar de Souza Reis Jr.
Red Hat
_______________________________________________
Virt-test-devel mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/virt-test-devel