Hi

Thanks for your answer!

On Wed, Jul 01, 2015 at 07:19:04PM +0200, intrigeri wrote:
> 
> bertagaz wrote (25 Jun 2015 09:41:23 GMT) :
> > I've prepared a blueprint to start this discussion and take notes of the
> > decisions:
> > https://tails.boum.org/blueprint/automated_builds_and_tests/automated_tests_specs/
> 
> Great work! I've pushed a few minor changes, and a more important one
> (21870e1), on top.

Haha, good catch!

> > ## When to test the builds
> 
> > for base branches, we could envisage to run the full test suite on
> > every automatically built ISO (every git push and daily builds) if
> > we think that is relevant.
> 
> This would be great. A possible optimization would be to do it
> (instead of all base branches, all the time) for:
> 
>  * the stable branch, so that we're always ready to put out an
>    emergency release;
>  * the branch that next scheduled release will be based on (can be
>    either stable, or testing, or devel, depending on when in the
>    release cycle we are).

That sounds like a good idea, and would probably cut down quite a bit on
the number of tests run per day.

> > for feature branches, we could run the full test suite only on the
> > daily builds, and either only the automated tests related to the
> > branch on every git push, and/or a subset of the whole test suite.
> 
> I'm not sure what's the benefit of testing a topic branch every day if
> no new work has been pushed to it. In the general case, as a developer
> I'd rather see them tested on Git push only, with some rate limiting
> per-day if needed. See below wrt. one specific case.

Well, as with automated builds, it would give us an idea whether the
tests stopped passing because of an external change. But I agree that,
since we have to cut the number of automated test runs per day, we might
not bother with that for the moment.

> > We can also consider testing only the feature branches that are marked
> > as ReadyforQA as a beginning, even if that doesn't cover Scenario
> > 2 (developers).
> 
> Absolutely, I think that would be the top-priority thing to do for
> topic branches: let's ensure we don't merge crap.

Ok.

> > We can also maybe find more ways to split the automated test suite
> > in faster subsets of features depending on the context, define
> > priorities for built ISO and/or tests.
> 
> This feels ambitious and potentially quite complex. I say let's design
> something simple and then re-adjust.

Agreed.

I've tried to sum this up in a 'current proposal' subsection.

> > ## How to run the tests
> 
> > The automated test suite MUST have access to the artifacts of
> > a given automated build corresponding to a given commit, as well as
> > to the ISOs of the previous Tails releases.
> 
> The ISO of the one last release should be enough, no?

Yes, I admit I wrote this with the tails-history git-annex repo in mind,
which gives access to all the previous releases. But that's an
implementation detail.

> It also needs to know what commit that ISO was built from, in order to
> run the test suite from the same commit. Surely we can dynamically get
> this information by inspecting the ISO (maybe even in the iso9660
> metadata), if passing through the info via Jenkins is too painful.
> Maybe that's worth a research ticket?

Yes, that's what the phrase "a given automated build corresponding to
a given commit" was meant to convey, but maybe that's too fuzzy?
With most of the solutions out there for chaining build and test jobs in
Jenkins, it doesn't seem complicated to pass the test job a parameter
containing the commit used by the previous (upstream in Jenkins) ISO
build job.
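
To make that a bit more concrete, here's a rough sketch (Python, just
for illustration) of how the test job could recover the commit, either
from a parameter passed along by the upstream build job or by inspecting
the iso9660 metadata as you suggest. The UPSTREAM_COMMIT variable name
and the idea that the commit would sit in the volume's "Application id"
field are assumptions, not how anything is set up today:

  #!/usr/bin/env python3
  # Illustrative sketch only: find out which commit an ISO was built from.
  import os
  import re
  import subprocess

  def commit_for_iso(iso_path):
      # Preferred: the upstream build job passed the commit along, and
      # Jenkins exposes it to the test job as an environment variable
      # (UPSTREAM_COMMIT is a made-up name).
      commit = os.environ.get("UPSTREAM_COMMIT")
      if commit:
          return commit
      # Fallback: look for a commit-like string in the primary volume
      # descriptor; isoinfo(1) comes from genisoimage/cdrkit.
      metadata = subprocess.check_output(
          ["isoinfo", "-d", "-i", iso_path], universal_newlines=True)
      for line in metadata.splitlines():
          if line.startswith("Application id:"):
              match = re.search(r"\b[0-9a-f]{7,40}\b", line)
              if match:
                  return match.group(0)
      raise RuntimeError("cannot tell which commit this ISO was built from")

Either way, the test job would then check out that very commit and run
the test suite from it.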

> > The automated test suite MUST be run in a clean environment.
> 
> I'm not sure what exactly you had in mind here, but in my experience,
> the test suite is now quite resistant to being run multiple times in
> a row, so don't bother too much about this -- just using
> a fresh --tmpdir should be enough in general. If we really need to
> e.g. reboot the isotesterN VMs between test suite runs, I've looked
> into it a few weeks ago and dumped results of my research somewhere
> (likely in some blueprint). It seemed to be doable, but adds quite
> some complexity that I'd happily skip.

I've seen your commits on this slave-reboot-between-jobs idea and done
some research myself, and it sure looks quite scary. I've updated the
blueprint to detail a bit what a "clean environment" means and to
include your comment.
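
For the record, here's roughly what I mean by a per-run "clean
environment", following your "fresh --tmpdir" suggestion. Only a
sketch: the --iso option and the paths are made up for illustration:

  #!/usr/bin/env python3
  # Sketch: give each test suite run a fresh temporary directory and
  # clean it up afterwards, so consecutive runs don't step on each other.
  import shutil
  import subprocess
  import tempfile

  def run_tests_in_fresh_tmpdir(checkout_dir, iso_path):
      tmpdir = tempfile.mkdtemp(prefix="tails-test-")
      try:
          return subprocess.call(
              ["./run_test_suite", "--tmpdir", tmpdir, "--iso", iso_path],
              cwd=checkout_dir)
      finally:
          # Whatever this run left behind, the next one starts from scratch.
          shutil.rmtree(tmpdir, ignore_errors=True)

That does look much simpler than rebooting the isotesterN VMs between
runs.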

> > The automated test suite MUST be able to run features in parallel
> > for a single automated build ISO. This way, if more than one
> > isotester are idle, it can use several of them to test an
> > ISO faster.
> 
> Wow! Not sure if/how this can work out, or actually optimize things
> much, with the upcoming new VM snapshots handling.
> 
> Anyway: I doubt we'll have the situation when we have idle
> isotesterN's -- we're rather trying to limit the workload to something
> they can handle -- so perhaps it's not worth putting too much time
> into this? My current feeling is that this is a MAY at best, but I can
> totally be missing something.

Hmm, right, maybe that should rather land in the "future" section, to be
considered when we get more hardware for isotesters too.

> > The automated suite SHOULD be able, when more than one ISO is
> > queued for testing, to fairly distribute the parallelization of
> > their features.
> 
> > The automated test suite MUST not allocate all the isotesters for
> > one ISO when others are waiting to be tested.
> 
> These seem to be related to the parallelizing of one test suite run
> on multiple VMs, so I'll skip them until we've discussed that
> topic more.

Yes, they are.
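
Just to make the intent of these two points concrete, here's the kind
of "fair" scheduling I had in mind (purely illustrative, nothing like
this exists): interleave the queued ISOs' features and deal them out
round-robin to the idle isotesters, so no single ISO grabs them all:

  # Purely illustrative: names and data structures are made up.
  from itertools import zip_longest

  def assign_features(queued_isos, idle_testers):
      # queued_isos: {iso: [feature, ...]}
      # returns {tester: [(iso, feature), ...]}
      per_iso = [[(iso, f) for f in features]
                 for iso, features in queued_isos.items()]
      # First feature of each ISO, then the second of each, and so on.
      interleaved = [item
                     for batch in zip_longest(*per_iso)
                     for item in batch if item is not None]
      # Deal the interleaved work out to the idle isotesters round-robin.
      assignments = {tester: [] for tester in idle_testers}
      for i, work in enumerate(interleaved):
          assignments[idle_testers[i % len(idle_testers)]].append(work)
      return assignments

  # Two ISOs queued, two idle isotesters: each ISO gets half of the
  # capacity instead of the first one taking both testers.
  print(assign_features(
      {"iso-A": ["tor.feature", "usb.feature"],
       "iso-B": ["apt.feature", "time.feature"]},
      ["isotester1", "isotester2"]))

But as said, that can wait until we've discussed the parallelization
topic itself.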

> > The automated test suite MUST be able to accept a threshold of
> > failures for some features before sending notifications. This can
> > help if a scenario fails because of network congestion, but other
> > use cases will probably arise.
> 
> The current running theory is that the test suite *itself* (as opposed
> to the way it's being run e.g. by Jenkins) should handle this itself,
> see e.g. https://labs.riseup.net/code/issues/9515. I prefer it a lot
> more to having Jenkins ignore failures, as it also benefits people who
> run the test suite outside of Jenkins. But realistically, surely we'll
> anyway have transient failures, and I'm not sure what's the best way
> to deal with it. I doubt it parameterizes a lot how we design the
> whole thing, though: it seems to be only about Jenkins publishers
> configuration, and should not impact the rest, so perhaps we can just
> postpone this topic (and not call it a MUST) until #9515 and friends
> are resolved, and an initial deployment makes our actual needs
> clearer? (See, I'm not *always* in favour of over-engineering
> things ;)

Hehe, but it seems like you've spread that favour around. :)

I agree it's better to have this rooted in the test suite itself, and it
seems that a lot of progress has been made there.
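
For the record, the kind of thing I have in mind is roughly the
following. It's only a sketch of the general idea (retry a scenario a
bounded number of times before reporting it as failed), not how #9515
is actually implemented, and all the names are invented:

  # Sketch only: don't let one-off hiccups like network congestion
  # trigger notifications right away.
  class TransientFailure(Exception):
      # Placeholder for the kind of error we'd consider retryable.
      pass

  def run_with_threshold(scenario, run_scenario, max_attempts=3):
      last_error = None
      for _ in range(max_attempts):
          try:
              run_scenario(scenario)
              return True
          except TransientFailure as error:
              last_error = error
      # Only after exhausting the threshold does the failure surface
      # (and notifications fire).
      raise last_error

And since it lives in the test suite itself, people running it outside
of Jenkins benefit from it too.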

I'm fine with considering this later, when we have a more in-depth idea
of how it behaves once an initial deployment has been done.

Updated this part of the blueprint too, but now it seems to be really
tiny. :)

bert