Hi,

On Wed, May 27, 2015 at 2:03 AM, Martin Pitt <[email protected]> wrote:
> Consider a recent example:
>
> http://d-jenkins.ubuntu-ci:8080/job/wily-adt-gem2deb/8/ARCH=amd64,label=adt/
> (25 minutes)
> http://d-jenkins.ubuntu-ci:8080/job/wily-adt-gem2deb/8/ARCH=i386,label=adt/
> (1:39 minutes)

The team are sick of hearing me say this by now, but... High Throughput != Low Latency.

Hypothetical: if every test run took 60 minutes longer than on the old system, but we could run 200 tests in parallel, we'd have pretty good throughput and terrible latency. We measured these numbers (I don't recall exactly what they were), and we're in a good position to scale up automatically when we need to. I agree that we should work on that, and indeed we have had informal discussions within the team about how to achieve it.

The acceptance criteria for the first sprint were all worded in terms of throughput, and we made sure we could deliver higher throughput than the old system (we just crank up the number of cloud workers). If you also care about a certain minimum latency (let's call it 'mean time to test results'), then we should work on that as a separate sprint. We already have several ideas about how to improve the situation there.

adt-cloud is far from perfect; I agree with your points, I just wanted to make sure we don't confuse the statistics here :D

Cheers,

--
Thomi Richards
[email protected]
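To make the throughput-vs-latency distinction above concrete, here is a minimal illustrative sketch. The 60-minutes-extra and 200-workers figures come from the hypothetical in the mail; the 30-minute baseline run time is a made-up assumption purely for the arithmetic:

```python
# Illustrative arithmetic for the hypothetical CI system described above.
# Assumption: a test run takes 30 minutes on the old system (invented figure).

OLD_RUN_MINUTES = 30        # assumed per-test duration on the old system
EXTRA_LATENCY_MINUTES = 60  # hypothetical: each run takes 60 minutes longer
PARALLEL_WORKERS = 200      # hypothetical: 200 tests run in parallel

# Latency: time from submitting one test to seeing its result.
latency = OLD_RUN_MINUTES + EXTRA_LATENCY_MINUTES  # 90 minutes: much worse

# Throughput: completed tests per hour once all workers are busy.
throughput_per_hour = PARALLEL_WORKERS * 60 / latency  # ~133 tests/hour

print(f"Latency per test:  {latency} minutes")
print(f"Throughput:        {throughput_per_hour:.1f} tests/hour")
```

A serial old system at 30 minutes per test would manage only 2 tests/hour, so the parallel system wins hugely on throughput while every individual result arrives an hour later: the two metrics move independently.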
--
Mailing list: https://launchpad.net/~canonical-ci-engineering
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~canonical-ci-engineering
More help   : https://help.launchpad.net/ListHelp

