Clark Boylan <[email protected]> writes:

> The downside here is we double our total number of jobs (though we do
> not double the number of gearman job registrations since gearman will
> register a job per node type regardless of option used). It is also much
> more explicit and will likely require a greater understanding of our job
> configs to edit them (this isn't all bad, where things used to mostly
> work before by magic they will now work by explicit design).
Thank you for the problem statement and the proposals. It helps me to
think in examples -- if I'm following, we get the following
registrations with the two options.

The first results in:

  build:tempest
  build:tempest:trusty
  build:tempest:xenial

The second results in:

  build:tempest-trusty
  build:tempest-trusty:trusty
  build:tempest-xenial
  build:tempest-xenial:xenial

And our current state is:

  build:tempest
  build:tempest:trusty

So the first option is 1.5x our current registration (of affected
jobs) and the second is 2x.

Unfortunately, we recently scaled past our ability to handle the
magnitude of function registrations we have; in fact, we recently
merged a stop-gap change to Zuul just to try to handle the current
load. I'm not sure how long that will last us, or whether we can
actually sustain the number of registrations at issue here. We may
want to do some testing, and we may want to try to minimize the
impact in whatever ways we can -- for example, by choosing one of
those options over the other, or by defining/limiting the set of
affected jobs. I hate having to even consider this as a factor in
making the decision, but this is where I think we are.
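If it helps, here is a quick sketch of that arithmetic (the job and
label names are just the tempest example above, and the counting is my
reading of the two options, so treat it as illustrative rather than as
what zuul-launcher literally does):

  # job -> node labels it needs to run on (tempest example from this thread)
  jobs = {"tempest": ["trusty", "xenial"]}

  def option_one(jobs):
      # One job name; register the bare function plus one function per
      # label: build:NAME and build:NAME:LABEL.
      funcs = set()
      for name, labels in jobs.items():
          funcs.add("build:%s" % name)
          for label in labels:
              funcs.add("build:%s:%s" % (name, label))
      return funcs

  def option_two(jobs):
      # One job variant per label; each variant registers its bare
      # function and its single label-suffixed function.
      funcs = set()
      for name, labels in jobs.items():
          for label in labels:
              variant = "%s-%s" % (name, label)
              funcs.add("build:%s" % variant)
              funcs.add("build:%s:%s" % (variant, label))
      return funcs

  current = {"build:tempest", "build:tempest:trusty"}
  for desc, funcs in (("first option", option_one(jobs)),
                      ("second option", option_two(jobs))):
      print("%s: %d functions (%.1fx current)" % (
          desc, len(funcs), len(funcs) / float(len(current))))

For the actual testing, if I remember the gearman admin protocol
correctly, "echo status | nc <gearman-host> 4730" against geard lists
one registered function per line, so counting lines before and after a
trial change should tell us how much headroom we have.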
We may want to consider extraordinary measures, such as yet another
"private" option to zuul-launcher to reduce the number of unnecessary
function registrations, as a further stop-gap.

This is, of course, one of the biggest problems we are solving in Zuul
v3, and I think it will be solved well (both in terms of the user
experience in specifying this kind of thing and in terms of the
system's ability to handle it), but unfortunately we aren't there yet.
Hopefully this will at least be the last time we have to deal with
this problem.

-Jim