I just saw a note over in HBase where someone remarked on an
unexpectedly long job time. It might have just been a transient spike in
load on one of the build servers. I haven't cross-referenced things, but
just thought I'd mention it.
Michael Wall wrote:
Yes, thanks Sean. They had been running in about 2 hours, but the last one
took 6 hours. I am clicking through to figure out which test took so long.
On Thu, Aug 11, 2016 at 12:08 AM, Josh Elser <[email protected]> wrote:
Cool. Thanks for the info!
Sean Busbey wrote:
Right now it is manually maintained. Specifically, updating it is a matter
of find + grep + copy/paste.
There are some options for automating it, but I haven't seriously
investigated any of them yet. One would be using the Job DSL support
Jenkins has to generate the job config. That should allow us to do the
enumerating as a part of a launching job. Another would be to use a
distributed test framework rather than Jenkins to do the parallelization.
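For what it's worth, the find + grep step could itself be scripted so the
enumeration feeds a generated job config instead of a hand-maintained list.
A minimal sketch, assuming a conventional Maven layout and an *IT.java
naming convention for integration tests (both assumptions, not necessarily
the project's actual layout; the /tmp files below just make the demo
self-contained):

```shell
# Assumption: demo tree standing in for a real src/test/java directory.
mkdir -p /tmp/enum-demo/src/test/java/org/example
printf 'public class FooIT {}\n' > /tmp/enum-demo/src/test/java/org/example/FooIT.java
printf 'public class BarIT {}\n' > /tmp/enum-demo/src/test/java/org/example/BarIT.java

# Enumerate integration-test classes as fully qualified names:
# strip everything up to the java/ source root, turn path separators
# into dots, and drop the .java suffix.
find /tmp/enum-demo -name '*IT.java' \
  | sed -e 's#.*/java/##' -e 's#/#.#g' -e 's#\.java$##' \
  | sort
```

The sorted class list printed at the end is exactly what a launching job
would need to fan the tests out, rather than copy/pasting it by hand.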
If no one else digs into these questions, I imagine I'll need to by the
end of the year for other work-related stuff.