On Mon, Mar 01, 2021 at 09:59:22AM -0500, Daniele Buono wrote:
> Hi Daniel,
>
> On 3/1/2021 5:06 AM, Daniel P. Berrangé wrote:
> > On Fri, Feb 26, 2021 at 10:21:06AM -0500, Daniele Buono wrote:
> > > Build jobs are on the longer side (about 2h and 20m), but I thought it
> > > would be better to just have 6 large jobs than tens of smaller ones.
> >
> > IMHO that is not viable.
> >
> > Our longest job today is approx 60 minutes, and that is already
> > painfully long when developers are repeatedly testing their
> > patch series to find and fix bugs before posting them for review.
> > I can perhaps get through 5-6 test cycles in a day. If we have a
> > 2 hour 20 min job, then I'll get 2-3 test cycles a day.
> >
> > I don't want to see any new jobs added which increase the longest
> > job execution time. We want to reduce our max job time if anything.
>
> I totally understand the argument.
>
> We could build two targets per job. That would create build jobs that
> take 40 to 60-ish minutes. If that's the case, however, I would not
> recommend testing all the possible targets, but rather limit them to a
> set of the most common targets. I have an example of the resulting
> pipeline here:
>
> https://gitlab.com/dbuono/qemu/-/pipelines/258983262
>
> I selected intel, power, arm and s390 as "common" targets. Would
> something like this be a viable alternative? Perhaps after due
> thinking about which targets should be tested?
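For concreteness, the two-targets-per-job idea above could be sketched in `.gitlab-ci.yml` roughly as follows. This is only an illustration: the job names, image, and exact target pairings are assumptions on my part, not what the linked pipeline actually uses.

```yaml
# Sketch only: job names, image, and target pairings are illustrative.
.cfi_build_template:
  stage: build
  image: $CI_REGISTRY_IMAGE/qemu/fedora:latest
  script:
    - mkdir build && cd build
    - ../configure --enable-cfi --target-list="$TARGETS"
    - make -j"$(nproc)"

build-cfi-x86-aarch64:
  extends: .cfi_build_template
  variables:
    TARGETS: x86_64-softmmu,aarch64-softmmu

build-cfi-ppc-s390:
  extends: .cfi_build_template
  variables:
    TARGETS: ppc64-softmmu,s390x-softmmu
```

Pairing targets this way keeps each job in the 40-60 minute range mentioned above while still covering the "common" architectures.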
What are the unique failure scenarios for CFI that these jobs are
likely to expose? Is it likely that we'll have cases where CFI
succeeds in, say, the x86_64 target, but fails in the aarch64 target?
If not, then it would be sufficient to test just a single target to
smoke out CFI-specific bugs, and assume it implicitly covers the
other targets.

Regards,
Daniel

-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|