QEMU Summit Minutes 2023
========================

As usual, we held a QEMU Summit meeting at KVM Forum.  This is an
invite-only meeting for the most active maintainers and submaintainers
in the project, and we discuss various project-wide issues, usually
process stuff. We then post the minutes of the meeting to the list as
a jumping-off point for wider discussion and for those who weren't
able to attend.

Attendees
=========

"Peter Maydell" <peter.mayd...@linaro.org>
"Alex Bennée" <alex.ben...@linaro.org>
"Kevin Wolf" <kw...@redhat.com>
"Thomas Huth" <th...@redhat.com>
"Markus Armbruster" <arm...@redhat.com>
"Mark Cave-Ayland" <mark.cave-ayl...@ilande.co.uk>
"Philippe Mathieu-Daudé" <phi...@linaro.org>
"Daniel P. Berrangé" <berra...@redhat.com>
"Richard Henderson" <richard.hender...@linaro.org>
"Michael S. Tsirkin" <m...@redhat.com>
"Stefan Hajnoczi" <stefa...@redhat.com>
"Alex Graf" <ag...@csgraf.de>
"Gerd Hoffmann" <kra...@redhat.com>
"Paolo Bonzini" <pbonz...@redhat.com>
"Michael Roth" <michael.r...@amd.com>

Topic 1: Dealing with tree-wide changes
=======================================

Mark Cave-Ayland raised concerns that tree-wide changes often get
stuck because maintainers are conservative about merging code that
touches other subsystems without review.  He mentioned a couple of
cases of PC refactoring that had been held up and languished on the
list for lack of review time. It can be hard to get everything in the
change reviewed, and then hard to get the change merged, especially
if it still has parts that weren't reviewed by anybody.

Alex Bennée mentioned that maintainers can always give an Acked-by
and then rely on someone else doing the review. But even getting
Acked-bys can take time, and we still have the problem of absent
maintainers.
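
For reference, the tags are plain trailer lines in the commit
message, and the difference in meaning is the point: a Reviewed-by
records a detailed review, while an Acked-by records only a
maintainer's "no objection to this going in".  The commit below is
illustrative only:

    hw/example: Convert device to three-phase reset

    Signed-off-by: Some Author <author@example.org>
    Reviewed-by: A Reviewer <reviewer@example.org>
    Acked-by: The Maintainer <maintainer@example.org>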

In a brief diversion, Markus mused that having more automated
checking for things like QAPI changes would help reduce the
maintainer load for the more mechanical changes.
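
As a purely illustrative sketch of what such a check might look like
(the generator script exists in the tree today, but the CI wiring and
the commit variables here are assumptions, not an agreed design):
regenerate the QAPI output at both ends of a series and diff it, so
purely mechanical changes are easy to confirm:

    # hypothetical CI step, not an agreed design
    git checkout "$BASE_COMMIT"
    python3 scripts/qapi-gen.py -o /tmp/qapi-old qapi/qapi-schema.json
    git checkout "$SERIES_TIP"
    python3 scripts/qapi-gen.py -o /tmp/qapi-new qapi/qapi-schema.json
    diff -ru /tmp/qapi-old /tmp/qapi-new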

It was pointed out that we should be more accepting of merging
changes without explicit maintainer approval where the changes are
surface-level, system-wide API changes rather than touching the guts
of any particular subsystem. This avoids the sometimes onerous task
of splitting mechanical tree-wide changes along subsystem boundaries.
A schedule of one week, then a ping, then one more week was suggested
as sufficient time for maintainers to reply if they specifically care
about the series.

Alex Graf suggested that we should apply different review
requirements depending on the importance of the subsystem. We should
not hold up code because a minor, underused subsystem didn't get
sign-off. We already do this informally, but we don't make it very
clear, so it can be hard to tell what is and isn't OK to let through
without review.

Topic 2: Are we happy with the email workflow?
==============================================

This was a topic to see if there was any consensus among maintainers
about the long-term acceptability of sticking with email for patch
submission and review -- in five years' time, if we're still doing it
the same way, how would we feel about it?

One area where we did get consensus was that, now that we're doing CI
on gitlab, we can move maintainers' pull requests from email to
gitlab merge requests. This would hopefully mean that, instead of the
release manager having to tell gitlab to do a merge and then
reporting back the results of any CI failures, the maintainer could
directly see the CI results and deal with fixing up failures and
resubmitting without involving the release manager. (This may have
the disbenefit that there is no longer a single person who looks at
all the CI results and gets a sense of whether particular test cases
have pre-existing intermittent failures.)
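
For a rough idea of what the maintainer side of that flow might look
like with the glab command-line tool (the branch name and title here
are made up, and the exact flags may differ):

    # push the staging branch and open a merge request against master
    git push origin staging-branch
    glab mr create --source-branch staging-branch \
        --target-branch master --title "Pull request: xyz queue"
    # then watch the pipeline directly, rather than waiting for the
    # release manager to report back CI failures
    glab ci status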

There was less agreement on the main problem of reviewing code.
On the positive side:
 - everyone acknowledged that the email workflow was a barrier to new
   contributors
 - email is not awkward just for newcomers -- many regular
   developers have to deal with corporate mail systems, firewalls,
   etc, that make the email workflow more awkward than it was
   when Linux (and subsequently QEMU) first adopted it decades ago
 - a web UI means that unreviewed submissions are easier to track,
   rather than being simply ignored on the mailing list
But on the negative side:
 - gitlab is not set up for a "submaintainer tree" kind of workflow,
   so patches would go directly into the main tree and get no
   per-subsystem testing beyond whatever our CI can cover
 - gitlab doesn't handle adding Reviewed-by: and similar tags (the
   sketch after this list shows how the email tooling handles these)
 - email provides an automatic archive of the historical code
   review conversation; gitlab doesn't do this as well
 - it would increase the degree to which we might have a lock-in
   problem with gitlab (we could switch away, but it gets more painful)
 - it has the potential to be a bigger barrier to new contributors
   getting started with reviewing, compared to "just send an email"
 - it would probably burn the project's CI minutes more quickly
   as we would do runs per-submission, not just per-pullreq
 - might increase the awkwardness of the split where some
   contributors, bug reporters, and people interested in a patch are
   reachable only by gitlab handle and others only by email: you
   can't email a handle, and you can't tag an email address on gitlab
 - offline working is trickier/impossible
 - many people were somewhere between "not enthusiastic" and
   "deal-breaker" about the web UI experience for code review
   (some of this can be dealt with via tooling)
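
On the tags point above: part of what makes the email flow workable
today is tooling like b4, which collects Reviewed-by and similar tags
posted as replies on the list and folds them into the commits as
trailers. A rough illustration (the Message-Id and the output file
name are placeholders):

    # fetch a series from the list by its Message-Id; any tags sent
    # as replies are picked up and applied automatically
    b4 am 20231005123456.9999-1-someone@example.org
    git am ./v2_20231005_someone_some_series.mbx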

On balance, then, there is no current consensus that we should change
the project's patch submission and code review workflow.

Topic 3: Should we split responsibility for managing CoC reports?
=================================================================

The QEMU project happily does not have to deal with many Code of
Conduct (CoC) reports, but we could do a better job of managing the
ones we do get.  At the moment CoC reports go to the QEMU Leadership
Committee; Paolo proposed moving CoC handling to a separate team:
although the CoC itself seems good, asking the Leadership Committee
to deal with the reports has not been working so well.  The model
here is Linux, which also initially had its Technical Advisory Board
as the contact point for CoC reports before switching to a dedicated
team.

There was general consensus that we should try the separate-team
approach. We plan to ask on the mailing list for volunteers who would
be interested in helping out with this.

(As always, the existence of a CoC policy and separate CoC team
doesn't remove the responsibility of established developers for
dealing with poor behaviour on the mailing lists when we see it. But
we can't see everything and the existence of a formal channel for
escalating problems is important.)

Topic 4: Size of the QEMU tarballs
==================================

Stefan began by outlining how the issue was noticed after Rackspace
pulled their open source funding, leading to a sudden rise in hosting
bills. Fortunately we have been able to deal with the immediate
problem by first using Azure and then migrating to GNOME's CDN
service.  However, the tarballs are still big: firmware source code
that we suspect most people never look at takes up a significant
chunk of the size. (In particular, the EDK2 sources are half the
tarball!)
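
For anyone who wants to reproduce that kind of number, something
along these lines works; the release version here is just an example,
and this counts everything under roms/, not only EDK2:

    # sum the uncompressed size of the firmware sources in the tarball
    tar tvf qemu-8.1.0.tar.xz | awk '$NF ~ /\/roms\// { s += $3 }
        END { print s " bytes under roms/" }'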

We do need to be careful about GPL compliance (making sure users
have the source if we provide them the compiled firmware blob
for a GPL'd piece of firmware); but we don't necessarily need to
ship the sources in the exact same tarball as the blob.

Peter said we should ask the downstream consumers of our tarballs
what would be most useful to them, or at least figure out what we
think the common use-cases are. At the moment what we do is not very
useful to anybody:
 * most end users, CI systems, etc building from source tarballs
   don't care about the firmware sources and only want the
   binary blobs to be present
 * most downstream distros doing rebuilds want to rebuild the
   firmware from sources anyway, and will use the 'upstream'
   firmware sources rather than the ones we have in the tarballs

Users of QEMU from git don't get a great firmware experience either,
since the firmware is in submodules, with all the usual git submodule
problems. Plus we could do better in these days of Docker containers
than "somebody builds the firmware blob on their machine and sends a
patch with the binary blob to commit to git". So we should consider
whether we can improve things here while we're working on the firmware
problem.
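
For context, the dance a fresh git user goes through today looks
roughly like this (the two submodules named are just examples):

    git clone https://gitlab.com/qemu-project/qemu.git
    cd qemu
    # the firmware isn't present until the submodules are fetched, and
    # this needs redoing whenever a checkout moves a pinned commit
    git submodule update --init roms/edk2 roms/openbios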

Mark Cave-Ayland mentioned that he's already automated the "build
OpenBIOS blobs" aspect, so we could look at that as a model.
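
A containerised version of that idea might look something like the
sketch below; the image and script names are hypothetical, and the
point is only that the blob gets built in a pinned, reproducible
environment rather than on someone's machine:

    # build the firmware blob inside a pinned container image, so that
    # the binary committed to git can be reproduced by anyone
    docker run --rm -v "$PWD/roms/openbios:/src" -w /src \
        qemu-firmware-build:openbios ./build-all-targets.sh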

There definitely seemed to be consensus that it was worth trying
to improve what we do here -- hopefully somebody will have the
time to attempt something :-)


thanks
-- PMM
