On Wed, Oct 20, 2021 at 03:44:08PM +0200, David Hildenbrand wrote:
> On 18.08.21 21:42, Peter Xu wrote:
> > This is a long pending issue that we haven't fixed.  The issue is that
> > in QEMU we have an implicit device ordering requirement when realizing,
> > otherwise some of the devices may not work properly.
> >
> > The initial requirement comes from when vfio-pci started to work with
> > vIOMMUs.  To make sure vfio-pci gets the correct DMA address space, the
> > vIOMMU device needs to be created before vfio-pci, otherwise vfio-pci
> > will stop working when the guest enables the vIOMMU and the device at
> > the same time.
> >
> > AFAIU Libvirt should have code that guarantees that.  QEMU cmdline
> > users need to pay attention or things will stop working at some point.
> >
> > Recently there's a growing and similar requirement for vDPA.  It's not
> > a hard requirement so far, but vDPA has patches that try to work around
> > this issue.
> >
> > This patchset allows us to realize the devices in an order where e.g.
> > platform devices (bus devices, IOMMU, etc.) are created first, then the
> > rest of the normal devices.  It's done simply by sorting the
> > QemuOptsList of "device" entries before realization.  The priority so
> > far comes from the migration priorities, which could be a little bit
> > odd, but that's really the same problem and we can clean that part up
> > in the future.
> >
> > Libvirt can of course keep its ordering so that old QEMU will still
> > work, however that won't be needed for new QEMUs after this patchset,
> > so with the new binary we should be able to specify the '-device'
> > entries on the cmdline in any order we wish.
> >
> > Logically this should also work for vDPA, and the workaround code can
> > be replaced with a more straightforward approach.
> >
> > Please review, thanks.
>
> Hi Peter, looks like I have another use case:
Hi, David,

> vhost devices can heavily restrict the number of available memslots:
> e.g., upstream KVM ~64k, vhost-user usually 32 (!).  With virtio-mem
> intending to make use of multiple memslots [1] and auto-detecting how
> many to use based on the currently available memslots when plugging
> and realizing the virtio-mem device, this implies that realizing vhost
> devices (especially vhost-user devices) after virtio-mem devices can
> similarly result in issues: when trying realization of the vhost
> device with restricted memslots, QEMU will bail out.
>
> So similarly, we'd want to realize any vhost-* before any virtio-mem
> device.
>
> Do you have any updated version of this patchset?  Thanks!

Yes, I should follow this up, thanks for asking.

Though after Markus and Igor pointed out to me that there is much more
to order than device and object types, I don't have a good way to fix
the ordering issue for good for all of the problems; obviously the
current solution only covers device class ordering.  Examples that
Markus provided:

https://lore.kernel.org/qemu-devel/87ilzj81q7....@dusky.pond.sub.org/

There can also be inter-dependency issues within a single device class,
e.g. for PCI buses: if bus pcie.2 has a parent PCI bus pcie.1, then we
must specify the "-device" for pcie.1 before the "-device" for pcie.2,
otherwise QEMU will fail to boot too.

Either of the above examples means ordering based on device class can
only solve part of the problem, not all of it.  And I can buy into the
worry about having yet another way to fix ordering while the root issue
is still unsettled, even if the current solution seems to work for
vIOMMU/vfio, and I had a feeling it could work too for the virtio-mem
issue you're tackling.

My plan is to move on with what Igor suggested: use the pre_plug hook
(or realize()) of the vIOMMU to make sure no special devices like vfio
are realized before it.  I think it'll be something as silly as a PCI
bus scan on all the PCIe host bridges looking for vfio-pci (a rough,
untested sketch of what I mean is further down); it can even be put
directly into realize(), I think, as I don't see an obvious difference
between failing pre_plug() and failing realize() so far.  Then I'll
just drop this series, so the new version may not really help with
virtio-mem anymore; I'm not sure whether virtio-mem can do a similar
thing.

One step back, OTOH, I do second what Daniel commented in the other
thread about leaving that problem to the user; it's sad to know we
already have the pmem restriction, so hot plugging some devices can
already start to fail, but maybe failing is fine as long as nothing
will crash? :)

I also think it would be nice to at least allow the user to specify the
exact virtio-mem memslot number, without any smart tricks being played,
when the user wants that.  I think it's still okay to do automatic
detection, but that's already part of "policy", not "mechanism", to me,
so IMHO it had better be optional.  I now have a feeling that maybe
QEMU should just provide these mechanisms and leave the rest of the
problems to libvirt; maybe that's a better place to do all these sanity
checks and to be smart about deciding the memslot numbers.  For QEMU,
failing at the right point without interrupting the guest seems to be
good enough so far.  "Failing early" doesn't seem to be a problem for
virtio-mem already, since if there's a conflict on the memslot number
then e.g. vhost-user will already fail early; I'm not sure whether that
means it's good enough.
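To make the bus scan idea above a bit more concrete, here's a very
rough sketch of what the check could look like when called from the
vIOMMU's realize() (or pre_plug()) for each PCIe host bridge's root
bus.  It is untested and only illustrative: the vtd_* helper names are
made up, finding the root bus of each host bridge is left to the
caller, and at cold plug the bridge bus numbers may not be programmed
yet, so a real version may need a different walk:

#include "qemu/osdep.h"
#include "qom/object.h"
#include "hw/pci/pci.h"
#include "qapi/error.h"

static void vtd_check_one_device(PCIBus *bus, PCIDevice *dev, void *opaque)
{
    bool *found = opaque;

    /* Match by QOM type name; vfio-pci devices have the type "vfio-pci". */
    if (object_dynamic_cast(OBJECT(dev), "vfio-pci")) {
        *found = true;
    }
}

static void vtd_check_one_bus(PCIBus *bus, void *opaque)
{
    /* NOTE: pci_bus_num() may still be 0 before the guest enumerates. */
    pci_for_each_device(bus, pci_bus_num(bus), vtd_check_one_device, opaque);
}

/*
 * Fail vIOMMU realization if any vfio-pci device was already realized
 * somewhere below the given root bus.
 */
static void vtd_reject_early_vfio(PCIBus *root_bus, Error **errp)
{
    bool found = false;

    /* Walk the root bus and everything behind its bridges. */
    pci_for_each_bus(root_bus, vtd_check_one_bus, &found);
    if (found) {
        error_setg(errp, "vfio-pci must be specified after the vIOMMU device");
    }
}

Whether something like this sits in pre_plug() or realize() shouldn't
matter much for the failure mode, as said above.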
For the vIOMMU I may need to work on the other bus scanning patchset to
make sure that when vfio is specified before the vIOMMU we fail QEMU
early; that part is still missing.

Thanks,

--
Peter Xu