* Jason Wang <jasow...@redhat.com> [2024-07-29 10:16:48]:

> > Without this optimization, guest VCPU would have stalled until VMM in host
> > can emulate it, which can be long, especially a concern when the read is
> > issued in hot path (interrupt handler, w/o MSI-X).
> 
> I think I agree with Michael, let's try to use MSI-X here where
> there's a lot of existing optimizations in various layers.

Yes, sure. Even if we implement MSI-X, there is a security angle to why we want
a hypervisor-hosted PCI bus (details provided in an earlier reply):

https://lore.kernel.org/virtio-dev/20240726070609.gb723...@quicinc.com/T/#m84455763d6b4d0d3b8df814b3d64e6e48ec12ae3


> > > > We will however likely need vduse to support configuration writes
> > > > (guest VM updating configuration space, for ex: writing to
> > > > 'events_clear' field in case of virtio-gpu). Would vduse maintainers be
> > > > willing to accept config_write support for select devices/features (as
> > > > long as the writes don't violate any safety concerns we may have)?
> > >
> > > I think so, looking at virtio_gpu_config_changed_work_func(), the
> > > events_clear seems to be fine to have a posted semantic.
> > >
> > > Maybe you can post an RFC to support config writing and let's start from
> > > there?

Does VDUSE support runtime configuration changes (ex: block device capacity
changes)? I am curious how the atomicity of such an update is handled, for ex:
a guest reading config space while a concurrent update (SET_CONFIG) is
underway. I think the generation count should help there, but it was not clear
to me how VDUSE handles generation-count reads during such concurrent updates.

- vatsa
