On Tue, Mar 03, 2026 at 02:19:13AM -0500, Michael S. Tsirkin wrote:
On Tue, Mar 03, 2026 at 07:51:32AM +0100, Alexander Graf wrote:
On 02.03.26 20:52, Michael S. Tsirkin wrote:
> On Mon, Mar 02, 2026 at 04:48:33PM +0100, Alexander Graf wrote:
> > On 02.03.26 13:06, Stefano Garzarella wrote:
> > > CCing Bryan, Vishnu, and Broadcom list.
> > >
> > > On Mon, Mar 02, 2026 at 12:47:05PM +0100, Stefano Garzarella wrote:
> > > > Please target net-next tree for this new feature.
> > > >
> > > > On Mon, Mar 02, 2026 at 10:41:38AM +0000, Alexander Graf wrote:
> > > > > Vsock maintains a single CID number space which can be used to
> > > > > communicate to the host (G2H) or to a child-VM (H2G). The current logic
> > > > > trivially assumes that G2H is only relevant for CID <= 2 because these
> > > > > target the hypervisor. However, in environments like Nitro
> > > > > Enclaves, an
> > > > > instance that hosts vhost_vsock powered VMs may still want to
> > > > > communicate
> > > > > to Enclaves that are reachable at higher CIDs through virtio-vsock-pci.
> > > > >
> > > > > That means that for CID > 2, we really want an overlay. By default, all
> > > > > CIDs are owned by the hypervisor. But if vhost registers a CID,
> > > > > it takes
> > > > > precedence. Implement that logic. Vhost already knows which CIDs it
> > > > > supports anyway.
> > > > >
> > > > > With this logic, I can run a Nitro Enclave as well as a nested VM with
> > > > > vhost-vsock support in parallel, with the parent instance able to
> > > > > communicate to both simultaneously.
> > > > I honestly don't understand why VMADDR_FLAG_TO_HOST (added
> > > > specifically for Nitro, IIRC) isn't enough for this scenario and we
> > > > have to add this change. Can you elaborate a bit more on the
> > > > relationship between this change and the VMADDR_FLAG_TO_HOST we added?
> >
> > The main problem I have with VMADDR_FLAG_TO_HOST for connect() is that it
> > punts the complexity to the user. Instead of a single CID address space, you
> > now effectively create two spaces: one for TO_HOST (needs a flag) and one for
> > TO_GUEST (no flag). But every userspace tool needs to learn about this
> > flag. That may work for super special-case applications. But propagating
> > that all the way into socat, iperf, etc.? It's just creating friction.
> >
> > IMHO the most natural experience is to have a single CID space, potentially
> > manually segmented by launching VMs of one kind within a certain range.
> >
> > At the end of the day, the host vs guest problem is super similar to a
> > routing table.
> If this is what's desired, some bits could be stolen from the CID
> to specify the destination type. Would that address the issue?
> Just a thought.
Nope :-( VMMs sometimes use a random u32 to set the CID (avoiding reserved
ones like 0, 1, 2, 3, and U32_MAX). We also documented them in the virtio spec:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01.html#x1-4780004
If we had thought of this from the beginning, yes. But now that everyone
thinks CID (guest) == CID (host), I believe this is no longer feasible.
We added a new flag (VMADDR_FLAG_TO_HOST) to struct sockaddr_vm exactly
for that use case around 6 years ago [1], but not much work was done to
propagate that change to userspace tools.
IMO that should be improved, and if this is useful for Nitro, you should
try to help with that effort.
Stefano
[1]
https://lore.kernel.org/netdev/[email protected]/
> Alex

> I don't really insist, but just to point out that if we wanted to, we
> could map multiple CIDs to host. Anyway.