Hi, All,

I am working on container4nfv (formerly openretriever) in OPNFV.

Most of the work is to integrate Kubernetes with OVS-DPDK and VPP; see
http://gerrit.opnfv.org/gerrit/openretriever

I developed a simple CNI for vhost-user/virtio-user.
There are several differences between the veth case and the vhost-user/virtio-user case:

1. In the veth case, the CNI creates a veth pair between the backend and the pause container.
    In the vhost-user/virtio-user case, the CNI needs to create the vhost-user/virtio-user
interface for the POD container instead of the pause container.

2. In the veth case, POD containers share the network namespace with the pause container.
    In the vhost-user/virtio-user case, only ONE POD container consumes the virtio interface.

3. If we pass the vhost-user unix socket to the pause container, it is not easy to
retrieve that information in the POD container.
   To work around this, a dummy interface carrying the unix socket ID is created.
The POD container retrieves that information and sets up virtio-user with that unix
socket ID (a short Go sketch follows after this list).

4. There is no TCP/IP stack in vhost-user/virtio-user.
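To make point 3 concrete, here is a rough Go sketch of how the CNI backend could stash
the vhost-user socket path on a dummy interface and how the POD container could read it
back. It uses the vishvananda/netlink library; the interface name, alias convention, and
socket path are only placeholders, and switching into the pod's network namespace is
omitted. This is not the actual container4nfv code, just an illustration of the idea.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink"
)

// backendAnnotateSocket is a hypothetical helper for the CNI backend: it creates a
// dummy interface (inside the pod's network namespace, namespace handling omitted
// here) whose alias carries the vhost-user unix socket path, so the application in
// the POD container can discover which socket to attach its virtio-user port to.
func backendAnnotateSocket(ifName, socketPath string) error {
	dummy := &netlink.Dummy{LinkAttrs: netlink.LinkAttrs{Name: ifName}}
	if err := netlink.LinkAdd(dummy); err != nil {
		return fmt.Errorf("create dummy %s: %v", ifName, err)
	}
	link, err := netlink.LinkByName(ifName)
	if err != nil {
		return err
	}
	// Store the socket path in the interface alias so it is visible inside the
	// pod without any extra control channel.
	return netlink.LinkSetAlias(link, socketPath)
}

// podFindSocket is the matching lookup the POD container would run: walk the links,
// find the annotated dummy interface and read the socket path back from its alias.
func podFindSocket(prefix string) (string, error) {
	links, err := netlink.LinkList()
	if err != nil {
		return "", err
	}
	for _, l := range links {
		attrs := l.Attrs()
		if l.Type() == "dummy" && strings.HasPrefix(attrs.Name, prefix) && attrs.Alias != "" {
			return attrs.Alias, nil
		}
	}
	return "", fmt.Errorf("no vhost-user socket annotation found")
}

func main() {
	// Backend side (run by the CNI plugin, needs CAP_NET_ADMIN); path is a placeholder.
	if err := backendAnnotateSocket("vhu0", "/var/run/vpp/vhu0.sock"); err != nil {
		fmt.Println("backend:", err)
		return
	}
	// Pod side.
	sock, err := podFindSocket("vhu")
	fmt.Println(sock, err)
}
```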

If we follow the above model, we need to change many CNIs (Calico, Contiv, etc.) for
vhost-user/virtio-user.
I just came up with a new idea so that only the backend needs to change and the CNIs
stay untouched:

1. The basic idea is to offload the veth (slow path) to vhost-user/virtio-user (fast
path).
2. All VHOST messages go over the veth interface instead of a unix socket. Ethertype
0xFFFF is reserved for VHOST messages (see the framing sketch after this list).
    DPDK needs a minimal change to support this.
3. Once vhost-user/virtio-user is set up, all traffic except VHOST messages goes to
vhost-user/virtio-user.
     As a further improvement, we could aggregate the veth interface and the
vhost-user interface.
     VPP needs some change to support this.
4. The application in the container can inject traffic from vhost-user into the kernel
to reuse the TCP/IP stack (a TAP-based sketch also follows below).
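
For point 2 above, this is roughly the framing I have in mind: the vhost-user control
message is carried as the payload of an Ethernet frame with the reserved ethertype
0xFFFF and sent over the container's veth. The sketch below is plain Go with an
AF_PACKET socket just to illustrate the frame layout; the real change would live in
DPDK's vhost-user transport, and the interface name, peer MAC, and message bytes are
placeholders.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"

	"golang.org/x/sys/unix"
)

// Ethertype proposed to be reserved for VHOST messages carried over the veth.
const vhostEtherType = 0xFFFF

// htons converts a 16-bit value to network byte order for AF_PACKET.
func htons(v uint16) uint16 { return v<<8 | v>>8 }

// sendVhostMsg wraps an opaque vhost-user message in an Ethernet frame with
// ethertype 0xFFFF and sends it out the given interface (the container's veth).
func sendVhostMsg(ifName string, dstMAC net.HardwareAddr, msg []byte) error {
	ifi, err := net.InterfaceByName(ifName)
	if err != nil {
		return err
	}
	fd, err := unix.Socket(unix.AF_PACKET, unix.SOCK_RAW, int(htons(unix.ETH_P_ALL)))
	if err != nil {
		return err
	}
	defer unix.Close(fd)

	// Ethernet header: dst MAC, src MAC, reserved ethertype, then the
	// vhost-user message bytes as payload.
	var etype [2]byte
	binary.BigEndian.PutUint16(etype[:], vhostEtherType)
	frame := make([]byte, 0, 14+len(msg))
	frame = append(frame, dstMAC...)
	frame = append(frame, ifi.HardwareAddr...)
	frame = append(frame, etype[:]...)
	frame = append(frame, msg...)

	addr := &unix.SockaddrLinklayer{
		Protocol: htons(unix.ETH_P_ALL),
		Ifindex:  ifi.Index,
		Halen:    uint8(len(dstMAC)),
	}
	copy(addr.Addr[:], dstMAC)
	return unix.Sendto(fd, frame, 0, addr)
}

func main() {
	peer, _ := net.ParseMAC("02:00:00:00:00:01") // placeholder peer MAC
	if err := sendVhostMsg("eth0", peer, []byte("VHOST_USER_GET_FEATURES")); err != nil {
		fmt.Println("send:", err)
	}
}
```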
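For point 4, one simple way for the application to hand slow-path traffic back to the
kernel TCP/IP stack is through a TAP device: frames pulled from the virtio-user ring are
written into the TAP fd and the kernel processes them as if they arrived on a normal
interface. This is only an illustrative Linux TAP sketch (the device name is a
placeholder, and the interface still has to be brought up and addressed separately);
DPDK's own mechanisms such as KNI could serve the same purpose.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// openTAP attaches to (or creates) a TAP device named tapName and returns its file
// descriptor. Frames written to this fd are handed to the kernel TCP/IP stack. The
// interface must still be brought up (e.g. "ip link set tap0 up") before the kernel
// will accept traffic on it.
func openTAP(tapName string) (int, error) {
	fd, err := unix.Open("/dev/net/tun", unix.O_RDWR, 0)
	if err != nil {
		return -1, err
	}
	req, err := unix.NewIfreq(tapName)
	if err != nil {
		unix.Close(fd)
		return -1, err
	}
	req.SetUint16(unix.IFF_TAP | unix.IFF_NO_PI)
	if err := unix.IoctlIfreq(fd, unix.TUNSETIFF, req); err != nil {
		unix.Close(fd)
		return -1, err
	}
	return fd, nil
}

// injectToKernel writes one Ethernet frame (for example, a frame received on the
// virtio-user fast path that needs kernel processing, such as ARP or TCP control
// traffic) into the TAP device.
func injectToKernel(tapFD int, frame []byte) error {
	_, err := unix.Write(tapFD, frame)
	return err
}

func main() {
	fd, err := openTAP("tap0") // placeholder device name
	if err != nil {
		fmt.Println("open tap:", err)
		return
	}
	defer unix.Close(fd)
	// In a real application the frame bytes would come from the virtio-user ring.
	_ = injectToKernel(fd, []byte{ /* raw Ethernet frame bytes */ })
}
```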

Is anyone interested in this idea?

Thanks,
-Ruijing