The only thing I would add, a priori, is that the cost of scalar packet
processing in the kernel may make the cost of the single packet copy
comparatively in the noise, and that a userspace vSwitch will give you
additional features/flexibility versus what you can realize in the kernel.
If you benchmark, it would be good to know the results.
On 19/02/2018 08:48, Jerome Tollet wrote:
As I already told you in a previous message: with tapv2, there is no longer a
copy from user to kernel (there is still a kernel-to-user copy).
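(For reference, a tap v2 interface is created from the VPP debug CLI roughly as follows - the exact syntax varies between VPP releases, and the interface name and address below are illustrative only:)

```
vpp# create tap id 0 host-if-name vpp-tap0
vpp# set interface state tap0 up
vpp# set interface ip address tap0 10.0.0.1/24
```

The `host-if-name` side appears as an ordinary tap device in the container's namespace, so the customer's application needs no DPDK awareness.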
On 19/02/2018 09:35, "firstname.lastname@example.org on behalf of Avi Cohen (A)"
<email@example.com on behalf of avi.co...@huawei.com> wrote:
Thank you Ray.
DPDK is not running in the container - this is the customer's container and I
cannot force the customer to use DPDK.
Regarding the "not compatible" - I mean that packets received at VPP in
userspace, destined to C1 (a container), must be copied to the kernel and then
to C1's IP stack (e.g. via an AF_PACKET interface), while this copy could be
avoided if my vSwitch were running in the kernel.
So, theoretically, for the container-networking use case a vSwitch in the
kernel can achieve better performance and CPU utilization than any vSwitch
over DPDK (VPP or OvS) - unless a memory map can be used here to avoid the
copy (userspace to kernel and kernel to userspace).
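The memory-map idea in the last sentence is exactly the principle that interfaces like memif exploit. As a toy illustration only (this is not VPP code - the process roles, ring size and framing are made up), two processes can hand a frame over through a shared anonymous mapping without a per-packet copy through a kernel socket buffer:

```python
import mmap
import os

RING_SIZE = 4096
# Anonymous MAP_SHARED mapping: both processes see the same pages.
ring = mmap.mmap(-1, RING_SIZE)

pid = os.fork()
if pid == 0:
    # "vSwitch" process: place a frame directly into the shared ring,
    # prefixed by a 2-byte length header.
    payload = b"hello-container"
    ring[0:2] = len(payload).to_bytes(2, "big")
    ring[2:2 + len(payload)] = payload
    os._exit(0)

os.waitpid(pid, 0)
# "Container" process: read the frame out of the same pages - the bytes
# were never copied through the kernel's socket path.
n = int.from_bytes(ring[0:2], "big")
frame = bytes(ring[2:2 + n])
print(frame.decode())  # hello-container
```

This only sketches the data path; a real shared-memory interface also needs descriptor rings, notification, and lifetime management, which is what libmemif provides.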
> -----Original Message-----
> From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of Ray
> Sent: Friday, 16 February, 2018 4:52 PM
> To: firstname.lastname@example.org
> Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux
> Hi Avi,
> So the "not compatible" comment is something I would like to understand.
> Are you typically running DPDK/VPP-based or socket-based applications in
> your containers?
> Our perspective is that userspace networking is equally good for
> Container/Cloud Native - of course depending on what you are trying to
> do. We have done a huge amount of work in both VPP and DPDK developing
> technologies to help, like MemIF (including libmemif), Virtio-User,
> Master-VM, Contiv-VPP etc., in this regard.
> What a container is - ultimately - is a silo'ing of CPU, memory and IO
> for both kernel and userspace processes, but there is nothing in this
> that forces you to choose kernel over userspace networking.
> The way we typically handle container networking for both VPP/DPDK is
> for packets to flow directly between userspace processes - no kernel
> involved. VPP runs in the default namespace, possibly as a vSwitch, and
> switches packets to containers running DPDK/VPP etc., all achieved in
> userspace. We also provide the Master-VM approach and/or FastTAP or
> AF_PACKET to punt the packets into the kernel when required.
> We test the performance of aspects of this, such as memif, regularly -
> results are available here.
> Ray K
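(The memif interfaces mentioned above are created from the VPP debug CLI along these lines - the syntax differs between releases, the IDs, names and addresses are illustrative, and the slave side would attach from a second VPP instance or via libmemif:)

```
vpp# create interface memif id 0 master
vpp# set interface state memif0/0 up
vpp# set interface ip address memif0/0 10.10.1.1/24
```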
> On 13/02/2018 14:04, Avi Cohen (A) wrote:
> > Hello
> > Are there 'numbers' for performance - VPP vs XDP-eBPF - for container
> > networking?
> > Since DPDK and Linux containers are not compatible, in the sense that
> > the container and host share the same kernel, packets received at VPP
> > in userspace and directed to a Linux container must go down to the
> > kernel and then to the container IP stack, while with XDP-eBPF the
> > packet is forwarded to the container IP stack directly from the kernel.
> > I heard that a vhost-user interface for containers is in the works.
> > Can anyone assist with the performance numbers and the status of this
> > vhost-user interface for containers?
> > Best Regards
> > Avi
View/Reply Online (#8255): https://lists.fd.io/g/vpp-dev/message/8255
View All Messages In Topic (9): https://lists.fd.io/g/vpp-dev/topic/11144798