Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-19 Thread Ray Kinsella

Hi Avi,

The only thing I would add a priori is that the cost of scalar packet 
processing in the kernel may mean that the cost of the single packet copy 
is comparatively in the noise, and that a userspace vSwitch will give you 
additional features/flexibility versus what you can realize in the kernel.
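Ray's claim about the relative cost of one packet copy can be sanity-checked with a tiny, machine-dependent micro-benchmark. This is only an illustrative sketch in Python (the buffer size and loop count are arbitrary assumptions; it says nothing about VPP internals):

```python
import time

MTU = 1500                     # assume a full-size Ethernet payload
buf = bytearray(MTU)
N = 100_000

t0 = time.perf_counter()
for _ in range(N):
    _copy = bytes(buf)         # one packet-sized copy per iteration
t1 = time.perf_counter()

per_copy_ns = (t1 - t0) / N * 1e9
print(f"~{per_copy_ns:.0f} ns per {MTU}-byte copy")
```

On typical hardware this lands in the tens to hundreds of nanoseconds, which is small next to the per-packet overhead of a scalar kernel path (syscalls, skb handling, context switches).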


If you benchmark, it would be good to know the results.

Thanks,

Ray K


On 19/02/2018 08:48, Jerome Tollet wrote:

Hi Avi,
As I already told you in a previous message, with tapv2 there is no longer a 
copy from user to kernel (there is still a kernel-to-user copy).
Jerome

On 19/02/2018 09:35, « vpp-dev@lists.fd.io on behalf of Avi Cohen (A) » 
 wrote:

 Thank you Ray
 DPDK is not running in the container - this is a customer container and I 
cannot force the customer to use DPDK.
 Regarding the "not compatible" - I mean packets received at VPP in user 
space, destined to C1 (a container),
 must be copied to the kernel and then to C1's IP stack (e.g. via an 
AF_PACKET interface), while this copy can be saved if my vSwitch is running 
in the kernel.
 So theoretically, for the container networking use case, a vSwitch in the 
kernel can achieve better performance and CPU utilization than any vSwitch 
over DPDK (VPP or OVS).
 Unless a memory map can be used here to save the copy (userspace to kernel 
and kernel to userspace).
 Best Regards
 Avi

 > -Original Message-
 > From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ray 
Kinsella
 > Sent: Friday, 16 February, 2018 4:52 PM
 > To: vpp-dev@lists.fd.io
     > Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux
 > containers
 >
 > Hi Avi,
 >
 > So the "not compatible" comment is something I would like to understand
 > a bit more.
 >
 > Are you typically running DPDK/VPP-based or socket-based applications
 > inside your containers?
 >
 > Our perspective is that userspace networking is equally good for
 > Container/Cloud Native - of course depending on what you are trying to do.
 > We have done a huge amount of work in both VPP and DPDK developing
 > technologies like MemIF (including libmemif), Virtio-User, FastTap,
 > Master-VM, Contiv-VPP etc. to help in this regard.
 >
 > What a container is - ultimately - is a silo'ing of CPU, memory and IO
 > resources for both Kernel and Userspace processes, but there is nothing
 > in this that forces us to choose Kernel over Userspace networking.
 >
 > The way we typically handle container networking for both VPP/DPDK is
 > for packets to flow directly between userspace processes - no kernel
 > required. VPP runs in the default namespace, possibly as a vSwitch or
 > vRouter, and switches packets to containers running DPDK/VPP etc., all
 > achieved in userspace. We also provide the Master-VM approach and/or
 > FastTAP or AF_PACKET to punt the packets into the Kernel when required.
 >
 > We test the performance of aspects of this, such as MemIF, regularly -
 > results are available here:
 >
 > https://docs.fd.io/csit/rls1710/report/vpp_performance_tests/packet_throughput_graphs/container_memif.html#ndr-throughput
 >
 > Thanks,
 >
 > Ray K
 >
 >
 > On 13/02/2018 14:04, Avi Cohen (A) wrote:
 > > Hello
 > > Are there 'numbers' for performance - VPP vs XDP-eBPF for container
 > > networking?
 > >
 > > Since DPDK and Linux containers are not compatible, in the sense that
 > > container and host share the same kernel - packets received at VPP-DPDK
 > > in user space and directed to a Linux container must go down to the
 > > kernel and then to the container's IP stack, while in XDP-eBPF the
 > > packet can be forwarded to the container's IP stack directly from the
 > > kernel.
 > >
 > > I heard that a vhost-user interface for containers is at an 'in
 > > progress' stage.
 > > Can anyone assist with the performance numbers and the status of this
 > > vhost-user for containers?
 > >
 > > Best Regards
 > > Avi
 > >
 > >
 > >
 > >
 >
 >











-=-=-=-=-=-=-=-=-=-=-=-
Links:

You receive all messages sent to this group.

View/Reply Online (#8255): https://lists.fd.io/g/vpp-dev/message/8255
View All Messages In Topic (9): https://lists.fd.io/g/vpp-dev/topic/11144798
Mute This Topic: https://lists.fd.io/mt/11144798/21656
New Topic: https://lists.fd.io/g/vpp-dev/post

Change Your Subscription: https://lists.fd.io/g/vpp-dev/editsub/21656
Group Home: https://lists.fd.io/g/vpp-dev
Contact Group Owner: vpp-dev+ow...@lists.fd.io
Terms of Service: https://lists.fd.io/static/tos
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-19 Thread Jerome Tollet
Hi Avi,
As I already told you in a previous message, with tapv2 there is no longer a 
copy from user to kernel (there is still a kernel-to-user copy).
Jerome

On 19/02/2018 09:35, « vpp-dev@lists.fd.io on behalf of Avi Cohen (A) » 
 wrote:

Thank you Ray
DPDK is not running in the container - this is a customer container and I 
cannot force the customer to use DPDK.
Regarding the "not compatible" - I mean packets received at VPP in user 
space, destined to C1 (a container),
must be copied to the kernel and then to C1's IP stack (e.g. via an 
AF_PACKET interface), while this copy can be saved if my vSwitch is running 
in the kernel.
So theoretically, for the container networking use case, a vSwitch in the 
kernel can achieve better performance and CPU utilization than any vSwitch 
over DPDK (VPP or OVS).
Unless a memory map can be used here to save the copy (userspace to kernel 
and kernel to userspace).
Best Regards
Avi

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ray 
Kinsella
> Sent: Friday, 16 February, 2018 4:52 PM
> To: vpp-dev@lists.fd.io
    > Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux
> containers
> 
> Hi Avi,
> 
> So the "not compatible" comment is something I would like to understand
> a bit more.
> 
> Are you typically running DPDK/VPP-based or socket-based applications
> inside your containers?
> 
> Our perspective is that userspace networking is equally good for
> Container/Cloud Native - of course depending on what you are trying to do.
> We have done a huge amount of work in both VPP and DPDK developing
> technologies like MemIF (including libmemif), Virtio-User, FastTap,
> Master-VM, Contiv-VPP etc. to help in this regard.
> 
> What a container is - ultimately - is a silo'ing of CPU, memory and IO
> resources for both Kernel and Userspace processes, but there is nothing
> in this that forces us to choose Kernel over Userspace networking.
> 
> The way we typically handle container networking for both VPP/DPDK is
> for packets to flow directly between userspace processes - no kernel
> required. VPP runs in the default namespace, possibly as a vSwitch or
> vRouter, and switches packets to containers running DPDK/VPP etc., all
> achieved in userspace. We also provide the Master-VM approach and/or
> FastTAP or AF_PACKET to punt the packets into the Kernel when required.
> 
> We test the performance of aspects of this, such as MemIF, regularly -
> results are available here:
> 
> https://docs.fd.io/csit/rls1710/report/vpp_performance_tests/packet_throughput_graphs/container_memif.html#ndr-throughput
> 
> Thanks,
> 
> Ray K
> 
> 
> On 13/02/2018 14:04, Avi Cohen (A) wrote:
> > Hello
> > Are there 'numbers' for performance - VPP vs XDP-eBPF for container
> > networking?
> >
> > Since DPDK and Linux containers are not compatible, in the sense that
> > container and host share the same kernel - packets received at VPP-DPDK
> > in user space and directed to a Linux container must go down to the
> > kernel and then to the container's IP stack, while in XDP-eBPF the
> > packet can be forwarded to the container's IP stack directly from the
> > kernel.
> >
> > I heard that a vhost-user interface for containers is at an 'in
> > progress' stage.
> > Can anyone assist with the performance numbers and the status of this
> > vhost-user for containers?
> >
> > Best Regards
> > Avi
> >
> >
> >
> >
> 
> 










Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-19 Thread Avi Cohen (A)
Thank you Ray
DPDK is not running in the container - this is a customer container and I 
cannot force the customer to use DPDK.
Regarding the "not compatible" - I mean packets received at VPP in user 
space, destined to C1 (a container),
must be copied to the kernel and then to C1's IP stack (e.g. via an 
AF_PACKET interface), while this copy can be saved if my vSwitch is running 
in the kernel.
So theoretically, for the container networking use case, a vSwitch in the 
kernel can achieve better performance and CPU utilization than any vSwitch 
over DPDK (VPP or OVS).
Unless a memory map can be used here to save the copy (userspace to kernel 
and kernel to userspace).
Best Regards
Avi

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ray 
> Kinsella
> Sent: Friday, 16 February, 2018 4:52 PM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux
> containers
> 
> Hi Avi,
> 
> So the "not compatible" comment is something I would like to understand
> a bit more.
> 
> Are you typically running DPDK/VPP-based or socket-based applications
> inside your containers?
> 
> Our perspective is that userspace networking is equally good for
> Container/Cloud Native - of course depending on what you are trying to do.
> We have done a huge amount of work in both VPP and DPDK developing
> technologies like MemIF (including libmemif), Virtio-User, FastTap,
> Master-VM, Contiv-VPP etc. to help in this regard.
> 
> What a container is - ultimately - is a silo'ing of CPU, memory and IO
> resources for both Kernel and Userspace processes, but there is nothing
> in this that forces us to choose Kernel over Userspace networking.
> 
> The way we typically handle container networking for both VPP/DPDK is
> for packets to flow directly between userspace processes - no kernel
> required. VPP runs in the default namespace, possibly as a vSwitch or
> vRouter, and switches packets to containers running DPDK/VPP etc., all
> achieved in userspace. We also provide the Master-VM approach and/or
> FastTAP or AF_PACKET to punt the packets into the Kernel when required.
> 
> We test the performance of aspects of this, such as MemIF, regularly -
> results are available here:
> 
> https://docs.fd.io/csit/rls1710/report/vpp_performance_tests/packet_throughput_graphs/container_memif.html#ndr-throughput
> 
> Thanks,
> 
> Ray K
> 
> 
> On 13/02/2018 14:04, Avi Cohen (A) wrote:
> > Hello
> > Are there 'numbers' for performance - VPP vs XDP-eBPF for container
> > networking?
> >
> > Since DPDK and Linux containers are not compatible, in the sense that
> > container and host share the same kernel - packets received at VPP-DPDK
> > in user space and directed to a Linux container must go down to the
> > kernel and then to the container's IP stack, while in XDP-eBPF the
> > packet can be forwarded to the container's IP stack directly from the
> > kernel.
> >
> > I heard that a vhost-user interface for containers is at an 'in
> > progress' stage.
> > Can anyone assist with the performance numbers and the status of this
> > vhost-user for containers?
> >
> > Best Regards
> > Avi
> >
> >
> >
> >
> 
> 





Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-16 Thread Ray Kinsella

Hi Avi,

So the "not compatible" comment is something I would like to understand 
a bit more.

Are you typically running DPDK/VPP-based or socket-based applications 
inside your containers?

Our perspective is that userspace networking is equally good for 
Container/Cloud Native - of course depending on what you are trying to 
do. We have done a huge amount of work in both VPP and DPDK developing 
technologies like MemIF (including libmemif), Virtio-User, FastTap, 
Master-VM, Contiv-VPP etc. to help in this regard.

What a container is - ultimately - is a silo'ing of CPU, memory and IO 
resources for both Kernel and Userspace processes, but there is nothing 
in this that forces us to choose Kernel over Userspace networking.

The way we typically handle container networking for both VPP/DPDK is 
for packets to flow directly between userspace processes - no kernel 
required. VPP runs in the default namespace, possibly as a vSwitch or 
vRouter, and switches packets to containers running DPDK/VPP etc., all 
achieved in userspace. We also provide the Master-VM approach and/or 
FastTAP or AF_PACKET to punt the packets into the Kernel when required.

We test the performance of aspects of this, such as MemIF, regularly - 
results are available here:

https://docs.fd.io/csit/rls1710/report/vpp_performance_tests/packet_throughput_graphs/container_memif.html#ndr-throughput

Thanks,

Ray K
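For the AF_PACKET punt path Ray mentions, the kernel side looks roughly like this from a socket-based application's point of view. This is a generic Linux sketch (requires Linux and CAP_NET_RAW; the interface name is hypothetical), not VPP-specific code:

```python
import socket

# Generic Linux sketch of the AF_PACKET receive side: a raw socket bound
# to a host interface sees whole L2 frames punted up through the kernel.
# Requires Linux and CAP_NET_RAW; here we only probe whether it opens.
ETH_P_ALL = 0x0003  # receive every protocol

try:
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                      socket.htons(ETH_P_ALL))
    # s.bind(("eth0", 0))     # hypothetical interface name
    # frame = s.recv(2048)    # one full Ethernet frame, headers included
    print("AF_PACKET socket opened")
    s.close()
except (AttributeError, PermissionError, OSError) as exc:
    print(f"AF_PACKET unavailable here: {exc}")
```

Every frame delivered this way has crossed the kernel, which is exactly the copy Avi is concerned about; memif- or virtio-style interfaces avoid that hop.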


On 13/02/2018 14:04, Avi Cohen (A) wrote:

Hello
Are there 'numbers' for performance - VPP vs XDP-eBPF for container 
networking?

Since DPDK and Linux containers are not compatible, in the sense that 
container and host share the same kernel - packets received at VPP-DPDK in 
user space and directed to a Linux container must go down to the kernel and 
then to the container's IP stack, while in XDP-eBPF the packet can be 
forwarded to the container's IP stack directly from the kernel.

I heard that a vhost-user interface for containers is at an 'in progress' 
stage.
Can anyone assist with the performance numbers and the status of this 
vhost-user for containers?

Best Regards
Avi









Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-15 Thread Heqing
Avi:

You have got to some level with a virtio-user interface; in the backend, it 
does allow OVS-DPDK (or VPP, though not tested) to talk to the container 
guest through the virtio interface.

There was a proposal to merge that interface into the VPP community, but it 
did not fly much given the memif context. The mail archive is moving to a 
new place, and a Google search cannot help me identify the mail thread yet.

The performance will be similar to OVS-DPDK for the container use case.

A few interesting links are provided here:
https://dl.acm.org/citation.cfm?id=3098583.3098586 (paper)
https://schd.ws/hosted_files/lc3china2017/22/Danny%20Zhou_High%20Performance%20Container%20Networking_v4.pdf
https://schd.ws/hosted_files/ossna2017/1e/VPP_K8S_GTPU_OSSNA.pdf
http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.html (DPDK doc)
https://github.com/lagopus/lagopus/blob/master/docs/how-to-use-virtio-user.md (Lagopus view)



-Original Message-
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Avi Cohen 
(A)
Sent: Wednesday, February 14, 2018 1:08 AM

To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux 
containers

Thank you Jerome
I'm working with LXC, but this should apply to Docker as well. I can connect 
the container through a virtio-user port in VPP and a tap interface in the 
kernel, but we pay for a vhost kthread that copies data from kernel to user 
space and vice versa.
Another option is to connect with a veth pair - but the performance is 
further degraded.

Another issue: how does VPP interface with the sandbox?

Best Regards
Avi

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of 
> Jerome Tollet
> Sent: Tuesday, 13 February, 2018 11:27 PM
> To: vpp-dev@lists.fd.io
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux 
> containers
> 
> Hi Avi,
> Can you elaborate a bit on the kind of containers you'd like to run?
> Interfaces exposed to containers may be different if you are looking 
> to run regular endpoints vs cVNFs.
> Jerome
> 
> On 13/02/2018 15:04, « vpp-dev@lists.fd.io on behalf of Avi Cohen (A) » 
>  wrote:
> 
> Hello
> Are there 'numbers' for performance - VPP vs XDP-eBPF for container 
> networking?
> 
> Since DPDK and Linux containers are not compatible, in the sense that 
> container and host share the same kernel - packets received at VPP-DPDK 
> in user space and directed to a Linux container must go down to the 
> kernel and then to the container's IP stack, while in XDP-eBPF the 
> packet can be forwarded to the container's IP stack directly from the 
> kernel.
> 
> I heard that a vhost-user interface for containers is at an 'in 
> progress' stage.
> Can anyone assist with the performance numbers and the status of this 
> vhost-user for containers?
> 
> Best Regards
> Avi
> 
> 
> 
> 
> 
> 
> 








Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-14 Thread Jerome Tollet
Hi Avi,
There may be a terminology issue here:
- vhost-user interfaces are used between VPP and QEMU for fast 
communication; they do not involve the Linux kernel at all.
- in VPP 18.01, we added support for virtio-net (tap v2) interfaces. In 
this case VPP "plays" the usual role of QEMU. These new interfaces are 
quite efficient. In this model, an skbuff created by the Linux kernel 
cannot be mapped to user space; however, Linux makes it possible to create 
an skbuff whose data is located in the mmapped memory region. As a result, 
packets created by VPP do not require a copy to be injected into the kernel.
- Anyway, packet copy is not the real bottleneck here. We did an 
experimental patch where VPP supports GRO/GSO with tapv2; two containers 
were running iperf and the bottleneck was the TCP stack...
Jerome
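The zero-copy idea Jerome describes - producing packet data directly in a memory region both sides can see - can be shown in miniature with shared memory. This is only an analogy to the skbuff/mmap mechanism, not VPP code; the 14-byte "header" stands in for an Ethernet header:

```python
from multiprocessing import shared_memory

# Illustrative analogy only: a "producer" writes a packet into a shared,
# mmapped region; a "consumer" reads it through a zero-copy view, so the
# payload is never duplicated between the two sides.
shm = shared_memory.SharedMemory(create=True, size=2048)
try:
    pkt = b"\x00" * 14 + b"payload"      # fake 14-byte header + payload
    shm.buf[: len(pkt)] = pkt            # write in place (the only copy)

    view = shm.buf[14 : len(pkt)]        # zero-copy view of the payload
    print(bytes(view))                   # -> b'payload'
    del view                             # release the memoryview first
finally:
    shm.close()
    shm.unlink()
```

The key property is the last read: the consumer's `view` aliases the same pages the producer wrote, which is what lets a tapv2-style skbuff avoid a second copy on injection.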

On 14/02/2018 09:08, « Avi Cohen (A) »  wrote:

Thank you Jerome
I'm working with LXC, but this should apply to Docker as well.
I can connect the container through a virtio-user port in VPP and a tap 
interface in the kernel,
but we pay for a vhost kthread that copies data from kernel to user 
space and vice versa.
Another option is to connect with a veth pair - but the performance is 
further degraded.

Another issue: how does VPP interface with the sandbox?

Best Regards
Avi

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Jerome
> Tollet
> Sent: Tuesday, 13 February, 2018 11:27 PM
> To: vpp-dev@lists.fd.io
> Cc: vpp-dev@lists.fd.io
    > Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux
> containers
> 
> Hi Avi,
> Can you elaborate a bit on the kind of containers you'd like to run?
> Interfaces exposed to containers may be different if you are looking to 
> run regular endpoints vs cVNFs.
> Jerome
> 
> On 13/02/2018 15:04, « vpp-dev@lists.fd.io on behalf of Avi Cohen (A) »  d...@lists.fd.io on behalf of avi.co...@huawei.com> wrote:
> 
> Hello
> Are there 'numbers' for performance - VPP vs XDP-eBPF for container
> networking?
> 
> Since DPDK and Linux containers are not compatible, in the sense that
> container and host share the same kernel - packets received at VPP-DPDK
> in user space and directed to a Linux container must go down to the
> kernel and then to the container's IP stack, while in XDP-eBPF the
> packet can be forwarded to the container's IP stack directly from the
> kernel.
> 
> I heard that a vhost-user interface for containers is at an 'in
> progress' stage.
> Can anyone assist with the performance numbers and the status of this
> vhost-user for containers?
> 
> Best Regards
> Avi
> 
> 
> 
> 
> 
> 
> 







Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-14 Thread Avi Cohen (A)
Thank you Jerome
I'm working with LXC, but this should apply to Docker as well.
I can connect the container through a virtio-user port in VPP and a tap 
interface in the kernel,
but we pay for a vhost kthread that copies data from kernel to user space 
and vice versa.
Another option is to connect with a veth pair - but the performance is 
further degraded.

Another issue: how does VPP interface with the sandbox?

Best Regards
Avi

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Jerome
> Tollet
> Sent: Tuesday, 13 February, 2018 11:27 PM
> To: vpp-dev@lists.fd.io
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux
> containers
> 
> Hi Avi,
> Can you elaborate a bit on the kind of containers you'd like to run?
> Interfaces exposed to containers may be different if you are looking to 
> run regular endpoints vs cVNFs.
> Jerome
> 
> On 13/02/2018 15:04, « vpp-dev@lists.fd.io on behalf of Avi Cohen (A) »  d...@lists.fd.io on behalf of avi.co...@huawei.com> wrote:
> 
> Hello
> Are there 'numbers' for performance - VPP vs XDP-eBPF for container
> networking?
> 
> Since DPDK and Linux containers are not compatible, in the sense that
> container and host share the same kernel - packets received at VPP-DPDK
> in user space and directed to a Linux container must go down to the
> kernel and then to the container's IP stack, while in XDP-eBPF the
> packet can be forwarded to the container's IP stack directly from the
> kernel.
> 
> I heard that a vhost-user interface for containers is at an 'in
> progress' stage.
> Can anyone assist with the performance numbers and the status of this
> vhost-user for containers?
> 
> Best Regards
> Avi
> 
> 
> 
> 
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links:

You receive all messages sent to this group.

View/Reply Online (#8207): https://lists.fd.io/g/vpp-dev/message/8207
View All Messages In Topic (3): https://lists.fd.io/g/vpp-dev/topic/11144798
Mute This Topic: https://lists.fd.io/mt/11144798/21656
New Topic: https://lists.fd.io/g/vpp-dev/post

Change Your Subscription: https://lists.fd.io/g/vpp-dev/editsub/21656
Group Home: https://lists.fd.io/g/vpp-dev
Contact Group Owner: vpp-dev+ow...@lists.fd.io
Terms of Service: https://lists.fd.io/static/tos
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-13 Thread Jerome Tollet
Hi Avi,
Can you elaborate a bit on the kind of containers you'd like to run?
Interfaces exposed to containers may be different if you are looking to run 
regular endpoints vs cVNFs.
Jerome

On 13/02/2018 15:04, « vpp-dev@lists.fd.io on behalf of Avi Cohen (A) » 
 wrote:

Hello
Are there 'numbers' for performance - VPP vs XDP-eBPF for container 
networking?

Since DPDK and Linux containers are not compatible, in the sense that 
container and host share the same kernel - packets received at VPP-DPDK in 
user space and directed to a Linux container must go down to the kernel and 
then to the container's IP stack, while in XDP-eBPF the packet can be 
forwarded to the container's IP stack directly from the kernel.

I heard that a vhost-user interface for containers is at an 'in progress' 
stage.
Can anyone assist with the performance numbers and the status of this 
vhost-user for containers?

Best Regards
Avi





