On 16 Oct 2023, at 14:48, Ilya Maximets wrote:
> On 10/6/23 20:10, Алексей Кашавкин via discuss wrote:
>> Hello!
>>
>> I am using OVS with DPDK in OpenStack. This is an RDO+TripleO deployment
>> with the Train release. I am trying to measure the performance of the DPDK
>> compute node. I have created two VMs [1], one as a DUT with DPDK and one as
>> a traffic generator with SR-IOV [2]. Both of them are using […]
Thank you for the reply, Kevin.
I do have a multi-queue configuration, as you can see below. Let
me tell you what tools I am using for traffic generation. We run VoIP
services (pretty much telco style), so we need low latency on the
network to provide a quality audio experience for customers. We […]
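[Editor's note: for context, a multi-queue setup on the OVS-DPDK side usually looks along these lines. The port name dpdk0 and the queue/core numbers are assumptions for illustration, not taken from this thread.]

```shell
# Give a DPDK physical port two receive queues so more than one PMD
# thread can poll it.
ovs-vsctl set Interface dpdk0 options:n_rxq=2

# Optionally pin each rx queue to a specific PMD core
# (format is "queue:core"; core IDs here are assumptions).
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2,1:4"
```

For vhost-user ports the queue count is negotiated with the guest, so multi-queue must also be enabled on the QEMU/virtio side.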
On 30/10/2021 06:07, Satish Patel wrote:
Folks,
I have configured ovs-dpdk to replace an SR-IOV deployment for bonding
support. Everything is good, but as soon as I start hitting a
200 kpps rate I start seeing packet drops.
I have configured CPU isolation as per the documentation to assign a
dedicated PMD thread. I have assigned 8 dedicated […]
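[Editor's note: dedicating isolated cores to PMD threads is done with a CPU bitmask. A minimal sketch, assuming cores 2, 4, 6 and 8 are the isolated ones; the actual core IDs depend on the host's isolcpus layout.]

```shell
# Bitmask with bits 2, 4, 6 and 8 set:
# (1<<2)|(1<<4)|(1<<6)|(1<<8) = 0x154
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x154

# Verify which cores the PMD threads actually landed on.
ovs-appctl dpif-netdev/pmd-rxq-show
```

The mask cores should be on the same NUMA node as the NIC, otherwise cross-node memory traffic can itself cause drops.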
On 11/26/2019 7:41 AM, Rami Neiman wrote:
Hello,
I am using OVS DPDK 2.9.2 with the TRex traffic generator to simply forward
the received traffic back to the traffic generator (i.e.
ingress0->egress0, egress0->ingress0) over a 2-port 10G NIC.
The OVS throughput with this setup matches the traffic generator (all packets
sent by the TG are […]
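[Editor's note: the port cross-connect described above is typically set up with two static OpenFlow rules. The bridge name br0 and the OpenFlow port numbers 1 and 2 are assumptions for illustration.]

```shell
# Forward everything arriving on port 1 out of port 2, and vice versa,
# bypassing NORMAL L2 learning entirely.
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
ovs-ofctl add-flow br0 "in_port=2,actions=output:1"

# Confirm the flows and their packet counters.
ovs-ofctl dump-flows br0
```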
Hi all,
I am able to get the expected performance using ovs-dpdk on a single-socket
system.
But on a system with 2 NUMA nodes, the throughput is less than expected.
The system has 8 physical cores per socket with hyperthreading enabled, so
32 logical cores in total.
Only one physical 10G interface is being […]
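[Editor's note: on a two-socket box, OVS-DPDK needs hugepage memory and PMD cores on both NUMA nodes, or traffic to/from the "remote" node is penalized. A sketch under those assumptions; the sizes and core IDs are illustrative only.]

```shell
# Reserve 1 GB of hugepage memory on each NUMA node.
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"

# Put at least one PMD core on each socket, e.g. core 2 (node 0) and
# core 10 (node 1): (1<<2)|(1<<10) = 0x404.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x404

# Check queue-to-PMD placement across the nodes.
ovs-appctl dpif-netdev/pmd-rxq-show
```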
Hi,
I managed to solve this performance issue. I got improved performance after
turning off mrg_rxbuf and increasing the rx and tx queue sizes to 1024.
Thanks,
Onkar
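[Editor's note: the two changes mentioned above live on different layers. mrg_rxbuf and the vhost queue sizes are QEMU virtio-net properties, while descriptor counts for DPDK physical ports are OVS interface options. A sketch, with the device names assumed:]

```shell
# QEMU side: disable mergeable rx buffers and use 1024-descriptor
# virtio queues (fragment of a qemu-system-x86_64 command line).
#   -device virtio-net-pci,netdev=net0,mrg_rxbuf=off,\
#           rx_queue_size=1024,tx_queue_size=1024

# OVS side: bump the descriptor ring sizes on a DPDK physical port
# (port name dpdk0 is an assumption; values must be powers of two).
ovs-vsctl set Interface dpdk0 options:n_rxq_desc=1024 \
                              options:n_txq_desc=1024
```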
On Thu, Nov 8, 2018 at 2:57 PM Onkar Pednekar wrote:
> Hi,
>
> We figured out that the packet processing appliance within the VM (which
> reads from a raw socket on the dpdk vhost-user interface) requires more
> packets per second to give higher throughput; otherwise its CPU utilization
> is idle most of the time.
> We increased the "tx-flush-interval" from the default 0 to […]
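[Editor's note: tx-flush-interval is a global datapath setting that makes PMD threads buffer output packets for up to the given number of microseconds before flushing, trading a little latency for bigger tx batches. The value 50 below is an assumption; the thread does not say what they settled on.]

```shell
# Default is 0 (send immediately). A non-zero value enables
# time-based output batching, in microseconds.
ovs-vsctl set Open_vSwitch . other_config:tx-flush-interval=50
```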
Hi Tiago,
Sure. I'll try that.
Thanks,
Onkar

On Fri, Oct 5, 2018 at 9:06 AM Lam, Tiago wrote:
> Hi Onkar,
>
> Thanks for shedding some light.
>
> I don't think your difference in performance will have to do with your
> OvS-DPDK setup. If you're taking the measurements directly from the
> iperf server side you'd be going through the "Internet". Assuming you
> don't have a dedicated connection there, things […]
Hi Michael,
Thanks for your reply. Below are the answers to your questions inline.

On Thu, Oct 4, 2018 at 8:01 AM Michael Richardson wrote:
>
> Onkar Pednekar wrote:
> > I have been experimenting with OVS DPDK on 1G interfaces. The system
> > has 8 cores (hyperthreading enabled) and a mix […]
Hi Tiago,
Thanks for your reply.
Below are the answers to your questions inline.

On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago wrote:
> Hi Onkar,
>
> Thanks for your email. Your setup isn't very clear to me, so a few
> queries inline.
>
> On 04/10/2018 06:06, Onkar Pednekar wrote:
> > Hi,
> > […]
Onkar Pednekar wrote:
> I have been experimenting with OVS DPDK on 1G interfaces. The system
> has 8 cores (hyperthreading enabled) and a mix of dpdk and non-dpdk
> capable ports, but the data traffic runs only on dpdk ports.
> DPDK ports are backed by a vhost-user netdev and I have […]
Hi Onkar,
Thanks for your email. Your setup isn't very clear to me, so a few
queries inline.

On 04/10/2018 06:06, Onkar Pednekar wrote:
> Hi,
>
> I have been experimenting with OVS DPDK on 1G interfaces. The system has
> 8 cores (hyperthreading enabled) and a mix of dpdk and non-dpdk capable
> ports, […]
Hi,
I have been experimenting with OVS DPDK on 1G interfaces. The system has 8
cores (hyperthreading enabled) and a mix of dpdk and non-dpdk capable ports,
but the data traffic runs only on dpdk ports.
DPDK ports are backed by a vhost-user netdev and I have configured the system
so that hugepages are […]
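[Editor's note: the hugepage setup referred to above generally has two parts: a kernel boot-time reservation and an OVS/DPDK memory grant. A minimal sketch; the sizes are assumptions.]

```shell
# 1. Reserve 1G hugepages at boot (added to the kernel command line
#    via GRUB, then reboot):
#    default_hugepagesz=1G hugepagesz=1G hugepages=4

# 2. Make sure hugetlbfs is mounted.
mount -t hugetlbfs none /dev/hugepages

# 3. Enable DPDK in OVS and grant it hugepage memory (per NUMA node).
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"
```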
From: Stokes, Ian
Sent: Thursday, April 19, 2018 9:51 PM
To: michael me <1michaelmesgu...@gmail.com>
Cc: ovs-discuss@openvswitch.org; Mooney, Sean K <sean.k.moo...@intel.com>
Subject: RE: [ovs-discuss] ovs-dpdk performance not stable

Hi Michael,
"It will be split between queues […]"
> […] core concepts still apply even if some of the configuration
> commands may have changed:
>
> https://software.intel.com/en-us/articles/configure-vhost-user-multiqueue-for-ovs-with-dpdk
>
> Ian
From: michael me [mailto:1michaelmesgu...@gmail.com]
Sent: Wednesday, April 18, 2018 2:23 PM
To: Stokes, Ian <ian.sto...@intel.com>
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] ovs-dpdk performance not stable

Hi Ian,
In the deployment I do have vhost-user; below is the full output of the […]
[…] & DPDK, have you
tried the same tests using the latest OVS 2.9 and DPDK 17.11?

Ian

From: ovs-discuss-boun...@openvswitch.org
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of michael me
Sent: Tuesday, April 17, 2018 10:42 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] ovs-dpdk performance not stable
Hi Everyone,
I would greatly appreciate any input.
The setup that I am working with is a host with ovs-dpdk connected to a VM.
What I see when I run a performance test is that after about a minute or two
I suddenly have many drops, as if the cache was full and was dumped
improperly.
I tried to […]
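[Editor's note: for drop patterns like the one described, the usual first step is to watch the PMD statistics while the test runs; a rising miss or lost count points at flow-cache churn or an overloaded PMD. These are standard ovs-appctl commands; no setup from this thread is assumed.]

```shell
# Zero the counters, run the traffic for a while, then inspect.
ovs-appctl dpif-netdev/pmd-stats-clear
ovs-appctl dpif-netdev/pmd-stats-show

# Per-queue load distribution across PMD threads.
ovs-appctl dpif-netdev/pmd-rxq-show
```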
> […] On Behalf Of 40724...@qq.com
> Sent: 20 April 2017 08:35
> To: ovs-discuss <ovs-discuss@openvswitch.org>
> Subject: [ovs-discuss] OVS-DPDK performance problem in Openstack Ocata
>
> Hi,
>
> I tested ovs-dpdk (compiled using ovs 2.6.1) under openstack ocata, and I
> found the […]
[…]/latest/topics/dpdk/vhost-user/#adding-vhost-user-ports-to-the-guest-qemu

Darragh.

From: ovs-discuss-boun...@openvswitch.org
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of 40724...@qq.com
Sent: 20 April 2017 08:35
To: ovs-discuss <ovs-discuss@openvswitch.org>
Subject: [ovs-discuss] OVS-DPDK performance problem in Openstack Ocata
27 matches