On 10/05/2023 07:10, Eli Britstein wrote:
-----Original Message-----
From: Ilya Maximets
Sent: Tuesday, 9 May 2023 21:04
To: Eli Britstein ; ovs-discuss@openvswitch.org
Cc: Kevin Traynor ; Aaron Conole
; i.maxim...@ovn.org
Subject: Re: [ovs-discuss] dpdk-lcore-mask
On 15/02/2023 05:50, ChangLimin wrote:
Before enabling DPDK, all ovs-vswitchd threads were affinitized to CPUs 0-27. CPUs
28-31 were isolated in grub, to be used for PMD threads.
After enabling DPDK, many threads were affinitized only to the first lcore CPU.
Below is the output:
# ovs-vsctl --no-wait set
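A minimal sketch of the kind of configuration being discussed, for reference; the
masks below are assumptions matching this CPU layout (cores 28-31 for PMDs), not
values taken from the mail:

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xf0000000

dpdk-lcore-mask controls where the DPDK lcore threads run, while pmd-cpu-mask
selects the cores for the PMD (polling) threads.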
On 15/11/2021 14:58, Thomas Monjalon wrote:
For the last 5 years, DPDK was doing 4 releases per year,
in February, May, August and November (the LTS one):
.02 .05 .08 .11 (LTS)
This schedule has multiple issues:
- clash with China's Spring Festival
- too many
On 30/10/2021 06:07, Satish Patel wrote:
Folks,
I have configured ovs-dpdk to replace an SR-IOV deployment for bonding
support. Everything is good, but somehow as soon as I start hitting a
200 kpps rate I start seeing packet drops.
I have configured CPU isolation as per documentation to assign a
dedicated
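A typical first check for drops like this, using standard OVS-DPDK tooling (not
commands quoted from the original mail), is to look at PMD load and rxq placement:

# ovs-appctl dpif-netdev/pmd-stats-show
# ovs-appctl dpif-netdev/pmd-rxq-show

If one PMD shows far more processing cycles than the others, the rx queue
distribution across PMDs is a likely cause of the drops.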
On 07/03/2021 03:57, George Diamantopoulos wrote:
> Hello all,
>
> It appears that setting the n_txq option has no effect for dpdk Interfaces,
> e.g.: "ovs-vsctl set Interface dpdk-eno1 options:n_txq=2".
>
> n_txq appears to be hardcoded to "5" for my driver (BNX2X PMD), for some
> reason.
>
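One way to see what the datapath actually ended up with; these are standard OVS
commands suggested for context, not taken from the original mail:

# ovs-vsctl get Interface dpdk-eno1 options
# ovs-appctl dpif/show

The dpif/show output should list the requested and configured rx/tx queue counts
for each DPDK port.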
On 20/11/2019 18:14, Stokes, Ian wrote:
On 19/11/2019 18:48, Ilya Maximets wrote:
> On 19.11.2019 19:01, Eli Britstein wrote:
>>
>> On 11/19/2019 7:46 PM, Ilya Maximets wrote:
>>> On 19.11.2019 18:29, Eli Britstein wrote:
On 11/19/2019 7:27 PM, Eli Britstein wrote:
> Hi
>
> I see this file has many inconsistencies
On 10/18/2018 10:46 PM, Ben Pfaff wrote:
> I've had a number of queries from folks lately about our roadmap for LTS
> releases. It has, indeed, been a long time since we've had a long-term
> support release (the current LTS is 2.5). Usually, we've done LTS
> releases before some kind of big
On 01/26/2018 05:21 PM, Ilya Maximets wrote:
> On 26.01.2018 18:47, Kevin Traynor wrote:
>> On 01/26/2018 03:16 PM, Ilya Maximets wrote:
>>> On 26.01.2018 15:00, Stokes, Ian wrote:
>>>> Hi All,
>>>>
>>>> Recently an issue was raised regar
On 01/26/2018 05:27 PM, Stokes, Ian wrote:
>> -----Original Message-----
>> From: Kevin Traynor [mailto:ktray...@redhat.com]
>> Sent: Friday, January 26, 2018 3:48 PM
>> To: Ilya Maximets <i.maxim...@samsung.com>; Stokes, Ian
>> <ian.sto...@intel.com>;
On 01/26/2018 03:16 PM, Ilya Maximets wrote:
> On 26.01.2018 15:00, Stokes, Ian wrote:
>> Hi All,
>>
>> Recently an issue was raised regarding the move from a single shared mempool
>> model that was in place up to OVS 2.8, to a mempool per port model
>> introduced in 2.9.
>>
>>
On 01/23/2018 11:42 AM, Kevin Traynor wrote:
> On 01/17/2018 07:48 PM, Venkatesan Pradeep wrote:
>> Hi,
>>
>> Assuming that all ports use the same MTU, in OVS2.8 and earlier, a single
>> mempool of 256K buffers (MAX_NB_MBUF = 4096 * 64) will be created and
On 01/17/2018 07:48 PM, Venkatesan Pradeep wrote:
> Hi,
>
> Assuming that all ports use the same MTU, in OVS2.8 and earlier, a single
> mempool of 256K buffers (MAX_NB_MBUF = 4096 * 64) will be created and shared
> by all the ports
>
> With the OVS2.9 mempool patches, we have port specific
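A rough back-of-the-envelope calculation of what that shared pool costs; the
~3 KiB per-mbuf figure is an assumption for a 1500-byte MTU (data room plus
headroom and metadata), not a number from this thread:

$ echo $((4096 * 64))
262144
$ echo $(( (4096 * 64 * 3072) / 1048576 ))
768

i.e. roughly 768 MiB for the single shared mempool in that example.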
On 11/14/2017 04:43 AM, Kevin Traynor wrote:
> On 11/14/2017 02:16 AM, aserd...@ovn.org wrote:
>>> Subject: Re: [ovs-discuss] pmd-cpu-mask/distribution of rx queues not
>>> working on windows
>>>
>>> On 10/19/2017 05:45 PM, Alin Gabriel Serdean wrote:
On 11/14/2017 02:16 AM, aserd...@ovn.org wrote:
>> Subject: Re: [ovs-discuss] pmd-cpu-mask/distribution of rx queues not
>> working on windows
>>
>> On 10/19/2017 05:45 PM, Alin Gabriel Serdean wrote:
>>> Hi,
>>>
>>>
>>>
>>> Currently the test “pmd-cpu-mask/distribution of rx queues” is failing
On 10/19/2017 05:45 PM, Alin Gabriel Serdean wrote:
> Hi,
>
>
>
> Currently the test “pmd-cpu-mask/distribution of rx queues” is failing
> on Windows. I’m trying to figure out what we are missing on the Windows
> environment. Any help is welcome.
>
>
Hi Alin, the queues are sorted by
On 09/07/2017 06:47 PM, Darrell Ball wrote:
> Adding disc...@openvswitch.org
>
> The related changes went into 2.7
>
>
>
> On 9/7/17, 3:51 AM, "ovs-dev-boun...@openvswitch.org on behalf of devendra
> rawat" <devendra.rawat.si...@gmail.com>
On 09/06/2017 02:43 PM, Jan Scheurich wrote:
>>
>> I think the mention of pinning was confusing me a little. Let me see if I
>> fully understand your use case: You don't 'want' to pin
>> anything, but you are using it as a way to force the distribution of rxqs from
>> a single NIC across the PMDs
On 09/06/2017 02:33 PM, Jan Scheurich wrote:
> Hi Billy,
>
>> You are going to have to take the hit crossing the NUMA boundary at some
>> point if your NIC and VM are on different NUMA nodes.
>>
>> So are you saying that it is more expensive to cross the NUMA boundary from
>> the pmd to the VM that
On 09/06/2017 08:03 AM, 王志克 wrote:
> Hi Darrell,
>
> pmd-rxq-affinity has the limitation below (so an isolated PMD can not be used for
> anything else, which is not what I expect; lots of VMs come and go on the fly, and
> manual assignment is not feasible):
> >>After that PMD threads on cores
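For reference, the option under discussion takes explicit queue:core pairs; the
interface name and core numbers here are made-up examples, not values from this
thread:

# ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:28,1:29"

Once a queue is pinned this way, the PMD on that core becomes isolated and, as
noted above, will not pick up other queues automatically.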