I am using Red Hat 7.4 and OVS 2.6.1 with DPDK 16.11.
In my setup I am using qemu-kvm without OpenStack. I am trying to use OVS with
DPDK. In my XML file I added the following lines for the dpdkvhostuser port.
I have enabled hugepages during boot time
[root@mvmgptb11hyp01 hyp-1]# cat
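For reference, a libvirt vhost-user interface stanza for an OVS-DPDK dpdkvhostuser port generally looks like the sketch below. The MAC address and socket path are placeholders; the socket path must match the dpdkvhostuser port created in OVS with ovs-vsctl, and the guest memory must be hugepage-backed (via <memoryBacking><hugepages/></memoryBacking>) for vhost-user to work:

```xml
<!-- Illustrative sketch only: attach a guest vNIC to an OVS-DPDK
     dpdkvhostuser port. Socket path and MAC are placeholders. -->
<interface type='vhostuser'>
  <mac address='52:54:00:00:00:01'/>
  <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
  <model type='virtio'/>
</interface>
```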
Hi,
I have a question about the Rx mergeable buffers feature.
The documentation below mentions that setting mrg_rxbuf=off can improve performance. So, question 1:
How much does it affect throughput?
* Rx Mergeable
TBH I haven't looked into details.
I will do this tomorrow.
Thanks a lot,
Alin.
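For context, mergeable Rx buffers are negotiated per virtio-net device, so the knob sits on the guest's device definition rather than in OVS. Assuming a QEMU-launched guest on a vhost-user port, a sketch of disabling the feature would be (all paths and IDs below are placeholders):

```shell
# Illustrative fragment only: mrg_rxbuf is a virtio-net-pci device property.
qemu-system-x86_64 ... \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
  -netdev vhost-user,id=net0,chardev=char0 \
  -device virtio-net-pci,netdev=net0,mrg_rxbuf=off
```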
> -----Original Message-----
> From: Ben Pfaff [mailto:b...@ovn.org]
> Sent: Tuesday, January 23, 2018 2:19 AM
> To: Alin Serdean
> Cc: b...@openvswitch.org
> Subject: Re:
On 01/23/2018 11:42 AM, Kevin Traynor wrote:
> On 01/17/2018 07:48 PM, Venkatesan Pradeep wrote:
>> Hi,
>>
>> Assuming that all ports use the same MTU, in OVS2.8 and earlier, a single
>> mempool of 256K buffers (MAX_NB_MBUF = 4096 * 64) will be created and shared
>> by all the ports
>>
>> With
On Tue, Jan 23, 2018 at 10:41:21AM +0800, netsurfed wrote:
> Hi all,
>
>
> When I created a virtual machine using libvirt, with virtualport type
> openvswitch, the virtual machine creation failed. The domain XML file has a
> section like this:
>
>
> I looked at the system log and it
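For comparison, a typical working interface section for attaching a guest to an existing OVS bridge (the bridge name below is a placeholder) looks like this:

```xml
<!-- Illustrative sketch only: guest vNIC attached to an existing OVS
     bridge named br0 via the openvswitch virtualport type. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>
```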
Hi there,
this is a test email for ovs discuss.
Regards,
Nandan Kulkarni
_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Hi Marcos,
Marcos Felipe Schwarz writes:
> Thanks for the suggestion Aaron.
>
> Below is the revised patch for the current master, using Aaron's and
> Timothy's contributions. May I submit the patch as is, or are there any
> further suggestions?
> I've tested it in the
Thanks for the suggestion Aaron.
Below is the revised patch for the current master, using Aaron's and Timothy's
contributions. May I submit the patch as is, or are there any further
suggestions?
I've tested it in the following conditions:
1) Fedora 27, ovs_user root:root, vfio-uio driver: Fixed
Here's a good starting point:
http://www.opencompute.org/wiki/Networking/ONIE/NOS_Status
I don't see OVS specifically, but it could be that some of the NOSes have
OVS built in, or that it can easily be installed on top.
On Mon, Jan 22, 2018 at 2:34 PM, Shivaram Mysore
On 01/17/2018 07:48 PM, Venkatesan Pradeep wrote:
> Hi,
>
> Assuming that all ports use the same MTU, in OVS2.8 and earlier, a single
> mempool of 256K buffers (MAX_NB_MBUF = 4096 * 64) will be created and shared
> by all the ports
>
> With the OVS2.9 mempool patches, we have port specific
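The shared-mempool sizing quoted above can be sanity-checked with a little arithmetic. The memory estimate below assumes roughly 3 KB per mbuf (data room for a 1500-byte MTU plus headroom and metadata); that per-mbuf figure is an illustrative assumption, not the exact OVS accounting:

```python
# Shared mempool sizing in OVS 2.8 and earlier, per the thread above:
# MAX_NB_MBUF = 4096 * 64 buffers shared across all ports of the same MTU.
MAX_NB_MBUF = 4096 * 64

# Rough memory footprint, assuming ~3072 bytes per mbuf (data room for a
# 1500-byte MTU plus headroom and metadata -- an illustrative estimate only).
ASSUMED_MBUF_BYTES = 3072
total_mib = MAX_NB_MBUF * ASSUMED_MBUF_BYTES // (1024 * 1024)

print(MAX_NB_MBUF)   # 262144 buffers, i.e. the 256K mentioned above
print(total_mib)     # 768 (MiB) under the assumed per-mbuf size
```

Under these assumptions the single shared pool costs on the order of three quarters of a gigabyte of hugepage memory, which is part of the motivation for the per-port mempools introduced in OVS 2.9.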