Re: [ovs-discuss] 100G with OvS

2018-11-02 Thread Joel Wirāmu Pauling
DPDK, SR-IOV and any sort of SmartNIC which removes or obfuscates the
generic kernel packet path from view have their place, but they are
all stop-gap technologies IMNSHO.

eBPF + XDP plugged into OVS is, in my view, the only truly useful use
case worth pursuing for SDN workload interaction.
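
For the avoidance of doubt, by eBPF + XDP I mean a small verified program
hooked into the driver receive path, with the packet still fully visible to
the kernel. A purely illustrative sketch (file name, function name and the
build/attach commands are made up for the example, not anything OVS ships):

    /* minimal_xdp.c - illustrative only.
     * Build (roughly): clang -O2 -g -target bpf -c minimal_xdp.c -o minimal_xdp.o
     * Attach (iproute2): ip link set dev <nic> xdp obj minimal_xdp.o sec xdp
     * (<nic> is a placeholder interface name.)
     */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_demo(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* The frame is visible to kernel code right here - that is the point:
         * the flow stays inspectable and programmable from the host instead
         * of disappearing into NIC firmware. */
        if (data + sizeof(struct ethhdr) > data_end)
            return XDP_DROP;    /* runt frame: not even an Ethernet header */

        return XDP_PASS;        /* hand the packet on to the normal stack */
    }

    char _license[] SEC("license") = "GPL";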

If you need to switch faster you might as well get dedicated hardware
appliances, because for all practical purposes the Mellanox, Netronome
and very recent Intel offload mechanisms (including DPDK and SR-IOV)
make the packet and flow processing path of the kernel irrelevant.
You effectively have an entire GNU/Linux or BSD stack there just to
provision a very tiny bit of hardware that is actually doing anything,
and since you are not doing any meaningful interaction from the kernel
or application layers with said bit of hardware, you end up losing
command, control and processing abilities. Black holes and edge cases
follow, leading to lower resiliency, feature disparity in the network
and increased management overhead in your scalable overlay fabric
forwarding path.

There are a lot of VNF workloads that claim they need XYZ performance
metric to be met and therefore need some $offload.

My adjunct to that is that a lot of those VNF workloads need to be
re-engineered to a) be cloud native and b) apply newer or better
techniques for resilience in their packet paths.

I am not saying switching at 100G doesn't have its place; just that
for commodity cloud compute environments its current positioning on an
overlay is questionable.


On Sat, 3 Nov 2018 at 05:04, Shivaram Mysore  wrote:
>
> Thanks for sharing.  Yes - I have heard of some folks using Mellanox cards.
> But I was more curious about use of the Intel FM1 series - the FM10420 and
> FM10840 chipsets - which Silicom, and I think Lanner, also offer on cards
> for use with OVS.
>
> Thanks
>
> On Fri, Nov 2, 2018 at 4:35 AM  wrote:
>>
>> Thank you very much for the slides,
>>
>> Hmm, the presentation doesn’t actually have much detail on why the upper
>> bound is ~30G (I guess per port), at least with 6 cores, using the slow
>> x86 path.
>>
>>
>>
>> So is the point you’re trying to make that above 40G per port one needs
>> to enter the realm of “smart” NICs, please?
>>
>> If true, then I’d be interested in how these differ from, say,
>> P4/Tofino-like chips.
>>
>> I guess that’s the realm where I’d be limited in terms of available OVS
>> features to only those for which HW acceleration is available on the NIC
>> - is that assumption correct, please?
>>
>>
>>
>> adam
>>
>>
>>
>> netconsultings.com
>>
>> ::carrier-class solutions for the telecommunications industry::
>>
>>
>>
>> From: Joel Wirāmu Pauling [mailto:j...@aenertia.net]
>> Sent: Friday, November 02, 2018 9:40 AM
>> To: adamv0...@netconsultings.com
>> Cc: Shivaram Mysore; ovs-discuss@openvswitch.org
>> Subject: Re: [ovs-discuss] 100G with OvS
>>
>>
>>
>> Have a look at:
>>
>>
>>
>> https://m.youtube.com/watch?v=MglrK-JTiqc
>>
>>
>>
>> Disclaimer: I worked for Nuage at the time that was done, and work for
>> Red Hat now.
>>
>>
>>
>>
>>
>>
>>
>> On Fri, 2 Nov 2018, 22:23,  wrote:
>> > boun...@openvswitch.org] On Behalf Of Joel Wiramu Pauling
>> > Sent: Thursday, November 01, 2018 10:29 PM
>> >
>> > Currently - doing slow-path through commodity x86 silicon you are pretty
>> > much capped at 40gbit; so beyond a few use cases where you are, say,
>> > writing to an NVMe array directly with minimal CPU interaction, 100G to
>> > nodes is relatively limited. I've read several relatively good analyses
>> > which indicate that we are close to physical limits when we hit around
>> > 130Gbit with Ethernet; but currently 40Gbit through existing x86_64
>> > architectures is about spot on.
>> >
>> Hi Joel,
>>
>> Thank you very much for the info,
>>
>> I assume this limit is per physical NIC - is that the case, please?
>> (I'm wondering if I'd get ~100Gbps worth of throughput (100G in + 100G
>> out) through the system as a whole, i.e. multiple interfaces, essentially
>> turning it into an OVS router.)
>>
>> Would you please share what the limiting factor is?
>> (I just found that PCIe 3.0 x16 should be capped at ~126.075Gbps usable BW,
>> DDR3-2133 at ~136.5Gbps, and for the Intel® Xeon® Processor E5 family it's
>> 372.8 Gbps.)
>>
>> Thank you very much
>>
>> adam
>>
>> netconsultings.com
>> ::carrier-class solutions for the telecommunications industry::


Re: [ovs-discuss] 100G with OvS

2018-11-02 Thread Shivaram Mysore
Thanks for sharing.  Yes - I have heard of some folks using Mellanox
cards.  But I was more curious about use of the Intel FM1 series - the
FM10420 and FM10840 chipsets - which Silicom, and I think Lanner, also
offer on cards for use with OVS.

Thanks

On Fri, Nov 2, 2018 at 4:35 AM  wrote:

> Thank you very much for the slides,
>
> Hmm, the presentation doesn’t actually have much detail on why the upper
> bound is ~30G (I guess per port), at least with 6 cores, using the slow
> x86 path.
>
>
>
> So is the point you’re trying to make that above 40G per port one needs to
> enter the realm of “smart” NICs, please?
>
> If true, then I’d be interested in how these differ from, say,
> P4/Tofino-like chips.
>
> I guess that’s the realm where I’d be limited in terms of available OVS
> features to only those for which HW acceleration is available on the NIC
> - is that assumption correct, please?
>
>
>
> adam
>
>
>
> netconsultings.com
>
> ::carrier-class solutions for the telecommunications industry::
>
>
>
> From: Joel Wirāmu Pauling [mailto:j...@aenertia.net]
> Sent: Friday, November 02, 2018 9:40 AM
> To: adamv0...@netconsultings.com
> Cc: Shivaram Mysore; ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] 100G with OvS
>
>
>
> Have a look at:
>
>
>
> https://m.youtube.com/watch?v=MglrK-JTiqc
>
>
>
> Disclaimer: I worked for Nuage at the time that was done, and work for
> Red Hat now.
>
>
>
>
>
>
>
> On Fri, 2 Nov 2018, 22:23,  wrote:
> > boun...@openvswitch.org] On Behalf Of Joel Wiramu Pauling
> > Sent: Thursday, November 01, 2018 10:29 PM
> >
> > Currently - doing slow-path through commodity x86 silicon you are pretty
> > much capped at 40gbit; so beyond a few use cases where you are, say,
> > writing to an NVMe array directly with minimal CPU interaction, 100G to
> > nodes is relatively limited. I've read several relatively good analyses
> > which indicate that we are close to physical limits when we hit around
> > 130Gbit with Ethernet; but currently 40Gbit through existing x86_64
> > architectures is about spot on.
> >
> Hi Joel,
>
> Thank you very much for the info,
>
> I assume this limit is per physical NIC - is that the case, please?
> (I'm wondering if I'd get ~100Gbps worth of throughput (100G in + 100G
> out) through the system as a whole, i.e. multiple interfaces, essentially
> turning it into an OVS router.)
>
> Would you please share what the limiting factor is?
> (I just found that PCIe 3.0 x16 should be capped at ~126.075Gbps usable BW,
> DDR3-2133 at ~136.5Gbps, and for the Intel® Xeon® Processor E5 family it's
> 372.8 Gbps.)
>
> Thank you very much
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
>


Re: [ovs-discuss] 100G with OvS

2018-11-02 Thread adamv0025
Thank you very much for the slides,

Hmm, the presentation doesn’t actually have much detail on why the upper
bound is ~30G (I guess per port), at least with 6 cores, using the slow
x86 path.

 

So is the point you’re trying to make that above 40G per port one needs to
enter the realm of “smart” NICs, please?

If true, then I’d be interested in how these differ from, say,
P4/Tofino-like chips.

I guess that’s the realm where I’d be limited in terms of available OVS
features to only those for which HW acceleration is available on the NIC
- is that assumption correct, please?

 

adam 

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

From: Joel Wirāmu Pauling [mailto:j...@aenertia.net] 
Sent: Friday, November 02, 2018 9:40 AM
To: adamv0...@netconsultings.com
Cc: Shivaram Mysore; ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] 100G with OvS

 

Have a look at:

 

https://m.youtube.com/watch?v=MglrK-JTiqc

 

Disclaimer: I worked for Nuage at the time that was done, and work for
Red Hat now.

 

 

 

On Fri, 2 Nov 2018, 22:23, adamv0...@netconsultings.com wrote:

> boun...@openvswitch.org] On Behalf Of Joel Wiramu Pauling
> Sent: Thursday, November 01, 2018 10:29 PM
> 
> Currently - doing slow-path through commodity x86 silicon you are pretty
> much capped at 40gbit; so beyond a few use cases where you are, say,
> writing to an NVMe array directly with minimal CPU interaction, 100G to
> nodes is relatively limited. I've read several relatively good analyses
> which indicate that we are close to physical limits when we hit around
> 130Gbit with Ethernet; but currently 40Gbit through existing x86_64
> architectures is about spot on.
> 
Hi Joel,

Thank you very much for the info,

I assume this limit is per physical NIC - is that the case, please?
(I'm wondering if I'd get ~100Gbps worth of throughput (100G in + 100G
out) through the system as a whole, i.e. multiple interfaces, essentially
turning it into an OVS router.)

Would you please share what the limiting factor is?
(I just found that PCIe 3.0 x16 should be capped at ~126.075Gbps usable BW,
DDR3-2133 at ~136.5Gbps, and for the Intel® Xeon® Processor E5 family it's
372.8 Gbps.)

Thank you very much

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::



Re: [ovs-discuss] 100G with OvS

2018-11-02 Thread Joel Wirāmu Pauling
Have a look at:

https://m.youtube.com/watch?v=MglrK-JTiqc

Disclaimer: I worked for Nuage at the time that was done, and work for
Red Hat now.



On Fri, 2 Nov 2018, 22:23,  wrote:

> > boun...@openvswitch.org] On Behalf Of Joel Wiramu Pauling
> > Sent: Thursday, November 01, 2018 10:29 PM
> >
> > Currently - doing slow-path through commodity x86 silicon you are pretty
> > much capped at 40gbit; so beyond a few use cases where you are, say,
> > writing to an NVMe array directly with minimal CPU interaction, 100G to
> > nodes is relatively limited. I've read several relatively good analyses
> > which indicate that we are close to physical limits when we hit around
> > 130Gbit with Ethernet; but currently 40Gbit through existing x86_64
> > architectures is about spot on.
> >
> Hi Joel,
>
> Thank you very much for the info,
>
> I assume this limit is per physical NIC - is that the case, please?
> (I'm wondering if I'd get ~100Gbps worth of throughput (100G in + 100G
> out) through the system as a whole, i.e. multiple interfaces, essentially
> turning it into an OVS router.)
>
> Would you please share what the limiting factor is?
> (I just found that PCIe 3.0 x16 should be capped at ~126.075Gbps usable BW,
> DDR3-2133 at ~136.5Gbps, and for the Intel® Xeon® Processor E5 family it's
> 372.8 Gbps.)
>
> Thank you very much
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
>


Re: [ovs-discuss] 100G with OvS

2018-11-02 Thread adamv0025
> boun...@openvswitch.org] On Behalf Of Joel Wiramu Pauling
> Sent: Thursday, November 01, 2018 10:29 PM
> 
> Currently - doing slow-path through commodity x86 silicon you are pretty
> much capped at 40gbit; so beyond a few use cases where you are, say,
> writing to an NVMe array directly with minimal CPU interaction, 100G to
> nodes is relatively limited. I've read several relatively good analyses
> which indicate that we are close to physical limits when we hit around
> 130Gbit with Ethernet; but currently 40Gbit through existing x86_64
> architectures is about spot on.
> 
Hi Joel,

Thank you very much for the info,

I assume this limit is per physical NIC - is that the case, please?
(I'm wondering if I'd get ~100Gbps worth of throughput (100G in + 100G
out) through the system as a whole, i.e. multiple interfaces, essentially
turning it into an OVS router.)

Would you please share what the limiting factor is?
(I just found that PCIe 3.0 x16 should be capped at ~126.075Gbps usable BW,
DDR3-2133 at ~136.5Gbps, and for the Intel® Xeon® Processor E5 family it's
372.8 Gbps.)
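
Rough working behind the first two figures, back-of-the-envelope only (the
Xeon number is presumably Intel's aggregate maximum memory bandwidth across
all channels, so I haven't tried to derive it):

    PCIe 3.0:  8 GT/s per lane x 128/130 encoding ≈ 7.88 Gbps usable per lane,
               so x16 ≈ 126 Gbps before TLP/protocol overhead
    DDR3-2133: 2133 MT/s x 8 bytes (64-bit channel) ≈ 17.1 GB/s ≈ 136.5 Gbps
               per channel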

Thank you very much

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::



Re: [ovs-discuss] 100G with OvS

2018-11-01 Thread Joel Wirāmu Pauling
Yes - it's only really possible with vendor offload tricks; e.g. ASAP2
on the Mellanox X5 works well for 100G on commodity silicon.

You will need an OVS and Kernel stack that support said offload tricks.
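
Roughly speaking - a sketch only, as exact package versions, kernel
requirements and service names vary by distro:

    # enable TC-based hardware offload in OVS (the hook the ASAP2-style
    # offloads plug into), then restart the daemon so it takes effect
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    systemctl restart openvswitch      # service name differs per distro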

Currently - doing slow-path through commodity x86 silicon you are
pretty much capped at 40gbit; so beyond a few use cases where you are,
say, writing to an NVMe array directly with minimal CPU interaction,
100G to nodes is relatively limited. I've read several relatively good
analyses which indicate that we are close to physical limits when we
hit around 130Gbit with Ethernet; but currently 40Gbit through
existing x86_64 architectures is about spot on.
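
To put that in per-packet terms (back-of-the-envelope arithmetic, not a
measurement):

    40 Gbps / (1500 B x 8)        ≈  3.3 Mpps -> ~900 cycles/packet on a 3 GHz core
    40 Gbps / (84 B on wire x 8)  ≈ 59.5 Mpps -> ~50 cycles/packet for 64-byte frames

so a single core runs out of cycles for slow-path lookups long before
small-packet line rate.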

25Gbit is a sweet spot because you can actually do processing of
things on the CPUs, and you can do it without vendor offload tricks
like ASAP2/DPDK which remove the kernel's ability to interact
meaningfully with the flow; anything higher and you might as well go
to purpose-built switch infra.
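
(For what it's worth, with the TC-based offloads you do retain a little
visibility - you can at least list what has been pushed down to the NIC,
roughly:

    ovs-appctl dpctl/dump-flows type=offloaded   # datapath flows offloaded to hardware
    tc -s filter show dev <uplink> ingress       # <uplink> is a placeholder port name

but the per-packet processing itself is no longer in the kernel's hands.)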

-Joel
On Fri, 2 Nov 2018 at 11:12, Shivaram Mysore  wrote:
>
> Hello,
> Has anyone used 100G with OvS?  I am specifically interested in
> experiences with the PE3100G2DQIR Server Adapter
> (https://www.silicom-usa.com/pr/server-adapters/networking-adapters/100-gigabit-ethernet-networking-server-adapters/pe3100g2dqir_server_adapter/)
> running on OvS and Ubuntu Linux.
>
> Thanks
>
> /Shivaram
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss