Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-20 Thread Akihiro Motoki
2016-04-20 14:53 GMT+09:00 Ihar Hrachyshka :
> Ian Wells wrote:
>
>>> Right. Note that custom MTU works out of the box only starting from
>>> Mitaka.
>>
>> It's been in from at least Kilo (give or take some bugfixes, it seems,
>> all of which deserve backporting).
>
>
> It never worked as you would expect, though indeed a lot of code to
> calculate MTU was in place.
>
>>> - It won’t work in OVS hybrid setup, where intermediate devices (qbr) will
>>> still have mtu = 1500, that will result in Jumbo frames dropped. We have
>>> backports to fix it in Liberty at: https://review.openstack.org/305782 and
>>> https://review.openstack.org/#/c/285710/
>>
>> Indeed, you can actively request the MTU per virtual network as you create
>> them, subject to segment_mtu and path_mtu indicating they're achievable.
>
>
> No. MTU is not available for setting during network creation. It’s only
> calculated as per get_mtu() [relying on path_mtu and physical_network_mtus
> and segment_mtu; note the latter is renamed since Mitaka into
> global_physnet_mtu].
>
>>
>> In this instance, configure your switches with a 9000 MTU and set
>> segment_mtu = path_mtu = 9000.  The virtual network MTU will then default to
>> 8950 for a VXLAN network (the biggest possible packet inside VXLAN in that
>> circumstance) and you can choose to set it to anything else below that
>> number as you net-create.  The MTU should be correctly advertised by DHCP
>> when set.
>>
>
> As I said, no, you can’t set it on network creation.
>
> Also, having the network using 8950 is not enough for Jumbo frames, because
> till Mitaka (and the next minor Liberty release) Nova was not using that
> value to bump MTU for intermediate devices participating in the hybrid
> bridge setup.

In my understanding, Nova honors the network_device_mtu configuration:
when we configure network_device_mtu to 8950, veth devices are configured
with MTU 8950, and the MTU of qbr bridges is set to 8950 as well. It seems
a Linux bridge adjusts its MTU to the largest MTU among its connected
interfaces.

I got the following results.

$ ip -o l | grep 9cdd22c-ec
11: qbra9cdd22c-ec:  mtu 8950 qdisc noqueue state UP mode DEFAULT group default \link/ether 5e:7f:e2:aa:91:71 brd ff:ff:ff:ff:ff:ff
12: qvoa9cdd22c-ec:  mtu 8950 qdisc pfifo_fast master ovs-system state UP mode DEFAULT group default qlen 1000\link/ether 56:7a:79:31:cc:aa brd ff:ff:ff:ff:ff:ff
13: qvba9cdd22c-ec:  mtu 8950 qdisc pfifo_fast master qbra9cdd22c-ec state UP mode DEFAULT group default qlen 1000\link/ether 5e:7f:e2:aa:91:71 brd ff:ff:ff:ff:ff:ff
14: tapa9cdd22c-ec:  mtu 8950 qdisc pfifo_fast master qbra9cdd22c-ec state UNKNOWN mode DEFAULT group default qlen 500\link/ether fe:16:3e:b1:a0:71 brd ff:ff:ff:ff:ff:ff


If we don't need per-network MTU, we can configure Nova to use a larger MTU.
Am I missing the context?
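
For reference, the Nova-side knob described here is a single config option; a minimal sketch (Kilo/Liberty-era option, later deprecated in favor of Neutron-provided MTUs; the value is an example):

```ini
# nova.conf on compute nodes
[DEFAULT]
network_device_mtu = 8950
```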

>> I hope you don't find you have to do what Akihiro suggests.  That was good
>> advice about three releases back but nowadays it actually breaks the code
>> that's there to deal with MTUs properly.
>
>
> Yes, indeed it’s not needed since Kilo to modify dnsmasq configuration to
> set the option. advertise_mtu is now True since Mitaka, and for earlier
> releases, you just set it to True explicitly.

Sorry for confusing you all, and thanks for correcting me.
It seems I was confused with an older version.
As others pointed out, we have advertise_mtu in Kilo.

Akihiro

>
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-20 Thread Ihar Hrachyshka

Ian Wells wrote:

>> Right. Note that custom MTU works out of the box only starting from
>> Mitaka.
>
> It's been in from at least Kilo (give or take some bugfixes, it seems,
> all of which deserve backporting).

It never worked as you would expect, though indeed a lot of code to
calculate MTU was in place.

>> - It won’t work in OVS hybrid setup, where intermediate devices (qbr)
>> will still have mtu = 1500, that will result in Jumbo frames dropped. We
>> have backports to fix it in Liberty at:
>> https://review.openstack.org/305782 and
>> https://review.openstack.org/#/c/285710/
>
> Indeed, you can actively request the MTU per virtual network as you
> create them, subject to segment_mtu and path_mtu indicating they're
> achievable.

No. MTU is not available for setting during network creation. It’s only
calculated as per get_mtu() [relying on path_mtu and physical_network_mtus
and segment_mtu; note the latter is renamed since Mitaka into
global_physnet_mtu].

> In this instance, configure your switches with a 9000 MTU and set
> segment_mtu = path_mtu = 9000.  The virtual network MTU will then default
> to 8950 for a VXLAN network (the biggest possible packet inside VXLAN in
> that circumstance) and you can choose to set it to anything else below
> that number as you net-create.  The MTU should be correctly advertised by
> DHCP when set.

As I said, no, you can’t set it on network creation.

Also, having the network using 8950 is not enough for Jumbo frames, because
till Mitaka (and the next minor Liberty release) Nova was not using that
value to bump MTU for intermediate devices participating in the hybrid
bridge setup.

> I hope you don't find you have to do what Akihiro suggests.  That was
> good advice about three releases back but nowadays it actually breaks the
> code that's there to deal with MTUs properly.

Yes, indeed it’s not needed since Kilo to modify dnsmasq configuration to
set the option. advertise_mtu is now True since Mitaka, and for earlier
releases, you just set it to True explicitly.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-19 Thread Rick Jones
I like larger MTUs, and used to call stateless offloads like TSO/GSO
"Poor man's Jumbo Frames", but if you can get the stateless offloads
going, you can go beyond the savings one gets from Jumbo Frames, because a
TSO/GSO/GRO "segment" can effectively end up being 32-64KB.


rick jones
PS - don't forget that *everything* in the same broadcast domain must 
have the same MTU


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-19 Thread Ian Wells
On 18 April 2016 at 04:33, Ihar Hrachyshka wrote:

> Akihiro Motoki wrote:
>
>> 2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka:
>>
>>> Sławek Kapłoński wrote:
>>>
>>>> Hello,
>>>>
>>>> What MTU have You got configured on VMs? I had issue with performance on
>>>> vxlan network with standard MTU (1500) but when I configured Jumbo
>>>> frames on vms and on hosts then it was much better.
>>>
>>> Right. Note that custom MTU works out of the box only starting from
>>> Mitaka.

It's been in from at least Kilo (give or take some bugfixes, it seems,
all of which deserve backporting).

>>> You can find details on how to configure Neutron for Jumbo frames in the
>>> official docs:
>>>
>>> http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
>>
>> If you want to advertise MTU using DHCP in releases before Mitaka,
>> you can prepare your custom dnsmasq config file like below and
>> set it to dhcp-agent dnsmasq_config_file config option.
>> You also need to set network_device_mtu config parameters appropriately.
>>
>> sample dnsmasq config file:
>> --
>> dhcp-option-force=26,8950
>> --
>> dhcp option 26 specifies MTU.
>
> Several notes:
>
> - In Liberty, above can be achieved by setting advertise_mtu in
> neutron.conf on nodes hosting DHCP agents.
> - You should set [ml2] segment_mtu on controller nodes to MTU value for
> underlying physical networks. After that, DHCP agents will advertise
> correct MTU for all new networks created after the configuration applied.
> - It won’t work in OVS hybrid setup, where intermediate devices (qbr) will
> still have mtu = 1500, that will result in Jumbo frames dropped. We have
> backports to fix it in Liberty at: https://review.openstack.org/305782
> and https://review.openstack.org/#/c/285710/
>

Indeed, you can actively request the MTU per virtual network as you create
them, subject to segment_mtu and path_mtu indicating they're achievable.

In this instance, configure your switches with a 9000 MTU and set
segment_mtu = path_mtu = 9000.  The virtual network MTU will then default
to 8950 for a VXLAN network (the biggest possible packet inside VXLAN in
that circumstance) and you can choose to set it to anything else below that
number as you net-create.  The MTU should be correctly advertised by DHCP
when set.
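
The 8950 figure is just the underlay MTU minus the VXLAN encapsulation overhead; a quick sketch of the arithmetic (assuming an IPv4 underlay and no inner VLAN tag):

```python
# VXLAN overhead carried inside the underlay MTU:
outer_ip = 20    # outer IPv4 header
udp = 8          # outer UDP header
vxlan = 8        # VXLAN header
inner_eth = 14   # inner Ethernet header
overhead = outer_ip + udp + vxlan + inner_eth  # 50 bytes total

underlay_mtu = 9000
tenant_mtu = underlay_mtu - overhead
print(tenant_mtu)  # 8950
```

(An IPv6 underlay would cost another 20 bytes, leaving 8930.)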

I hope you don't find you have to do what Akihiro suggests.  That was good
advice about three releases back but nowadays it actually breaks the code
that's there to deal with MTUs properly.
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-19 Thread Sławek Kapłoński
Hello,

We made it exactly in that way on vms :)

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Mon, 18 Apr 2016, Akihiro Motoki wrote:

> 2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka :
> > Sławek Kapłoński  wrote:
> >
> >> Hello,
> >>
> >> What MTU have You got configured on VMs? I had issue with performance on
> >> vxlan network with standard MTU (1500) but when I configured Jumbo
> >> frames on vms and on hosts then it was much better.
> >
> >
> > Right. Note that custom MTU works out of the box only starting from Mitaka.
> > You can find details on how to configure Neutron for Jumbo frames in the
> > official docs:
> >
> > http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
> 
> If you want to advertise MTU using DHCP in releases before Mitaka,
> you can prepare your custom dnsmasq config file like below and
> set it to dhcp-agent dnsmasq_config_file config option.
> You also need to set network_device_mtu config parameters appropriately.
> 
> sample dnsmasq config file:
> --
> dhcp-option-force=26,8950
> --
> dhcp option 26 specifies MTU.
> 
> Akihiro
> 
> 
> >
> >
> > Ihar
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-18 Thread Ihar Hrachyshka

Akihiro Motoki wrote:

> 2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka:
>
>> Sławek Kapłoński wrote:
>>
>>> Hello,
>>>
>>> What MTU have You got configured on VMs? I had issue with performance on
>>> vxlan network with standard MTU (1500) but when I configured Jumbo
>>> frames on vms and on hosts then it was much better.
>>
>> Right. Note that custom MTU works out of the box only starting from
>> Mitaka.
>> You can find details on how to configure Neutron for Jumbo frames in the
>> official docs:
>>
>> http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
>
> If you want to advertise MTU using DHCP in releases before Mitaka,
> you can prepare your custom dnsmasq config file like below and
> set it to dhcp-agent dnsmasq_config_file config option.
> You also need to set network_device_mtu config parameters appropriately.
>
> sample dnsmasq config file:
> --
> dhcp-option-force=26,8950
> --
> dhcp option 26 specifies MTU.


Several notes:

- In Liberty, above can be achieved by setting advertise_mtu in  
neutron.conf on nodes hosting DHCP agents.
- You should set [ml2] segment_mtu on controller nodes to MTU value for  
underlying physical networks. After that, DHCP agents will advertise  
correct MTU for all new networks created after the configuration applied.
- It won’t work in OVS hybrid setup, where intermediate devices (qbr) will  
still have mtu = 1500, that will result in Jumbo frames dropped. We have  
backports to fix it in Liberty at: https://review.openstack.org/305782 and  
https://review.openstack.org/#/c/285710/
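
Pulled together, the settings described in these notes would look roughly like this (a sketch with Liberty-era option names; the 9000 value is an example matching a Jumbo-frame underlay):

```ini
# neutron.conf on nodes hosting DHCP agents
[DEFAULT]
advertise_mtu = True

# ml2_conf.ini on controller nodes
# (segment_mtu was renamed to global_physnet_mtu in Mitaka)
[ml2]
segment_mtu = 9000
path_mtu = 9000
```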


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-18 Thread Akihiro Motoki
2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka :
> Sławek Kapłoński  wrote:
>
>> Hello,
>>
>> What MTU have You got configured on VMs? I had issue with performance on
>> vxlan network with standard MTU (1500) but when I configured Jumbo
>> frames on vms and on hosts then it was much better.
>
>
> Right. Note that custom MTU works out of the box only starting from Mitaka.
> You can find details on how to configure Neutron for Jumbo frames in the
> official docs:
>
> http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html

If you want to advertise MTU using DHCP in releases before Mitaka,
you can prepare your custom dnsmasq config file like below and
set it to dhcp-agent dnsmasq_config_file config option.
You also need to set network_device_mtu config parameters appropriately.

sample dnsmasq config file:
--
dhcp-option-force=26,8950
--
dhcp option 26 specifies MTU.

Akihiro


>
>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-18 Thread Ihar Hrachyshka

Sławek Kapłoński wrote:

> Hello,
>
> What MTU have You got configured on VMs? I had issue with performance on
> vxlan network with standard MTU (1500) but when I configured Jumbo
> frames on vms and on hosts then it was much better.

Right. Note that custom MTU works out of the box only starting from Mitaka.
You can find details on how to configure Neutron for Jumbo frames in the
official docs:

http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html


Ihar



Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-16 Thread Sławek Kapłoński
Hello,

What MTU have You got configured on VMs? I had issue with performance on
vxlan network with standard MTU (1500) but when I configured Jumbo
frames on vms and on hosts then it was much better.

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Fri, 15 Apr 2016, Rick Jones wrote:

> On 04/14/2016 07:10 PM, Kenny Ji-work wrote:
> >Hi all,
> >
> >In the environment of openstack kilo, I test the bandwidth in the scene
> >which VxLan being used. The result show that the vxlan can only support
> >up to 1 gbits bandwidth. Is this a bug or any else issue, or is there
> >some hotfix to solve the issue? Thank you for answering!
> 
> I'm glossing over some details, but broadly speaking, a single network flow
> cannot take advantage of more than one CPU in a system.  And while network
> speeds have been continuing to increase, per-core speeds haven't really gone
> up much over the last five to ten years.
> 
> So, to get "speed/link rate" networking stacks have become dependent on
> stateless offloads - Checksum Offload (CKO), TCP Segmentation Offload
> (TSO/GSO) and Generic Receive Offload (GRO).  And until somewhat recently,
> NICs did not offer stateless offloads for VxLAN-encapsulated traffic.  So,
> one effectively has a "dumb" NIC without stateless offloads.  And depending
> on what sort of processor you have, that limit could be down around 1
> Gbit/s.  Only some of the more recent 10GbE NICs offer stateless offload of
> VxLAN-encapsulated traffic, and similarly their more recent drivers and
> networking stacks.
> 
> In olden days, before the advent of stateless offloads there was a rule of
> thumb - 1 Mbit/s per MHz.  That was with "pure" bare-iron networking - no
> VMs, no encapsulation.  Even then it was a bit hand-wavy, and may have
> originated in the land of SPARC processors.  But hopefully it conveys the
> idea of what it means to lose the stateless offloads.
> 
> So, it would be good to know what sort of CPUs are involved (down to the
> model names and frequencies) as well as the NICs involved - again, full
> naming, not just the brand name.
> 
> And it is just a paranoid question, but is there any 1 Gbit/s networking in
> your setup at all?
> 
> happy benchmarking,
> 
> rick jones
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-15 Thread Rick Jones

On 04/14/2016 07:10 PM, Kenny Ji-work wrote:

> Hi all,
>
> In the environment of openstack kilo, I test the bandwidth in the scene
> which VxLan being used. The result show that the vxlan can only support
> up to 1 gbits bandwidth. Is this a bug or any else issue, or is there
> some hotfix to solve the issue? Thank you for answering!


I'm glossing over some details, but broadly speaking, a single network 
flow cannot take advantage of more than one CPU in a system.  And while 
network speeds have been continuing to increase, per-core speeds haven't 
really gone up much over the last five to ten years.


So, to get "speed/link rate", networking stacks have become dependent on 
stateless offloads - Checksum Offload (CKO), TCP Segmentation Offload 
(TSO/GSO) and Generic Receive Offload (GRO).  And until somewhat 
recently, NICs did not offer stateless offloads for VxLAN-encapsulated 
traffic.  So, one effectively has a "dumb" NIC without stateless 
offloads.  And depending on what sort of processor you have, that limit 
could be down around 1 Gbit/s.  Only some of the more recent 10GbE NICs 
offer stateless offload of VxLAN-encapsulated traffic, and similarly 
their more recent drivers and networking stacks.


In olden days, before the advent of stateless offloads there was a rule 
of thumb - 1 Mbit/s per MHz.  That was with "pure" bare-iron networking 
- no VMs, no encapsulation.  Even then it was a bit hand-wavy, and may 
have originated in the land of SPARC processors.  But hopefully it 
conveys the idea of what it means to lose the stateless offloads.
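
As a back-of-the-envelope sketch of that rule of thumb (the function and figures are purely illustrative, not measurements):

```python
# ~1 Mbit/s per MHz of core clock when stateless offloads are unavailable
# (a rough, historical estimate).
def estimated_gbits_without_offloads(core_clock_mhz: float) -> float:
    mbits_per_sec = core_clock_mhz * 1.0  # 1 Mbit/s per MHz
    return mbits_per_sec / 1000.0         # convert to Gbit/s

# A hypothetical ~1.2 GHz of usable per-core budget lands near the
# ~1 Gbit/s ceiling reported for the VXLAN path.
print(estimated_gbits_without_offloads(1200))  # 1.2
```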


So, it would be good to know what sort of CPUs are involved (down to the 
model names and frequencies) as well as the NICs involved - again, full 
naming, not just the brand name.


And it is just a paranoid question, but is there any 1 Gbit/s networking 
in your setup at all?


happy benchmarking,

rick jones


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-14 Thread Shinobu Kinjo
How did you test it out?
Would you elaborate on this more?

Cheers,
Shinobu

On Fri, Apr 15, 2016 at 11:10 AM, Kenny Ji-work  wrote:
> Hi all,
>
> In the environment of openstack kilo, I test the bandwidth in the scene
> which VxLan being used. The result show that the vxlan can only support up
> to 1 gbits bandwidth. Is this a bug or any else issue, or is there some
> hotfix to solve the issue? Thank you for answering!
>
> Sincerely,
> Kenny Ji
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-14 Thread Kenny Ji-work
Hi all,


In an OpenStack Kilo environment, I tested the bandwidth in a scenario where 
VXLAN is being used. The results show that VXLAN can only support up to 
1 Gbit/s of bandwidth. Is this a bug or some other issue, or is there a 
hotfix to solve it? Thank you for answering!


Sincerely,
Kenny Ji