On 09/01/2012 17:19, Wyborny, Carolyn wrote:
>
>
>> -----Original Message-----
>> From: Chris Boot [mailto:bo...@bootc.net]
>> Sent: Wednesday, January 04, 2012 8:58 AM
>> To: Wyborny, Carolyn
>> Cc: Nicolas de Pesloüan; netdev; e1000-devel@lists.sourceforge.net
>> Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>> promiscuous mode
>>
>> On 04/01/2012 16:00, Wyborny, Carolyn wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: netdev-ow...@vger.kernel.org [mailto:netdev-
>> ow...@vger.kernel.org]
>>>> On Behalf Of Wyborny, Carolyn
>>>> Sent: Tuesday, January 03, 2012 3:24 PM
>>>> To: Chris Boot; Nicolas de Pesloüan
>>>> Cc: netdev; e1000-devel@lists.sourceforge.net
>>>> Subject: RE: igb + balance-rr + bridge + IPv6 = no go without
>>>> promiscuous mode
>>>>
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: netdev-ow...@vger.kernel.org [mailto:netdev-
>>>> ow...@vger.kernel.org]
>>>>> On Behalf Of Chris Boot
>>>>> Sent: Tuesday, December 27, 2011 1:53 PM
>>>>> To: Nicolas de Pesloüan
>>>>> Cc: netdev
>>>>> Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>>>>> promiscuous mode
>>>>>
>>>>> On 23/12/2011 10:56, Chris Boot wrote:
>>>>>> On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
>>>>>>> [ Forwarded to netdev, because two previous e-mails were
>>>>>>> erroneously sent in HTML ]
>>>>>>>
>>>>>>> On 23/12/2011 11:15, Chris Boot wrote:
>>>>>>>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 23 Dec 2011 10:42, "Chris Boot" <bo...@bootc.net> wrote:
>>>>>>>>>>
>>>>>>>>>> Hi folks,
>>>>>>>>>>
>>>>>>>>>> As per Eric Dumazet and Dave Miller, I'm opening up a separate
>>>>>>>>>> thread on this issue.
>>>>>>>>>>
>>>>>>>>>> I have two identical servers in a cluster for running KVM virtual
>>>>>>>>>> machines. They each have a single connection to the Internet
>>>>>>>>>> (irrelevant for this) and two gigabit connections between each
>>>>>>>>>> other for cluster replication, etc. These two connections are in
>>>>>>>>>> a balance-rr bonded connection, which is itself a member of a
>>>>>>>>>> bridge that the VMs attach to. I'm running v3.2-rc6-140-gb9e26df
>>>>>>>>>> on Debian Wheezy.
>>>>>>>>>>
>>>>>>>>>> When the bridge is brought up, IPv4 works fine but IPv6 does not.
>>>>>>>>>> I can use neither the automatic link-local on the bridge nor the
>>>>>>>>>> static global address I assign. Neither machine can perform
>>>>>>>>>> neighbour discovery over the link until I put the bond members
>>>>>>>>>> (eth0 and eth1) into promiscuous mode. I can do this either with
>>>>>>>>>> tcpdump or 'ip link set dev ethX promisc on', and this is enough
>>>>>>>>>> to make the link spring to life.
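>>>>>>>>>>
>>>>>>>>>> For reference, roughly the commands I mean (eth0/eth1 being the
>>>>>>>>>> bond slaves); either of these is enough to wake the link up:
>>>>>>>>>>
>>>>>>>>>>   tcpdump -i eth0 -n icmp6           # tcpdump flips eth0 into promisc
>>>>>>>>>>   ip link set dev eth0 promisc on    # or set promisc explicitly
>>>>>>>>>>   ip link set dev eth1 promisc on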
>>>>>>>>>
>>>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>>>> bonding members to promisc too. And inserting bond0 into br0
>>>>>>>>> should set bond0 to promisc... So everything should be in promisc
>>>>>>>>> mode anyway, but you shouldn't have to do it by hand.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Sorry, I should have added that I tried this. Setting bond0 or br0
>>>>>>>> to promisc has no effect. I discovered this by running tcpdump on
>>>>>>>> br0 first, then bond0, then eventually each bond member in turn.
>>>>>>>> Only at the last stage did things jump to life.
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This cluster is not currently live so I can easily test patches
>>>>>>>>>> and various configurations.
>>>>>>>>>
>>>>>>>>> Can you try to remove the bonding part, connecting eth0 and eth1
>>>>>>>>> directly to br0, and see if it works better? (This is a test only.
>>>>>>>>> I perfectly understand that you would lose balance-rr in this
>>>>>>>>> setup.)
>>>>>>>>>
>>>>>>>>
>>>>>>>> Good call. Let's see.
>>>>>>>>
>>>>>>>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced
>>>>>>>> promisc mode, then manually built a br0 with only eth0 in it so I
>>>>>>>> didn't cause a network loop. Adding eth0 to br0 did not make it go
>>>>>>>> into promisc mode, but IPv6 does work over this setup. I also made
>>>>>>>> sure 'ip -6 neigh' was empty on both machines before I started.
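>>>>>>>>
>>>>>>>> Roughly what that test looked like (a sketch from memory, not the
>>>>>>>> exact commands I typed):
>>>>>>>>
>>>>>>>>   ifdown br0 && ifdown bond0
>>>>>>>>   ip link set dev eth0 promisc off
>>>>>>>>   ip link set dev eth1 promisc off
>>>>>>>>   brctl addbr br0
>>>>>>>>   brctl addif br0 eth0        # eth0 only, to avoid a loop
>>>>>>>>   ip link set dev eth0 up
>>>>>>>>   ip link set dev br0 up
>>>>>>>>   ip -6 neigh show            # confirm the neighbour cache is empty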
>>>>>>>>
>>>>>>>> I then decided to try the test with just bond0 in balance-rr mode.
>>>>>>>> Once again I took everything down and ensured no promisc mode and
>>>>>>>> an empty 'ip -6 neigh'. I noticed bond0 wasn't getting a link-local
>>>>>>>> address, and I found that for some reason
>>>>>>>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers,
>>>>>>>> so I set it to 0. That brought things to life.
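>>>>>>>>
>>>>>>>> i.e. something along these lines on each server:
>>>>>>>>
>>>>>>>>   cat /proc/sys/net/ipv6/conf/bond0/disable_ipv6   # was 1
>>>>>>>>   sysctl -w net.ipv6.conf.bond0.disable_ipv6=0
>>>>>>>>
>>>>>>>> after which things came to life.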
>>>>>>>>
>>>>>>>> So then I put it all back together again and it didn't work. I
>>>>>>>> once again noticed disable_ipv6 was set on the bond0 interfaces,
>>>>>>>> now part of the bridge. Toggling this on the _bond_ interface made
>>>>>>>> things work again.
>>>>>>>>
>>>>>>>> What's setting disable_ipv6? Should this be having an impact if
>>>>>>>> the port is part of a bridge?
>>>>>>
>>>>>> Hmm, as a further update... I brought up my VMs on the bridge with
>>>>>> disable_ipv6 turned off. The VMs on one host couldn't see what was on
>>>>>> the other side of the bridge (on the other server) until I turned
>>>>>> promisc back on manually. So it's not entirely disable_ipv6's fault.
>>>>>
>>>>> Hi,
>>>>>
>>>>> I don't want this to get lost around the Christmas break, so I'm just
>>>>> resending it. I'm still seeing the same behaviour as before.
>>>>>
>>>>>   From above:
>>>>>
>>>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>>>> bonding members to promisc too. And inserting bond0 into br0
>>>>>>>>> should set bond0 to promisc... So everything should be in promisc
>>>>>>>>> mode anyway, but you shouldn't have to do it by hand.
>>>>>
>>>>> This definitely doesn't happen, at least according to
>>>>> 'ip link show | grep PROMISC'.
>>>>>
>>>>> Chris
>>>>>
>>>>> --
>>>>> Chris Boot
>>>>> bo...@bootc.net
>>>>
>>>> Sorry for the delay in responding. I'm not sure what is going on here,
>>>> and I'm not our bonding expert, who is still out on holidays. However,
>>>> we'll try to reproduce this. When I get some more advice, I may ask
>>>> for some more data.
>>>>
>>>> Thanks,
>>>>
>>>> Carolyn
>>>> Carolyn Wyborny
>>>> Linux Development
>>>> LAN Access Division
>>>> Intel Corporation
>>>
>>> Hello,
>>>
>>> Check the ip_forward configuration on your bridge to make sure it's
>>> configured to forward IPv6 packets. Please also send the contents of
>>> /etc/modprobe.d/bonding.conf and your routing table, and we'll continue
>>> to work on this.
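>>>
>>> (For IPv6 the relevant knobs would be the per-interface forwarding
>>> sysctls, something like:
>>>
>>>   sysctl net.ipv6.conf.all.forwarding
>>>   sysctl net.ipv6.conf.br0.forwarding
>>>
>>> and any bonding module options would normally live in
>>> /etc/modprobe.d/bonding.conf.)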
>>
>> Hi Carolyn,
>>
>> Surely ip_forward only needs to be set if I want to _route_ IPv6, rather
>> than simply have packets go through the bridge untouched? I don't want
>> the host to route IPv6 at all. Setting it also has the unintended effect
>> of disabling SLAAC, which I wish to keep enabled.
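>>
>> (As far as I know, once net.ipv6.conf.br0.forwarding is set to 1, the
>> kernel ignores router advertisements on br0 unless accept_ra is raised
>> to 2, e.g.:
>>
>>   sysctl -w net.ipv6.conf.br0.accept_ra=2
>>
>> and I'd rather not go down that road just for bridging.)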
>>
>> I don't have a /etc/modprobe.d/bonding.conf; I'm on Debian and configure
>> my bonding and bridging with the configuration I pasted in my original
>> email. Here it is again:
>>
>>> iface bond0 inet manual
>>>          slaves eth0 eth1
>>>          bond-mode balance-rr
>>>          bond-miimon 100
>>>          bond-downdelay 200
>>>          bond-updelay 200
>>>
>>> iface br0 inet static
>>>          address [snip]
>>>          netmask 255.255.255.224
>>>          bridge_ports bond0
>>>          bridge_stp off
>>>          bridge_fd 0
>>>          bridge_maxwait 5
>>> iface br0 inet6 static
>>>          address [snip]
>>>          netmask 64
>>
>> Despite the static IPv6 address, I use SLAAC to grab a default gateway.
>>
>> My IPv6 routing table:
>>
>> 2001:8b0:49:200::/64 dev br0  proto kernel  metric 256  expires 2592317sec
>> fe80::/64 dev br0  proto kernel  metric 256
>> fe80::/64 dev bond1  proto kernel  metric 256
>> fe80::/64 dev vnet0  proto kernel  metric 256
>> fe80::/64 dev vnet1  proto kernel  metric 256
>> fe80::/64 dev vnet2  proto kernel  metric 256
>> fe80::/64 dev vnet3  proto kernel  metric 256
>> fe80::/64 dev vnet4  proto kernel  metric 256
>> default via fe80::5652:ff:fe16:15a0 dev br0  proto kernel  metric 1024  expires 1793sec
>>
>> HTH,
>> Chris
>>
>> --
>> Chris Boot
>> bo...@bootc.net
>
> This does seem more like a bonding problem than a driver problem, and we
> haven't seen a lot of IPv6 use with bonding, so we may be in new territory
> here. Do you have any other adapters, ours or anyone else's, to try the
> same setup with and see if the problem persists?

Unfortunately these machines are now in production use, and I can work 
around the problem by manually setting promiscuous mode. I can test 
various software configurations and kernels, but I can't change the 
hardware around at all.
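
For reference, the workaround is just forcing promisc on the bond slaves at
boot. A sketch of how that might look in /etc/network/interfaces (the
post-up lines are the addition, the rest is the stanza from my earlier
mail):

iface bond0 inet manual
        slaves eth0 eth1
        bond-mode balance-rr
        bond-miimon 100
        bond-downdelay 200
        bond-updelay 200
        post-up ip link set dev eth0 promisc on
        post-up ip link set dev eth1 promisc on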

> There is a situation between the bonding driver and all our drivers where
> the promiscuous setting is not passed down to the driver. We are not sure
> whether this is expected or not, and it has not been addressed yet, but it
> explains why promiscuous mode has to be set directly on the device.

If this is a known problem, it would seem that is indeed the culprit. Can 
someone confirm that adding a port to a bridge should set promisc mode on 
the port? And that setting promisc mode on a bond should set the ports 
within the bond to promisc as well?

Thanks,
Chris

-- 
Chris Boot
bo...@bootc.net
