Re: [ovirt-users] 1. Re: ??: bond mode balance-alb (Jorick Astrego)

2014-12-31 Thread Christopher Young
I'm a little confused by your explanation of 'just do the bonding at the
guest level'.  I apologize for my ignorance here, but I'm trying to prepare
myself for a similar configuration where I'm going to need to get as much
bandwidth out of the bond as possible.  How would bonding multiple
interfaces at the VM level provide a better balance than at the hypervisor
level?  Wouldn't the traffic more or less end up traveling the same path
regardless of the virtual interface?

I'm trying to plan out an oVirt implementation where I would like to bond
multiple interfaces on my hypervisor nodes for balancing/redundancy, and
I'm very curious what others have done with Cisco hardware (in my case, a
pair of 3650s with MEC) in order to get the best solution.

I will read through these threads and see if I can gain a better
understanding, but if you happen to have an easy explanation that would
help me understand, I would greatly appreciate it.


Re: [ovirt-users] 1. Re: ??: bond mode balance-alb (Jorick Astrego)

2014-12-30 Thread Blaster

Thanks for your thoughts.  The problem is, most of the data is transmitted from 
a couple apps to a couple systems.  The chance of a hash collision (i.e., most 
of the data going out the same interface anyway) is quite high.  On Solaris, I 
just created two physical interfaces each with their own IP, and bound the apps 
to the appropriate interfaces.  This worked great.  Imagine my surprise when I 
discovered this doesn't work on Linux, which led to a crash course in weak host models.
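
(For anyone else who hits this: from what I've read since, the usual Linux 
workaround for the weak host model is source-based policy routing, so replies 
leave on the interface that owns the source address.  Roughly along these 
lines, untested here, with interface names, addresses and table numbers as 
placeholders:

    # give the second NIC its own routing table, selected by source address
    ip route add 192.168.1.0/24 dev eth1 src 192.168.1.20 table 101
    ip route add default via 192.168.1.1 dev eth1 table 101
    ip rule add from 192.168.1.20 table 101

The apps still bind to the address they should use, as on Solaris; the rules 
just make sure the return traffic goes back out the matching NIC.)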

Interesting that no one commented on my thought to just do the bonding at the 
guest level (and use balance-alb) instead of at the hypervisor level.  Some 
ESXi experts I have talked to say this is actually the preferred method with 
ESXi and not to do it at the hypervisor level, as the VM knows better than 
VMware.

Or is the bonding mode issue with balance-alb/tlb more with the Linux TCP stack 
itself and not with oVirt and VDSM?
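
To make the guest-level idea concrete: the VM would get two vNICs on the same 
logical network, and the bond would be built inside the guest exactly as on a 
physical box.  A rough RHEL/Fedora-style sketch, untested, with device names 
and addresses made up:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (inside the guest)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=balance-alb miimon=100"
    BOOTPROTO=none
    IPADDR=192.168.1.50
    PREFIX=24
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for ifcfg-eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

Whether that actually behaves better than a bond on the host is exactly what 
I'm wondering.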



On Dec 30, 2014, at 4:34 AM, Nikolai Sednev nsed...@redhat.com wrote:

 Mode 2 will do the job best for you, since your switch only supports static 
 LAG; I'd advise using xmit_hash_policy=layer3+4, so you'll get better traffic 
 distribution for your DC.
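 
 On the host side that boils down to bond options along these lines (just a 
 sketch, adjust to your own NIC names; if I remember the dialog correctly you 
 can also type the options into the bond's custom mode in Setup Host Networks):
 
     # /etc/sysconfig/network-scripts/ifcfg-bond0, mode 2 = balance-xor
     BONDING_OPTS="mode=2 xmit_hash_policy=layer3+4 miimon=100"
 
 together with a matching static LAG (no LACP) on the two switch ports.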
 
 
 Thanks in advance.
 
 Best regards,
 Nikolai
 
 Nikolai Sednev
 Senior Quality Engineer at Compute team
 Red Hat Israel
 34 Jerusalem Road,
 Ra'anana, Israel 43501
 
 Tel:   +972   9 7692043
 Mobile: +972 52 7342734
 Email: nsed...@redhat.com
 IRC: nsednev
 
 From: users-requ...@ovirt.org
 To: users@ovirt.org
 Sent: Tuesday, December 30, 2014 2:12:58 AM
 Subject: Users Digest, Vol 39, Issue 173
 
 
 
 Today's Topics:
 
1. Re:  ??: bond mode balance-alb (Jorick Astrego)
2. Re:  ??: bond mode balance-alb (Jorick Astrego)
3.  HostedEngine Deployment Woes (Mikola Rose)
 
 
 --
 
 Message: 1
 Date: Mon, 29 Dec 2014 20:13:40 +0100
 From: Jorick Astrego j.astr...@netbulae.eu
 To: users@ovirt.org
 Subject: Re: [ovirt-users] ??: bond mode balance-alb
 Message-ID: 54a1a7e4.90...@netbulae.eu
 Content-Type: text/plain; charset=utf-8
 
 
  On 12/29/2014 12:56 AM, Dan Kenigsberg wrote:
  > On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
  >> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
  >>> Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
  >>> https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
  >>
  >> Dan,
  >>
  >> What is bad about these modes that oVirt can't use them?
  > I can only quote jpirko's words from the link above:
  >
  > Do not use tlb or alb in bridge, never! It does not work, that's it. The reason
  > is it mangles source macs in xmit frames and arps. When it is possible, just
  > use mode 4 (lacp). That should be always possible because all enterprise
  > switches support that. Generally, for 99% of use cases, you *should* use mode
  > 4. There is no reason to use other modes.
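
  For reference, a mode 4 bond on the host boils down to something like

      BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=slow xmit_hash_policy=layer3+4"

  (a sketch with typical option values, nothing oVirt-specific), plus a dynamic 
  LACP port-channel on the switch side, and the switch side is exactly where it 
  falls down here.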
 
 This switch is more of an office switch and only supports part of the
 802.3ad standard:
 
 
  PowerConnect 2824

  Scalable from small workgroups to dense access solutions, the 2824
  offers 24-port flexibility plus two combo small form-factor
  pluggable (SFP) ports for connecting the switch to other networking
  equipment located beyond the 100 m distance limitations of copper
  cabling.
 
 Industry-standard link aggregation adhering to IEEE 802.3ad
 standards (static support only, LACP not supported)
 
 
  So the only way to have some kind of bonding without buying more
  expensive switches is using balance-rr (mode=0), balance-xor (mode=2)
  or broadcast (mode=3).
   I just tested mode 4, and LACP with Fedora 20 appears not to be
   compatible with the LAG mode on my Dell 2824.
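
  (A quick way to confirm that, something like:

      cat /proc/net/bonding/bond0

  With mode 4 there is an "802.3ad info" section in there, and as far as I know 
  the partner MAC stays at 00:00:00:00:00:00 when the switch side never speaks 
  LACP, which is what a static-only LAG will do.)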
 
   Would there be any issues with bringing two NICs into the VM and doing
   balance-alb at the guest level?
 
 Kind regards,
 
 Jorick Astrego
 
 
 
 Met vriendelijke groet, With kind regards,
 
 Jorick Astrego
 
 Netbulae Virtualization Experts 
 
 
 
  Tel: 053 20 30 270    i...@netbulae.eu    Staalsteden 4-3A     KvK 08198180
  Fax: 053 20 30 271    www.netbulae.eu     7547 TA Enschede    BTW NL821234584B01
 
 
 