Re: [ovirt-users] Storage network question

2015-07-31 Thread Alan Murrell
Actually, I have to make a correction to my earlier statement... the 
article I referred to was using bond mode 0 (balance-rr) and not mode 1 as 
I had indicated.
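
For what it's worth, the numbers in that article line up with the correction: 
320MB/s is roughly 320 x 8 = 2560Mbit/s, i.e. about 2.5Gbit/s, which is more 
than a single 1Gbit link can carry. Since mode 1 (active-backup) only ever 
uses one link at a time, a result like that really only makes sense with mode 
0 (balance-rr) striping frames across all four NICs.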


I know mode 0 is not one of the officially supported options in the oVirt 
interface (though it can be specified under Custom) and is probably not 
typically recommended, but if set up correctly, it seems it would be perfect 
for the storage (and migration?) network/bonds.
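
For anyone curious, a rough sketch of what that would look like (interface and 
bond names here are purely illustrative): in the Setup Host Networks dialog 
you pick "Custom" as the bonding mode and type the option string in by hand, 
something like

    mode=balance-rr miimon=100

which is the same string that would go into BONDING_OPTS if you were writing 
the ifcfg file yourself on a plain RHEL/CentOS box:

    # /etc/sysconfig/network-scripts/ifcfg-bond1  (illustrative only)
    DEVICE=bond1
    BONDING_OPTS="mode=balance-rr miimon=100"
    BOOTPROTO=none
    ONBOOT=yes

One caveat with balance-rr: the switch ports usually have to be grouped into a 
static ether-channel (no LACP), and striping frames across links can cause 
out-of-order TCP segments, so it is worth testing before putting storage on it.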


-Alan


On 30/07/2015 10:41 PM, Patrick Russell wrote:

We just changed this up a little this week. We split our traffic into 2 bonds, 
10Gb in mode 1, as follows:

Guest vlans, management vlan (including some NFS storage) - bond0
Migration layer 2 only vlan - bond1

This allowed us to tweak vdsm.conf to speed up migrations without impacting 
management and guest traffic. As a result we’re currently pushing about 5Gb/s 
on bond1 when we do live migrations between hosts.

-Patrick


On Jul 28, 2015, at 1:34 AM, Alan Murrell li...@murrell.ca wrote:

Hi Patrick,

On 27/07/2015 7:25 AM, Patrick Russell wrote:

We currently have all our nics in the same bond. So we have guest
traffic, management,  and storage running over the same physical
nics, but different vlans.


Which bond mode do you use, out of curiosity?  Not sure I would go to this 
extreme, though; I would still want the physical isolation of Management vs. 
network/VM traffic vs. storage, but I am just curious which bonding mode you use.

Modes 1 and 5 would seem to be the best ones, as far as maximising throughput.  
I read an article just the other day where a guy detailed how he bonded four 
1Gbit NICs in mode 1 (with each on a different VLAN) and was able to achieve 
320MB/s throughput to NFS storage.

As far as the storage question, I like to put other storage on the network 
(smaller NAS devices, maybe SANs for other storage) and would want the VMs to 
be able to get at those.  Being able to use a NIC to carry VM traffic for 
storage as well as for host access to storage would cut down on the number of 
NICs I would need to have in each node.

-Alan







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage network question

2015-07-30 Thread Patrick Russell
We just changed this up a little this week. We split our traffic into 2 bonds, 
10Gb in mode 1, as follows:

Guest vlans, management vlan (including some NFS storage) - bond0
Migration layer 2 only vlan - bond1

This allowed us to tweak vdsm.conf to speed up migrations without impacting 
management and guest traffic. As a result we’re currently pushing about 5Gb/s 
on bond1 when we do live migrations between hosts.
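
(For reference, the knobs involved live in /etc/vdsm/vdsm.conf. A rough sketch 
of the sort of tuning being described here (the exact key names and defaults 
vary between VDSM versions, so check the comments in your own vdsm.conf rather 
than copying these values blindly):

    [vars]
    # Per-migration bandwidth cap, in MiB/s; raising it lets a single live
    # migration use more of the dedicated bond.
    migration_max_bandwidth = 500
    # How many outgoing live migrations a host will run at the same time.
    max_outgoing_migrations = 5

vdsmd needs a restart afterwards for the change to take effect.)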

-Patrick

 On Jul 28, 2015, at 1:34 AM, Alan Murrell li...@murrell.ca wrote:
 
 Hi Patrick,
 
 On 27/07/2015 7:25 AM, Patrick Russell wrote:
 We currently have all our nics in the same bond. So we have guest
 traffic, management,  and storage running over the same physical
 nics, but different vlans.
 
 Which bond mode do you use, out of curiosity?  Not sure I would go to this 
 extreme, though; I would still want the physical isolation of Management vs. 
 network/VM traffic vs. storage, but I am just curious which bonding mode you use.
 
 Modes 1 and 5 would seem to be the best ones, as far as maximising 
 throughput.  I read an article just the other day where a guy detailed how he 
 bonded four 1Gbit NICs in mode 1 (with each on a different VLAN) and was able 
 to achieve 320MB/s throughput to NFS storage.
 
 As far as the storage question, I like to put other storage on the network 
 (smaller NAS devices, maybe SANs for other storage) and would want the VMs to 
 be able to get at those.  Being able to use a NIC to carry VM traffic for 
 storage as well as for host access to storage would cut down on the number of 
 NICs I would need to have in each node.
 
 -Alan
 
 
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage network question

2015-07-28 Thread Alan Murrell

Hi Patrick,

On 27/07/2015 7:25 AM, Patrick Russell wrote:

We currently have all our nics in the same bond. So we have guest
traffic, management,  and storage running over the same physical
nics, but different vlans.


Which bond mode do you use, out of curiosity?  Not sure I would go to 
this extreme, though; I would still want the physical isolation of 
Management vs. network/VM traffic vs. storage, but I am just curious which 
bonding mode you use.


Modes 1 and 5 would seem to be the best ones, as far as maximising 
throughput.  I read an article just the other day where a guy detailed 
how he bonded four 1Gbit NICs in mode 1 (with each on a different VLAN) 
and was able to achieve 320MB/s throughput to NFS storage.
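
(For reference, the Linux bonding modes being discussed: mode 0 is balance-rr, 
which stripes frames across all links; mode 1 is active-backup, which uses 
only one link at a time and provides failover rather than extra throughput; 
mode 4 is 802.3ad/LACP, which aggregates links but needs switch support; modes 
5 and 6 are balance-tlb and balance-alb, which balance traffic without special 
switch configuration.)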


As far as the storage question, I like to put other storage on the 
network (smaller NAS devices, maybe SANs for other storage) and would 
want the VMs to be able to get at those.  Being able to use a NIC to carry 
VM traffic for storage as well as for host access to storage would cut down 
on the number of NICs I would need to have in each node.


-Alan



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage network question

2015-07-27 Thread Patrick Russell
Alan,

We currently have all our nics in the same bond. So we have guest traffic, 
management,  and storage running over the same physical nics, but different 
vlans. 
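
(To make that concrete for the original question: when a VLAN-tagged logical 
network that is flagged as a VM network gets attached to a NIC or bond and is 
also given a host IP address, what VDSM ends up building on the host is 
essentially the layout below; names, VLAN ID and addresses are purely 
illustrative:

    # VLAN device on top of the bond
    # /etc/sysconfig/network-scripts/ifcfg-bond0.30
    DEVICE=bond0.30
    VLAN=yes
    BRIDGE=storage
    ONBOOT=yes

    # Bridge that both the host and the VMs' vNICs plug into
    # /etc/sysconfig/network-scripts/ifcfg-storage
    DEVICE=storage
    TYPE=Bridge
    BOOTPROTO=none
    IPADDR=192.168.30.11    # the host's own address on the storage network
    NETMASK=255.255.255.0
    ONBOOT=yes

The host mounts NFS/iSCSI/Gluster via the bridge's IP, and any VM given a vNIC 
on that logical network gets a port on the same bridge, so host and guests 
share the physical NIC.)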

Hope this helps,
Patrick

 On Jul 26, 2015, at 4:38 AM, Alan Murrell li...@murrell.ca wrote:
 
 If I am using a NIC on my host on the storage network to access storage
 (NFS, iSCSI, Gluster, etc.), is there any issue with allowing VMs to be
 assigned to it so they can access the same storage network?
 
 (the VMs would have a NIC added specifically for this, of course)
 
 Basically, unlike in VMware and Hyper-V, in oVirt can the same NIC be
 used for host and VM access to the storage network?
 
 Thanks! :-)
 
 -Alan
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Storage network question

2015-07-26 Thread Alan Murrell
If I am using a NIC on my host on the storage network to access storage
(NFS, iSCSI, Gluster, etc.), is there any issue with allowing VMs to be
assigned to it so they can access the same storage network?

(the VMs would have a NIC added specifically for this, of course)

Basically, unlike in VMware and Hyper-V, in oVirt can the same NIC be
used for host and VM access to the storage network?

Thanks! :-)

-Alan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users