Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-23 Thread Karli Sjöberg
On Thu, 2017-02-23 at 12:47, Karli Sjöberg wrote:
> On Thu, 2017-02-23 at 11:14, Yaniv Kaul wrote:
> > 
> > 
> > On Thu, Feb 23, 2017 at 1:11 PM Gianluca Cecchi wrote:
> > > On Sat, Feb 18, 2017 at 1:35 PM, Karli Sjöberg wrote:
> > > > > 
> > > > > Thanks for your answer, K!
> > > > > So you mean to make a single bond composed of all 4 network
> > > > adapters and put all the networks on it, including ovirtmgmt and
> > > > such, through VLANs?
> > > > > How do you configure 802.3ad on 4 adapters? How many switches
> > > > do you have to connect to, from these 4 adapters? Or do you use
> > > > round-robin bonding (but I presume this bonding mode is not
> > > > supported in oVirt)?
> > > > > Thanks!
> > > > 
> > > > Well, in our case, we have two clustered switches from C-
> > > > company
> > > > so two NICs in each. And then, yeah, different VLANs for every
> > > > network on top of the same bond. Works like a charm:)
> > > > /K
> > > > 
> > > 
> > > Hello K,
> > > coming back to the question, I have only 1 VLAN dedicated to
> > > Netapp
> > > NFS infrastructure and I can't manage to add another.
> > > Can I still use 802.3ad on 4 adapters anyway, with some
> > > particular option for this bonding mode on the oVirt and switch
> > > side?
> > > I think that on the network side we have 4 x 3130 clustered Cisco
> > > switches, so in theory I can use this bonding mode.
> > > I don't know if I can configure more than one IP on the same LAN
> > > inside one Netapp SVM in order to expose more than one NFS share.
> > > And whether 802.3ad mode would guarantee that 2 different network
> > > adapters are used when pointing to 2 different IPs on the same
> > > network for the shares.
> > > 
> > 
> > And multiple mount points, hence multiple SDs? They might, or might
> > not, use the same physical NIC.
> > Y.
> 
> In my experience, they will use the same NIC. You need at least two
> addresses in different subnets for 802.3ad to load balance between
> the interfaces.
> 
> /K

P.S. The only thing I can say for sure is that proper load balancing
will occur with separate VLAN interfaces and addresses in different
subnets. That much I know, because that's how we did it:)

Just using addresses in different subnets may work too, I just don't
know. Please try and post your results:)

/K
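
For whoever tries it, a quick way to see whether the traffic actually
spreads is to watch the per-NIC transmit counters while copying to both
shares. Just a sketch; em1..em4 are placeholder NIC names:

    # per-slave TX/RX counters for two of the bond members
    ip -s link show em1
    ip -s link show em2

    # or watch the raw byte counters of all four NICs update live
    watch -n1 'for n in em1 em2 em3 em4; do
        printf "%s " "$n"; cat /sys/class/net/$n/statistics/tx_bytes; done'

If only one counter moves, all the flows have hashed onto the same slave.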

> 
> >  
> > > And do I have to configure bonding parameters in oVirt too, in
> > > case any particular one is needed?
> > >  
> > > Thanks again,
> > > Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-23 Thread Karli Sjöberg
On Thu, 2017-02-23 at 11:14, Yaniv Kaul wrote:
> 
> 
> On Thu, Feb 23, 2017 at 1:11 PM Gianluca Cecchi wrote:
> > On Sat, Feb 18, 2017 at 1:35 PM, Karli Sjöberg wrote:
> > > >
> > > > Thanks for your answer, K!
> > > > So you mean to make a single bond composed of all 4 network
> > > adapters and put all the networks on it, including ovirtmgmt and
> > > such, through VLANs?
> > > > How do you configure 802.3ad on 4 adapters? How many switches
> > > do you have to connect to, from these 4 adapters? Or do you use
> > > round-robin bonding (but I presume this bonding mode is not
> > > supported in oVirt)?
> > > > Thanks!
> > > Well, in our case, we have two clustered switches from C-company
> > > so two NICs in each. And then, yeah, different VLANs for every
> > > network on top of the same bond. Works like a charm:)
> > > /K
> > > 
> > 
> > Hello K,
> > coming back to the question, I have only 1 VLAN dedicated to Netapp
> > NFS infrastructure and I can't manage to add another.
> > Can I still use 802.3ad on 4 adapters anyway, with some particular
> > option for this bonding mode on the oVirt and switch side?
> > I think that on the network side we have 4 x 3130 clustered Cisco
> > switches, so in theory I can use this bonding mode.
> > I don't know if I can configure more than one IP on the same LAN
> > inside one Netapp SVM in order to expose more than one NFS share.
> > And whether 802.3ad mode would guarantee that 2 different network
> > adapters are used when pointing to 2 different IPs on the same
> > network for the shares.
> > 
> 
> And multiple mount points, hence multiple SDs? They might, or might
> not, use the same physical NIC.
> Y.

In my experience, they will use the same NIC. You need at least two
addresses in different subnets for 802.3ad to load balance between the
interfaces.

/K
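
A general bonding note, beyond what the thread itself states: in 802.3ad
mode the kernel picks the outgoing slave according to the
xmit_hash_policy option, and the default layer2 policy hashes on MAC
addresses only. Two target IPs that sit on the same filer port share a
MAC, so they always hash onto the same slave, which matches the
behaviour described above. Assuming RHEL-style ifcfg files, something
like this may spread flows even within one subnet, since layer3+4 also
hashes on IP addresses and ports:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (fragment)
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    # verify the active policy and the slaves afterwards
    cat /proc/net/bonding/bond0

No guarantees: the hash can still put several flows on one slave, so
testing is the only way to be sure.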

>  
> > And do I have to configure bonding parameters in oVirt too, in
> > case any particular one is needed?
> >  
> > Thanks again,
> > Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-23 Thread Yaniv Kaul
On Thu, Feb 23, 2017 at 1:11 PM Gianluca Cecchi wrote:

> On Sat, Feb 18, 2017 at 1:35 PM, Karli Sjöberg wrote:
>
>
> >
> > Thanks for your answer, K!
> > So you mean to make a single bond composed of all 4 network adapters and
> put all the networks on it, including ovirtmgmt and such, through VLANs?
> > How do you configure 802.3ad on 4 adapters? How many switches do you
> have to connect to, from these 4 adapters? Or do you use round-robin
> bonding (but I presume this bonding mode is not supported in oVirt)?
> > Thanks!
>
> Well, in our case, we have two clustered switches from C-company so two
> NICs in each. And then, yeah, different VLANs for every network on top of
> the same bond. Works like a charm:)
>
> /K
>
>
> Hello K,
> coming back to the question, I have only 1 VLAN dedicated to Netapp NFS
> infrastructure and I can't manage to add another.
> Can I still use 802.3ad on 4 adapters anyway, with some particular
> option for this bonding mode on the oVirt and switch side?
> I think that on the network side we have 4 x 3130 clustered Cisco
> switches, so in theory I can use this bonding mode.
> I don't know if I can configure more than one IP on the same LAN inside
> one Netapp SVM in order to expose more than one NFS share.
> And whether 802.3ad mode would guarantee that 2 different network adapters
> are used when pointing to 2 different IPs on the same network for the
> shares.
>

And multiple mount points, hence multiple SDs? They might, or might not,
use the same physical NIC.
Y.
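
To make that concrete: multiple SDs would mean two exports, each mounted
through its own address on the SVM, so every storage domain becomes its
own flow. A sketch with made-up addresses and export paths:

    # what each host would end up mounting, one mount per storage domain
    10.0.0.1:/vol/ovirt_sd1 on /rhev/data-center/mnt/10.0.0.1:_vol_ovirt__sd1
    10.0.1.1:/vol/ovirt_sd2 on /rhev/data-center/mnt/10.0.1.1:_vol_ovirt__sd2

Whether the two flows then land on different bond slaves depends on the
bond's transmit hash policy, hence the "might, or might not".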


> And do I have to configure bonding parameters in oVirt too, in case any
> particular one is needed?
>
> Thanks again,
> Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-23 Thread Gianluca Cecchi
On Sat, Feb 18, 2017 at 1:35 PM, Karli Sjöberg wrote:

>
> >
> > Thanks for your answer, K!
> > So you mean to make a single bond composed of all 4 network adapters and
> put all the networks on it, including ovirtmgmt and such, through VLANs?
> > How do you configure 802.3ad on 4 adapters? How many switches do you
> have to connect to, from these 4 adapters? Or do you use round-robin
> bonding (but I presume this bonding mode is not supported in oVirt)?
> > Thanks!
>
> Well, in our case, we have two clustered switches from C-company so two
> NICs in each. And then, yeah, different VLANs for every network on top of
> the same bond. Works like a charm:)
>
> /K
>

Hello K,
coming back to the question, I have only 1 VLAN dedicated to Netapp NFS
infrastructure and I can't manage to add another.
Can I still use 802.3ad on 4 adapters anyway, with some particular
option for this bonding mode on the oVirt and switch side?
I think that on the network side we have 4 x 3130 clustered Cisco switches,
so in theory I can use this bonding mode.
I don't know if I can configure more than one IP on the same LAN inside one
Netapp SVM in order to expose more than one NFS share.
And whether 802.3ad mode would guarantee that 2 different network adapters
are used when pointing to 2 different IPs on the same network for the shares.
And do I have to configure bonding parameters in oVirt too, in case any
particular one is needed?

Thanks again,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-18 Thread Karli Sjöberg

On Feb 18, 2017 8:56 AM, Gianluca Cecchi wrote:
>
>
>
> On Feb 17, 2017 7:22 PM, "Karli Sjöberg" wrote:
>>
>>
>>
>> On Feb 17, 2017 6:30 PM, Gianluca Cecchi wrote:
>>>
>>> Hello,
>>> I'm going to set up an environment where I will have 2 hosts, each with 2
>>> adapters to connect to storage domain(s). This will be a test environment, 
>>> not a production one.
>>> The storage domain(s) will be NFS, provided by a Netapp system.
>>> The hosts have 4 x 1Gb/s adapters and I am thinking of using 2 for
>>> ovirtmgmt and VMs (through bonding and VLANs) and dedicating the other 2
>>> adapters to the NFS domain connectivity.
>>> What would be the best setup to have both HA on the connection and also
>>> use the whole 2Gb/s in a normal load scenario?
>>> Is it better to make more storage domains (and more SVMs on the Netapp
>>> side) or only one?
>>> What would be the suitable bonding mode to put on adapters? I normally use 
>>> 802.3ad provided by the switches, but I'm not sure if in this configuration 
>>> I can use both the network adapters for the overall load of the different 
>>> VMs that I would have in place...
>>>
>>> Thanks in advance for every suggestion,
>>>
>>> Gianluca
>>
>>
>> Hey G!
>>
>> If it was me doing this, I would make one 4x1Gb/s 802.3ad bond on filer and 
>> hosts to KISS. Then, if bandwidth is of concern, I would set up two VLANs 
>> for storage interfaces with addresses on separate subnets (10.0.0.1 and 
>> 10.0.1.1 on filer. 10.0.0.(2,3) and 10.0.1.(2,3) on hosts) and then on the 
>> filer set up only two NFS exports where you try to provision your VMs as
>> evenly as possible. This way the network load would spread evenly over all
>> interfaces for simplest config and best fault tolerance, while keeping 
>> storage traffic at max 2Gb/s. You only need one SVM with several addresses 
>> to achieve this. We have our VMWare environment set up similar to this 
>> towards our NetApp. We also have our oVirt environment set up like this, but 
>> towards a different NFS storage, with great success.
>>
>> /K
>
>
> Thanks for your answer, K!
> So you mean to make a single bond composed of all 4 network adapters and put
> all the networks on it, including ovirtmgmt and such, through VLANs?
> How do you configure 802.3ad on 4 adapters? How many switches do you have to
> connect to, from these 4 adapters? Or do you use round-robin bonding (but I
> presume this bonding mode is not supported in oVirt)?
> Thanks!

Well, in our case, we have two clustered switches from C-company so two NICs in 
each. And then, yeah, different VLANs for every network on top of the same 
bond. Works like a charm:)

/K
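
For the switch side, a rough sketch of what a cross-stack LACP channel
could look like, assuming Cisco IOS-style syntax; the port-channel
number, VLAN IDs and interface names are placeholders to adapt to your
own setup:

    interface Port-channel10
     description host1 bond0
     switchport mode trunk
     switchport trunk allowed vlan 100,101
    !
    interface range Gi1/0/1 - 2, Gi2/0/1 - 2
     description host1 NICs, two per stack member for redundancy
     switchport mode trunk
     switchport trunk allowed vlan 100,101
     channel-group 10 mode active

"channel-group ... mode active" is what makes the switch speak LACP,
which the Linux 802.3ad bonding mode negotiates with.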
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-17 Thread Gianluca Cecchi
On Feb 17, 2017 7:22 PM, "Karli Sjöberg" wrote:



On Feb 17, 2017 6:30 PM, Gianluca Cecchi wrote:

Hello,
I'm going to set up an environment where I will have 2 hosts, each with 2
adapters to connect to storage domain(s). This will be a test environment,
not a production one.
The storage domain(s) will be NFS, provided by a Netapp system.
The hosts have 4 x 1Gb/s adapters and I am thinking of using 2 for ovirtmgmt
and VMs (through bonding and VLANs) and dedicating the other 2 adapters to
the NFS domain connectivity.
What would be the best setup to have both HA on the connection and also use
the whole 2Gb/s in a normal load scenario?
Is it better to make more storage domains (and more SVMs on the Netapp side)
or only one?
What would be the suitable bonding mode to put on adapters? I normally
use 802.3ad provided by the switches, but I'm not sure if in this
configuration I can use both the network adapters for the overall load of
the different VMs that I would have in place...

Thanks in advance for every suggestion,

Gianluca


Hey G!

If it was me doing this, I would make one 4x1Gb/s 802.3ad bond on filer and
hosts to KISS. Then, if bandwidth is of concern, I would set up two VLANs
for storage interfaces with addresses on separate subnets (10.0.0.1 and
10.0.1.1 on filer. 10.0.0.(2,3) and 10.0.1.(2,3) on hosts) and then on the
filer set up only two NFS exports where you try to provision your VMs as
evenly as possible. This way the network load would spread evenly over all
interfaces for simplest config and best fault tolerance, while keeping
storage traffic at max 2Gb/s. You only need one SVM with several addresses
to achieve this. We have our VMWare environment set up similar to this
towards our NetApp. We also have our oVirt environment set up like this,
but towards a different NFS storage, with great success.

/K


Thanks for your answer, K!
So you mean to make a single bond composed of all 4 network adapters and
put all the networks on it, including ovirtmgmt and such, through VLANs?
How do you configure 802.3ad on 4 adapters? How many switches do you have
to connect to, from these 4 adapters? Or do you use round-robin bonding
(but I presume this bonding mode is not supported in oVirt)?
Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-17 Thread Karli Sjöberg


On Feb 17, 2017 6:30 PM, Gianluca Cecchi wrote:
Hello,
I'm going to set up an environment where I will have 2 hosts, each with 2
adapters to connect to storage domain(s). This will be a test environment, not 
a production one.
The storage domain(s) will be NFS, provided by a Netapp system.
The hosts have 4 x 1Gb/s adapters and I am thinking of using 2 for ovirtmgmt
and VMs (through bonding and VLANs) and dedicating the other 2 adapters to
the NFS domain connectivity.
What would be the best setup to have both HA on the connection and also use
the whole 2Gb/s in a normal load scenario?
Is it better to make more storage domains (and more SVMs on the Netapp side)
or only one?
What would be the suitable bonding mode to put on adapters? I normally use 
802.3ad provided by the switches, but I'm not sure if in this configuration I 
can use both the network adapters for the overall load of the different VMs 
that I would have in place...

Thanks in advance for every suggestion,

Gianluca

Hey G!

If it was me doing this, I would make one 4x1Gb/s 802.3ad bond on filer and 
hosts to KISS. Then, if bandwidth is of concern, I would set up two VLANs for 
storage interfaces with addresses on separate subnets (10.0.0.1 and 10.0.1.1 on 
filer. 10.0.0.(2,3) and 10.0.1.(2,3) on hosts) and then on the filer set up 
only two NFS exports where you try to provision your VMs as evenly as possible.
This way the network load would spread evenly over all interfaces for simplest
config and best fault tolerance, while keeping storage traffic at max 2Gb/s. 
You only need one SVM with several addresses to achieve this. We have our 
VMWare environment set up similar to this towards our NetApp. We also have our 
oVirt environment set up like this, but towards a different NFS storage, with 
great success.

/K
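
To give an idea of what that ends up as on a host, a minimal sketch with
RHEL-style ifcfg files; the NIC names em1..em4, the VLAN IDs 100/101 and
the addresses are placeholders, and in practice oVirt writes these for
you when you create the bond and the networks from the engine:

    # ifcfg-bond0: one LACP bond over all four NICs
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=100"
    ONBOOT=yes
    BOOTPROTO=none

    # ifcfg-em1 (and likewise em2..em4): plain slaves
    DEVICE=em1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

    # ifcfg-bond0.100: one VLAN interface per storage subnet
    # (bond0.101 is the same with IPADDR=10.0.1.2)
    DEVICE=bond0.100
    VLAN=yes
    IPADDR=10.0.0.2
    PREFIX=24
    ONBOOT=yes
    BOOTPROTO=none

The filer side then gets 10.0.0.1 and 10.0.1.1, matching the example
addresses above.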
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-17 Thread Gianluca Cecchi
Hello,
I'm going to set up an environment where I will have 2 hosts, each with 2
adapters to connect to storage domain(s). This will be a test environment,
not a production one.
The storage domain(s) will be NFS, provided by a Netapp system.
The hosts have 4 x 1Gb/s adapters and I am thinking of using 2 for ovirtmgmt
and VMs (through bonding and VLANs) and dedicating the other 2 adapters to
the NFS domain connectivity.
What would be the best setup to have both HA on the connection and also use
the whole 2Gb/s in a normal load scenario?
Is it better to make more storage domains (and more SVMs on the Netapp side)
or only one?
What would be the suitable bonding mode to put on adapters? I normally
use 802.3ad provided by the switches, but I'm not sure if in this
configuration I can use both the network adapters for the overall load of
the different VMs that I would have in place...

Thanks in advance for every suggestion,

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users