Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-05 Thread Yaniv Kaul
On Wed, Jul 5, 2017 at 11:54 AM, Vinícius Ferrão  wrote:

>
> On 5 Jul 2017, at 05:35, Yaniv Kaul  wrote:
>
>
>
> On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão  wrote:
>
>> Adding another question to what Matthias has said.
>>
>> I also noted that oVirt (and RHV) documentation does not mention the
>> supported block size on iSCSI domains.
>>
>> RHV: https://access.redhat.com/documentation/en-us/red_hat_
>> virtualization/4.0/html/administration_guide/chap-storage
>> oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/
>>
>> I’m interested in 4K blocks over iSCSI, but this isn’t widely supported.
>> The question is: does oVirt support this? Or should we stay with the
>> default 512-byte block size?
>>
>
> It does not.
> Y.
>
>
> Discovered this the hard way: the system is able to detect it as a 4K
> LUN, but ovirt-hosted-engine-setup gets confused:
>
> [2] 36589cfc0071cbf2f2ef314a6212c   1600GiB  FreeNAS iSCSI Disk
> status: free, paths: 4 active
>
> [3] 36589cfc0043589992bce09176478   200GiB   FreeNAS iSCSI Disk
> status: free, paths: 4 active
>
> [4] 36589cfc00992f7abf38c11295bb6   400GiB   FreeNAS iSCSI Disk
> status: free, paths: 4 active
>
> [2] is 4k
> [3] is 512bytes
> [4] is 1k (just to prove the point)
>
> On the system it appears to be OK:
>
> Disk /dev/mapper/36589cfc0071cbf2f2ef314a6212c: 214.7 GB,
> 214748364800 bytes, 52428800 sectors
> Units = sectors of 1 * 4096 = 4096 bytes
> Sector size (logical/physical): 4096 bytes / 16384 bytes
> I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
>
>
> Disk /dev/mapper/36589cfc0043589992bce09176478: 214.7 GB,
> 214748364800 bytes, 419430400 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 16384 bytes
> I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
>
> But whatever, just reporting back to the list. It’s a good idea to have a
> note about it in the documentation.
>

Indeed.
Can you file a bug or send a patch to upstream docs?
Y.


>
> V.
>
>
>
>>
>> Thanks,
>> V.
>>
>> On 4 Jul 2017, at 09:10, Matthias Leopold wrote:
>>
>>
>>
>> Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:
>>
>> On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão wrote:
>>Thanks, Konstantin.
>>Just to be clear enough: the first deployment would be made on
>>classic eth interfaces and later after the deployment of Hosted
>>Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
>>Another question: what about iSCSI Multipath on Self Hosted Engine?
>>I've looked through the net and only found this issue:
>>https://bugzilla.redhat.com/show_bug.cgi?id=1193961
>>
>>Appears to be unsupported as of today, but there's a workaround in the
>>comments. Is it safe to deploy this way? Should I use NFS instead?
>> It's probably not the most tested path, but once you have an engine you
>> should be able to create an iSCSI bond on your hosts from the engine.
>> Network configuration is persisted across host reboots, and so is the
>> iSCSI bond configuration.
>> A different story is having ovirt-ha-agent connect to multiple
>> IQNs or multiple targets on your SAN. This is currently not supported for
>> the hosted-engine storage domain.
>> See:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1149579
>>
>>
>> Hi Simone,
>>
>> I think my recent post to this list titled "iSCSI multipathing setup
>> troubles" is about the exact same problem, except I'm not talking about
>> the hosted-engine storage domain. I would like to configure _any_ iSCSI
>> storage domain the way you describe it in
>> https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. I would like to do
>> so using the oVirt "iSCSI Multipathing" GUI after everything else is set
>> up. I can't find a way to do this. Is this now possible? I think the iSCSI
>> Multipathing documentation could be improved by describing an example IP
>> setup for this.
>>
>> thanks a lot
>> matthias
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-05 Thread Vinícius Ferrão

On 5 Jul 2017, at 05:35, Yaniv Kaul wrote:



On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão wrote:
Adding another question to what Matthias has said.

I also noted that oVirt (and RHV) documentation does not mention the supported 
block size on iSCSI domains.

RHV: 
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/administration_guide/chap-storage
oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/

I’m interested in 4K blocks over iSCSI, but this isn’t widely supported.
The question is: does oVirt support this? Or should we stay with the
default 512-byte block size?

It does not.
Y.

Discovered this the hard way: the system is able to detect it as a 4K LUN,
but ovirt-hosted-engine-setup gets confused:

[2] 36589cfc0071cbf2f2ef314a6212c   1600GiB FreeNAS 
iSCSI Disk
status: free, paths: 4 active

[3] 36589cfc0043589992bce09176478   200GiB  FreeNAS 
iSCSI Disk
status: free, paths: 4 active

[4] 36589cfc00992f7abf38c11295bb6   400GiB  FreeNAS 
iSCSI Disk
status: free, paths: 4 active

[2] is 4k
[3] is 512bytes
[4] is 1k (just to prove the point)

On the system it appears to be OK:

Disk /dev/mapper/36589cfc0071cbf2f2ef314a6212c: 214.7 GB, 214748364800 
bytes, 52428800 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/mapper/36589cfc0043589992bce09176478: 214.7 GB, 214748364800 
bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes

But whatever, just reporting back to the list. It’s a good idea to have a note
about it in the documentation.

V.
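[Editor's note: the size mismatch reported above looks like a unit-mixing bug rather than corruption. The Linux kernel exposes a block device's size in sysfs in 512-byte units regardless of the logical block size, so a tool that multiplies that count by the real block size inflates the capacity by blocksize/512. This is only my reading of the numbers, not a confirmed diagnosis; a small sketch reproduces the reported figures:]

```python
def misreported_gib(real_bytes: int, block_size: int) -> float:
    """Mimic a tool that takes the kernel's 512-byte-unit sector count
    (what /sys/block/<dev>/size reports) but multiplies it by the
    device's logical block size."""
    sectors_512 = real_bytes // 512
    return sectors_512 * block_size / 2**30

real = 200 * 2**30  # all three test LUNs are 200 GiB (214748364800 bytes)
print(misreported_gib(real, 4096))  # 1600.0 -> the "1600GiB" shown for [2]
print(misreported_gib(real, 512))   # 200.0  -> [3] is reported correctly
print(misreported_gib(real, 1024))  # 400.0  -> the "400GiB" shown for [4]
```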



Thanks,
V.

On 4 Jul 2017, at 09:10, Matthias Leopold wrote:



Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:
On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão wrote:
   Thanks, Konstantin.
   Just to be clear enough: the first deployment would be made on
   classic eth interfaces and later after the deployment of Hosted
   Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
   Another question: what about iSCSI Multipath on Self Hosted Engine?
   I've looked through the net and only found this issue:
   https://bugzilla.redhat.com/show_bug.cgi?id=1193961
   
   Appears to be unsupported as of today, but there's a workaround in the
   comments. Is it safe to deploy this way? Should I use NFS instead?
It's probably not the most tested path, but once you have an engine you should
be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI
bond configuration.
A different story is having ovirt-ha-agent connect to multiple IQNs or
multiple targets on your SAN. This is currently not supported for the
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579

Hi Simone,

I think my recent post to this list titled "iSCSI multipathing setup troubles"
is about the exact same problem, except I'm not talking about the
hosted-engine storage domain. I would like to configure _any_ iSCSI storage
domain the way you describe it in
https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. I would like to do so
using the oVirt "iSCSI Multipathing" GUI after everything else is set up. I
can't find a way to do this. Is this now possible? I think the iSCSI
Multipathing documentation could be improved by describing an example IP setup
for this.

thanks a lot
matthias




Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-05 Thread Yaniv Kaul
On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão  wrote:

> Adding another question to what Matthias has said.
>
> I also noted that oVirt (and RHV) documentation does not mention the
> supported block size on iSCSI domains.
>
> RHV: https://access.redhat.com/documentation/en-us/red_
> hat_virtualization/4.0/html/administration_guide/chap-storage
> oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/
>
> I’m interested in 4K blocks over iSCSI, but this isn’t widely supported.
> The question is: does oVirt support this? Or should we stay with the
> default 512-byte block size?
>

It does not.
Y.


>
> Thanks,
> V.
>
> On 4 Jul 2017, at 09:10, Matthias Leopold wrote:
>
>
>
> Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:
>
> On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão wrote:
>Thanks, Konstantin.
>Just to be clear enough: the first deployment would be made on
>classic eth interfaces and later after the deployment of Hosted
>Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
>Another question: what about iSCSI Multipath on Self Hosted Engine?
>I've looked through the net and only found this issue:
>https://bugzilla.redhat.com/show_bug.cgi?id=1193961
>
>Appears to be unsupported as of today, but there's a workaround in the
>comments. Is it safe to deploy this way? Should I use NFS instead?
> It's probably not the most tested path, but once you have an engine you
> should be able to create an iSCSI bond on your hosts from the engine.
> Network configuration is persisted across host reboots, and so is the iSCSI
> bond configuration.
> A different story is having ovirt-ha-agent connect to multiple
> IQNs or multiple targets on your SAN. This is currently not supported for
> the hosted-engine storage domain.
> See:
> https://bugzilla.redhat.com/show_bug.cgi?id=1149579
>
>
> Hi Simone,
>
> I think my recent post to this list titled "iSCSI multipathing setup
> troubles" is about the exact same problem, except I'm not talking about
> the hosted-engine storage domain. I would like to configure _any_ iSCSI
> storage domain the way you describe it in
> https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. I would like to do
> so using the oVirt "iSCSI Multipathing" GUI after everything else is set
> up. I can't find a way to do this. Is this now possible? I think the iSCSI
> Multipathing documentation could be improved by describing an example IP
> setup for this.
>
> thanks a lot
> matthias
>
>
>


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Vinícius Ferrão
Adding another question to what Matthias has said.

I also noted that oVirt (and RHV) documentation does not mention the supported 
block size on iSCSI domains.

RHV: 
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/administration_guide/chap-storage
oVirt: http://www.ovirt.org/documentation/admin-guide/chap-Storage/

I’m interested in 4K blocks over iSCSI, but this isn’t widely supported.
The question is: does oVirt support this? Or should we stay with the
default 512-byte block size?

Thanks,
V.

On 4 Jul 2017, at 09:10, Matthias Leopold wrote:



Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:
On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão wrote:
   Thanks, Konstantin.
   Just to be clear enough: the first deployment would be made on
   classic eth interfaces and later after the deployment of Hosted
   Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
   Another question: what about iSCSI Multipath on Self Hosted Engine?
   I've looked through the net and only found this issue:
   https://bugzilla.redhat.com/show_bug.cgi?id=1193961
   
   Appears to be unsupported as of today, but there's a workaround in the
   comments. Is it safe to deploy this way? Should I use NFS instead?
It's probably not the most tested path, but once you have an engine you should
be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI
bond configuration.
A different story is having ovirt-ha-agent connect to multiple IQNs or
multiple targets on your SAN. This is currently not supported for the
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579

Hi Simone,

I think my recent post to this list titled "iSCSI multipathing setup troubles"
is about the exact same problem, except I'm not talking about the
hosted-engine storage domain. I would like to configure _any_ iSCSI storage
domain the way you describe it in
https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. I would like to do so
using the oVirt "iSCSI Multipathing" GUI after everything else is set up. I
can't find a way to do this. Is this now possible? I think the iSCSI
Multipathing documentation could be improved by describing an example IP setup
for this.

thanks a lot
matthias



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Matthias Leopold



Am 2017-07-04 um 10:01 schrieb Simone Tiraboschi:



On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão wrote:


Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on
classic eth interfaces and later after the deployment of Hosted
Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine?
I've looked through the net and only found this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1193961


Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?


It's probably not the most tested path, but once you have an engine you
should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI
bond configuration.

A different story is having ovirt-ha-agent connect to multiple
IQNs or multiple targets on your SAN. This is currently not supported
for the hosted-engine storage domain.

See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579



Hi Simone,

I think my recent post to this list titled "iSCSI multipathing setup
troubles" is about the exact same problem, except I'm not talking
about the hosted-engine storage domain. I would like to configure _any_
iSCSI storage domain the way you describe it in
https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. I would like to
do so using the oVirt "iSCSI Multipathing" GUI after everything else is
set up. I can't find a way to do this. Is this now possible? I think the
iSCSI Multipathing documentation could be improved by describing an
example IP setup for this.


thanks a lot
matthias
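[Editor's note: purely for illustration of the kind of example being requested. All addresses, network names, and NIC names below are hypothetical and not from this thread, and the exact GUI requirements depend on the oVirt version.]

```
Fabric A (switch A only)              Fabric B (switch B only)
  host NIC eth2: 10.10.1.11/24          host NIC eth3: 10.10.2.11/24
  target portal: 10.10.1.10:3260        target portal: 10.10.2.10:3260

In the engine: create two logical networks (e.g. "iscsi-a", "iscsi-b"),
attach each to the matching host NIC via Setup Host Networks, then under
Data Center -> iSCSI Multipathing create one bond per fabric, pairing
each logical network with the target(s) reachable on that fabric.
```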


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Simone Tiraboschi
On Tue, Jul 4, 2017 at 10:30 AM, Vinícius Ferrão  wrote:

> Thanks for your input, Simone.
>
> On 4 Jul 2017, at 05:01, Simone Tiraboschi  wrote:
>
>
>
> On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão  wrote:
>
>> Thanks, Konstantin.
>>
>> Just to be clear enough: the first deployment would be made on classic
>> eth interfaces and later after the deployment of Hosted Engine I can
>> convert the "ovirtmgmt" network to a LACP Bond, right?
>>
>> Another question: what about iSCSI Multipath on Self Hosted Engine? I've
>> looked through the net and only found this issue: https://bugzilla.redhat
>> .com/show_bug.cgi?id=1193961
>>
>> Appears to be unsupported as of today, but there's a workaround in the
>> comments. Is it safe to deploy this way? Should I use NFS instead?
>>
>
> It's probably not the most tested path, but once you have an engine you
> should be able to create an iSCSI bond on your hosts from the engine.
> Network configuration is persisted across host reboots, and so is the iSCSI
> bond configuration.
>
> A different story is having ovirt-ha-agent connect to multiple
> IQNs or multiple targets on your SAN. This is currently not supported for
> the hosted-engine storage domain.
> See:
> https://bugzilla.redhat.com/show_bug.cgi?id=1149579
>
>
> Just to be clear, when we talk about bonding on iSCSI, we’re talking about
> iSCSI MPIO and not LACP (or something similar) on iSCSI interfaces, right?
>

Yes, correct.


> In my case there are two different fabrics dedicated to iSCSI. They do not
> even transit on the same switch, so it’s plain ethernet (with fancy things,
> like mtu 9216 enabled and QoS).
>
> So I think we’re talking about the unsupported feature of multiple IQN’s
> right?
>

Multiple IQNs on the host side (multiple initiators) should work through
iSCSI bonding as managed by oVirt engine:
https://www.ovirt.org/documentation/admin-guide/chap-Storage/#configuring-iscsi-multipathing

Multiple IQNs on your SAN are instead currently not supported by
ovirt-ha-agent for the hosted-engine storage domain.



>
> Thanks once again,
> V.
>
>
>
>>
>> Thanks,
>> V.
>>
>> Sent from my iPhone
>>
>> On 3 Jul 2017, at 21:55, Konstantin Shalygin  wrote:
>>
>> Hello,
>>
>>
>> I’m deploying oVirt for the first time and a question has emerged: what
>> is the good practice to enable LACP on oVirt Node? Should I create an
>> 802.3ad bond during the oVirt Node installation in Anaconda, or should it
>> be done later, inside the Hosted Engine manager?
>>
>>
>> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP
>> bond for management and server VLANs, while eth1 and eth2 are Multipath
>> iSCSI disks (MPIO).
>>
>>
>> Thanks,
>>
>> V.
>>
>>
>> Do all your network settings in ovirt-engine webadmin.
>>
>>


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Vinícius Ferrão
Thanks for your input, Simone.

On 4 Jul 2017, at 05:01, Simone Tiraboschi wrote:



On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão wrote:
Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic eth 
interfaces and later after the deployment of Hosted Engine I can convert the 
"ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine? I've looked 
through the net and only found this issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1193961

Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?

It's probably not the most tested path, but once you have an engine you should
be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI
bond configuration.

A different story is having ovirt-ha-agent connect to multiple IQNs or
multiple targets on your SAN. This is currently not supported for the
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579

Just to be clear, when we talk about bonding on iSCSI, we’re talking about 
iSCSI MPIO and not LACP (or something similar) on iSCSI interfaces, right? In 
my case there are two different fabrics dedicated to iSCSI. They do not even 
transit on the same switch, so it’s plain ethernet (with fancy things, like mtu 
9216 enabled and QoS).

So I think we’re talking about the unsupported feature of multiple IQN’s right?

Thanks once again,
V.



Thanks,
V.

Sent from my iPhone

On 3 Jul 2017, at 21:55, Konstantin Shalygin wrote:

Hello,

I’m deploying oVirt for the first time and a question has emerged: what is the
good practice to enable LACP on oVirt Node? Should I create an 802.3ad bond
during the oVirt Node installation in Anaconda, or should it be done later,
inside the Hosted Engine manager?

In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond
for management and server VLANs, while eth1 and eth2 are Multipath iSCSI
disks (MPIO).

Thanks,
V.

Do all your network settings in ovirt-engine webadmin.



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Vinícius Ferrão

> On 4 Jul 2017, at 02:49, Yedidyah Bar David  wrote:
> 
> On Tue, Jul 4, 2017 at 3:51 AM, Vinícius Ferrão  wrote:
>> Hello,
>> 
>> I’m deploying oVirt for the first time and a question has emerged: what is 
>> the good practice to enable LACP on oVirt Node? Should I create an 802.3ad 
>> bond during the oVirt Node installation in Anaconda, or should it be done 
>> later, inside the Hosted Engine manager?
> 
> Adding Simone for this, but I think that hosted-engine --deploy does
> not know how to create bonds, so you had better do this beforehand. It does
> know how to recognize bonds and their slaves, and so will not let you
> configure the ovirtmgmt bridge on one of the slave NICs of a bond.
> 
>> 
>> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP 
>> bond for management and server VLANs, while eth1 and eth2 are Multipath 
>> iSCSI disks (MPIO).
> 
> You probably meant eth2 and eth3 for the latter bond.
> 
> This is probably more a matter of personal preference than a result of
> a scientific examination, but I personally prefer, assuming that eth0
> and eth1 are managed by a single PCI component and eth2 and eth3 by
> another one, and especially if they are different, and using different
> kernel modules, to have one bond on eth0 and eth2, and another on eth1
> and eth3. This way, presumably, if some (hardware or software) bug hits
> one of the PCI devices, both bonds hopefully keep working.

It’s one single card with 4 interfaces. It came onboard on the IBM System x3550
M4 servers that I’m using; they are Intel based, though I don’t remember exactly
which chipset. Anyway, this is interesting: I usually avoid mixing different
controllers in a bond to keep things stable, but you’ve got a point.

> Just my two cents,
> 
>> 
>> Thanks,
>> V.
>> 
> 
> 
> 
> -- 
> Didi



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Simone Tiraboschi
On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão  wrote:

> Thanks, Konstantin.
>
> Just to be clear enough: the first deployment would be made on classic eth
> interfaces and later after the deployment of Hosted Engine I can convert
> the "ovirtmgmt" network to a LACP Bond, right?
>
> Another question: what about iSCSI Multipath on Self Hosted Engine? I've
> looked through the net and only found this issue: https://bugzilla.
> redhat.com/show_bug.cgi?id=1193961
>
> Appears to be unsupported as of today, but there's a workaround in the
> comments. Is it safe to deploy this way? Should I use NFS instead?
>

It's probably not the most tested path, but once you have an engine you
should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI
bond configuration.

A different story is having ovirt-ha-agent connect to multiple IQNs
or multiple targets on your SAN. This is currently not supported for the
hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579


>
> Thanks,
> V.
>
> Sent from my iPhone
>
> On 3 Jul 2017, at 21:55, Konstantin Shalygin  wrote:
>
> Hello,
>
>
> I’m deploying oVirt for the first time and a question has emerged: what is
> the good practice to enable LACP on oVirt Node? Should I create an 802.3ad
> bond during the oVirt Node installation in Anaconda, or should it be done
> later, inside the Hosted Engine manager?
>
>
> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP
> bond for management and server VLANs, while eth1 and eth2 are Multipath
> iSCSI disks (MPIO).
>
>
> Thanks,
>
> V.
>
>
> Do all your network settings in ovirt-engine webadmin.
>
>


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Yedidyah Bar David
On Tue, Jul 4, 2017 at 3:51 AM, Vinícius Ferrão  wrote:
> Hello,
>
> I’m deploying oVirt for the first time and a question has emerged: what is 
> the good practice to enable LACP on oVirt Node? Should I create an 802.3ad 
> bond during the oVirt Node installation in Anaconda, or should it be done 
> later, inside the Hosted Engine manager?

Adding Simone for this, but I think that hosted-engine --deploy does
not know how to create bonds, so you had better do this beforehand. It does
know how to recognize bonds and their slaves, and so will not let you
configure the ovirtmgmt bridge on one of the slave NICs of a bond.

>
> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond 
> for management and server VLANs, while eth1 and eth2 are Multipath iSCSI 
> disks (MPIO).

You probably meant eth2 and eth3 for the latter bond.

This is probably more a matter of personal preference than a result of
a scientific examination, but I personally prefer, assuming that eth0
and eth1 are managed by a single PCI component and eth2 and eth3 by
another one, and especially if they are different, and using different
kernel modules, to have one bond on eth0 and eth2, and another on eth1
and eth3. This way, presumably, if some (hardware or software) bug hits
one of the PCI devices, both bonds hopefully keep working.

Just my two cents,
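[Editor's note: for readers wanting to try the cross-PCI pairing suggested above, a minimal ifcfg sketch for one of the two bonds. NIC names and bonding options are illustrative only; oVirt/VDSM takes over host networking once the host is added, so the bond can also be created later from the engine's Setup Host Networks dialog, as suggested elsewhere in this thread.]

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
# slaves eth0 + eth2: one port per PCI device, per the suggestion above
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (eth2 analogous)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```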

>
> Thanks,
> V.
>



-- 
Didi


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin
Red Hat Virtualization Host (RHVH) is a minimal operating system based 
on Red Hat Enterprise Linux that is designed to provide a simple 
method for setting up a physical machine to act as a hypervisor in a 
Red Hat Virtualization environment. The minimal operating system 
contains only the packages required for the machine to act as a 
hypervisor, and features a Cockpit user interface for monitoring the 
host and performing administrative tasks.


As an administrator I know which packages are required for my hardware,
and I don't need Cockpit. So CentOS minimal is my choice.



On 07/04/2017 11:36 AM, Vinícius Ferrão wrote:

It’s the hypervisor appliance, just like RHVH.


--
Best regards,
Konstantin Shalygin



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Vinícius Ferrão
LOL.

It’s the hypervisor appliance, just like RHVH.

> On 4 Jul 2017, at 01:23, Konstantin Shalygin  wrote:
> 
> I don't know what oVirt Node is :)
> 
> And for "generic_linux" I have 95% automation (work in progress).
> 
> 
> On 07/04/2017 11:20 AM, Vinícius Ferrão wrote:
>> Just abusing a little more: why do you use CentOS instead of oVirt Node? 
>> What’s the reason behind this choice?
> 
> -- 
> Best regards,
> Konstantin Shalygin
> 



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

Not for the hosted engine; with ovirt-engine, of course.


On 07/04/2017 11:27 AM, Yaniv Kaul wrote:

How are you using Ceph for hosted engine?


--
Best regards,
Konstantin Shalygin



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Yaniv Kaul
On Jul 4, 2017 7:14 AM, "Konstantin Shalygin"  wrote:

Yes, I do deployment in four steps:

1. Install CentOS via iDRAC.
2. Attach the VLAN to the 10G physdev via iproute. This is the one manual
step. It may be replaced by DHCP management, but for now I only have 2x10G
fiber, without any DHCP.
3. Run the ovirt_deploy Ansible role.
4. Attach oVirt networks after host activation.

About iSCSI and NFS: I don't know anything about them. I use Ceph.
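[Editor's note: the four deployment steps above could be glued together with a short playbook. The ovirt_deploy role name comes from the message; the inventory group and everything else is illustrative.]

```
# site.yml -- illustrative wrapper around the role mentioned above
- hosts: ovirt_hosts
  become: true
  roles:
    - ovirt_deploy
```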


How are you using Ceph for hosted engine?
Y.


On 07/04/2017 10:50 AM, Vinícius Ferrão wrote:

Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic eth
interfaces and later after the deployment of Hosted Engine I can convert
the "ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine? I've
looked through the net and only found this issue: https://bugzilla.
redhat.com/show_bug.cgi?id=1193961

Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?


-- 
Best regards,
Konstantin Shalygin




Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

I don't know what oVirt Node is :)

And for "generic_linux" I have 95% automation (work in progress).


On 07/04/2017 11:20 AM, Vinícius Ferrão wrote:
Just abusing a little more: why do you use CentOS instead of oVirt Node? 
What’s the reason behind this choice?


--
Best regards,
Konstantin Shalygin



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Vinícius Ferrão
Ok, thank you!

Just abusing a little more: why do you use CentOS instead of oVirt Node? 
What’s the reason behind this choice?

Thanks,
V.

On 4 Jul 2017, at 00:50, Vinícius Ferrão wrote:

Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic eth 
interfaces and later after the deployment of Hosted Engine I can convert the 
"ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine? I've looked 
through the net and only found this issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1193961

Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?

Thanks,
V.

Sent from my iPhone

On 3 Jul 2017, at 21:55, Konstantin Shalygin wrote:

Hello,

I’m deploying oVirt for the first time and a question has emerged: what is the
good practice to enable LACP on oVirt Node? Should I create an 802.3ad bond
during the oVirt Node installation in Anaconda, or should it be done later,
inside the Hosted Engine manager?

In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond
for management and server VLANs, while eth1 and eth2 are Multipath iSCSI
disks (MPIO).

Thanks,
V.

Do all your network settings in ovirt-engine webadmin.



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

Yes, I do deployment in four steps:

1. Install CentOS via iDRAC.
2. Attach the VLAN to the 10G physdev via iproute. This is the one manual
step. It may be replaced by DHCP management, but for now I only have 2x10G
fiber, without any DHCP.

3. Run the ovirt_deploy Ansible role.
4. Attach oVirt networks after host activation.

About iSCSI and NFS: I don't know anything about them. I use Ceph.


On 07/04/2017 10:50 AM, Vinícius Ferrão wrote:

Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic 
eth interfaces and later after the deployment of Hosted Engine I can 
convert the "ovirtmgmt" network to a LACP Bond, right?


Another question: what about iSCSI Multipath on Self Hosted Engine? 
I've looked through the net and only found this issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1193961


Appears to be unsupported as of today, but there's a workaround in the 
comments. Is it safe to deploy this way? Should I use NFS instead?


--
Best regards,
Konstantin Shalygin



Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Vinícius Ferrão
Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic eth 
interfaces and later after the deployment of Hosted Engine I can convert the 
"ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine? I've looked 
through the net and only found this issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1193961

Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?

Thanks,
V.

Sent from my iPhone

On 3 Jul 2017, at 21:55, Konstantin Shalygin wrote:

Hello,

I'm deploying oVirt for the first time and a question has emerged: what is the
good practice to enable LACP on oVirt Node? Should I create an 802.3ad bond
during the oVirt Node installation in Anaconda, or should it be done later,
inside the Hosted Engine manager?

In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond
for management and server VLANs, while eth1 and eth2 are Multipath iSCSI
disks (MPIO).

Thanks,
V.

Do all your network settings in ovirt-engine webadmin.


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

Hello,

I’m deploying oVirt for the first time and a question has emerged: what is the
good practice to enable LACP on oVirt Node? Should I create an 802.3ad bond
during the oVirt Node installation in Anaconda, or should it be done later,
inside the Hosted Engine manager?

In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond
for management and server VLANs, while eth1 and eth2 are Multipath iSCSI
disks (MPIO).

Thanks,
V.


Do all your network settings in ovirt-engine webadmin.