Re: [ovirt-users] VDSM multipath.conf - prevent automatic management of local devices

2017-11-27 Thread Ben Bradley

On 23/11/17 06:46, Maton, Brett wrote:

Might not be quite what you're after but adding

# RHEV PRIVATE

To /etc/multipath.conf will stop vdsm from changing the file.
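
For reference, a minimal sketch of where that marker sits (the exact
comment string VDSM looks for can vary between versions - newer VDSM
reportedly uses "# VDSM PRIVATE" - so check the header your VDSM writes):

    # RHEV PRIVATE
    defaults {
        # ...settings left exactly as VDSM originally generated them...
    }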


Hi there. Thanks for the reply.
Yes I am aware of that and it seems that's what I will have to do.
I have no problem with VDSM managing the file; I just wish it didn't 
automatically load local storage devices into multipathd.


I'm still not clear on the purpose of this automatic management though.
From what I can tell it makes no difference to hosts/clusters - i.e. you 
still have to add storage domains manually in oVirt.


Could anyone give any info on the purpose of this auto-management of 
local storage devices into multipathd in VDSM?
Then I will be able to make an informed decision as to the benefit of 
letting it continue.


Thanks, Ben



On 22 November 2017 at 22:42, Ben Bradley <list...@virtx.net> wrote:


Hi All

I have been running ovirt in a lab environment on CentOS 7 for
several months but have only just got around to really testing things.
I understand that VDSM manages multipath.conf and I understand that
I can make changes to that file and set it to private to prevent
VDSM making further changes.

I don't mind VDSM managing the file, but is it possible to configure it
to prevent local devices from being automatically added to multipathd?

Many times I have had to flush local devices from multipath when
they are added, removed or re-partitioned, or when the system is rebooted.
It doesn't even look like oVirt does anything with these devices
once they are set up in multipathd.

I'm assuming it's the VDSM additions to multipath that are causing
this. Can anyone else confirm this?

Is there a way to prevent new or local devices being added
automatically?

Regards
Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VDSM multipath.conf - prevent automatic management of local devices

2017-11-22 Thread Ben Bradley

Hi All

I have been running ovirt in a lab environment on CentOS 7 for several 
months but have only just got around to really testing things.
I understand that VDSM manages multipath.conf and I understand that I 
can make changes to that file and set it to private to prevent VDSM 
making further changes.


I don't mind VDSM managing the file, but is it possible to configure it 
to prevent local devices from being automatically added to multipathd?
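
Speculating a bit, but once the file is marked private I believe the local
disks could be blacklisted in /etc/multipath.conf, roughly like this (the
device names and WWID below are placeholders for whatever your local disks
really are; a WWID can be read with /usr/lib/udev/scsi_id -g -u -d /dev/sda):

    blacklist {
        devnode "^sd[ab]$"
        wwid    "3600508b1001c1234567890abcdef1234"
    }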


Many times I have had to flush local devices from multipath when they 
are added, removed or re-partitioned, or when the system is rebooted.
It doesn't even look like oVirt does anything with these devices once 
they are set up in multipathd.
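
For anyone searching later, the flushing I mean is roughly:

    multipath -ll          # list the current multipath maps
    multipath -f <map>     # flush the single map wrapping a local disk
    multipath -F           # flush all unused maps

(the flush only works if the map isn't in use, of course).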


I'm assuming it's the VDSM additions to multipath that are causing this. 
Can anyone else confirm this?


Is there a way to prevent new or local devices being added automatically?

Regards
Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-30 Thread Ben Bradley

On 28/09/17 22:27, Ben Bradley wrote:

On 28/09/17 08:32, Yaniv Kaul wrote:



On Wed, Sep 27, 2017 at 10:59 PM, Ben Bradley <list...@virtx.net> wrote:


Hi All

I'm looking to add a new host to my oVirt lab installation.
I'm going to share out some LVs from a separate box over iSCSI and
will hook the new host up to that.
I have 2 NICs on the storage host and 2 NICs on the new Ovirt host
to dedicate to the iSCSI traffic.
I also have 2 separate switches so I'm looking for redundancy here.
Both iSCSI host and oVirt host plugged into both switches.

If this was non-iSCSI traffic and without oVirt I would create
bonded interfaces in active-backup mode and layer the VLANs on top
of that.

But for iSCSI traffic without oVirt involved I wouldn't bother with
a bond and just use multipath.

 From scanning the oVirt docs it looks like there is an option to
have oVirt configure iSCSI multipathing.


Look for iSCSI bonding - that's the feature you are looking for.


Thanks for the replies.

By iSCSI bonding, do you mean the oVirt feature "iSCSI multipathing" as 
mentioned here 
https://www.ovirt.org/documentation/admin-guide/chap-Storage/ ?


Separate links seem to be the consensus then. These links are dedicated 
to iSCSI traffic, not shared; the ovirtmgmt bridge lives on top of an 
active-backup bond on other NICs.


Thanks, Ben


And an extra question about oVirt's iSCSI multipathing - should each 
path be a separate VLAN+subnet?
I assume separate VLANs are needed if you want to run separate physical 
fabrics.


Thanks, Ben




So what's the best/most-supported option for oVirt?
Manually create active-backup bonds so oVirt just sees a single
storage link between host and storage?
Or leave them as separate interfaces on each side and use oVirt's
multipath/bonding?

Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely
down to the fact I could use link-local addressing and not have to
worry about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI
supported by oVirt?


No, we do not support it. There has been some work in the area [1], but 
I'm not sure it is complete.

Y.

[1] 
https://gerrit.ovirt.org/#/q/status:merged+project:vdsm+branch:master+topic:ipv6-iscsi-target-support 




Thanks, Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-28 Thread Ben Bradley

On 28/09/17 08:32, Yaniv Kaul wrote:



On Wed, Sep 27, 2017 at 10:59 PM, Ben Bradley <list...@virtx.net> wrote:


Hi All

I'm looking to add a new host to my oVirt lab installation.
I'm going to share out some LVs from a separate box over iSCSI and
will hook the new host up to that.
I have 2 NICs on the storage host and 2 NICs on the new Ovirt host
to dedicate to the iSCSI traffic.
I also have 2 separate switches so I'm looking for redundancy here.
Both iSCSI host and oVirt host plugged into both switches.

If this was non-iSCSI traffic and without oVirt I would create
bonded interfaces in active-backup mode and layer the VLANs on top
of that.

But for iSCSI traffic without oVirt involved I wouldn't bother with
a bond and just use multipath.

 From scanning the oVirt docs it looks like there is an option to
have oVirt configure iSCSI multipathing.


Look for iSCSI bonding - that's the feature you are looking for.


Thanks for the replies.

By iSCSI bonding, do you mean the oVirt feature "iSCSI multipathing" as 
mentioned here 
https://www.ovirt.org/documentation/admin-guide/chap-Storage/ ?


Separate links seem to be the consensus then. These links are dedicated 
to iSCSI traffic, not shared; the ovirtmgmt bridge lives on top of an 
active-backup bond on other NICs.


Thanks, Ben


So what's the best/most-supported option for oVirt?
Manually create active-backup bonds so oVirt just sees a single
storage link between host and storage?
Or leave them as separate interfaces on each side and use oVirt's
multipath/bonding?

Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely
down to the fact I could use link-local addressing and not have to
worry about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI
supported by oVirt?


No, we do not support it. There has been some work in the area [1], but 
I'm not sure it is complete.

Y.

[1] 
https://gerrit.ovirt.org/#/q/status:merged+project:vdsm+branch:master+topic:ipv6-iscsi-target-support



Thanks, Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-27 Thread Ben Bradley

Hi All

I'm looking to add a new host to my oVirt lab installation.
I'm going to share out some LVs from a separate box over iSCSI and will 
hook the new host up to that.
I have 2 NICs on the storage host and 2 NICs on the new oVirt host to 
dedicate to the iSCSI traffic.
I also have 2 separate switches so I'm looking for redundancy here. Both 
iSCSI host and oVirt host plugged into both switches.


If this was non-iSCSI traffic and without oVirt I would create bonded 
interfaces in active-backup mode and layer the VLANs on top of that.
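
For illustration, the sort of thing I mean, with made-up interface names
and VLAN ID (treat it as a sketch rather than a recipe):

    nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=active-backup,miimon=100"
    nmcli con add type bond-slave ifname ens1f0 master bond0
    nmcli con add type bond-slave ifname ens1f1 master bond0
    nmcli con add type vlan ifname bond0.20 dev bond0 id 20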


But for iSCSI traffic without oVirt involved I wouldn't bother with a 
bond and just use multipath.
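
Roughly what I have in mind, outside oVirt (interface names and the portal
address are just examples): bind an iSCSI iface to each NIC, discover and
log in, then let multipathd group the two sessions into one map:

    iscsiadm -m iface -I iscsi-a --op=new
    iscsiadm -m iface -I iscsi-a --op=update -n iface.net_ifacename -v ens1f0
    iscsiadm -m iface -I iscsi-b --op=new
    iscsiadm -m iface -I iscsi-b --op=update -n iface.net_ifacename -v ens1f1
    iscsiadm -m discovery -t sendtargets -p 192.168.10.10
    iscsiadm -m node -L all
    multipath -ll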


From scanning the oVirt docs it looks like there is an option to have 
oVirt configure iSCSI multipathing.


So what's the best/most-supported option for oVirt?
Manually create active-backup bonds so oVirt just sees a single storage 
link between host and storage?
Or leave them as separate interfaces on each side and use oVirt's 
multipath/bonding?


Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely down 
to the fact I could use link-local addressing and not have to worry 
about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI supported 
by oVirt?


Thanks, Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Engine migration and host import

2017-09-25 Thread Ben Bradley

On 25/09/17 13:56, Simone Tiraboschi wrote:



On Sun, Sep 24, 2017 at 10:59 PM, Ben Bradley <list...@virtx.net> wrote:


On 23/09/17 00:27, Ben Bradley wrote:

On 20/09/17 15:41, Simone Tiraboschi wrote:


On Wed, Sep 20, 2017 at 12:30 AM, Ben Bradley <list...@virtx.net> wrote:


Hi All

I've been running a single-host ovirt setup for several months, having 
previously used a basic QEMU/KVM for a few years in lab environments.

I currently have the ovirt engine running at the bare-metal level, with 
the box also acting as the single host. I am also running this with 
local storage.

I now have an extra host I can use and would like to migrate to a 
hosted engine. The following documentation appears to be perfect and 
pretty clear about the steps involved:

https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/

and

https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment

However I'd like to try and get a bit more of an understanding of 
the process that happens behind the scenes during the cut-over from 
one engine to a new/hosted engine.

As an experiment I attempted the following:
- created a new VM within my current environment (bare-metal engine)
- created an engine-backup
- stopped the bare-metal engine
- restored the backup into the new VM
- ran engine-setup within the new VM
The new engine started up OK and I was able to connect and log in to 
the web UI. However my host was "unresponsive" and I was unable to 
manage it in any way from the VM. I shut the VM down and started the 
bare-metal ovirt-engine again on the host and everything worked as 
before. I didn't try very hard to make it work, however.

The magic missing from the basic process I tried is the 
synchronising and importing of the existing host, which is what the 
hosted-engine utility does.


No magic up to now: the hosts are simply in the DB you restored.
If the VM has network connectivity and the same host-name as 
the old machine you shouldn't see any issue.
If you changed the host-name moving to the VM, you should 
simply run engine-rename after the restore.


Thank you for the reply.
I tried this again this evening - again it failed.

The host is present within the new engine but I am unable to
manage it.
Host is marked as down but Activate is greyed out. I can get
into the "Edit" screen for the host and on right-click I get the
following options:
- Maintenance
- Confirm Host has been Rebooted
- SSH Management: Restart and Stop both available
The VMs are still running and accessible but are not listed as
running under the web interface. This time however I did lose
access to the ovirtmgmt bridge and the web interface, running
VMs and host SSH session were unavailable until I rebooted.
Luckily I left ovirt-engine service enabled to restart on boot
so everything came back up.

The engine URL is a CNAME so I just re-pointed to the hostname
of the VM just before running engine-setup after the restore.

This time though I have kept the new engine VM so I can power it
up again and try and debug.

I am going to try a few times over the weekend and I have set up
serial console access so I can do a bit more debugging.

Re: [ovirt-users] Engine migration and host import

2017-09-24 Thread Ben Bradley

On 23/09/17 00:27, Ben Bradley wrote:

On 20/09/17 15:41, Simone Tiraboschi wrote:


On Wed, Sep 20, 2017 at 12:30 AM, Ben Bradley <list...@virtx.net> wrote:


Hi All

I've been running a single-host ovirt setup for several months,
having previously used a basic QEMU/KVM for a few years in lab
environments.

I currently have the ovirt engine running at the bare-metal level,
with the box also acting as the single host. I am also running this
with local storage.

I now have an extra host I can use and would like to migrate to a
hosted engine. The following documentation appears to be perfect and
pretty clear about the steps involved:

https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/


and

https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment



However I'd like to try and get a bit more of an understanding of
the process that happens behind the scenes during the cut-over from
one engine to a new/hosted engine.

As an experiment I attempted the following:
- created a new VM within my current environment (bare-metal engine)
- creating an engine-backup
- stopped the bare-metal engine
- restored the backup into the new VM
- ran engine-setup within the new VM
The new engine started up ok and I was able to connect and login to
the web UI. However my host was "unresponsive" and I was unable to
manage it in any way from the VM. I shut the VM down and started the
bare-metal ovirt-engine again on the host and everything worked as
before. I didn't try very hard to make it work however.

The magic missing from the basic process I tried is the
synchronising and importing of the existing host, which is what the
hosted-engine utility does.


No magic up to now: the hosts are simply in the DB you restored.
If the VM has network connectivity and the same host-name as the old 
machine you shouldn't see any issue.
If you changed the host-name moving to the VM, you should simply run 
engine-rename after the restore.


Thank you for the reply.
I tried this again this evening - again it failed.

The host is present within the new engine but I am unable to manage it.
Host is marked as down but Activate is greyed out. I can get into 
the "Edit" screen for the host and on right-click I get the following 
options:

- Maintenance
- Confirm Host has been Rebooted
- SSH Management: Restart and Stop both available
The VMs are still running and accessible but are not listed as running 
under the web interface. This time however I did lose access to the 
ovirtmgmt bridge and the web interface, running VMs and host SSH session 
were unavailable until I rebooted.
Luckily I left ovirt-engine service enabled to restart on boot so 
everything came back up.


The engine URL is a CNAME so I just re-pointed to the hostname of the VM 
just before running engine-setup after the restore.


This time though I have kept the new engine VM so I can power it up 
again and try and debug.


I am going to try a few times over the weekend and I have set up serial 
console access so I can do a bit more debugging.


What ovirt logs could I check on the host to see if the new engine VM is 
able to connect and sync to the host properly?


Thanks, Ben


So I tried again to migrate my bare-metal host to a hosted VM but no 
luck. The host remained in an unresponsive state in the engine web UI and 
I was unable to manage the host in any way, although all VMs continued to run.


I did capture some logs though.
From the new engine VM... engine.log
https://p.bsd-unix.net/view/raw/666839d1

From the host...
mom.log  https://p.bsd-unix.net/view/raw/ac9379f0
supervdsm.log  https://p.bsd-unix.net/view/raw/f9018dec
vdsm.log  https://p.bsd-unix.net/view/raw/bcdcdb13

The engine VM is complaining about being unable to connect to the host, 
though I can see from tcpdump that communication is fine. I believe this 
is backed up by the pings seen in mom.log.
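
For what it's worth, the basic connectivity checks I know of (the hostname
is a placeholder; VDSM normally listens on 54321, and the TLS handshake may
not complete fully without the client cert, which is fine for this test):

    # on the host
    systemctl status vdsmd
    ss -tlnp | grep 54321
    # from the engine VM
    openssl s_client -connect myhost.example.com:54321 </dev/null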


Though I can see the following in vdsm.log... [vds] recovery: waiting 
for storage pool to go up (clientIF:569)

So I wonder if this is blocking the engine bringing the host up.

The host is running local storage, which I believe is a pretty recent 
addition to ovirt. So I could see how trying to run an engine VM on a 
host's local storage might cause weird issues.


I realise that there won't be HA with this setup until I create my 
second host and configure HA on the VM.


If I am unable to migrate from bare-metal -> engine VM then it doesn't 
give me any confidence that I would be able to restore a setup from a 
backup onto

Re: [ovirt-users] Engine migration and host import

2017-09-22 Thread Ben Bradley

On 20/09/17 15:41, Simone Tiraboschi wrote:


On Wed, Sep 20, 2017 at 12:30 AM, Ben Bradley <list...@virtx.net> wrote:


Hi All

I've been running a single-host ovirt setup for several months,
having previously used a basic QEMU/KVM for a few years in lab
environments.

I currently have the ovirt engine running at the bare-metal level,
with the box also acting as the single host. I am also running this
with local storage.

I now have an extra host I can use and would like to migrate to a
hosted engine. The following documentation appears to be perfect and
pretty clear about the steps involved:

https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/
and

https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment

However I'd like to try and get a bit more of an understanding of
the process that happens behind the scenes during the cut-over from
one engine to a new/hosted engine.

As an experiment I attempted the following:
- created a new VM within my current environment (bare-metal engine)
- creating an engine-backup
- stopped the bare-metal engine
- restored the backup into the new VM
- ran engine-setup within the new VM
The new engine started up ok and I was able to connect and login to
the web UI. However my host was "unresponsive" and I was unable to
manage it in any way from the VM. I shut the VM down and started the
bare-metal ovirt-engine again on the host and everything worked as
before. I didn't try very hard to make it work however.

The magic missing from the basic process I tried is the
synchronising and importing of the existing host, which is what the
hosted-engine utility does.


No magic up to now: the hosts are simply in the DB you restored.
If the VM has network connectivity and the same host-name as the old 
machine you shouldn't see any issue.
If you changed the host-name moving to the VM, you should simply run 
engine-rename after the restore.
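
In case it helps anyone searching the archives later: I believe the rename
tool referred to is run on the engine machine after the restore, roughly:

    /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename

(check your own install - the exact location may differ by version).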


Thank you for the reply.
I tried this again this evening - again it failed.

The host is present within the new engine but I am unable to manage it.
Host is marked as down but Activate is greyed out. I can get into 
the "Edit" screen for the host and on right-click I get the following 
options:

- Maintenance
- Confirm Host has been Rebooted
- SSH Management: Restart and Stop both available
The VMs are still running and accessible but are not listed as running 
under the web interface. This time however I did lose access to the 
ovirtmgmt bridge and the web interface, running VMs and host SSH session 
were unavailable until I rebooted.
Luckily I left ovirt-engine service enabled to restart on boot so 
everything came back up.


The engine URL is a CNAME so I just re-pointed to the hostname of the VM 
just before running engine-setup after the restore.
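
(Nothing cleverer than changing one DNS record - something like this
zone-file fragment, with made-up names:

    engine.lab.example.com.   IN CNAME   engine-vm.lab.example.com.

so the engine FQDN itself never changes.)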


This time though I have kept the new engine VM so I can power it up 
again and try and debug.


I am going to try a few times over the weekend and I have set up serial 
console access so I can do a bit more debugging.


What ovirt logs could I check on the host to see if the new engine VM is 
able to connect and sync to the host properly?
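
The default log locations on CentOS 7, as far as I know:

    # on the host
    tail -f /var/log/vdsm/vdsm.log
    tail -f /var/log/vdsm/supervdsm.log
    tail -f /var/log/vdsm/mom.log
    # on the engine VM
    tail -f /var/log/ovirt-engine/engine.log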


Thanks, Ben


The only detail is that hosted-engine-setup will try to add the host 
where you are running it to the engine and so you have to manually 
remove it just after the restore in order to avoid a failure there.



Can anyone describe that process in a bit more detail?
Is it possible to perform any part of that process manually?

I'm planning to expand my lab and dev environments so for me it's
important to discover the following...
- That I'm able to reverse the process back to bare-metal engine if
I ever need/want to
- That I can set up a new VM or host with nothing more than an
engine-backup but still be able to regain control of existing hosts
and VMs within the cluster

My main concern after my basic attempt at a "restore/migration"
above is that I might not be able to re-import/sync an existing host
after I have restored engine from a backup.

I have been able to export VMs to storage, remove them from ovirt,
re-install engine and restore, then import VMs from the export
domain. That all worked fine. But it involved shutting down all VMs
and removing their definitions from the environment.

Are there any pre-requisites to being able to re-import an existing
running host (and VMs), such as placing ALL hosts into maintenance
mode and shutting down any VMs first?

Any insight into host recovery/import/sync processes and steps will be
greatly appreciated.

[ovirt-users] Engine migration and host import

2017-09-19 Thread Ben Bradley

Hi All

I've been running a single-host ovirt setup for several months, having 
previously used a basic QEMU/KVM for a few years in lab environments.


I currently have the ovirt engine running at the bare-metal level, with 
the box also acting as the single host. I am also running this with 
local storage.


I now have an extra host I can use and would like to migrate to a hosted 
engine. The following documentation appears to be perfect and pretty 
clear about the steps involved: 
https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/ 
and 
https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment


However I'd like to try and get a bit more of an understanding of the 
process that happens behind the scenes during the cut-over from one 
engine to a new/hosted engine.


As an experiment I attempted the following:
- created a new VM within my current environment (bare-metal engine)
- created an engine-backup
- stopped the bare-metal engine
- restored the backup into the new VM
- ran engine-setup within the new VM
The new engine started up OK and I was able to connect and log in to the 
web UI. However my host was "unresponsive" and I was unable to manage it 
in any way from the VM. I shut the VM down and started the bare-metal 
ovirt-engine again on the host and everything worked as before. I didn't 
try very hard to make it work, however.
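
For completeness, roughly the shape of the backup/restore commands involved
(this is a sketch - options vary a little by version, so check
engine-backup --help before relying on it):

    # on the old bare-metal engine
    engine-backup --mode=backup --scope=all --file=engine.backup --log=backup.log
    # on the new VM, before running engine-setup
    engine-backup --mode=restore --file=engine.backup --log=restore.log \
        --provision-db --restore-permissions
    engine-setup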


The magic missing from the basic process I tried is the synchronising 
and importing of the existing host, which is what the hosted-engine 
utility does.


Can anyone describe that process in a bit more detail?
Is it possible to perform any part of that process manually?

I'm planning to expand my lab and dev environments so for me it's 
important to discover the following...
- That I'm able to reverse the process back to bare-metal engine if I 
ever need/want to
- That I can set up a new VM or host with nothing more than an 
engine-backup but still be able to regain control of existing hosts and 
VMs within the cluster


My main concern after my basic attempt at a "restore/migration" above is 
that I might not be able to re-import/sync an existing host after I have 
restored engine from a backup.


I have been able to export VMs to storage, remove them from ovirt, 
re-install engine and restore, then import VMs from the export domain. 
That all worked fine. But it involved shutting down all VMs and removing 
their definitions from the environment.


Are there any pre-requisites to being able to re-import an existing 
running host (and VMs), such as placing ALL hosts into maintenance mode 
and shutting down any VMs first?


Any insight into host recovery/import/sync processes and steps will be 
greatly appreciated.


Best regards
Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users