Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Nir Soffer
On Wed, Aug 31, 2016 at 3:15 PM, Rik Theys  wrote:
> Hi,
>
> On 08/31/2016 02:04 PM, Nir Soffer wrote:
>> On Wed, Aug 31, 2016 at 2:30 PM, Rik Theys  
>> wrote:
>>> Hi,
>>>
>>> On 08/31/2016 11:51 AM, Nir Soffer wrote:
 On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys  
 wrote:
> On 08/31/2016 09:43 AM, Rik Theys wrote:
>> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys  
>>> wrote:
 While rebooting one of the hosts in an oVirt cluster, I noticed that
 thin_check is run on the thin pool devices of one of the VM's on which
 the disk is assigned to.

 That seems strange to me. I would expect the host to stay clear of any
 VM disks.
>>>
>>> We expect the same thing, but unfortunately systemd and lvm try to
>>> auto activate stuff. This may be good idea for desktop system, but
>>> probably bad idea for a server and in particular a hypervisor.
>>>
>>> We don't have a solution yet, but you can try these:
>>>
>>> 1. disable lvmetad service
>>>
>>> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>>> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>>
>>> Edit /etc/lvm/lvm.conf:
>>>
>>> use_lvmetad = 0
>>>
>>> 2. disable lvm auto activation
>>>
>>> Edit /etc/lvm/lvm.conf:
>>>
>>> auto_activation_volume_list = []
>>>
>>> 3. both 1 and 2
>>>
>>
>> I've now applied both of the above and regenerated the initramfs and
>> rebooted and the host no longer lists the LV's of the VM. Since I
>> rebooted the host before without this issue, I'm not sure a single
>> reboot is enough to conclude it has fully fixed the issue.
>>
>> You mention that there's no solution yet. Does that mean the above
>> settings are not 100% certain to avoid this behaviour?
>>
>> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
>> include the PV's for the hypervisor disks (on which the OS is installed)
>> so the system lvm commands only touches those. Since vdsm is using its
>> own lvm.conf this should be OK for vdsm?
>
> This does not seem to work. The host can not be activated as it can't
> find his volume group(s). To be able to use the global_filter in
> /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
> to revert back to the default.
>
> I've moved my filter from global_filter to filter and that seems to
> work. When lvmetad is disabled I believe this should have the same
> effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also
> udev might ignore the filter setting?

 Right, global_filter exist so you can override filter used from the command
 line.

 For example, hiding certain devices from vdsm. This is why we are using
 filter in vdsm, leaving global_filter for the administrator.

 Can you explain why do you need global_filter or filter for the
 hypervisor disks?
>>>
>>> Based on the comment in /etc/lvm/lvm.conf regarding global_filter I
>>> concluded that not only lvmetad but also udev might perform action on
>>> the devices and I wanted to prevent that.
>>>
>>> I've now set the following settings in /etc/lvm/lvm.conf:
>>>
>>> use_lvmetad = 0
>>> auto_activation_volume_list = []
>>> filter = ["a|/dev/sda5|", "r|.*|" ]
>>
>> Better use /dev/disk/by-uuid/ to select the specific device, without
>> depending on device order.
>>
>>>
>>> On other systems I have kept the default filter.
>>>
 Do you have any issue with the current settings, disabling auto activation
 and lvmetad?
>>>
>>> Keeping those two disabled also seems to work. The ovirt LV's do show up
>>> in 'lvs' output but are not activated.
>>
>> Good
>
> When vdsm runs the lvchange command to activate the LV of a VM (so it
> can boot it), will LVM still try to scan the new LV for PV's (and thin
> pools, etc)? Is this also prevented by the auto_activation_volume_list
> parameter in this case?

I don't know, but I don't see why lvm would scan the lv for pvs and thin
pools; this should happen only in the guest.

We never had reports about such an issue.
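
If you want to double-check this on a host after a VM has started, something
like the following should show it (using the guest VG name "maildata" from
earlier in this thread as the example):

    # a guest-owned VG such as "maildata" should not appear here
    pvs
    vgs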

>>> I wanted to be absolutely sure the VM LV's were not touched, I added the
>>> filter on some of our hosts.
>>
>> The only problem with this filter is that it may break if you change the
>> host in some way, like boot from another disk.
>
> I am aware of this issue, but in this case I would rather mess with a
> hypervisor that no longer boots because it needs an updated lvm.conf
> than try to fix corrupted VM's because the host was accessing the disks
> of the VM while it was running on another node.
>> It would be nice if you could file a bug for this and mention the
>> configuration that fixes this issue; we certainly need to improve the way we
>> configure lvm.

Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Rik Theys
Hi,

On 08/31/2016 02:04 PM, Nir Soffer wrote:
> On Wed, Aug 31, 2016 at 2:30 PM, Rik Theys  wrote:
>> Hi,
>>
>> On 08/31/2016 11:51 AM, Nir Soffer wrote:
>>> On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys  
>>> wrote:
 On 08/31/2016 09:43 AM, Rik Theys wrote:
> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys  
>> wrote:
>>> While rebooting one of the hosts in an oVirt cluster, I noticed that
>>> thin_check is run on the thin pool devices of one of the VM's on which
>>> the disk is assigned to.
>>>
>>> That seems strange to me. I would expect the host to stay clear of any
>>> VM disks.
>>
>> We expect the same thing, but unfortunately systemd and lvm try to
>> auto activate stuff. This may be good idea for desktop system, but
>> probably bad idea for a server and in particular a hypervisor.
>>
>> We don't have a solution yet, but you can try these:
>>
>> 1. disable lvmetad service
>>
>> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>
>> Edit /etc/lvm/lvm.conf:
>>
>> use_lvmetad = 0
>>
>> 2. disable lvm auto activation
>>
>> Edit /etc/lvm/lvm.conf:
>>
>> auto_activation_volume_list = []
>>
>> 3. both 1 and 2
>>
>
> I've now applied both of the above and regenerated the initramfs and
> rebooted and the host no longer lists the LV's of the VM. Since I
> rebooted the host before without this issue, I'm not sure a single
> reboot is enough to conclude it has fully fixed the issue.
>
> You mention that there's no solution yet. Does that mean the above
> settings are not 100% certain to avoid this behaviour?
>
> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
> include the PV's for the hypervisor disks (on which the OS is installed)
> so the system lvm commands only touches those. Since vdsm is using its
> own lvm.conf this should be OK for vdsm?

 This does not seem to work. The host can not be activated as it can't
 find his volume group(s). To be able to use the global_filter in
 /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
 to revert back to the default.

 I've moved my filter from global_filter to filter and that seems to
 work. When lvmetad is disabled I believe this should have the same
 effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also
 udev might ignore the filter setting?
>>>
>>> Right, global_filter exist so you can override filter used from the command
>>> line.
>>>
>>> For example, hiding certain devices from vdsm. This is why we are using
>>> filter in vdsm, leaving global_filter for the administrator.
>>>
>>> Can you explain why do you need global_filter or filter for the
>>> hypervisor disks?
>>
>> Based on the comment in /etc/lvm/lvm.conf regarding global_filter I
>> concluded that not only lvmetad but also udev might perform action on
>> the devices and I wanted to prevent that.
>>
>> I've now set the following settings in /etc/lvm/lvm.conf:
>>
>> use_lvmetad = 0
>> auto_activation_volume_list = []
>> filter = ["a|/dev/sda5|", "r|.*|" ]
> 
> Better use /dev/disk/by-uuid/ to select the specific device, without
> depending on device order.
> 
>>
>> On other systems I have kept the default filter.
>>
>>> Do you have any issue with the current settings, disabling auto activation
>>> and lvmetad?
>>
>> Keeping those two disabled also seems to work. The ovirt LV's do show up
>> in 'lvs' output but are not activated.
> 
> Good

When vdsm runs the lvchange command to activate the LV of a VM (so it
can boot it), will LVM still try to scan the new LV for PV's (and thin
pools, etc)? Is this also prevented by the auto_activation_volume_list
parameter in this case?

>> I wanted to be absolutely sure the VM LV's were not touched, I added the
>> filter on some of our hosts.
> 
> The only problem with this filter is that it may break if you change the
> host in some way, like boot from another disk.

I am aware of this issue, but in this case I would rather mess with a
hypervisor that no longer boots because it needs an updated lvm.conf
than try to fix corrupted VM's because the host was accessing the disks
of the VM while it was running on another node.

> It would be nice if you could file a bug for this and mention the
> configuration that fixes this issue; we certainly need to improve the way we
> configure lvm.

Against which component of oVirt should I file this bug?

Regards,

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Nir Soffer
On Wed, Aug 31, 2016 at 2:30 PM, Rik Theys  wrote:
> Hi,
>
> On 08/31/2016 11:51 AM, Nir Soffer wrote:
>> On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys  
>> wrote:
>>> On 08/31/2016 09:43 AM, Rik Theys wrote:
 On 08/30/2016 04:47 PM, Nir Soffer wrote:
> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys  
> wrote:
>> While rebooting one of the hosts in an oVirt cluster, I noticed that
>> thin_check is run on the thin pool devices of one of the VM's on which
>> the disk is assigned to.
>>
>> That seems strange to me. I would expect the host to stay clear of any
>> VM disks.
>
> We expect the same thing, but unfortunately systemd and lvm try to
> auto activate stuff. This may be good idea for desktop system, but
> probably bad idea for a server and in particular a hypervisor.
>
> We don't have a solution yet, but you can try these:
>
> 1. disable lvmetad service
>
> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>
> Edit /etc/lvm/lvm.conf:
>
> use_lvmetad = 0
>
> 2. disable lvm auto activation
>
> Edit /etc/lvm/lvm.conf:
>
> auto_activation_volume_list = []
>
> 3. both 1 and 2
>

 I've now applied both of the above and regenerated the initramfs and
 rebooted and the host no longer lists the LV's of the VM. Since I
 rebooted the host before without this issue, I'm not sure a single
 reboot is enough to conclude it has fully fixed the issue.

 You mention that there's no solution yet. Does that mean the above
 settings are not 100% certain to avoid this behaviour?

 I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
 include the PV's for the hypervisor disks (on which the OS is installed)
 so the system lvm commands only touches those. Since vdsm is using its
 own lvm.conf this should be OK for vdsm?
>>>
>>> This does not seem to work. The host can not be activated as it can't
>>> find his volume group(s). To be able to use the global_filter in
>>> /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
>>> to revert back to the default.
>>>
>>> I've moved my filter from global_filter to filter and that seems to
>>> work. When lvmetad is disabled I believe this should have the same
>>> effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also
>>> udev might ignore the filter setting?
>>
>> Right, global_filter exist so you can override filter used from the command
>> line.
>>
>> For example, hiding certain devices from vdsm. This is why we are using
>> filter in vdsm, leaving global_filter for the administrator.
>>
>> Can you explain why do you need global_filter or filter for the
>> hypervisor disks?
>
> Based on the comment in /etc/lvm/lvm.conf regarding global_filter I
> concluded that not only lvmetad but also udev might perform action on
> the devices and I wanted to prevent that.
>
> I've now set the following settings in /etc/lvm/lvm.conf:
>
> use_lvmetad = 0
> auto_activation_volume_list = []
> filter = ["a|/dev/sda5|", "r|.*|" ]

Better to use /dev/disk/by-uuid/ to select the specific device, without
depending on device order.
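
For example, something along these lines (the symlink name is hypothetical;
take the real one from 'ls -l /dev/disk/by-id/' or /dev/disk/by-uuid/ on the
host):

    # accept only the host OS PV, reject everything else
    filter = [ "a|^/dev/disk/by-id/wwn-0x5000c5001234abcd-part5$|", "r|.*|" ]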

>
> On other systems I have kept the default filter.
>
>> Do you have any issue with the current settings, disabling auto activation
>> and lvmetad?
>
> Keeping those two disabled also seems to work. The ovirt LV's do show up
> in 'lvs' output but are not activated.

Good

> I wanted to be absolutely sure the VM LV's were not touched, I added the
> filter on some of our hosts.

The only problem with this filter is that it may break if you change the host
in some way, like boot from another disk.

It would be nice if you could file a bug for this and mention the configuration
that fixes this issue; we certainly need to improve the way we configure lvm.

Nir


Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Rik Theys
Hi,

On 08/31/2016 11:51 AM, Nir Soffer wrote:
> On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys  
> wrote:
>> On 08/31/2016 09:43 AM, Rik Theys wrote:
>>> On 08/30/2016 04:47 PM, Nir Soffer wrote:
 On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys  
 wrote:
> While rebooting one of the hosts in an oVirt cluster, I noticed that
> thin_check is run on the thin pool devices of one of the VM's on which
> the disk is assigned to.
>
> That seems strange to me. I would expect the host to stay clear of any
> VM disks.

 We expect the same thing, but unfortunately systemd and lvm try to
 auto activate stuff. This may be good idea for desktop system, but
 probably bad idea for a server and in particular a hypervisor.

 We don't have a solution yet, but you can try these:

 1. disable lvmetad service

 systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
 systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

 Edit /etc/lvm/lvm.conf:

 use_lvmetad = 0

 2. disable lvm auto activation

 Edit /etc/lvm/lvm.conf:

 auto_activation_volume_list = []

 3. both 1 and 2

>>>
>>> I've now applied both of the above and regenerated the initramfs and
>>> rebooted and the host no longer lists the LV's of the VM. Since I
>>> rebooted the host before without this issue, I'm not sure a single
>>> reboot is enough to conclude it has fully fixed the issue.
>>>
>>> You mention that there's no solution yet. Does that mean the above
>>> settings are not 100% certain to avoid this behaviour?
>>>
>>> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
>>> include the PV's for the hypervisor disks (on which the OS is installed)
>>> so the system lvm commands only touches those. Since vdsm is using its
>>> own lvm.conf this should be OK for vdsm?
>>
>> This does not seem to work. The host can not be activated as it can't
>> find his volume group(s). To be able to use the global_filter in
>> /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
>> to revert back to the default.
>>
>> I've moved my filter from global_filter to filter and that seems to
>> work. When lvmetad is disabled I believe this should have the same
>> effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also
>> udev might ignore the filter setting?
> 
> Right, global_filter exist so you can override filter used from the command
> line.
> 
> For example, hiding certain devices from vdsm. This is why we are using
> filter in vdsm, leaving global_filter for the administrator.
> 
> Can you explain why do you need global_filter or filter for the
> hypervisor disks?

Based on the comment in /etc/lvm/lvm.conf regarding global_filter, I
concluded that not only lvmetad but also udev might perform actions on
the devices, and I wanted to prevent that.

I've now set the following settings in /etc/lvm/lvm.conf:

use_lvmetad = 0
auto_activation_volume_list = []
filter = ["a|/dev/sda5|", "r|.*|" ]

On other systems I have kept the default filter.

> Do you have any issue with the current settings, disabling auto activation and
> lvmetad?

Keeping those two disabled also seems to work. The ovirt LV's do show up
in 'lvs' output but are not activated.
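
Something like the following should confirm that after a reboot (nothing
oVirt-specific here; the attr check is plain LVM):

    # lvmetad should be inactive and masked
    systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service
    # the storage domain LVs should be listed but not active:
    # the 5th character of the attr field should not be 'a'
    lvs -o vg_name,lv_name,lv_attr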

Since I wanted to be absolutely sure the VM LV's were not touched, I added
the filter on some of our hosts.

Regards,

Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07



Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Nir Soffer
On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys  wrote:
> On 08/31/2016 09:43 AM, Rik Theys wrote:
>> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys  
>>> wrote:
 While rebooting one of the hosts in an oVirt cluster, I noticed that
 thin_check is run on the thin pool devices of one of the VM's on which
 the disk is assigned to.

 That seems strange to me. I would expect the host to stay clear of any
 VM disks.
>>>
>>> We expect the same thing, but unfortunately systemd and lvm try to
>>> auto activate stuff. This may be good idea for desktop system, but
>>> probably bad idea for a server and in particular a hypervisor.
>>>
>>> We don't have a solution yet, but you can try these:
>>>
>>> 1. disable lvmetad service
>>>
>>> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>>> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>>
>>> Edit /etc/lvm/lvm.conf:
>>>
>>> use_lvmetad = 0
>>>
>>> 2. disable lvm auto activation
>>>
>>> Edit /etc/lvm/lvm.conf:
>>>
>>> auto_activation_volume_list = []
>>>
>>> 3. both 1 and 2
>>>
>>
>> I've now applied both of the above and regenerated the initramfs and
>> rebooted and the host no longer lists the LV's of the VM. Since I
>> rebooted the host before without this issue, I'm not sure a single
>> reboot is enough to conclude it has fully fixed the issue.
>>
>> You mention that there's no solution yet. Does that mean the above
>> settings are not 100% certain to avoid this behaviour?
>>
>> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
>> include the PV's for the hypervisor disks (on which the OS is installed)
>> so the system lvm commands only touches those. Since vdsm is using its
>> own lvm.conf this should be OK for vdsm?
>
> This does not seem to work. The host can not be activated as it can't
> find his volume group(s). To be able to use the global_filter in
> /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
> to revert back to the default.
>
> I've moved my filter from global_filter to filter and that seems to
> work. When lvmetad is disabled I believe this should have the same
> effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also
> udev might ignore the filter setting?

Right, global_filter exists so you can override the filter used from the
command line.

For example, hiding certain devices from vdsm. This is why we are using
filter in vdsm, leaving global_filter for the administrator.
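
As a rough illustration (the device path is only an example), both settings
live in the devices section of lvm.conf:

    devices {
        # vdsm can override this one per command with --config
        filter = [ "a|^/dev/sda5$|", "r|.*|" ]
        # this one is meant for the administrator and, per the lvm.conf
        # comments, is also honoured by the lvmetad/udev scanning
        global_filter = [ "a|^/dev/sda5$|", "r|.*|" ]
    }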

Can you explain why you need global_filter or filter for the
hypervisor disks?

Do you have any issue with the current settings, disabling auto activation and
lvmetad?

Nir


Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Rik Theys
On 08/31/2016 09:43 AM, Rik Theys wrote:
> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys  
>> wrote:
>>> While rebooting one of the hosts in an oVirt cluster, I noticed that
>>> thin_check is run on the thin pool devices of one of the VM's on which
>>> the disk is assigned to.
>>>
>>> That seems strange to me. I would expect the host to stay clear of any
>>> VM disks.
>>
>> We expect the same thing, but unfortunately systemd and lvm try to
>> auto activate stuff. This may be good idea for desktop system, but
>> probably bad idea for a server and in particular a hypervisor.
>>
>> We don't have a solution yet, but you can try these:
>>
>> 1. disable lvmetad service
>>
>> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>
>> Edit /etc/lvm/lvm.conf:
>>
>> use_lvmetad = 0
>>
>> 2. disable lvm auto activation
>>
>> Edit /etc/lvm/lvm.conf:
>>
>> auto_activation_volume_list = []
>>
>> 3. both 1 and 2
>>
> 
> I've now applied both of the above and regenerated the initramfs and
> rebooted and the host no longer lists the LV's of the VM. Since I
> rebooted the host before without this issue, I'm not sure a single
> reboot is enough to conclude it has fully fixed the issue.
> 
> You mention that there's no solution yet. Does that mean the above
> settings are not 100% certain to avoid this behaviour?
> 
> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
> include the PV's for the hypervisor disks (on which the OS is installed)
> so the system lvm commands only touches those. Since vdsm is using its
> own lvm.conf this should be OK for vdsm?

This does not seem to work. The host cannot be activated as it can't
find its volume group(s). To be able to use the global_filter in
/etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
to revert back to the default.

I've moved my filter from global_filter to filter and that seems to
work. When lvmetad is disabled, I believe this should have the same
effect as global_filter? The comments in /etc/lvm/lvm.conf also indicate
that udev might ignore the filter setting?

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07



Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-30 Thread Nir Soffer
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys  wrote:
>
> Hi,
>
> While rebooting one of the hosts in an oVirt cluster, I noticed that
> thin_check is run on the thin pool devices of one of the VM's on which
> the disk is assigned to.
>
> That seems strange to me. I would expect the host to stay clear of any
> VM disks.

We expect the same thing, but unfortunately systemd and lvm try to
auto-activate stuff. This may be a good idea for a desktop system, but it is
probably a bad idea for a server, and in particular for a hypervisor.

We don't have a solution yet, but you can try these:

1. disable lvmetad service

systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

Edit /etc/lvm/lvm.conf:

use_lvmetad = 0

2. disable lvm auto activation

Edit /etc/lvm/lvm.conf:

auto_activation_volume_list = []

3. both 1 and 2
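
Taken together, and including the initramfs rebuild mentioned elsewhere in
this thread, the host-side change is roughly:

    systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
    systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

    # in /etc/lvm/lvm.conf
    use_lvmetad = 0
    auto_activation_volume_list = []

    # so that early boot uses the same settings
    dracut -f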

Currently we don't touch lvm.conf; we override it using the --config option
on the command line when running lvm commands from vdsm. But this does not
help with automatic activation at startup time, or with lvm peeking into LVs
owned by VMs. We will probably introduce the changes above in the future.
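
To illustrate the idea (this is only a sketch, not the exact command vdsm
runs; the multipath device path is a placeholder, and the VG/LV names are
taken from the lvs output below):

    lvchange --config 'devices { filter = [ "a|^/dev/mapper/36001405f00000000000$|", "r|.*|" ] }' \
        --available y a797e417-adeb-4611-b4cf-10844132eef4/67499042-4c19-425c-9106-b3c382edbec1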

> When I look at the 'lvs' output on the host, it seems to have activated
> the VM's volume group that has the thin pool in it. It has also
> activated one other volume group (and it's LV's) that is _not_ a thin
> pool. All other VM disks are shown as LV's with their uuid. See output
> below.
>
> Is this expected behaviour? I would hope/expect that the host will never
> touch any VM disks. Am I supposed to configure an LVM filter myself to
> prevent this issue?
>
> We had a thin pool completely break on an VM a while ago and I never
> determined the root cause (was a test VM). If the host changed something
> on the disk while the VM was running on the other host this might have
> been the root cause.
>
> Regards,
>
> Rik
>
>
>   LV                                   VG                                   Attr   LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   05a0fc6e-b43a-47b4-8979-92458dc1c76b a797e417-adeb-4611-b4cf-10844132eef4 -wi---   10.00g
>   08d25785-5b52-4795-863c-222b3416ed8d a797e417-adeb-4611-b4cf-10844132eef4 -wi---    3.00g
>   0a9d0272-afc5-4d2e-87e4-ce32312352ee a797e417-adeb-4611-b4cf-10844132eef4 -wi---   40.00g
>   0bc1cdc9-edfc-4192-9017-b6d55c01195d a797e417-adeb-4611-b4cf-10844132eef4 -wi---   17.00g
>   181e6a61-0d4b-41c9-97a0-ef6b6d2e85e4 a797e417-adeb-4611-b4cf-10844132eef4 -wi---  400.00g
>   1913b207-d229-4874-9af0-c7e019aea51d a797e417-adeb-4611-b4cf-10844132eef4 -wi---  128.00m
>   1adf2b74-60c2-4c16-8c92-e7da81642bc6 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    5.00g
>   1e64f92f-d266-4746-ada6-0f7f20b158a6 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   65.00g
>   22a0bc66-7c74-486a-9be3-bb8112c6fc9e a797e417-adeb-4611-b4cf-10844132eef4 -wi---    9.00g
>   2563b835-58be-4a4e-ac02-96556c5e8c1c a797e417-adeb-4611-b4cf-10844132eef4 -wi---    9.00g
>   27edac09-6438-4be4-b930-8834e7eecad5 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    5.00g
>   2b57d8e1-d304-47a3-aa2f-7f44804f5c66 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   15.00g
>   2f073f9b-4e46-4d7b-9012-05a26fede87d a797e417-adeb-4611-b4cf-10844132eef4 -wi---   36.00g
>   310ef6e4-0759-4f1a-916f-6c757e15feb5 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    5.00g
>   3125ac02-64cb-433b-a1c1-7105a11a1d1c a797e417-adeb-4611-b4cf-10844132eef4 -wi---   36.00g
>   315e9ad9-7787-40b6-8b43-d766b08708e2 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   64.00g
>   32e4e88d-5482-40f6-a906-ad8907d773ce a797e417-adeb-4611-b4cf-10844132eef4 -wi---   10.00g
>   32f0948b-324f-4e73-937a-fef25abd9bdc a797e417-adeb-4611-b4cf-10844132eef4 -wi---   32.00g
>   334e0961-2dad-4d58-86d9-bfec3a4e83e4 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   11.00g
>   36c4a274-cbbe-4a03-95e5-8b98ffde0b1b a797e417-adeb-4611-b4cf-10844132eef4 -wi---    1.17t
>   36d95661-308f-42e6-a887-ddd4d4ed04e9 a797e417-adeb-4611-b4cf-10844132eef4 -wi---  250.00g
>   39d88a73-4e59-44a7-b773-5bc446817415 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    4.00g
>   3aeb1c19-7113-4a4b-9e27-849af904ed41 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   32.00g
>   43ac529e-4295-4e77-ae06-963104950921 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   33.00g
>   462f1cc4-e3a7-4fcd-b46c-7f356bfc5589 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   10.00g
>   479fc9a4-592c-4287-b8fe-db7600360676 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    1.00t
>   4d05ce38-2d33-4f01-ae33-973ff8eeb897 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    4.00g
>   4d307170-7d45-4bb9-9885-68aeacee5f33 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   24.00g
>   4dca97ab-25de-4b69-9094-bf02698abf0e
> 

Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-30 Thread Rik Theys
On 08/30/2016 02:51 PM, Rik Theys wrote:
> While rebooting one of the hosts in an oVirt cluster, I noticed that
> thin_check is run on the thin pool devices of one of the VM's on which
> the disk is assigned to.
> 
> That seems strange to me. I would expect the host to stay clear of any
> VM disks.

> We had a thin pool completely break on an VM a while ago and I never
> determined the root cause (was a test VM). If the host changed something
> on the disk while the VM was running on the other host this might have
> been the root cause.

I just rebooted the affected VM and indeed the system fails to activate
the thin pool now :-(.

When I try to activate it I get:

Check of pool maildata/pool0 failed: (status:1). Manual repair required!
0 logical volume(s) in volume group "maildata" now active.
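
For what it's worth, the usual starting point for this kind of error is LVM's
own repair path; it is not guaranteed to get the data back, so take an image
of the disk first if the data matters:

    # attempt an automatic repair of the thin pool metadata
    lvconvert --repair maildata/pool0
    # then retry activation
    lvchange -ay maildata/pool0
    vgchange -ay maildata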

Regards,

Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07



[ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-30 Thread Rik Theys
Hi,

While rebooting one of the hosts in an oVirt cluster, I noticed that
thin_check is run on the thin pool devices inside the disk of one of the VMs.

That seems strange to me. I would expect the host to stay clear of any
VM disks.

When I look at the 'lvs' output on the host, it seems to have activated
the VM's volume group that has the thin pool in it. It has also
activated one other volume group (and its LVs) that is _not_ a thin
pool. All other VM disks are shown as LVs with their UUID. See the output
below.

Is this expected behaviour? I would hope/expect that the host will never
touch any VM disks. Am I supposed to configure an LVM filter myself to
prevent this issue?

We had a thin pool completely break on a VM a while ago and I never
determined the root cause (it was a test VM). If the host changed something
on the disk while the VM was running on the other host, this might have
been the root cause.

Regards,

Rik


  LV                                   VG                                   Attr   LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  05a0fc6e-b43a-47b4-8979-92458dc1c76b a797e417-adeb-4611-b4cf-10844132eef4 -wi---   10.00g
  08d25785-5b52-4795-863c-222b3416ed8d a797e417-adeb-4611-b4cf-10844132eef4 -wi---    3.00g
  0a9d0272-afc5-4d2e-87e4-ce32312352ee a797e417-adeb-4611-b4cf-10844132eef4 -wi---   40.00g
  0bc1cdc9-edfc-4192-9017-b6d55c01195d a797e417-adeb-4611-b4cf-10844132eef4 -wi---   17.00g
  181e6a61-0d4b-41c9-97a0-ef6b6d2e85e4 a797e417-adeb-4611-b4cf-10844132eef4 -wi---  400.00g
  1913b207-d229-4874-9af0-c7e019aea51d a797e417-adeb-4611-b4cf-10844132eef4 -wi---  128.00m
  1adf2b74-60c2-4c16-8c92-e7da81642bc6 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    5.00g
  1e64f92f-d266-4746-ada6-0f7f20b158a6 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   65.00g
  22a0bc66-7c74-486a-9be3-bb8112c6fc9e a797e417-adeb-4611-b4cf-10844132eef4 -wi---    9.00g
  2563b835-58be-4a4e-ac02-96556c5e8c1c a797e417-adeb-4611-b4cf-10844132eef4 -wi---    9.00g
  27edac09-6438-4be4-b930-8834e7eecad5 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    5.00g
  2b57d8e1-d304-47a3-aa2f-7f44804f5c66 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   15.00g
  2f073f9b-4e46-4d7b-9012-05a26fede87d a797e417-adeb-4611-b4cf-10844132eef4 -wi---   36.00g
  310ef6e4-0759-4f1a-916f-6c757e15feb5 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    5.00g
  3125ac02-64cb-433b-a1c1-7105a11a1d1c a797e417-adeb-4611-b4cf-10844132eef4 -wi---   36.00g
  315e9ad9-7787-40b6-8b43-d766b08708e2 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   64.00g
  32e4e88d-5482-40f6-a906-ad8907d773ce a797e417-adeb-4611-b4cf-10844132eef4 -wi---   10.00g
  32f0948b-324f-4e73-937a-fef25abd9bdc a797e417-adeb-4611-b4cf-10844132eef4 -wi---   32.00g
  334e0961-2dad-4d58-86d9-bfec3a4e83e4 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   11.00g
  36c4a274-cbbe-4a03-95e5-8b98ffde0b1b a797e417-adeb-4611-b4cf-10844132eef4 -wi---    1.17t
  36d95661-308f-42e6-a887-ddd4d4ed04e9 a797e417-adeb-4611-b4cf-10844132eef4 -wi---  250.00g
  39d88a73-4e59-44a7-b773-5bc446817415 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    4.00g
  3aeb1c19-7113-4a4b-9e27-849af904ed41 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   32.00g
  43ac529e-4295-4e77-ae06-963104950921 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   33.00g
  462f1cc4-e3a7-4fcd-b46c-7f356bfc5589 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   10.00g
  479fc9a4-592c-4287-b8fe-db7600360676 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    1.00t
  4d05ce38-2d33-4f01-ae33-973ff8eeb897 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    4.00g
  4d307170-7d45-4bb9-9885-68aeacee5f33 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   24.00g
  4dca97ab-25de-4b69-9094-bf02698abf0e a797e417-adeb-4611-b4cf-10844132eef4 -wi---    8.00g
  54013d0f-fd47-4c91-aa31-f6dd2d4eeaad a797e417-adeb-4611-b4cf-10844132eef4 -wi---   64.00g
  55a2a2aa-88f2-4aca-bbfd-052f6be3079d a797e417-adeb-4611-b4cf-10844132eef4 -wi---    4.00g
  57131782-4eda-4fbd-aeaa-af49d12a79e9 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   14.00g
  5884b686-6a2e-4166-9727-303097a349fe a797e417-adeb-4611-b4cf-10844132eef4 -wi---   32.00g
  6164a2b2-e84b-4bf9-abab-f7bc19e1099d a797e417-adeb-4611-b4cf-10844132eef4 -wi---   10.00g
  62c8a0be-4d7f-4297-af1c-5c9d794f4ba0 a797e417-adeb-4611-b4cf-10844132eef4 -wi---  500.00g
  67499042-4c19-425c-9106-b3c382edbec1 a797e417-adeb-4611-b4cf-10844132eef4 -wi-a-    1.00t
  6ba95bb2-04ed-4512-95f4-34e38168f9e9 a797e417-adeb-4611-b4cf-10844132eef4 -wi---   22.00g
  6ebf91c2-2051-4e2b-b1c3-57350834f236 a797e417-adeb-4611-b4cf-10844132eef4 -wi---  100.00g
  70c834a0-05f4-469f-877c-24101c7abf25 a797e417-adeb-4611-b4cf-10844132eef4 -wi---    6.00g
  71401df8-19dd-45d0-be33-f07937fe042d a797e417-adeb-4611-b4cf-10844132eef4 -wi---    6.00g
  73d5eb12-88e3-44a2-a791-9b7ad5b77953