Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?

2015-01-27 Thread Nir Soffer
- Original Message -
 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: Nir Soffer nsof...@redhat.com
 Cc: Dan Kenigsberg dan...@redhat.com, users users@ovirt.org, Yeela 
 Kaplan ykap...@redhat.com, Benjamin
 Marzinski bmarz...@redhat.com
 Sent: Monday, January 26, 2015 8:46:52 PM
 Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?
 
 On Mon, Jan 26, 2015 at 1:37 PM, Nir Soffer nsof...@redhat.com wrote:
 
 
Any suggestion appreciated.
   
Current multipath.conf (where I also commented out the getuid_callout
  that
is not used anymore):
   
[root@tekkaman setup]# cat /etc/multipath.conf
# RHEV REVISION 1.1
   
blacklist {
devnode ^(sda|sdb)[0-9]*
}
 
 
  I think what happened is:
 
  1. 3.5.1 had a new multipath version
 
 
 what do you mean by new multipath version?

I mean a new multipath.conf version:

# RHEV REVISION 1.1

When vdsm finds that the version recorded in the existing multipath.conf differs
from the version it expects, it replaces the current multipath.conf with a new
version.
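
For reference, the marker vdsm compares is simply the first comment line of the
file, so something like

# head -n 1 /etc/multipath.conf

shows which revision is currently recorded there.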

 I currently have device-mapper-multipath-0.4.9-56.fc20.x86_64
 The system went from f18 to f19 and then to f20, via fedup in both cases.
 In my yum.log files I see this from January 2013, when I was on Fedora 18:
 Jan 07 00:11:44 Installed: device-mapper-multipath-0.4.9-36.fc18.x86_64
 
 I then upgraded to f19 on 24th November 2013 and device-mapper-multipath
 was the one pushed during the fedup update:
 device-mapper-multipath-libs-0.4.9-51.fc19.x86_64
 
 Then on 12th of November 2014 I passed from f19 to f20 and fedup pushed
 device-mapper-multipath-0.4.9-56.fc20.x86_64
 which is my current version.
 At that time I also passed from oVirt 3.4.4 to 3.5.0,
 and I didn't notice any problem with my internal disk.
 
 It was sufficient to keep this inside:
 blacklist {
devnode sd[a-b]
 }
 
 At the head of the file I only had:
 # RHEV REVISION 1.0

This is why vdsm replaced the file.

 
 No reference to
 # RHEV PRIVATE
 
 But right now, as I'm writing, I notice that my blacklist rule after the
 migration to 3.5.1 was
 
 devnode ^(sda|sdb)[0-9]*
 
 which probably blacklists only the partitions and not the full disks ;-)
 So I'm going to check with the old blacklist option and/or the PRIVATE
 label as suggested by Dan...
 
 Probably, in passing from RHEV REVISION 1.0 to 1.1, the original blacklist part
 was thrown away...
 
 2. So vdsm upgraded the local file
  3. blacklist above was removed
 (it should exist in /etc/multipath.bak)
 
 
 It seems it didn't generate any multipath.conf.bak ...
 

In 3.5.1 it should rotate multipath.conf, saving multipath.conf.1 ...

If you don't find the backup file after multipath.conf was updated, this is a
bug.
Can you open a bug about it?
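
A quick way to check is a plain directory listing (nothing vdsm-specific
assumed here):

# ls -l /etc/multipath.conf* /etc/multipath.bak 2>/dev/null

which should list multipath.conf.1 if the rotation happened.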

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?

2015-01-26 Thread Nir Soffer
- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Gianluca Cecchi gianluca.cec...@gmail.com, nsof...@redhat.com
 Cc: users users@ovirt.org, ykap...@redhat.com
 Sent: Monday, January 26, 2015 2:09:23 PM
 Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?
 
 On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:
  Hello,
  on my all-in-one installation @home I had 3.5.0 with F20.
  Today I updated to 3.5.1.
  
   It seems it modified /etc/multipath.conf, preventing me from using my second
   disk at all...
   
   My system has an internal SSD disk (sda) for the OS and one local storage domain,
   and another disk (sdb) with some partitions (on one of them there is also
   another local storage domain).
  
   At reboot I was put into emergency mode because the partitions on the sdb disk
   could not be mounted (they were busy).
   It took me some time to understand that the problem was that sdb had become
   managed as a multipath device and was therefore busy, so its partitions could
   not be mounted.
  
   Here you can see what multipath looked like after the update and reboot:
  https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing
  
  No device-mapper-multipath update in yum.log
  
   Also, it seems that after I changed it, it was reverted again at boot (I
   don't know whether initrd/dracut or vdsmd was responsible), so in the meantime
   the only thing I could do was make the file immutable with
  
  chattr +i /etc/multipath.conf
 
  The supported method of achieving this is to place # RHEV PRIVATE on
  the second line of your hand-modified multipath.conf.
 
  I do not understand why this has happened only after the upgrade to 3.5.1 -
  3.5.0 should have reverted your multipath.conf just as well during each
  vdsm startup.
 
  The good thing is that this annoying behavior has been dropped from the
  master branch, so that 3.6 is not going to have it. Vdsm will no longer mess
  with other services' config files while it is running; the logic has moved to
  `vdsm-tool configure`.
 
  
  and so I was able to reboot and verify that my partitions on sdb were OK
  and could be mounted (to be safe I also ran an fsck against them)
  
  The update started around 19:20 and finished at 19:34;
  here is the log in gzip format:
  https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing
  
  Reboot was done around 21:10-21:14
  
  Here is my /var/log/messages in gzip format, where you can see the latest days:
  https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing
  
  
  Any suggestion appreciated.
  
  Current multipath.conf (where I also commented out the getuid_callout that
  is not used anymore):
  
  [root@tekkaman setup]# cat /etc/multipath.conf
  # RHEV REVISION 1.1
  
  blacklist {
  devnode ^(sda|sdb)[0-9]*
  }


I think what happened is:

1. 3.5.1 had a new multipath version
2. So vdsm upgraded the local file
3. blacklist above was removed
   (it should exist in /etc/multipath.bak)

To prevent vdsm from overwriting local changes, you have to mark the file as
private, as Dan suggests.

Seems to be related to the find_multipaths = yes bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1173290

Ben, can you confirm that this is the same issue?

  
  defaults {
  polling_interval 5
  #getuid_callout  /usr/lib/udev/scsi_id --whitelisted
  --replace-whitespace --device=/dev/%n
  no_path_retry   fail
  user_friendly_names no
  flush_on_last_del   yes
  fast_io_fail_tmo 5
  dev_loss_tmo 30
  max_fds 4096
  }
 

Regards,
Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?

2015-01-26 Thread Fabian Deutsch


- Original Message -
 - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: Gianluca Cecchi gianluca.cec...@gmail.com, nsof...@redhat.com
  Cc: users users@ovirt.org, ykap...@redhat.com
  Sent: Monday, January 26, 2015 2:09:23 PM
  Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?
  
  On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:
   Hello,
   on my all-in-one installation @home I had 3.5.0 with F20.
   Today I updated to 3.5.1.
   
    It seems it modified /etc/multipath.conf, preventing me from using my
    second disk at all...
    
    My system has an internal SSD disk (sda) for the OS and one local storage
    domain, and another disk (sdb) with some partitions (on one of them there is
    also another local storage domain).
   
    At reboot I was put into emergency mode because the partitions on the sdb
    disk could not be mounted (they were busy).
    It took me some time to understand that the problem was that sdb had become
    managed as a multipath device and was therefore busy, so its partitions
    could not be mounted.
   
    Here you can see what multipath looked like after the update and reboot:
   https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing
   
   No device-mapper-multipath update in yum.log
   
    Also, it seems that after I changed it, it was reverted again at boot
    (I don't know whether initrd/dracut or vdsmd was responsible), so in the
    meantime the only thing I could do was make the file immutable with
   
   chattr +i /etc/multipath.conf
  
   The supported method of achieving this is to place # RHEV PRIVATE on
   the second line of your hand-modified multipath.conf.
  
   I do not understand why this has happened only after the upgrade to 3.5.1 -
   3.5.0 should have reverted your multipath.conf just as well during each
   vdsm startup.
  
   The good thing is that this annoying behavior has been dropped from the
   master branch, so that 3.6 is not going to have it. Vdsm will no longer mess
   with other services' config files while it is running; the logic has moved to
   `vdsm-tool configure`.
  
   
    and so I was able to reboot and verify that my partitions on sdb were OK
    and could be mounted (to be safe I also ran an fsck against them)
   
    The update started around 19:20 and finished at 19:34;
    here is the log in gzip format:
   https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing
   
   Reboot was done around 21:10-21:14
   
    Here is my /var/log/messages in gzip format, where you can see the latest days:
   https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing
   
   
   Any suggestion appreciated.
   
   Current multipath.conf (where I also commented out the getuid_callout
   that
   is not used anymore):
   
   [root@tekkaman setup]# cat /etc/multipath.conf
   # RHEV REVISION 1.1
   
   blacklist {
   devnode ^(sda|sdb)[0-9]*
   }
 
 
 I think what happened is:
 
  1. 3.5.1 had a new multipath version
  2. So vdsm upgraded the local file
  3. blacklist above was removed
     (it should exist in /etc/multipath.bak)
 
  To prevent vdsm from overwriting local changes, you have to mark the file as
  private, as Dan suggests.
 
  Seems to be related to the find_multipaths = yes bug:
 https://bugzilla.redhat.com/show_bug.cgi?id=1173290

The symptoms above sound exactly like this issue.
When find_multipaths is no (the default when the directive is not present),
all non-blacklisted devices get claimed by multipath, and this is what
happened above.

Blacklisting the devices works, or adding find_multipaths yes should also
work, because in that case only devices which have more than one path (or are
explicitly named) will be claimed by multipath.
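
For illustration, the directive goes into the defaults section of
/etc/multipath.conf, roughly like this (just a sketch - merge it with the
defaults you already have):

defaults {
    find_multipaths yes
}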

My 2ct.

- fabian

 Ben, can you confirm that this is the same issue?
 
   
   defaults {
    polling_interval 5
   #getuid_callout  /usr/lib/udev/scsi_id --whitelisted
   --replace-whitespace --device=/dev/%n
   no_path_retry   fail
   user_friendly_names no
   flush_on_last_del   yes
    fast_io_fail_tmo 5
    dev_loss_tmo 30
   max_fds 4096
   }
  
 
 Regards,
 Nir
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?

2015-01-26 Thread Gianluca Cecchi
On Mon, Jan 26, 2015 at 1:37 PM, Nir Soffer nsof...@redhat.com wrote:


   Any suggestion appreciated.
  
   Current multipath.conf (where I also commented out the getuid_callout
 that
   is not used anymore):
  
   [root@tekkaman setup]# cat /etc/multipath.conf
   # RHEV REVISION 1.1
  
   blacklist {
   devnode ^(sda|sdb)[0-9]*
   }


 I think what happened is:

 1. 3.5.1 had a new multipath version


what do you mean by new multipath version?
I currently have device-mapper-multipath-0.4.9-56.fc20.x86_64
The system went from f18 to f19 and then to f20, via fedup in both cases.
In my yum.log files I see this from January 2013, when I was on Fedora 18:
Jan 07 00:11:44 Installed: device-mapper-multipath-0.4.9-36.fc18.x86_64

I then upgraded to f19 on 24th November 2013 and device-mapper-multipath
was the one pushed during the fedup update:
device-mapper-multipath-libs-0.4.9-51.fc19.x86_64

Then on 12th of November 2014 I passed from f19 to f20 and fedup pushed
device-mapper-multipath-0.4.9-56.fc20.x86_64
which is my current version.
At that time I also passed from oVirt 3.4.4 to 3.5.0,
and I didn't notice any problem with my internal disk.

It was sufficient to keep this inside:
blacklist {
   devnode sd[a-b]
}

At the head of the file I only had:
# RHEV REVISION 1.0

No reference to
# RHEV PRIVATE

But right now, as I'm writing, I notice that my blacklist rule after the
migration to 3.5.1 was

devnode ^(sda|sdb)[0-9]*

which probably blacklists only the partitions and not the full disks ;-)
So I'm going to check with the old blacklist option and/or the PRIVATE
label as suggested by Dan...

Probably, in passing from RHEV REVISION 1.0 to 1.1, the original blacklist part
was thrown away...
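
For what it's worth, a blacklist stanza that explicitly covers both the whole
disks and their partitions could look something like this (only a sketch, using
the device names from this thread; it will survive vdsm restarts only if the
file also carries the # RHEV PRIVATE marker Dan mentioned):

blacklist {
    devnode "^sd[ab][0-9]*"
}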

2. So vdsm upgraded the local file
 3. blacklist above was removed
(it should exist in /etc/multipath.bak)


It seems it didn't generate any multipath.conf.bak ...
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?

2015-01-26 Thread Benjamin Marzinski
On Mon, Jan 26, 2015 at 10:27:23AM -0500, Fabian Deutsch wrote:
 
 
 - Original Message -
  - Original Message -
   From: Dan Kenigsberg dan...@redhat.com
   To: Gianluca Cecchi gianluca.cec...@gmail.com, nsof...@redhat.com
   Cc: users users@ovirt.org, ykap...@redhat.com
   Sent: Monday, January 26, 2015 2:09:23 PM
   Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?
   
   On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:
Hello,
on my all-in-one installation @home I had 3.5.0 with F20.
Today I updated to 3.5.1.

     It seems it modified /etc/multipath.conf, preventing me from using my
     second disk at all...
     
     My system has an internal SSD disk (sda) for the OS and one local storage
     domain, and another disk (sdb) with some partitions (on one of them there
     is also another local storage domain).

     At reboot I was put into emergency mode because the partitions on the sdb
     disk could not be mounted (they were busy).
     It took me some time to understand that the problem was that sdb had become
     managed as a multipath device and was therefore busy, so its partitions
     could not be mounted.

     Here you can see what multipath looked like after the update and reboot:
https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing

No device-mapper-multipath update in yum.log

     Also, it seems that after I changed it, it was reverted again at boot
     (I don't know whether initrd/dracut or vdsmd was responsible), so in the
     meantime the only thing I could do was make the file immutable with

chattr +i /etc/multipath.conf
   
    The supported method of achieving this is to place # RHEV PRIVATE on
    the second line of your hand-modified multipath.conf.
   
    I do not understand why this has happened only after the upgrade to 3.5.1 -
    3.5.0 should have reverted your multipath.conf just as well during each
    vdsm startup.
   
    The good thing is that this annoying behavior has been dropped from the
    master branch, so that 3.6 is not going to have it. Vdsm will no longer mess
    with other services' config files while it is running; the logic has moved
    to `vdsm-tool configure`.
   

     and so I was able to reboot and verify that my partitions on sdb were OK
     and could be mounted (to be safe I also ran an fsck against them)

     The update started around 19:20 and finished at 19:34;
     here is the log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing

Reboot was done around 21:10-21:14

     Here is my /var/log/messages in gzip format, where you can see the latest days:
https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing


Any suggestion appreciated.

Current multipath.conf (where I also commented out the getuid_callout
that
is not used anymore):

[root@tekkaman setup]# cat /etc/multipath.conf
# RHEV REVISION 1.1

blacklist {
devnode ^(sda|sdb)[0-9]*
}
  
  
  I think what happened is:
  
   1. 3.5.1 had a new multipath version
   2. So vdsm upgraded the local file
   3. blacklist above was removed
      (it should exist in /etc/multipath.bak)
  
   To prevent vdsm from overwriting local changes, you have to mark the file as
   private, as Dan suggests.
  
   Seems to be related to the find_multipaths = yes bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1173290
 
  The symptoms above sound exactly like this issue.
  When find_multipaths is no (the default when the directive is not present),
  all non-blacklisted devices get claimed by multipath, and this is what
  happened above.
  
  Blacklisting the devices works, or adding find_multipaths yes should also
  work, because in that case only devices which have more than one path (or are
  explicitly named) will be claimed by multipath.

I would like to point out one issue.  Once a device is claimed (even if
find_multipaths wasn't set when it was claimed), it will get added to
/etc/multipath/wwids.  This means that if you have previously claimed a
single-path device, adding find_multipaths yes won't stop that device
from being claimed in the future (since it is in the wwids file). You
would need to either run:

# multipath -w device

to remove the device's wwid from the wwids file, or run

# multipath -W

to reset the wwids file to only include the wwids of the current
multipath devices (obviously, you need to remove any devices that you
don't want multipathed before you run this).
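
To make that concrete for this thread, a rough sketch (with /dev/sdb as the
example device, and the scsi_id path taken from the multipath.conf above) would
be: print the disk's wwid, check whether it is recorded, then drop it:

# /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb
# grep "$(/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb)" /etc/multipath/wwids
# multipath -w /dev/sdb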

 
 My 2ct.
 
 - fabian
 
  Ben, can you confirm that this is the same issue?

Yeah, I think so.

-Ben

  

defaults {
polling_interval 5
#getuid_callout  /usr/lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n
no_path_retry   fail
user_friendly_names no
flush_on_last_del   yes
fast_io_fail_tmo 5
dev_loss_tmo 30
max_fds

Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?

2015-01-26 Thread Dan Kenigsberg
On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:
 Hello,
 on my all-in-one installation @home I had 3.5.0 with F20.
 Today I updated to 3.5.1.
 
 It seems it modified /etc/multipath.conf, preventing me from using my second
 disk at all...
 
 My system has an internal SSD disk (sda) for the OS and one local storage domain,
 and another disk (sdb) with some partitions (on one of them there is also
 another local storage domain).
 
 At reboot I was put into emergency mode because the partitions on the sdb disk
 could not be mounted (they were busy).
 It took me some time to understand that the problem was that sdb had become
 managed as a multipath device and was therefore busy, so its partitions could
 not be mounted.
 
 Here you can see what multipath looked like after the update and reboot:
 https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing
 
 No device-mapper-multipath update in yum.log
 
 Also, it seems that after I changed it, it was reverted again at boot (I
 don't know whether initrd/dracut or vdsmd was responsible), so in the meantime
 the only thing I could do was make the file immutable with
 
 chattr +i /etc/multipath.conf

The supported method of achieving this is to place # RHEV PRIVATE on
the second line of your hand-modified multipath.conf.
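
In other words, the protected file starts with something like (the revision
line being whatever vdsm last wrote there):

# RHEV REVISION 1.1
# RHEV PRIVATE

with the rest of your hand-edited configuration following as usual.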

I do not understand why this has happened only after the upgrade to 3.5.1 -
3.5.0 should have reverted your multipath.conf just as well during each
vdsm startup.

The good thing is that this annoying behavior has been dropped from the
master branch, so that 3.6 is not going to have it. Vdsm will no longer mess
with other services' config files while it is running; the logic has moved to
`vdsm-tool configure`.

 
 and so I was able to reboot and verify that my partitions on sdb were OK
 and could be mounted (to be safe I also ran an fsck against them)
 
 The update started around 19:20 and finished at 19:34;
 here is the log in gzip format:
 https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing
 
 Reboot was done around 21:10-21:14
 
 Here is my /var/log/messages in gzip format, where you can see the latest days:
 https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing
 
 
 Any suggestion appreciated.
 
 Current multipath.conf (where I also commented out the getuid_callout that
 is not used anymore):
 
 [root@tekkaman setup]# cat /etc/multipath.conf
 # RHEV REVISION 1.1
 
 blacklist {
 devnode ^(sda|sdb)[0-9]*
 }
 
 defaults {
 polling_interval 5
 #getuid_callout  /usr/lib/udev/scsi_id --whitelisted
 --replace-whitespace --device=/dev/%n
 no_path_retry   fail
 user_friendly_names no
 flush_on_last_del   yes
 fast_io_fail_tmo 5
 dev_loss_tmo 30
 max_fds 4096
 }
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Update to 3.5.1 scrambled multipath.conf?

2015-01-23 Thread Gianluca Cecchi
Hello,
on my all-in-one installation @home I had 3.5.0 with F20.
Today I updated to 3.5.1.

It seems it modified /etc/multipath.conf, preventing me from using my second
disk at all...

My system has an internal SSD disk (sda) for the OS and one local storage domain,
and another disk (sdb) with some partitions (on one of them there is also
another local storage domain).

At reboot I was put into emergency mode because the partitions on the sdb disk
could not be mounted (they were busy).
It took me some time to understand that the problem was that sdb had become
managed as a multipath device and was therefore busy, so its partitions could
not be mounted.
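
For anyone hitting the same symptom, a quick way to confirm that a disk has
been grabbed by multipath is something along these lines (standard tools,
nothing oVirt-specific assumed):

# multipath -ll
# lsblk /dev/sdb

If sdb shows up as a path in the multipath -ll output (or a dm device appears
under it in lsblk), that is what keeps its partitions busy.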

Here you can see what multipath looked like after the update and reboot:
https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing

No device-mapper-multipath update in yum.log

Also, it seems that after I changed it, it was reverted again at boot (I
don't know whether initrd/dracut or vdsmd was responsible), so in the meantime
the only thing I could do was make the file immutable with

chattr +i /etc/multipath.conf

and so I was able to reboot and verify that my partitions on sdb were OK
and could be mounted (to be safe I also ran an fsck against them)

The update started around 19:20 and finished at 19:34;
here is the log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing

Reboot was done around 21:10-21:14

Here is my /var/log/messages in gzip format, where you can see the latest days:
https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing


Any suggestion appreciated.

Current multipath.conf (where I also commented out the getuid_callout that
is not used anymore):

[root@tekkaman setup]# cat /etc/multipath.conf
# RHEV REVISION 1.1

blacklist {
devnode ^(sda|sdb)[0-9]*
}

defaults {
polling_interval 5
#getuid_callout  /usr/lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n
no_path_retry   fail
user_friendly_names no
flush_on_last_del   yes
fast_io_fail_tmo 5
dev_loss_tmo 30
max_fds 4096
}


Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users