At this point I would seriously consider obtaining commercial support. This
is clearly a support issue and not a bug, so it has been closed as invalid.
LVM's filtering could be easier to grok, but it does work; you just have to
get creative by changing the scan directory.

For example (UNTESTED):

scan = [ "/dev/mapper" ]
filter = [ "a|^/dev/mapper/mpath.*|", "r/.*/" ]
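The accept/reject patterns in that filter are anchored regular expressions
matched against each device path, first match wins. As a rough sanity check
(illustrative device names, not taken from this bug), `grep -E` previews
which paths the accept rule would let through:

```shell
# Illustrative device names only. LVM's accept rule
# "a|^/dev/mapper/mpath.*|" behaves like an anchored extended regex,
# so grep -E approximates which paths it would accept; anything that
# doesn't match falls through to the "r/.*/" reject-all rule.
printf '%s\n' /dev/sda /dev/sdb1 /dev/mapper/mpath0 /dev/mapper/mpath0-part1 \
    | grep -E '^/dev/mapper/mpath'
# prints only the two /dev/mapper/mpath* paths
```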

pvscan -vvv is your friend. Concerning your other SAN question, please seek
professional support or forums (Google) where SAN admins gather. We can't
know how to configure *every single SAN on the market*.

Again, LVM & multipath are distro agnostic; *any other distro's docs
will do*.

e.g. https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Logical_Volume_Manager_Administration/lvm_filters.html

On 06/10/2013 03:33 PM, Jernej Jakob wrote:
> Thank you for your help. I wasn't aware of the limitation
> (incompatibility) caused by co-deploying LVM and multipath. My
> assumption was that multipath would be configured to take precedence
> over LVM, with PVs sitting on multipath block devices; the other way
> around, multiple paths on top of LVM logical volumes, would not make
> sense. I assume this is a pretty common scenario: for example, you may
> have one volume mapped from a SAN over FC on which the tools that LVM
> offers would be valuable.
>
> It would seem that a simple check at boot time would be enough: detect
> whether LVM and multipath are co-deployed, start multipath first, let
> it create the paths, and then add those found to LVM's filter.
>
> I understand, however, that this is in no way common usage and
> requires an administrator skillful enough to be aware of the scenarios
> that can arise.
>
> Regarding the multipath config itself, this is the default config,
> functionally the same as the one HP proposes for the MSA2012fc.
> The virtual one is of course bogus... not to be used, just to
> demonstrate the failure mode. The actual servers will have a dual-port
> FC HBA, with each port going to a separate zone on a switch; the
> switch is itself connected with 2 cables each to the SAN's 2
> controllers (4 cables in total), so there will be 2 paths per volume.
> This SAN requires that layout for redundancy against controller
> failure.
>
> I just tried this with the physical server, and it still doesn't work
> as it should. In lvm.conf, I blacklisted all the sd* devices and
> allowed only /dev/disk/by-id. Or should even the IDs of the MP volumes
> be blacklisted, allowing only the single PV's ID?
>
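For the /dev/disk/by-id route raised in the quoted question, an equally
untested sketch, assuming the standard udev dm-uuid-mpath-* symlinks that
device-mapper creates for multipath devices are present on the system:

```
# UNTESTED sketch: scan only the by-id directory and accept only the
# multipath device-mapper symlinks; everything else is rejected.
scan = [ "/dev/disk/by-id" ]
filter = [ "a|^/dev/disk/by-id/dm-uuid-mpath-.*|", "r/.*/" ]
```

As above, verify with pvscan -vvv which devices actually pass the filter.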

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to multipath-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1178721

Title:
  multipathd fails to create mappings when multipath.conf is present

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1178721/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
