Be sure to check:
- If the PV is already part of a VG on this system
  (possibly already active, given the message); a quick check is
  sketched just after this list
- Other systems (again, the PV may already be active in a VG there)
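
A minimal sketch of that first check, using the device name from the
pvcreate attempt quoted below (output omitted, since it will differ per
system):

[root@server01 ~]# pvs /dev/mpath/log
[root@server01 ~]# dmsetup info log

"pvs" reports the VG (if any) the device already belongs to, and the
"Open count" line from "dmsetup info" shows whether something -- a
mounted filesystem, an active LV, another mapping -- still holds the map
open, which is typically what makes pvcreate's exclusive open fail.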

Keep in mind that a VG (or its LVs, for that matter) may not be mounted
anywhere, yet the VG (and its LVs) can still be active and locked.  In my
experience, it is not uncommon for SAN administrators to provision out
storage and make an error, even if only very infrequently.  So check whether
those WWNs are already in a VG, active and locked on another system.  ;)
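
On those other systems, the WWID in parentheses in the "multipath -ll"
output is what to match up.  Roughly like this (hostname made up for
illustration; the WWID is the one shown for "log" in the output quoted
below):

[root@otherhost ~]# multipath -ll | grep -i 360050768028081627400000000000009
[root@otherhost ~]# pvs -o pv_name,vg_name

If the grep matches, the pvs listing shows whether that mpath device is
already sitting in a VG over there.
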
In all cases, do a "pvscan", then "vgscan", and then "vgs" to see these
details (look for the "a" flag).  I personally use
"vgs -o pv_name,vg_name,vg_attr" regularly to show the PVs allocated to each
VG, plus whether it is already active and locked.
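
To also see which LVs in those VGs are active, something along these lines
should do it (a sketch; the fifth character of the lv_attr column is "a"
when the LV is active):

[root@server01 ~]# lvs -o vg_name,lv_name,lv_attr,devices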

----- Original Message -----
From: Win Htin <win.h...@gmail.com>
Sent: Wednesday, October 5, 2011 10:21 AM

Hi folks,

I am trying to create a new LVM volume, and when I run the "pvcreate"
command it fails with the following error message.

[root@server01 ~]# pvcreate /dev/mpath/log
  Can't open /dev/mpath/log exclusively.  Mounted filesystem? <== ???

This volume is not mounted anywhere, as the following "df -h" output shows.
[root@server01 ~]#  df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-ROOT
                       88G  2.3G   81G   3% /
/dev/mapper/rootp1     99M   17M   77M  18% /boot
tmpfs                  24G     0   24G   0% /dev/shm

The devices are there.
[root@server01 ~]# ls -l /dev/mpath
total 0
lrwxrwxrwx 1 root root 7 Oct  5 10:00 binaries -> ../dm-5
lrwxrwxrwx 1 root root 7 Oct  5 10:00 binariesp1 -> ../dm-8
lrwxrwxrwx 1 root root 7 Oct  5 10:00 log -> ../dm-6
lrwxrwxrwx 1 root root 7 Oct  5 10:00 logp1 -> ../dm-7
lrwxrwxrwx 1 root root 7 Oct  5 09:59 mpath0 -> ../dm-0
lrwxrwxrwx 1 root root 7 Oct  5 09:59 mpath0p1 -> ../dm-1
lrwxrwxrwx 1 root root 7 Oct  5 09:59 mpath0p2 -> ../dm-2

My /etc/multipath.conf file is working properly.
[root@server01 ~]# multipath -ll
log (360050768028081627400000000000009) dm-6 IBM,2145
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=100][active]
\_ 0:0:0:2 sdc 8:32  [active][ready]
\_ 1:0:0:2 sdi 8:128 [active][ready]
\_ round-robin 0 [prio=20][enabled]
\_ 0:0:1:2 sdf 8:80  [active][ready]
\_ 1:0:1:2 sdl 8:176 [active][ready]
binaries (360050768028081627400000000000007) dm-5 IBM,2145
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=100][active]
\_ 0:0:0:1 sdb 8:16  [active][ready]
\_ 1:0:0:1 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=20][enabled]
\_ 0:0:1:1 sde 8:64  [active][ready]
\_ 1:0:1:1 sdk 8:160 [active][ready]
root (360050768028081627400000000000001) dm-0 IBM,2145
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=100][active]
\_ 0:0:0:0 sda 8:0   [active][ready]
\_ 1:0:0:0 sdg 8:96  [active][ready]
\_ round-robin 0 [prio=20][enabled]
\_ 0:0:1:0 sdd 8:48  [active][ready]
\_ 1:0:1:0 sdj 8:144 [active][ready]

This is on a blade server running kernel version 2.6.18-238.el5 (RHEL
Server release 5.6). I have done exactly the same procedure on the exact
same type of IBM V7000 storage array and HS22 servers a few months back
without any issues. I am completely baffled and any help is much
appreciated.


_______________________________________________
rhelv5-list mailing list
rhelv5-list@redhat.com
https://www.redhat.com/mailman/listinfo/rhelv5-list
