22.08.2011 09:21, Ulrich Windl wrote:
> Hi!
> 
> I wonder: What is the preferred way to select the correct devices for
> LVM in a SAN multipath environment where the names change a lot?
> There are /dev/sd*, /dev/dm-*, /dev/mapper/*, /dev/disk/by-*/*. What I
> don't want is to change the list after each change to the LVM
> configuration.

I prefer to enable selected entries from /dev/disk/by-* or /dev/mapper/
and disable everything else. Just find a persistent name and use it (but
do not forget to deny everything else: a block device is scanned as a
potential PV if any of its names is allowed by the filter).
Strictly speaking, you are not guaranteed to get the same sd* or dm-*
name every time you connect to the SAN (think of two connections to it,
which may come up in a different order).
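
As a hedged sketch of what such a filter might look like (the device
pattern here is hypothetical, not taken from Ulrich's setup): accept only
multipath maps under /dev/mapper/ and reject every other path in
/etc/lvm/lvm.conf, so each PV is seen through exactly one persistent name:

```
# /etc/lvm/lvm.conf -- illustrative only; adjust the pattern to your devices.
# "a|...|" accepts, "r|...|" rejects; the final "r|.*|" denies everything
# not explicitly accepted, which is what prevents duplicate-name scanning.
filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
```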

Best,
Vladislav.

> 
> Regards,
> Ulrich
> 
> 
>>>> Vladislav Bogdanov <bub...@hoster-ok.com> wrote on 20.08.2011 at 23:07 in
> message <4e502227.2010...@hoster-ok.com>:
>> 05.08.2011 14:55, Ulrich Windl wrote:
>>> Hi,
>>>
>>> we run a cluster with about 30 LVM VGs that are monitored every
>>> minute with a timeout of 90s. Surprisingly, the LVM monitor times out
>>> even when the system is in its nominal state.
>>>
>>> I suspect this has to do with multiple LVM commands being run in parallel 
>> like this:
>>> # ps ax |grep vg
>>>  2014 pts/0    D+     0:00 vgs
>>>  2580 ?        D      0:00 vgdisplay -v NFS_C11_IO
>>>  2638 ?        D      0:00 vgck CBW_DB_BTD
>>>  2992 ?        D      0:00 vgdisplay -v C11_DB_Exe
>>>  3002 ?        D      0:00 vgdisplay -v C11_DB_15k
>>>  4564 pts/2    S+     0:00 grep vg
>>> # ps ax |grep vg
>>>  8095 ?        D      0:00 vgck CBW_DB_Exe
>>>  8119 ?        D      0:00 vgdisplay -v C11_DB_FATA
>>>  8194 ?        D      0:00 vgdisplay -v NFS_SAP_Exe
>>>
>>> When I tried a "vgs" manually, it could not be suspended or killed, and it 
>> took more than 30 seconds to complete.
>>
>> You just need to filter out unneeded block devices (or leave only the
>> needed ones) from LVM's set of PV candidates. Otherwise LVM tries to
>> open every block device to check whether it is a PV. Look at
>> /etc/lvm/lvm.conf for the "filter" line.
>>
>> BTW, under very high CPU/IO load I found that "chrt -r 99" helps the
>> LVM utilities run much faster. Combining this with the "timeout"
>> utility (to keep the LVM utilities from running forever) does some
>> more magic.
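
A minimal sketch of combining the two (the wrapper name and the 90-second
deadline are my own illustration, not from the original post):

```shell
#!/bin/sh
# Illustrative wrapper: run an LVM utility at real-time scheduling priority
# ("chrt -r 99" requires root) and kill it if it exceeds a hard deadline.
# "timeout" exits with status 124 when the deadline is hit.
run_lvm() {
    timeout 90 chrt -r 99 "$@"
}

# Hypothetical usage: run_lvm vgs
```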
>>
>> Best,
>> Vladislav
>> _______________________________________________
>> Linux-HA mailing list
>> Linux-HA@lists.linux-ha.org 
>> http://lists.linux-ha.org/mailman/listinfo/linux-ha 
>> See also: http://linux-ha.org/ReportingProblems 
>>
