Re: [linux-lvm] lvm raid1 metadata on different pv

2017-09-26 Thread emmanuel segura
I did the following and it works, without storing the metadata on zram:

  truncate -s500M  disk1.dd
  truncate -s500M  disk2.dd
  losetup /dev/loop0 /root/disk1.dd
  losetup /dev/loop1 /root/disk2.dd

pvcreate /dev/loop0
pvcreate /dev/loop1
vgcreate vg00 /dev/loop0 /dev/loop1

lvcreate --type raid1 --name raid1vol -L 400M vg00

[root@puppetserver ~]# lvs -ao +devices
  LV                  VG   Attr     LSize   Pool Origin Data% Move Log Copy%  Convert Devices
  raid1vol            vg00 rwi-a-m- 400.00m                            100.00         raid1vol_rimage_0(0),raid1vol_rimage_1(0)
  [raid1vol_rimage_0] vg00 iwi-aor- 400.00m                                           /dev/loop0(1)
  [raid1vol_rimage_1] vg00 iwi-aor- 400.00m                                           /dev/loop1(1)
  [raid1vol_rmeta_0]  vg00 ewi-aor-   4.00m                                           /dev/loop0(0)
  [raid1vol_rmeta_1]  vg00 ewi-aor-   4.00m                                           /dev/loop1(0)

Now I will move the metadata to the other PVs:

truncate -s 100M meta1.dd
truncate -s 100M meta2.dd

 losetup /dev/loop2 meta1.dd
 losetup /dev/loop3 meta2.dd

 vgextend vg00 /dev/loop2
 vgextend vg00 /dev/loop3

pvmove -n 'raid1vol_rmeta_0' /dev/loop0 /dev/loop2
pvmove -n 'raid1vol_rmeta_1' /dev/loop1 /dev/loop3

vgchange -an vg00
  0 logical volume(s) in volume group "vg00" now active

vgchange -ay vg00
  1 logical volume(s) in volume group "vg00" now active
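
For completeness, a minimal sketch (assuming the commands above were run from /root, with the loop devices and vg00 layout shown) to confirm where the rmeta sub-LVs ended up and to tear the test setup down again:

  # verify that both rmeta sub-LVs now sit on the metadata-only PVs
  lvs -a -o lv_name,devices vg00 | grep rmeta

  # tear down the throw-away test setup
  vgchange -an vg00
  vgremove -f vg00
  losetup -d /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
  rm -f disk1.dd disk2.dd meta1.dd meta2.dd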




2017-09-25 11:30 GMT+02:00 Alexander 'Leo' Bergolth :

> Hi!
>
> I tried to move the raid1 metadata subvolumes to different PVs (SSD
> devices for performance).
>
> Moving with pvmove works fine but activation fails when both legs of the
> metadata had been moved to external devices. (See below.)
>
> Interestingly moving just one metadata LV to another device works fine.
> (Raid LV can be activated afterwards.)
>
> I guess raid1 metadata on different PVs is not supported (yet)?
>
> I am using Centos 7.4 and kernel 3.10.0-693.el7.x86_64.
>
> Cheers,
> --leo
>
>  8< 
> modprobe zram num_devices=2
> echo 300M > /sys/block/zram0/disksize
> echo 300M > /sys/block/zram1/disksize
>
> pvcreate /dev/sda2
> pvcreate /dev/sdb2
> pvcreate /dev/zram0
> pvcreate /dev/zram1
>
> vgcreate vg_sys /dev/sda2 /dev/sdb2 /dev/zram0 /dev/zram1
> lvcreate --type raid1 -m 1 --regionsize 64M -L 500m -n lv_boot vg_sys
> /dev/sda2 /dev/sdb2
>
> pvmove -n 'lv_boot_rmeta_0' /dev/sda2 /dev/zram0
> # and maybe
> # pvmove -n 'lv_boot_rmeta_1' /dev/sdb2 /dev/zram1
>
>  8< 
> Creating vg_sys-lv_boot
> dm create vg_sys-lv_boot LVM-l6Eg7Uvcm2KieevnXDjLLje3wqmSVG
> a1e56whxycwUR2RvGvcQNLy1GdfpzlZuQk [ noopencount flush ]   [16384] (*1)
> Loading vg_sys-lv_boot table (253:7)
>   Getting target version for raid
> dm versions   [ opencount flush ]   [16384] (*1)
>   Found raid target v1.12.0.
> Adding target to (253:7): 0 1024000 raid raid1 3 0 region_size
> 8192 2 253:3 253:4 253:5 253:6
> dm table   (253:7) [ opencount flush ]   [16384] (*1)
> dm reload   (253:7) [ noopencount flush ]   [16384] (*1)
>   device-mapper: reload ioctl on  (253:7) failed: Input/output error
>  8< 
> [ 8130.110467] md/raid1:mdX: active with 2 out of 2 mirrors
> [ 8130.111361] mdX: failed to create bitmap (-5)
> [ 8130.112254] device-mapper: table: 253:7: raid: Failed to run raid array
> [ 8130.113154] device-mapper: ioctl: error adding target to table
>  8< 
> # lvs -a -o+devices
>   LV                 VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
>   lv_boot            vg_sys rwi---r--- 500.00m                                                     lv_boot_rimage_0(0),lv_boot_rimage_1(0)
>   [lv_boot_rimage_0] vg_sys Iwi-a-r-r- 500.00m                                                     /dev/sda2(1)
>   [lv_boot_rimage_1] vg_sys Iwi-a-r-r- 500.00m                                                     /dev/sdb2(1)
>   [lv_boot_rmeta_0]  vg_sys ewi-a-r-r-   4.00m                                                     /dev/zram0(0)
>   [lv_boot_rmeta_1]  vg_sys ewi-a-r-r-   4.00m                                                     /dev/zram1(0)
>  8< 
>
> Full vgchange output can be found at:
>   http://leo.kloburg.at/tmp/lvm-raid1-ext-meta/
>
>
> --
> e-mail   ::: Leo.Bergolth (at) wu.ac.at
> fax  ::: +43-1-31336-906050
> location ::: IT-Services | Vienna University of Economics | Austria
>
>
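
A minimal recovery sketch for the failure described above, assuming the lv_boot names and devices from Leo's reproduction: since activation still works when only one metadata leg has been relocated, moving one rmeta sub-LV back should make the RAID LV activatable again.

  # move one metadata leg back next to its image sub-LV, then retry activation
  pvmove -n 'lv_boot_rmeta_1' /dev/zram1 /dev/sdb2
  vgchange -ay vg_sys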



-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^
___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] clvm: failed to activate logical volumes sometimes

2017-04-20 Thread emmanuel segura
Maybe you are using an old clvm version; I remember that with newer versions you
don't need to execute any command on the secondary node.

2017-04-20 10:06 GMT+02:00 Eric Ren :
> Hi!
>
> This issue can be replicated by the following steps:
> 1. setup two-node HA cluster with dlm and clvmd RAs configured;
> 2. prepare a shared disk through iscsi, named "sdb" for example;
>
> 3. execute lvm cmds on n1:
> lvm2dev1:~# pvcreate /dev/sdb
> Physical volume "/dev/sdb" successfully created
> lvm2dev1:~ # vgcreate vg1 /dev/sdb
> Clustered volume group "vg1" successfully created
> lvm2dev1:~ # lvcreate -l100%VG -n lv1 vg1
> Logical volume "lv1" created.
> lvm2dev1:~ # lvchange -an vg1/lv1
>
> 4. disconnect shared iscsi disk on n2;
> 5. to activate vg1/lv1 on n1:
> lvm2dev1:~ # lvchange -ay vg1/lv1
> Error locking on node UNKNOWN 1084783200: Volume group for uuid not
> found: TG0VguoR1HxSO1OPA0nk737FJSQTLYAMKV2M20cfttItrRnJetTZmKxtKs3a88Ri
>
> 6. re-connect shared disk on n2;
> 7. execute `clvmd -R` on n1; and then I can activate lv1 successfully.
>
> In local mode, lvm performs a full scan of the disks on each invocation when
> lvmetad is disabled. As we know, lvmetad is also disabled when clvm is in use,
> so the device cache cannot be refreshed automatically when a device is added
> or removed. We can work around this by executing "clvmd -R" manually, but in
> automated scripts it is tedious to put "clvmd -R" before lvm commands
> everywhere.
>
> So, is there an option to force a full scan every time lvm is invoked in a
> cluster scenario?
> Thanks in advance :)
>
> Regards,
> Eric
>
> On 04/14/2017 06:27 PM, Eric Ren wrote:
>>
>> Hi!
>>
>> In cluster environment, lvcreate/lvchange may fail to activate logical
>> volumes sometimes.
>>
>> For example:
>>
>> # lvcreate -l100%VG -n lv001 clustermd
>>    Error locking on node a52cbcb: Volume group for uuid not found:
>>    SPxo6WiQhEJWDFyeul4gKYX2bNDVEsoXRNfU3fI5TI9Pd3OrIEuIm8jGtElDJzEy
>>    Failed to activate new LV.
>>
>> The log file for this failure is attached. My thoughts on this issue
>> follow, using two nodes as an example:
>> n1:
>> ===
>> #lvchange -ay vg/lv1
>> ...
clvmd will ask the peer daemon on n2
to activate lv1 as well
>>
>> n2:
>> ===
>> lvm needs to find lv1 and the PVs backing lv1
>> in the device cache, which exists to avoid frequently
>> scanning all disks. If the PV(s) are not available
>> in the device cache, it responds to n1 with errors.
>>
>> We found that running 'clvmd -R' before activating the LV can be a workaround,
>> because what "clvmd -R" does is refresh the device cache on every node, as its
>> commit message says:
>> ===
>> commit 13583874fcbdf1e63239ff943247bf5a21c87862
>> Author: Patrick Caulfield 
>> Date:   Wed Oct 4 08:22:16 2006 +
>>
>>  Add -R switch to clvmd.
>>  This option will instruct all the clvmd daemons in the cluster to
>> reload their device cache
>> ==
>>
>> I think the reason clvm doesn't refresh the device cache every time before
>> activating an LV is to avoid scanning all disks frequently.
>>
>> But I'm not sure whether I understand this issue correctly; I would appreciate
>> it very much if someone could help.
>>
>> Regards,
>> Eric
>>
>>
>>
>>
>
>
>
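
A minimal sketch of the workaround discussed in the quoted thread, assuming the vg1/lv1 names from Eric's reproduction: refresh the clvmd device cache on every node before asking for activation.

  # instruct all clvmd daemons in the cluster to reload their device cache
  clvmd -R
  # activation should then find the re-connected PV on every node
  lvchange -ay vg1/lv1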



-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Copying a raw disk image to LVM2

2016-07-10 Thread emmanuel segura
The LVM metadata is stored at the beginning of the physical volume; a
logical volume is a plain block device, so writing to it with dd does not
overwrite any LVM header.
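
A minimal sketch of the approach discussed below, assuming a raw image at /path/to/image.raw and a target LV /dev/vg0/lv_foo (the names used later in the thread); the size check is an extra precaution, not something from the original mails.

  # make sure the LV is at least as large as the image before copying
  blockdev --getsize64 /dev/vg0/lv_foo
  stat -c %s /path/to/image.raw

  # copy the raw image onto the LV; conv=fdatasync flushes the data before dd exits
  dd if=/path/to/image.raw of=/dev/vg0/lv_foo bs=4M conv=fdatasync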

2016-07-09 19:33 GMT+02:00 Марк Коренберг :
> One note: `sync` does not sync the written data, since it affects only
> data written through a filesystem. You should use `dd
> conv=fdatasync` instead.
>
> 2016-07-09 22:00 GMT+05:00 Digimer :
>> On 08/07/16 11:52 AM, Brian McCullough wrote:
>>>
>>> I have been hunting for some time over the past couple of days and have found
>>> several documents that talk about converting an LVM2 volume to a raw disk
>>> image for Xen, but nothing about the reverse.
>>>
>>> I have a VHD disk file that I would like to put onto an LVM2 volume,
>>> like my other DomU guests.
>>>
>>> I can see using dd, but am concerned about overwriting the LVM2 header.
>>>
>>>
>>> Does anybody have any suggestions?
>>
>> I've done this with KVM before just fine. The LVM metadata won't be
>> overwritten when you write to the actual LV. So this would work fine:
>>
>> dd if=/path/to/image.raw of=/dev/vg0/lv_foo bs=4M; sync
>>
>> Then just change your server's definition to point at the LV instead of
>> the raw file and voila.
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>
>
>
>
> --
> Segmentation fault
>



-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/