Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-14 Thread Eric Ren
Hi, The reason has been found, as you predicted. > is it because of a slow read operation (i.e. some raid arrays are known to > wake up slowly) The udev worker is blocked until the timeout and then killed, because the "blkid" in /usr/lib/udev/rules.d/13-dm-disk.rules runs too slowly under high IO load
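For readers hitting the same symptom, a minimal diagnostic sketch (the device path and timeout value are illustrative, and the drop-in assumes a systemd-udevd that supports --event-timeout):
```
# time roughly what 13-dm-disk.rules runs against the slow device
time blkid -o udev -p /dev/dm-25

# raise systemd-udevd's event timeout via a drop-in (value is illustrative)
mkdir -p /etc/systemd/system/systemd-udevd.service.d
cat > /etc/systemd/system/systemd-udevd.service.d/timeout.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/lib/systemd/systemd-udevd --event-timeout=300
EOF
systemctl daemon-reload && systemctl restart systemd-udevd
```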

Re: [linux-lvm] [lvm-devel] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, > Hmm, would it be possible that the associated thin pool is in some erroneous > condition - i.e. out-of-space - or processing some resize? The testing model is: create/mount/"do IO" on a thin LV, then umount/delete the thin LV, running this workflow in parallel. > This likely could

Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, > Although /dev/dm-26 is visible, the device seems not ready in the kernel. Sorry, it's not: [root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-26 Device dm-26 not found Command failed. [root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-25 Name: vg0-21 State:

Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, > Since the /dev/dm-x has been created, I don't understand what it waits for > udev to do. > Does it just wait for the udev rules to create the device symlinks? Although /dev/dm-26 is visible, the device seems not ready in the kernel. [root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup udevcookies Cookie
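A short sketch for comparing the kernel's view with udev's (dm-25 stands in for whichever dm device is in question):
```
# kernel side: is the dm table loaded and the device active?
dmsetup info /dev/dm-25
# udev side: symlinks and properties only appear once the rules finish
udevadm info --query=all --name=/dev/dm-25
# wait for the udev event queue to drain, with a bounded timeout
udevadm settle --timeout=10
```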

Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi! > When udev kills its worker due to a timeout - i.e. the udev rules did not finish > within the predefined timeout (which unfortunately changes according to the > whims of the udev developers - ranging from 90 to 300 seconds depending > on the release date) - you need to look for the reason why

[linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, As the subject says, this seems to be an interaction problem between lvm and systemd-udev: ``` #lvm version LVM version: 2.02.130(2)-RHEL7 (2015-10-14) Library version: 1.02.107-RHEL7 (2015-10-14) Driver version: 4.35.0 ``` lvm call trace when it hangs: ``` (gdb) bt #0 0x7f7030b876a7 in semop
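The backtrace shows lvcreate sleeping in semop(): LVM's udev synchronization waits on a SysV semaphore (the "cookie") that is never released once udev kills the worker. A sketch for inspecting and, as a last resort, clearing the cookies:
```
# list SysV semaphores; LVM udev-sync cookies live here
ipcs -s
# list outstanding device-mapper udev cookies
dmsetup udevcookies
# last resort: mark all outstanding cookies complete to unblock the hung command
dmsetup udevcomplete_all
```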

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
Hi, > So do you get the 'partial' error on thin-pool activation on your physical > server? Yes, the VG of the thin pool has only one simple physical disk. At the beginning, I also suspected the disk might have disconnected at that moment. But I am starting to think it may be caused by some reason hidden in the

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
So, we're evaluating such a solution now~ Thanks, Eric On Thu, 11 Apr 2019 at 19:04, Zdenek Kabelac wrote: > On 11. 04. 19 at 2:27, Eric Ren wrote: > > Hello list, > > > > Recently, we're exercising our container environment, which uses lvm to > manage > > thin

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
Hi, > Hi, I would recommend orienting towards a solution where the 'host' system > provides some service for your containers - the container asks for an action, > the service orchestrates the action on the system and returns the requested > resource to > the container. > Right, it's all k8s, containerd,

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
What reasons might cause these errors? Regards, Eric On Thu, 11 Apr 2019 at 08:27, Eric Ren wrote: > Hello list, > > Recently, we're exercising our container environment, which uses lvm to > manage thin LVs; meanwhile we found a very strange error when activating the > thin LV: > &

[linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
Hello list, Recently, we're exercising our container environment, which uses lvm to manage thin LVs; meanwhile we found a very strange error when activating the thin LV: "Aborting. LV mythinpool_tmeta is now incomplete and '--activationmode partial' was not specified.\n: exit status 5: unknown"
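The error means LVM believes a PV backing the pool's hidden metadata sub-LV is missing. A diagnostic sketch, assuming the VG is called vg0 and the pool mythinpool:
```
# missing PVs show up as 'unknown device' / [unknown]
pvs
vgs -o +vg_attr vg0   # a 'p' in the attr field means the VG is partial
# show the pool's hidden sub-LVs (_tdata/_tmeta) and the devices they map to
lvs -a -o name,attr,devices vg0
# last resort, only if a PV is really gone: partial activation
lvchange -ay --activationmode partial vg0/mythinpool
```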

[linux-lvm] buffer overflow detected: lvcreate terminated

2019-04-03 Thread Eric Ren
Hi Marian, I hit the lvm failure below when creating a lot of thin snapshot LVs for containers as rootfs. ``` *** buffer overflow detected ***: lvcreate terminated\n=== Backtrace: ===\n/lib64/libc.so.6(__fortify_fail

Re: [linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-28 Thread Eric Ren
Hi Zdenek, > When 'stripe_size' is the 'right size' - the striped device should appear > faster, > but telling you the best size is some sort of 'black magic' :) > > Basically - the stripe size should match the boundaries of the thin-pool chunk size. > > i.e. for a thin-pool with 128K chunksize - and 2
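In other words, with a 128K chunk and 2 stripes, a 64K stripe size makes each thin-pool chunk span both legs exactly (2 x 64K = 128K). A sketch of the single-command form under that assumption (names and sizes are illustrative):
```
# 2 stripes x 64K stripe size = one 128K thin-pool chunk per full stripe
lvcreate --type thin-pool -L 100G --stripes 2 --stripesize 64k \
         --chunksize 128k -n pool0 vg0
```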

Re: [linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-25 Thread Eric Ren
Hi, > With a single command to create the thin-pool, the metadata LV is not created > with the striped > target. Is this by design, or does the command just not handle > this case very > well for now? > > My main concern here is: if the metadata LV uses the striped target, can > thin_check/thin_repair

Re: [linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-25 Thread Eric Ren
Hi, > As you can see, only the "mythinpool_tdata" LV has 2 stripes. Is that OK? > If I want the performance benefit of stripes, will it work for me? Or > should I create the data LV, metadata LV, thin pool and thin LV > step by step > and specify "--stripes 2" in every step? > With a single
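A sketch of the step-by-step alternative the question describes, where both the data and metadata LVs are striped explicitly before being combined with lvconvert (sizes and names are illustrative):
```
# striped data and metadata LVs
lvcreate -L 100G --stripes 2 --stripesize 64k -n pool0 vg0
lvcreate -L 1G   --stripes 2 --stripesize 64k -n pool0meta vg0
# glue them together into a thin pool
lvconvert --type thin-pool --poolmetadata vg0/pool0meta vg0/pool0
```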

[linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-23 Thread Eric Ren
thin-LV* when out of space? Any suggestion would be much appreciated; thanks in advance! Regards, Eric Ren

Re: [linux-lvm] Unsync-ed LVM Mirror

2018-02-05 Thread Eric Ren
d3d959d20. ╭─eric@ws ~/workspace/linux ‹master› ╰─$ git describe cd15fb64ee56192760ad5c1e2ad97a65e735b18b v4.12-rc5-2-gcd15fb64ee56 """ Eric Warm regards, Liwei On 5 Feb 2018 15:27, "Eric Ren" <z...@suse.com> wrote: Hi, Your LVM version and kernel version plea

Re: [linux-lvm] Unsync-ed LVM Mirror

2018-02-04 Thread Eric Ren
Hi, What are your LVM and kernel versions, please? For example: # lvm version LVM version: 2.02.177(2) (2017-12-18) Library version: 1.03.01 (2017-12-18) Driver version: 4.35.0 # uname -a Linux sle15-c1-n1 4.12.14-9.1-default #1 SMP Fri Jan 19 09:13:51 UTC 2018 (849a2fe) x86_64 x86_64

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-11 Thread Eric Ren
Hi David, IIRC, you mean we could consider using cluster raid1 as the underlying DM target to support pvmove in a cluster, since the current pvmove uses the mirror target? That's what I imagined could be done, but I've not thought about it in detail. IMO pvmove under a shared LV is too

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-09 Thread Eric Ren
Hi David, On 01/09/2018 11:42 PM, David Teigland wrote: On Tue, Jan 09, 2018 at 10:42:27AM +0800, Eric Ren wrote: I've tested your patch and it works very well. Thanks very much. Could you please consider pushing this patch upstream? OK Thanks very much! So, can we update the `man 8

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-08 Thread Eric Ren
Hi David, On 01/04/2018 05:06 PM, Eric Ren wrote: David, On 01/03/2018 11:07 PM, David Teigland wrote: On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote: 1. on one node: lvextend --lockopt skip -L+1G VG/LV That option doesn't exist, but illustrates the point that some new

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-04 Thread Eric Ren
David, On 01/03/2018 11:07 PM, David Teigland wrote: On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote: 1. on one node: lvextend --lockopt skip -L+1G VG/LV That option doesn't exist, but illustrates the point that some new option could be used to skip the incompatible LV

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-02 Thread Eric Ren
Hello David, Happy new year! On 01/03/2018 01:10 AM, David Teigland wrote: * resizing an LV that is active in the shared mode on multiple hosts It seems a big limitation for using lvmlockd in a cluster: Only in the case where the LV is active on multiple hosts at once, i.e. a cluster fs, which is

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-02 Thread Eric Ren
" to perform lvresize command? Regards, Eric On 12/28/2017 06:42 PM, Eric Ren wrote: Hi David, I see there is a limitation on lvesizing the LV active on multiple node. From `man lvmlockd`: """ limitations of lockd VGs ... * resizing an LV that is active in the shared mode

Re: [linux-lvm] Migrate volumes to new laptop

2017-12-30 Thread Eric Ren
Hi, Not sure if you are looking for vgexport/vgimport (man 8 vgimport)? Eric On 12/21/2017 07:36 PM, Boyd Kelly wrote: Hi, I've searched high and low and not found a howto or general suggestions for migrating volume groups from an old to a new laptop. I've found mostly server scenarios for
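A minimal sketch of that flow, assuming the VG is named vg_home and the disk physically moves to the new machine:
```
# old laptop: deactivate, then mark the VG as exported
vgchange -an vg_home
vgexport vg_home
# new laptop, after attaching the disk:
pvscan
vgimport vg_home
vgchange -ay vg_home
```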

[linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2017-12-28 Thread Eric Ren
Hi David, I see there is a limitation on lvresizing an LV that is active on multiple nodes. From `man lvmlockd`: """ limitations of lockd VGs ... * resizing an LV that is active in the shared mode on multiple hosts """ It seems a big limitation for using lvmlockd in a cluster: """ c1-n1:~ # lvresize
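Until that limitation is lifted, the usual workaround is to drop to exclusive activation for the resize. A sketch (the LV name and size are illustrative; every step except the resize runs on each host):
```
# on every host: release the shared lock
lvchange -an vg/lv
# on one host: activate exclusively, resize, release
lvchange -aey vg/lv
lvextend -L+1G vg/lv
lvchange -an vg/lv
# on every host: reactivate in shared mode (then grow the cluster fs)
lvchange -asy vg/lv
```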

[linux-lvm] lvmlockd manpage: prevent concurrent activation of logical volumes?

2017-12-28 Thread Eric Ren
Hi David, I'm afraid the statement below in the description section of the lvmlockd manpage: " · prevent concurrent activation of logical volumes " is easy for a normal user to misread as: wow, lvmlockd doesn't support active-active LVs on multiple nodes? What I interpret from it is: with clvmd,

Re: [linux-lvm] Shared VG, Separate LVs

2017-11-23 Thread Eric Ren
Hi, /"I noticed you didn't configure LVM resource agent to manage your VG's (de)activation task, not sure if it can always work as expect, so have more exceptional checking :)" /              Strangely the Pacemaker active-passive configuration example shows VG controlled by Pacemaker,

Re: [linux-lvm] lvmlockd: how to convert lock_type from sanlock to dlm?

2017-11-20 Thread Eric Ren
David, First you'll need this recent fix: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=f611b68f3c02b9af2521d7ea61061af3709fe87c --force was broken at some point, and the option is now --lockopt force. Thanks! To change between lock types, you are supposed to be able to change to a

[linux-lvm] lvmlockd: how to convert lock_type from sanlock to dlm?

2017-11-20 Thread Eric Ren
Hello David, On my testing cluster, lvmlockd was first used with sanlock and everything was OK. After some experimenting, I wanted to change a VG's lock_type from "sanlock" to "dlm". With dlm_controld running, I tried the following, but it still failed. 1. Performed as the "Changing dlm
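Based on the fix referenced later in this thread, a sketch of the conversion sequence (assumes the VG is vg0, its lockspace is stopped on all other hosts, and an lvm2 new enough for the --lockopt force spelling):
```
# stop the VG lockspace everywhere first
vgchange --lock-stop vg0
# drop the sanlock lock type, then adopt dlm and restart the lockspace
vgchange --lock-type none --lockopt force vg0
vgchange --lock-type dlm vg0
vgchange --lock-start vg0
```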

Re: [linux-lvm] When and why vgs command can change metadata and incur old metadata to be backed up?

2017-11-04 Thread Eric Ren
Hi Alasdair, Very simply: if the metadata the command has just read does not match the last backup stored in the local filesystem, and the process is able and configured to write a new backup. The command that made the metadata change might not have written a backup if it crashed, was

[linux-lvm] Can cmirror tolerate one faulty PV?

2017-10-31 Thread Eric Ren
Hi all, I performed fault tolerance testing on a cmirrored LV in a cluster with lvm2-2.02.98. The result really surprised me: a cmirrored LV cannot continue working after one of its leg PVs is disabled. Is this a known issue? Or am I doing something wrong? The steps follow: # clvmd and cmirrord
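For reference, a sketch of the kind of failure injection being described (names and sizes are illustrative, and the /sys deletion assumes a SCSI-backed leg):
```
# two-leg cluster mirror (run with clvmd/cmirrord active)
lvcreate --type mirror -m 1 -L 1G -n mlv vg0
# simulate losing one leg by removing its disk from the SCSI layer
echo 1 > /sys/block/sdb/device/delete
# did the mirror keep going, and what does dm report?
lvs -a -o +devices vg0
dmsetup status vg0-mlv
```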

Re: [linux-lvm] When and why vgs command can change metadata and incur old metadata to be backed up?

2017-10-31 Thread Eric Ren
Hi all, > > Interesting. Eric, can you show the *before* and *after* vgs textual > metadata (you should find them in /etc/lvm/archive)? > Ah, I think there is no need to show the archives now. Alasdair and David have given us a very good explanation; thanks to them! Regards, Eric

[linux-lvm] When and why vgs command can change metadata and incur old metadata to be backed up?

2017-10-30 Thread Eric Ren
Hi all, Sometimes, I see the following message in the VG metadata backups under /etc/lvm/archive: """ contents = "Text Format Volume Group" version = 1 description = "Created *before* executing 'vgs'" """ I'm wondering when and why new backups are created by a reporting command like
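A quick sketch for checking which commands triggered the archives on a given system (the VG name is illustrative):
```
# newest archives first; one file per metadata change
ls -t /etc/lvm/archive/vg0_*.vg | head
# each archive records the command that was about to run
grep -h '^description' /etc/lvm/archive/vg0_*.vg
```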

Re: [linux-lvm] Shared VG, Separate LVs

2017-10-16 Thread Eric Ren
Hi, On 10/13/2017 06:40 PM, Indivar Nair wrote: Thanks Eric, I want to keep a single VG so that I can get the bandwidth (LVM striping) of all the disks (PVs) PLUS the flexibility to adjust the space allocation between both LVs. Each LV will be used by a different department. With 1 LV on
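A sketch of that layout, assuming 2 PVs in vg0 and illustrative sizes:
```
# both departments' LVs stripe across the same PVs
lvcreate --stripes 2 -L 10T -n dept_a vg0
lvcreate --stripes 2 -L 10T -n dept_b vg0
# later, move space between them (shrink the filesystem first!)
lvreduce -L-1T vg0/dept_a
lvextend -L+1T vg0/dept_b
```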

Re: [linux-lvm] Reserve space for specific thin logical volumes

2017-09-11 Thread Eric Ren
Hi David and Zdenek, On 09/12/2017 01:41 AM, David Teigland wrote: [...snip...] Hi Eric, this is a good question. The lvm project has done a poor job at this sort of thing. A new homepage has been in the works for a long time, but I think it stalled in the review/feedback stage. It should be

Re: [linux-lvm] clvm: failed to activate logical volumes sometimes

2017-04-20 Thread Eric Ren
need "clvmd -R" on one of the nodes. BTW, my versions: lvm2-clvm-2.02.120-72.8.x86_64 lvm2-2.02.120-72.8.x86_64 Regards, Eric 2017-04-20 10:06 GMT+02:00 Eric Ren <z...@suse.com>: Hi! This issue can be replicated by the following steps: 1. setup two-node HA cluster with d

Re: [linux-lvm] clvm: failed to activate logical volumes sometimes

2017-04-20 Thread Eric Ren
in some automation scripts, it's tedious to put "clvmd -R" before lvm commands everywhere. So, is there an option to force a full scan every time lvm is invoked in a cluster scenario? Thanks in advance :) Regards, Eric On 04/14/2017 06:27 PM, Eric Ren wrote: Hi! In cluster environ
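Lacking such an option, one hypothetical workaround is a tiny wrapper so scripts don't have to remember the refresh:
```
#!/bin/sh
# hypothetical wrapper: refresh clvmd's cached metadata on all nodes,
# then run the real lvm command with the original arguments
clvmd -R || exit 1
exec lvm "$@"
```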

Re: [linux-lvm] is it right to specify '-l' with all the free PE in VG when creating a thin pool?

2017-03-09 Thread Eric Ren
On 03/09/2017 07:46 PM, Zdenek Kabelac wrote: [snip] while it works when specifying '-l' this way: # lvcreate -l 100%FREE --thinpool thinpool0 vgtest Logical volume "thinpool0" created. Is this by design, or could something be wrong? I can replicate this on both: Hi, Yes, this is

Re: [linux-lvm] New features for using lvm on shared storage

2017-01-10 Thread Eric Ren
Hi David! On 01/10/2017 11:30 PM, David Teigland wrote: On Tue, Jan 10, 2017 at 09:02:36PM +0800, Eric Ren wrote: Hi David, Sorry for faking this reply; I wasn't on the mailing list when I first noticed this email (quoted below), which you posted a while ago. I have a question about "lvm