Re: [linux-lvm] discussion about activation/auto_activation_volume_list

2020-11-18 Thread Gang He

Hi David,

On 2020/11/19 2:23, David Teigland wrote:

On Wed, Nov 18, 2020 at 09:28:21AM +0800, Gang He wrote:

I prefer to use a metadata flag for each VG or LV to skip auto-activation.
Otherwise, it is not easy for the pacemaker cluster to manage a local
VG (e.g. local or systemid type) in a cluster in active-passive mode.


I created a bug for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1899214

Thanks for your follow-up.
A few more comments:
Should we keep the default behavior as before, i.e. VGs/LVs are auto-activated
by default? Otherwise, some users will be surprised after an lvm upgrade.

Second, how do we keep compatibility with existing VGs/LVs? We can upgrade the
lvm2 version, but the VG/LV metadata may be old. Are there some reserved bits
in the lvm metadata layout that could be used? If so, I feel
this proposal would be ideal.
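
For context, a minimal lvm.conf sketch of the existing auto_activation_volume_list
mechanism that such a metadata flag would complement (the VG name "vg_local" is
hypothetical; only listed VGs are auto-activated, so a pacemaker-managed VG can
simply be left off the list):

activation {
    # only these VGs are auto-activated at boot/scan time; the
    # pacemaker-managed VG is intentionally not listed
    auto_activation_volume_list = [ "vg_local" ]
}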


Thanks
Gang



Dave




[linux-lvm] lvresize cannot refresh LV size on other hosts when extending LV with a shared lock

2020-09-29 Thread Gang He
Hello List,

I am using lvm2 v2.03.10 (or v2.03.05), and I set up an lvmlockd-based three-node
cluster.
I created a PV, VG and LV, and formatted the LV with a cluster file system (e.g. ocfs2).
So far everything works well; I can write files from each node.
Next, I extended the online LV from node1, e.g.
ghe-tw-nd1# lvresize -L+1024M vg1/lv1
  WARNING: extending LV with a shared lock, other hosts may require LV refresh.
  Size of logical volume vg1/lv1 changed from 13.00 GiB (3328 extents) to 14.00 GiB (3584 extents).
  Logical volume vg1/lv1 successfully resized.
  Refreshing LV /dev//vg1/lv1 on other hosts...

However, the other nodes are not aware that the LV size changed, e.g.
2020-09-29 16:01:48  ssh ghe-tw-nd3 lsblk
load pubkey "/root/.ssh/id_rsa": invalid format
NAMEMAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:00  40G  0 disk
├─vda1  253:10   8M  0 part
├─vda2  253:20  38G  0 part /
└─vda3  253:30   2G  0 part [SWAP]
vdb 253:16   0  80G  0 disk
├─vdb1  253:17   0  10G  0 part
├─vdb2  253:18   0  20G  0 part
│ └─vg1-lv1 254:00  13G  0 lvm  /mnt/shared   <<== here
└─vdb3  253:19   0  50G  0 part

2020-09-29 16:01:49  ssh ghe-tw-nd2 lsblk
load pubkey "/root/.ssh/id_rsa": invalid format
NAMEMAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:00  40G  0 disk
├─vda1  253:10   8M  0 part
├─vda2  253:20  38G  0 part /
└─vda3  253:30   2G  0 part [SWAP]
vdb 253:16   0  80G  0 disk
├─vdb1  253:17   0  10G  0 part
├─vdb2  253:18   0  20G  0 part
│ └─vg1-lv1 254:00  13G  0 lvm  /mnt/shared   <<== here
└─vdb3  253:19   0  50G  0 part

2020-09-29 16:01:49  ssh ghe-tw-nd1 lsblk
load pubkey "/root/.ssh/id_rsa": invalid format
NAMEMAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:00  40G  0 disk
├─vda1  253:10   8M  0 part
├─vda2  253:20  38G  0 part /
└─vda3  253:30   2G  0 part [SWAP]
vdb 253:16   0  80G  0 disk
├─vdb1  253:17   0  10G  0 part
├─vdb2  253:18   0  20G  0 part
│ └─vg1-lv1 254:00  14G  0 lvm  /mnt/shared  <<== LV size was changed on node1
└─vdb3  253:19   0  50G  0 part

This behavior breaks our cluster high availability; we have to deactivate and
re-activate the LV to get the LV size refreshed.
Is this behavior by design?
Could the online LV be extended automatically on each node (when any node
triggers an LV resize command)?
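
For reference, a minimal sketch of the manual refresh on the stale nodes
(assuming a plain refresh is permitted under the shared lock; otherwise fall back
to the deactivate/activate cycle mentioned above; node names are from the
transcript):

# run on each node that still reports the old 13G size (e.g. ghe-tw-nd2, ghe-tw-nd3)
lvchange --refresh vg1/lv1     # reload the device-mapper table in place
lsblk /dev/vg1/lv1             # should now report the new 14G size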


Thanks
Gang





Re: [linux-lvm] About online pvmove/lvresize on shared VG

2020-07-08 Thread Gang He

Hi David,

Thanks for your reply.
A few more questions:

On 7/9/2020 12:05 AM, David Teigland wrote:

On Wed, Jul 08, 2020 at 03:55:55AM +, Gang He wrote:

but I cannot do an online LV reduce from one node;
the workaround is to switch the VG activation_mode to exclusive and run the
lvreduce command on the node where the VG is activated.
Is this behaviour by design, or a bug?


It was intentional since shrinking the cluster fs and LV isn't very common
(not supported for gfs2).

OK, thanks for the confirmation.




For the pvmove command, I cannot do an online pvmove from one node;
the workaround is to switch the VG activation_mode to exclusive and run the pvmove
command on the node where the VG is activated.
Is this behaviour by design? Are any enhancements planned for the future?
Is there any workaround to run pvmove under shared activation_mode, e.g. can the
--lockopt option help in this situation?


pvmove is implemented with mirroring, so that mirroring would need to be
replaced with something that works with concurrent access, e.g. cluster md
raid1.  I suspect there are better approaches than pvmove to solve the
broader problem.

Sorry, I am a little confused.
In the future, will we be able to do an online pvmove when the VG is activated in
shared mode? From the man page, I get the impression these limitations are
temporary (not yet complete).
By the way, can the --lockopt option help in this situation? I cannot find a
detailed description of this option in the man page.


Thanks
Gang



Dave






[linux-lvm] About online pvmove/lvresize on shared VG

2020-07-07 Thread Gang He
Hello List,

I am using lvm2-2.03.05 and looking at online pvmove/lvresize on a shared VG,
since there were some problems in the old code.
I have set up a three-node cluster with one shared VG/LV and a cluster file
system on top of the LV.
e.g.
primitive ocfs2-2 Filesystem \
params device="/dev/vg1/lv1" directory="/mnt/ocfs2" fstype=ocfs2 options=acl \
op monitor interval=20 timeout=40
primitive vg1 LVM-activate \
params vgname=vg1 vg_access_mode=lvmlockd activation_mode=shared \
op start timeout=90s interval=0 \
op stop timeout=90s interval=0 \
op monitor interval=30s timeout=90s \
meta target-role=Started
group base-group dlm lvmlockd vg1 ocfs2-2

Now, I can do an online LV extend from one node (good),
but I cannot do an online LV reduce from one node;
the workaround is to switch the VG activation_mode to exclusive and run the
lvreduce command on the node where the VG is activated.
Is this behaviour by design, or a bug?

For the pvmove command, I cannot do an online pvmove from one node;
the workaround is to switch the VG activation_mode to exclusive and run the pvmove
command on the node where the VG is activated.
Is this behaviour by design? Are any enhancements planned for the future?
Is there any workaround to run pvmove under shared activation_mode, e.g. can the
--lockopt option help in this situation?
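
For reference, a rough sketch of the exclusive-activation workaround described
above (assuming the cluster file system is unmounted and the pacemaker resources
are stopped first; the source PV name is hypothetical):

vgchange -an vg1          # deactivate the shared LV on every node
vgchange -aey vg1         # activate exclusively on one node
pvmove /dev/sdX           # or lvreduce, run on that node only
vgchange -an vg1
vgchange -asy vg1         # return to shared activation on all nodes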

Thanks a lot.
Gang





Re: [linux-lvm] can we change cluster_ringid_seq in libdlm from uint32_t to uint64_t?

2020-05-15 Thread Gang He
Hi David,

I did some testing with this patch.
It looks OK. Do you plan to commit the patch into the git tree?


Thanks
Gang

> -Original Message-
> From: David Teigland [mailto:teigl...@redhat.com]
> Sent: 2020年4月27日 23:04
> To: Gang He 
> Cc: LVM general discussion and development 
> Subject: Re: [linux-lvm] can we change cluster_ringid_seq in libdlm from
> uint32_t to uint64_t?
> 
> On Sun, Apr 26, 2020 at 07:24:33AM +, Gang He wrote:
> > Hello List,
> >
> > In the libdlm code, the cluster_ringid_seq variable is defined as uint32_t
> > in dlm_controld/dlm_daemon.h, while the corosync API returns a uint64_t
> > ring_id; the current code casts it to keep only the low 32 bits.
> > But in some cases corosync returns a huge ring-id (greater than 32 bits) and
> > the DLM daemon does not work normally (it looks stuck).
> > So I want to know: can we change cluster_ringid_seq in libdlm from
> > uint32_t to uint64_t?
> 
> That looks ok, please try the attached patch.
> Dave



[linux-lvm] can we change cluster_ringid_seq in libdlm from uint32_t to uint64_t?

2020-04-26 Thread Gang He
Hello List,

In the libdlm code, the cluster_ringid_seq variable is defined as uint32_t in
dlm_controld/dlm_daemon.h, while the corosync API returns a uint64_t ring_id; the
current code casts it to keep only the low 32 bits.
But in some cases corosync returns a huge ring-id (greater than 32 bits) and the
DLM daemon does not work normally (it looks stuck).
So I want to know: can we change cluster_ringid_seq in libdlm from uint32_t to
uint64_t?

If you'd like to know the details of why corosync ran into such a huge ring-id,
you can check the info at:
https://github.com/corosync/corosync/pull/532#issuecomment-617647233

Thanks
Gang





[linux-lvm] Do we have a way to clone the customer lvm2 environment?

2020-03-11 Thread Gang He
Hi Guys,

Sometimes we encounter lvm2 metadata damage (caused by lvm2 commands).
Do we have a way to clone (or dump) a customer's lvm2 environment, similar to the
file system e2image tool?
For example, we could dump each PV (metadata + data, as a sparse file with holes)
to a file, then rebuild the cloned lvm2 environment on a local machine.
Then we could verify the fix locally with the tentative rpms before giving them
to the customers.

Any suggestions for this kind of case?
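
In the meantime, a rough sketch of capturing only the metadata and rebuilding it
on a local loop device (not a full e2image-style clone; the device names, size
and the PV UUID placeholder are hypothetical):

vgcfgbackup -f /tmp/vg1-meta.txt vg1                # text copy of the VG metadata
dd if=/dev/sdb of=/tmp/pv-head.img bs=1M count=1    # label + metadata area of one PV

# on the local test machine:
truncate -s 100G /tmp/pv-clone.img                  # sparse file, same size as the PV
losetup /dev/loop0 /tmp/pv-clone.img
pvcreate --uuid <PV-UUID> --restorefile /tmp/vg1-meta.txt /dev/loop0
vgcfgrestore -f /tmp/vg1-meta.txt vg1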

Thanks
Gang





Re: [linux-lvm] devices/dir configuration option in /etc/lvm/lvm.conf can be edited?

2019-10-14 Thread Gang He
Hi Alasdair,

Thanks for the comments.
That means we should update the comment text, so it tells the user that this is
an advanced option.

Thanks
Gang

> -Original Message-
> From: linux-lvm-boun...@redhat.com
> [mailto:linux-lvm-boun...@redhat.com] On Behalf Of Alasdair G Kergon
> Sent: 2019年10月14日 21:20
> To: LVM general discussion and development 
> Subject: Re: [linux-lvm] devices/dir configuration option in /etc/lvm/lvm.conf
> can be edited?
> 
> On Mon, Oct 14, 2019 at 10:52:02AM +, Gang He wrote:
> > That means we should tell the user, you should not edit this option (dir =
> "/dev") in the lvm.conf, right?
> 
> The existing comment is incomplete and should be updated to mention the
> other effects.  The option pre-dates udev and so the "newer"
> interlocking there ought to be mentioned.  The description of 'advanced'
> could also be updated to explain that that means you should not change it
> unless you know exactly what you are doing!  The option exists to simplify
> some development, test and support scenarios where you want to have two or
> more distinct userspace LVM instances running on a single machine.  For
> example, to try to reproduce a certain type of user-reported bug you might set
> up a temporary /dev in a non-default location with contents that match that
> user's system and point the tools at that using this option.
> 
> Alasdair
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



Re: [linux-lvm] devices/dir configuration option in /etc/lvm/lvm.conf can be edited?

2019-10-14 Thread Gang He
Hi Zdenek,

Thanks for explaining.
That means we should tell the user not to edit this option (dir = "/dev") in
lvm.conf, right?

Thanks
Gang

> -Original Message-
> From: Zdenek Kabelac [mailto:zkabe...@redhat.com]
> Sent: 2019年10月14日 18:40
> To: LVM general discussion and development ; Gang
> He 
> Subject: Re: devices/dir configuration option in /etc/lvm/lvm.conf can be
> edited?
> 
> Dne 14. 10. 19 v 10:00 Gang He napsal(a):
> > Hello List,
> >
> > By default, devices/dir configuration option in /etc/lvm/lvm.conf is 
> > "=/dev".
> > But, if I edit this configuration option, e.g. dir = "/dev/lvm", then 
> > lvcreate
> command will fail.
> >
> > sles12sp4-node:/dev # lvcreate -L2G -ay -n testlv vgdata
> >/dev/lvm/vgdata/testlv: not found: device not cleared
> >Aborting. Failed to wipe start of new LV.
> >
> > I am using lvm 2.02.183(or 180), this option can be edit individually?
> > or any other option will affect this option, then lead to lvcreate failure.
> 
> Hi
> 
> The option is not so easy to explain:
> 
> In the first place on a today's system you shall never ever need to change 
> this
> setting - as majority of systems runs 'udev' or something similar in a fixed
> position /dev.
> 
> So symlinks & devices appear in this directory (without lvm2 doing the work
> directly) only as a consequence of the configured udev rules.
> 
> So now surely comes the obvious question -  why the 'setting' even exists
> when you should always use '/dev' anyway right ;) ?
> 
> And here the answer is longer - lvm2 is a very 'oldish' project from the 'dark'
> era before udev took control over devices - in that old era you could configure
> a different device directory for the devices created by lvm2, since it was lvm2
> itself physically creating these devices.
> 
> The usability for normal users is relatively questionable, since almost every
> user wants his devices in the /dev dir anyway, but a couple wanted to maintain
> a separate dir for lvm2 devices.
> 
> The 'other' use-case is for testing - where i.e. lvm2 test suite is/(or was) 
> able to
> run its tests in completely isolated device directory.
> 
> But to be able to use this 'capability' one has to enable another lvm.conf
> setting:  'activation/verify_udev_operations=1'  - when enabled, lvm2 will
> ensure devices are in the given directory.
> 
> But (and it's a BIG BUT) this shall never be enabled on a system with a running
> udevd and the /dev dir set - as basically nothing other than udevd is supposed
> to be creating anything in the /dev dir.
> 
> So hopefully this explains most of the question you may have about this
> setting.
> 
> Regards
> 
> Zdenek
> 
> 
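
For reference, a minimal lvm.conf sketch of the isolated-directory setup described
above (only for a test machine where udevd is not managing /dev; the directory
name is hypothetical):

devices {
    dir = "/dev/lvm"                 # non-default device directory
}
activation {
    verify_udev_operations = 1       # let lvm2 itself make sure the nodes exist there
}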



[linux-lvm] devices/dir configuration option in /etc/lvm/lvm.conf can be edited?

2019-10-14 Thread Gang He
Hello List,

By default, the devices/dir configuration option in /etc/lvm/lvm.conf is "/dev".
But if I edit this configuration option, e.g. dir = "/dev/lvm", then the lvcreate
command fails.

sles12sp4-node:/dev # lvcreate -L2G -ay -n testlv vgdata
  /dev/lvm/vgdata/testlv: not found: device not cleared
  Aborting. Failed to wipe start of new LV.

I am using lvm 2.02.183 (or 180). Can this option be edited on its own,
or does some other option affect it and lead to the lvcreate failure?

Thanks
Gang



Re: [linux-lvm] pvresize will cause a meta-data corruption with error message "Error writing device at 4096 length 512"

2019-10-11 Thread Gang He
Hello David,

Based on the information from Heming, do you think this is a new bug, or can we
fix it with the existing patches?
The user now wants to restore the LVM2 metadata back to its original state; do
you have any suggestions?

Thanks
Gang

> -Original Message-
> From: David Teigland [mailto:teigl...@redhat.com]
> Sent: 2019年10月11日 23:14
> To: Heming Zhao 
> Cc: linux-lvm@redhat.com; Gang He 
> Subject: Re: [linux-lvm] pvresize will cause a meta-data corruption with error
> message "Error writing device at 4096 length 512"
> 
> On Fri, Oct 11, 2019 at 08:11:29AM +, Heming Zhao wrote:
> 
> > I analyze this issue for some days. It looks a new bug.
> 
> Yes, thanks for the thorough analysis.
> 
> > On the user's machine, this write action failed; the PV header data (first
> > 4K) was saved in bcache (the cache->errored list), and then written (by
> > bcache_flush) to another disk (f748).
> 
> It looks like we need to get rid of cache->errored completely.
> 
> > If dev_write_bytes fails, bcache never cleans last_byte, and the fd is closed
> > at the same time, but cache->errored still holds the errored fd's data.
> > Later, when lvm opens a new disk, the fd may reuse the old errored fd number,
> > and the errored data will be written when lvm later calls bcache_flush.
> 
> That's a bad bug.
> 
> > 2> duplicated pv header.
> > as <1> description, fc68 metadata was overwritten to f748.
> > this cause by lvm bug (I said in <1>).
> >
> > 3> device not correct
> > I don't know why the disk scsi-360060e80072a67302a67fc68 has below wrong metadata:
> >
> > pre_pvr/scsi-360060e80072a67302a67fc68
> > (please also read the comments in below metadata area.) ```
> >  vgpocdbcdb1_r2 {
> >  id = "PWd17E-xxx-oANHbq"
> >  seqno = 20
> >  format = "lvm2"
> >  status = ["RESIZEABLE", "READ", "WRITE"]
> >  flags = []
> >  extent_size = 65536
> >  max_lv = 0
> >  max_pv = 0
> >  metadata_copies = 0
> >
> >  physical_volumes {
> >
> >  pv0 {
> >  id = "3KTOW5--8g0Rf2"
> >  device = "/dev/disk/by-id/scsi-360060e80072a66302a66f768"
> >  Wrong!! ^  I don't know why there is f768, please ask customer
> >  status = ["ALLOCATABLE"]
> >  flags = []
> >  dev_size = 860160
> >  pe_start = 2048
> >  pe_count = 13
> >  }
> >  }
> > ```
> > fc68 => f768: the 'c' (b1100) changed to '7' (b0111).
> > Maybe a disk bit flipped, maybe lvm has a bug. I don't know and have no idea.
> 
> Is scsi-360060e80072a66302a66f768 the correct device for PVID
> 3KTOW5...?  If so, then it's consistent.  If not, then I suspect this is a 
> result of
> duplicating the PVID on multiple devices above.
> 
> 
> > On 9/11/19 5:17 PM, Gang He wrote:
> > > Hello List,
> > >
> > > Our user encountered a meta-data corruption problem, when run
> pvresize command after upgrading to LVM2 v2.02.180 from v2.02.120.
> > >
> > > The details are as below,
> > > we have following environment:
> > > - Storage: HP XP7 (SAN) - LUN's are presented to ESX via RDM
> > > - VMWare ESXi 6.5
> > > - SLES 12 SP 4 Guest
> > >
> > > Resize happened this way (is our standard way since years) - however
> > > - this is our first resize after upgrading SLES 12 SP3 to SLES 12 SP4 - 
> > > until
> this upgrade, we never had a problem like this:
> > > - split continous access on storage box, resize lun on XP7
> > > - recreate ca on XP7
> > > - scan on ESX
> > > - rescan-scsi-bus.sh -s on SLES VM
> > > - pvresize  ( at this step the error happened)
> > >
> > > huns1vdb01:~ # pvresize
> > > /dev/disk/by-id/scsi-360060e80072a66302a663274
> >
> > ___
> > linux-lvm mailing list
> > linux-lvm@redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-lvm
> > read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



[linux-lvm] questions about dlm: add TCP multihoming/failover support

2019-09-24 Thread Gang He
Hello List,

When I tried to upgrade libdlm to v4.0.9, I noticed a commit that adds TCP
multihoming/failover support.
commit 7a273b8714da400d292d6c9762acedcde1997e52
Author: David Windsor 
Date:   Tue May 7 09:56:53 2019 -0400
dlm_controld: bind to all interfaces for failover 

According to the commit description, there are some related kernel patches
(fs/dlm, in kernel space).
e.g. 
https://www.redhat.com/archives/cluster-devel/2019-January/msg9.html
https://www.redhat.com/archives/cluster-devel/2019-January/msg00010.html

But so far, these kernel patches have not landed in Torvalds' mainline kernel git tree.
Does that mean we cannot use this feature even if we upgrade libdlm to v4.0.9?

Second, how do we use this feature once the kernel patches are available? Is
there any document to guide users in using (or testing) this feature?

Third, are there any differences between the TCP multihoming/failover feature and
the SCTP feature? Any comments on how to choose between the two?

Thanks
Gang




Re: [linux-lvm] pvresize will cause a meta-data corruption with error message "Error writing device at 4096 length 512"

2019-09-11 Thread Gang He
Hi Ingo and Ilia,

Thanks for your help.

> -Original Message-
> From: Ingo Franzki [mailto:ifran...@linux.ibm.com]
> Sent: 2019年9月11日 18:11
> To: Ilia Zykov ; LVM general discussion and development
> ; Gang He 
> Subject: Re: [linux-lvm] pvresize will cause a meta-data corruption with error
> message "Error writing device at 4096 length 512"
> 
> On 11.09.2019 12:03, Ilia Zykov wrote:
> > Maybe this?
> >
> > Please note that this problem can also happen in other cases, such as
> > mixing disks with different block sizes (e.g. SCSI disks with 512
> > bytes and s390x-DASDs with 4096 block size).
> >
> >
> https://www.redhat.com/archives/linux-lvm/2019-February/msg00018.html
> 
> And the fix for this is already available upstream (Thanks David!):
> https://sourceware.org/git/?p=lvm2.git;a=commit;h=0404539edb25e4a9d3456bb3e6b402aa2767af6b
Can this commit fix the problem thoroughly? Do we need any other patches on top
of v2.02.180?

Thanks
Gang

> >
> >
> > On 11.09.2019 12:17, Gang He wrote:
> >> Hello List,
> >>
> >> Our user encountered a meta-data corruption problem, when run pvresize
> command after upgrading to LVM2 v2.02.180 from v2.02.120.
> >>
> >> The details are as below,
> >> we have following environment:
> >> - Storage: HP XP7 (SAN) - LUN's are presented to ESX via RDM
> >> - VMWare ESXi 6.5
> >> - SLES 12 SP 4 Guest
> >>
> >> Resize happened this way (is our standard way since years) - however
> >> - this is our first resize after upgrading SLES 12 SP3 to SLES 12 SP4 - 
> >> until
> this upgrade, we never had a problem like this:
> >> - split continous access on storage box, resize lun on XP7
> >> - recreate ca on XP7
> >> - scan on ESX
> >> - rescan-scsi-bus.sh -s on SLES VM
> >> - pvresize  ( at this step the error happened)
> >>
> >> huns1vdb01:~ # pvresize
> >> /dev/disk/by-id/scsi-360060e80072a66302a663274
> >>  Error writing device /dev/sdaf at 4096 length 512.
> >>  Failed to write mda header to /dev/sdaf fd -1  Failed to update old
> >> PV extension headers in VG vghundbhulv_ar.
> >>  Error writing device
> /dev/disk/by-id/scsi-360060e80072a66302a6631ec at 4096 length
> 512.
> >>  Failed to write mda header to
> >> /dev/disk/by-id/scsi-360060e80072a66302a6631ec fd -1
> Failed to update old PV extension headers in VG vghundbhulk_ar.
> >>  VG info not found after rescan of vghundbhulv_r2  VG info not found
> >> after rescan of vghundbhula_r1  VG info not found after rescan of
> >> vghundbhuco_ar  Error writing device
> >> /dev/disk/by-id/scsi-360060e80072a66302a6631e8 at 4096
> length 512.
> >>  Failed to write mda header to
> >> /dev/disk/by-id/scsi-360060e80072a66302a6631e8 fd -1
> Failed to update old PV extension headers in VG vghundbhula_ar.
> >>  VG info not found after rescan of vghundbhuco_r2  Error writing
> >> device /dev/disk/by-id/scsi-360060e80072a66302a66300b at
> 4096 length 512.
> >>  Failed to write mda header to
> >> /dev/disk/by-id/scsi-360060e80072a66302a66300b fd -1
> Failed to update old PV extension headers in VG vghundbhunrm02_r2.
> >>
> >> Any idea for this bug?
> >>
> >> Thanks a lot.
> >> Gang
> >>
> >>
> >> ___
> >> linux-lvm mailing list
> >> linux-lvm@redhat.com
> >> https://www.redhat.com/mailman/listinfo/linux-lvm
> >> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> >>
> >
> >
> >
> > ___
> > linux-lvm mailing list
> > linux-lvm@redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-lvm
> > read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> >
> 
> 
> --
> Ingo Franzki
> eMail: ifran...@linux.ibm.com
> Tel: ++49 (0)7031-16-4648
> Fax: ++49 (0)7031-16-3456
> Linux on IBM Z Development, Schoenaicher Str. 220, 71032 Boeblingen,
> Germany
> 
> IBM Deutschland Research & Development GmbH / Vorsitzender des
> Aufsichtsrats: Matthias Hartmann
> Geschäftsführung: Dirk Wittkopp
> Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB
> 243294 IBM DATA Privacy Statement: https://www.ibm.com/privacy/us/en/
> 



[linux-lvm] pvresize will cause a meta-data corruption with error message "Error writing device at 4096 length 512"

2019-09-11 Thread Gang He
Hello List,

Our user encountered a metadata corruption problem when running the pvresize
command after upgrading from LVM2 v2.02.120 to v2.02.180.

The details are as below,
we have following environment:
- Storage: HP XP7 (SAN) - LUN's are presented to ESX via RDM
- VMWare ESXi 6.5
- SLES 12 SP 4 Guest

The resize happened this way (it has been our standard procedure for years);
however, this is our first resize after upgrading from SLES 12 SP3 to SLES 12 SP4,
and until this upgrade we never had a problem like this:
- split continuous access on the storage box, resize the LUN on the XP7
- recreate CA on the XP7
- scan on ESX
- rescan-scsi-bus.sh -s on the SLES VM
- pvresize (the error happened at this step)

huns1vdb01:~ # pvresize /dev/disk/by-id/scsi-360060e80072a66302a663274
 Error writing device /dev/sdaf at 4096 length 512.
 Failed to write mda header to /dev/sdaf fd -1
 Failed to update old PV extension headers in VG vghundbhulv_ar.
 Error writing device /dev/disk/by-id/scsi-360060e80072a66302a6631ec at 
4096 length 512.
 Failed to write mda header to 
/dev/disk/by-id/scsi-360060e80072a66302a6631ec fd -1
 Failed to update old PV extension headers in VG vghundbhulk_ar.
 VG info not found after rescan of vghundbhulv_r2
 VG info not found after rescan of vghundbhula_r1
 VG info not found after rescan of vghundbhuco_ar
 Error writing device /dev/disk/by-id/scsi-360060e80072a66302a6631e8 at 
4096 length 512.
 Failed to write mda header to 
/dev/disk/by-id/scsi-360060e80072a66302a6631e8 fd -1
 Failed to update old PV extension headers in VG vghundbhula_ar.
 VG info not found after rescan of vghundbhuco_r2
 Error writing device /dev/disk/by-id/scsi-360060e80072a66302a66300b at 
4096 length 512.
 Failed to write mda header to 
/dev/disk/by-id/scsi-360060e80072a66302a66300b fd -1
 Failed to update old PV extension headers in VG vghundbhunrm02_r2.

Any idea for this bug?

Thanks a lot.
Gang




Re: [linux-lvm] dlm userspace tool source tarball name format has been changed to dlm-dlm-4.0.x.tar.gz

2019-08-07 Thread Gang He
Hi David,

If I download the source code tarball from https://releases.pagure.org/dlm/, the
tarball file names look the same as before.
If I download the source code tarball from https://pagure.io/dlm/releases
(e.g. https://pagure.io/dlm/archive/dlm-4.0.9/dlm-dlm-4.0.9.tar.gz),
the tarball file name format looks changed.
So, which link is the official dlm release download link?

Thanks
Gang 

> -Original Message-
> From: David Teigland [mailto:teigl...@redhat.com]
> Sent: 2019年8月8日 1:05
> To: Gang He 
> Cc: LVM general discussion and development 
> Subject: Re: [linux-lvm] dlm userspace tool source tarball name format has
> been changed to dlm-dlm-4.0.x.tar.gz
> 
> On Wed, Aug 07, 2019 at 04:33:24AM +, Gang He wrote:
> > Hi David,
> >
> > Today, I downloaded the dlm user-space tool source code tar-ball from
> https://pagure.io/dlm.
> > I found the tar-ball name format has been changed to dlm-dlm-4.0.x.tar.gz,
> the newly added "dlm-" prefix name is on purpose?
> 
> I uploaded a correctly named file to https://releases.pagure.org/dlm/ pagure
> automatically adds the redundant prefix to generated files.
> 



[linux-lvm] dlm userspace tool source tarball name format has been changed to dlm-dlm-4.0.x.tar.gz

2019-08-06 Thread Gang He
Hi David,

Today I downloaded the dlm user-space tool source code tarball from
https://pagure.io/dlm.
I found the tarball name format has changed to dlm-dlm-4.0.x.tar.gz; is the newly
added "dlm-" prefix on purpose?


Thanks
Gang 



Re: [linux-lvm] Which lvm2 code branches are important?

2019-07-10 Thread Gang He
Hello Marian,

Thanks for your reply.
Regarding the detailed changes for v2.03, where can we find the related documents
(is there an external link)?

Thanks
Gang

-Original Message-
From: linux-lvm-boun...@redhat.com [mailto:linux-lvm-boun...@redhat.com] On 
Behalf Of Marian Csontos
Sent: 2019年7月10日 20:48
To: LVM general discussion and development ; Gang He 

Subject: Re: [linux-lvm] Which lvm2 code branches are important?

On 7/10/19 8:42 AM, Gang He wrote:
> Hello List,
> 
> After you clone the code from git://sourceware.org/git/lvm2.git, you can find 
> lots of remote code branches.
> But which code branches are important for the third party users/ developer? 
> That means we should monitor these code branches.
> For example,
> Which code branches are main (or long-active) code branches?
> Which code branches are used for which lvm2 products (big versions)?

master - 2.03 branch, new features land here,
stable-2.02 - legacy 2.02 branch, bug fixes mostly.
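
For reference, a minimal sketch of tracking these two branches:

git clone git://sourceware.org/git/lvm2.git
cd lvm2
git checkout master          # 2.03 development branch
git checkout stable-2.02     # legacy 2.02 maintenance branch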

> What are the main differences between different lvm2 products? E.g. some 
> features are added/removed.

2.03 vs 2.02:

- dropped lvmetad and clvmd,
- added handling of writecache and VDO targets.

> 
> 
> Thanks a lot.
> Gang
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 


[linux-lvm] Which lvm2 code branches are important?

2019-07-10 Thread Gang He
Hello List,

After you clone the code from git://sourceware.org/git/lvm2.git, you can find
lots of remote code branches.
But which branches are important for third-party users/developers, i.e. which
branches should we monitor?
For example:
Which branches are the main (or long-active) branches?
Which branches are used for which lvm2 products (major versions)?
What are the main differences between the different lvm2 products, e.g. which
features are added/removed?


Thanks a lot.
Gang



Re: [linux-lvm] "Unknown feature in status" message when running lvs/lvdisplay against cached LVs

2019-07-10 Thread Gang He
Hi Marian,

Thanks for your help; the patch fixes this error message.

Thanks
Gang

-Original Message-
From: Marian Csontos [mailto:mcson...@redhat.com] 
Sent: 2019年7月8日 16:37
To: LVM general discussion and development ; Gang He 

Subject: Re: [linux-lvm] "Unknown feature in status" message when running 
lvs/lvdisplay against cached LVs

On 7/5/19 11:20 AM, Gang He wrote:
> Hi Guys,
> 
> I uses lvm2-2.02.180, I got an error message when running lvs/lvdisplay 
> against cached LVs.
> e.g.
> linux-v5ay:~ # lvs
>Unknown feature in status: 8 1324/8192 128 341/654272 129 151 225 1644 0 
> 341 0 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 
> smq 0 rw -
>LV   VG Attr   LSize  Pool   Origin   Data%  Meta%  Move 
> Log Cpy%Sync Convert
>home system -wi-ao 17.99g
>root system Cwi-aoC--- 20.00g [lvc_root] [root_corig] 0.05   16.16 
>   0.00
>swap system -wi-ao  2.00g

You need this patch:

https://sourceware.org/git/?p=lvm2.git;a=commit;h=adf9bf80a32500b45b37eb24b98fa7c2c933019e

Kernel introduced a new feature which is not recognized by lvm versions 
released before the kernel (which is anything up to latest 2.02.185). 
This will be included in the next lvm release.

-- Marian

> 
> The bug can be reproduced stably with the below steps.
> Have a (virtual) machine with 2 disk drives Install os with LVM as 
> root volume Then, add the second disk after os is installed.
> # pvcreate /dev/sdb
> # vgextend system /dev/sdb
> # lvcreate --type cache-pool -l 100%FREE -n lvc_root system /dev/sdb # 
> lvconvert --type cache --cachepool system/lvc_root system/root then 
> run `lvs` or `lvdisplay` command to trigger the issue
> 
> 
> Thanks
> Gang
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 




[linux-lvm] "Unknown feature in status" message when running lvs/lvdisplay against cached LVs

2019-07-05 Thread Gang He
Hi Guys,

I use lvm2-2.02.180, and I get an error message when running lvs/lvdisplay
against cached LVs.
e.g. 
linux-v5ay:~ # lvs
  Unknown feature in status: 8 1324/8192 128 341/654272 129 151 225 1644 0 341 
0 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 smq 0 
rw -
  LV   VG Attr   LSize  Pool   Origin   Data%  Meta%  Move Log 
Cpy%Sync Convert
  home system -wi-ao 17.99g
  root system Cwi-aoC--- 20.00g [lvc_root] [root_corig] 0.05   16.16   
0.00
  swap system -wi-ao  2.00g

The bug can be reproduced reliably with the steps below.
Have a (virtual) machine with 2 disk drives.
Install the OS with LVM as the root volume.
Then add the second disk after the OS is installed.
# pvcreate /dev/sdb
# vgextend system /dev/sdb
# lvcreate --type cache-pool -l 100%FREE -n lvc_root system /dev/sdb
# lvconvert --type cache --cachepool system/lvc_root system/root
then run `lvs` or `lvdisplay` command to trigger the issue


Thanks
Gang



Re: [linux-lvm] Can't remove the snap LV of root volume on multipath disk PV

2019-07-03 Thread Gang He
Hi Zdenek,

Thanks for your reply; the bug is opened at
https://bugzilla.redhat.com/show_bug.cgi?id=1726524
I suspect RHEL lvm2 may have a similar problem.
The problem looks related to the multipath disk PV.

Thanks
Gang

-Original Message-
From: Zdenek Kabelac [mailto:zkabe...@redhat.com] 
Sent: 2019年7月1日 18:23
To: LVM general discussion and development ; Gang He 

Subject: Re: Can't remove the snap LV of root volume on multipath disk PV

Dne 01. 07. 19 v 8:31 Gang He napsal(a):
> Hello List,
> 
> I am using lvm2-2.02.180 on SLES12SP4, I cannot remove the snap LV of root 
> volume, which is based on multipath disk PV.
> e.g.
> linux-kkay:/ # lvremove /dev/system/snap_root
>WARNING: Reading VG system from disk because lvmetad metadata is invalid.
> Do you really want to remove active logical volume system/snap_root? [y/n]: y
>device-mapper: reload ioctl on  (254:3) failed: Invalid argument
>Failed to refresh root without snapshot.
> 
> But, I can remove the snap LV of data volume successfully, e.g.
> linux-kkay:/ # lvremove /dev/system/data_snap
>WARNING: Reading VG system from disk because lvmetad metadata is invalid.
> Do you really want to remove active logical volume system/data_snap? [y/n]: y
>Logical volume "data_snap" successfully removed
> 
> If I use the ordinary disk as PV (rather than multipath disk), I cannot 
> encounter this problem (both snap LVs can be removed).
>

Hi

Please open a new BZ case and provide full traces of the failing command, and
also possibly attach 'dmsetup table', 'dmsetup ls --tree', 'dmsetup info -c' and
'dmsetup status' output.


Regards

Zdenek
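
For reference, a sketch of collecting the requested information (assuming -vvvv
is the intended verbosity; the output file names are arbitrary):

lvremove -vvvv /dev/system/snap_root 2> /tmp/lvremove-trace.log
dmsetup table      > /tmp/dmsetup-table.txt
dmsetup ls --tree  > /tmp/dmsetup-tree.txt
dmsetup info -c    > /tmp/dmsetup-info.txt
dmsetup status     > /tmp/dmsetup-status.txt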



[linux-lvm] Can't remove the snap LV of root volume on multipath disk PV

2019-07-01 Thread Gang He
Hello List,

I am using lvm2-2.02.180 on SLES12 SP4; I cannot remove the snapshot LV of the
root volume, which is based on a multipath disk PV.
e.g. 
linux-kkay:/ # lvremove /dev/system/snap_root
  WARNING: Reading VG system from disk because lvmetad metadata is invalid.
Do you really want to remove active logical volume system/snap_root? [y/n]: y
  device-mapper: reload ioctl on  (254:3) failed: Invalid argument
  Failed to refresh root without snapshot.

But, I can remove the snap LV of data volume successfully, e.g.
linux-kkay:/ # lvremove /dev/system/data_snap
  WARNING: Reading VG system from disk because lvmetad metadata is invalid.
Do you really want to remove active logical volume system/data_snap? [y/n]: y
  Logical volume "data_snap" successfully removed

If I use an ordinary disk as the PV (rather than a multipath disk), I do not
encounter this problem (both snapshot LVs can be removed).

The disk layout is as below,
linux-kkay:/ # lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda  8:00   30G  0 disk
├─sda1   8:10   30G  0 part
└─mpd_root 254:00   30G  0 mpath
  └─mpd_root-part1 254:10   30G  0 part
├─system-root-real 254:20   10G  0 lvm
│ ├─system-root254:30   10G  0 lvm   /
│ └─system-snap_root   254:50   10G  0 lvm
├─system-snap_root-cow 254:405G  0 lvm
│ └─system-snap_root   254:50   10G  0 lvm
├─system-swap  254:602G  0 lvm   [SWAP]
├─system-data-real 254:804G  0 lvm
│ ├─system-data254:704G  0 lvm   /data
│ └─system-data_snap   254:10   04G  0 lvm
└─system-data_snap-cow 254:90  840M  0 lvm
  └─system-data_snap   254:10   04G  0 lvm
sdb  8:16   0   30G  0 disk
├─sdb1   8:17   0   30G  0 part
└─mpd_root 254:00   30G  0 mpath
  └─mpd_root-part1 254:10   30G  0 part
├─system-root-real 254:20   10G  0 lvm
│ ├─system-root254:30   10G  0 lvm   /
│ └─system-snap_root   254:50   10G  0 lvm
├─system-snap_root-cow 254:405G  0 lvm
│ └─system-snap_root   254:50   10G  0 lvm
├─system-swap  254:602G  0 lvm   [SWAP]
├─system-data-real 254:804G  0 lvm
│ ├─system-data254:704G  0 lvm   /data
│ └─system-data_snap   254:10   04G  0 lvm
└─system-data_snap-cow 254:90  840M  0 lvm
  └─system-data_snap   254:10   04G  0 lvm

Thanks
Gang


Re: [linux-lvm] lvm snapshot cannot be removed with the error "Failed to refresh without snapshot"

2019-06-19 Thread Gang He
Hello Guys,

Any ideas? Or what information do I need to provide to help with this problem?

Thanks a lot.
Gang

>>> On 2019/6/19 at 17:26, in message
<5d09ffc902f90006d...@prv1-mh.provo.novell.com>, "Gang He" 
wrote:
> Hello List,
> 
> Our user is using lvm2-2.02.180, he attempted to remove a snapshot LV, but 
> failed with the below error,
> lvremove /dev/evg00/snap_var
>   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
> Do you really want to remove active logical volume evg00/snap_var? [y/n]: y
>   device-mapper: reload ioctl on  (254:36) failed: Invalid argument
>   Failed to refresh var without snapshot.
> 
> But I can not reproduce in local, how do I find the root cause behind this 
> problem?
> I also can see one related error message in /var/log/messages
> Jun 12 09:00:22 gtunxlnb04864 sudo:   buss18 : TTY=pts/0 ; 
> PWD=/PZIR/users/buss18 ; USER=root ; COMMAND=/sbin/lvremove 
> /dev/evg00/snap_opt
> Jun 12 09:00:22 gtunxlnb04864 sudo[25821]: pam_unix(sudo:session): session 
> opened for user root by buss18(uid=0)
> Jun 12 09:00:22 gtunxlnb04864 kernel: [1492631.077551] device-mapper: ioctl: 
> can't change device type after initial table load.  <<== here
> Jun 12 09:00:22 gtunxlnb04864 sudo[25821]: pam_unix(sudo:session): session 
> closed for user root
> Jun 12 09:01:01 gtunxlnb04864 cron[25932]: pam_unix(crond:session): session 
> opened for user ggmonux1 by (uid=0)
> 
> Thanks
> Gang
> 
> 
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/




[linux-lvm] lvm snapshot cannot be removed with the error "Failed to refresh without snapshot"

2019-06-19 Thread Gang He
Hello List,

Our user is using lvm2-2.02.180; he attempted to remove a snapshot LV but failed
with the error below:
lvremove /dev/evg00/snap_var
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Do you really want to remove active logical volume evg00/snap_var? [y/n]: y
  device-mapper: reload ioctl on  (254:36) failed: Invalid argument
  Failed to refresh var without snapshot.

But I cannot reproduce it locally; how do I find the root cause of this problem?
I can also see one related error message in /var/log/messages:
Jun 12 09:00:22 gtunxlnb04864 sudo:   buss18 : TTY=pts/0 ; 
PWD=/PZIR/users/buss18 ; USER=root ; COMMAND=/sbin/lvremove /dev/evg00/snap_opt
Jun 12 09:00:22 gtunxlnb04864 sudo[25821]: pam_unix(sudo:session): session 
opened for user root by buss18(uid=0)
Jun 12 09:00:22 gtunxlnb04864 kernel: [1492631.077551] device-mapper: ioctl: 
can't change device type after initial table load.  <<== here
Jun 12 09:00:22 gtunxlnb04864 sudo[25821]: pam_unix(sudo:session): session 
closed for user root
Jun 12 09:01:01 gtunxlnb04864 cron[25932]: pam_unix(crond:session): session 
opened for user ggmonux1 by (uid=0)

Thanks
Gang





[linux-lvm] "lvconvert --mirrors 1 --stripes 3 vgtest/lvtest" command succeeds, but no stripes

2019-05-16 Thread Gang He
Hello Guys,

I found that the lvconvert command (in lvm2-2.02.120) does not handle the
"--stripes" option correctly.
The reproduce steps are as below,
# vgcreate vgtest /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf /dev/vdg
# lvcreate -n lvtest -L 8G vgtest
# lvconvert --mirrors 1 --stripes 3 vgtest/lvtest
# lvs -o+stripes
  LV VG Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync 
Convert #Str
  lvtest vgtest rwi-a-r--- 8.00g12.21   
2

But if you create the LV with the lvcreate command directly, the command succeeds
with stripes.
e.g.
# lvcreate --mirrors 1 --stripes 3 -L 4G vgtest
# lvs -o+stripes
  LV VG Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync 
Convert #Str
  lvol0  vgtest rwi-a-r--- 4.01g8.67
6

I also used lvm2-2.02.180 to reproduce this issue; the lvconvert command works as
before, but it prints a message like this:
# lvconvert --mirrors 1 --stripes 3 vgtest/lvtest
  Command does not accept option: --stripes 3.

Why can't lvconvert do this when lvcreate can; is it a design issue?
What limitations are there between mirrors and stripes?

Thanks
Gang





Re: [linux-lvm] Does LVM2 support building with LTO enablement?

2019-05-14 Thread Gang He
Hi Zdenek,

The main motivation is to build most openSUSE rpms with LTO enabled, and LVM2 is
in the list.
I will try to compile the package with LTO enabled too.
Of course, if LVM2 does not fully support building with LTO, I think we need not
try.

Thanks
Gang

>>> On 2019/5/14 at 16:00, in message
<59ef89f8-9642-4455-cef0-450816eac...@redhat.com>, Zdenek Kabelac
 wrote:
> Dne 14. 05. 19 v 8:25 Gang He napsal(a):
>> Hello Guys,
>> 
>> Anybody touched this area?
>> 
>> Thanks
>> Gang
> 
> 
> Hi
> 
> I'll take a look - although it looks like the problem is possibly with libaio 
> ?
> 
> Is libaio usable with -flto ?
> 
> ATM libaio is mandatory for building lvm2.
> 
> BTW - why do you need to use this option - lvm2 isn't really CPU cycle 
> bounded 
> 
>   - if there is something slow it's typically some design issue - -flto will 
> not
> really improve things here...
> 
> Zdenek
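
As a quick way to check the libaio question above, a small sketch (assuming gcc
and the libaio development headers are installed) that links a trivial
io_getevents user with -flto:

cat > aio-lto-test.c <<'EOF'
#include <libaio.h>
int main(void)
{
    io_context_t ctx = 0;
    struct io_event ev;
    if (io_setup(1, &ctx))               /* create a tiny AIO context */
        return 1;
    io_getevents(ctx, 0, 1, &ev, NULL);  /* one of the symbols the LTO link missed */
    io_destroy(ctx);
    return 0;
}
EOF
gcc -O2 -flto aio-lto-test.c -laio -o aio-lto-test && echo "libaio links fine with -flto"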




Re: [linux-lvm] Does LVM2 support building with LTO enablement?

2019-05-14 Thread Gang He
Hello Guys,

Has anybody touched this area?

Thanks
Gang

>>> On 2019/5/10 at 11:00, in message
<5cd4e95002f900064...@prv1-mh.provo.novell.com>, "Gang He" 
wrote:
> Hello List,
> 
> Our build team wants to build LVM2 with LTO enablement, but looks failed.
> The error message likes,
> 
> [   30s] gcc -fmessage-length=0 -grecord-gcc-switches -O2 -Wall 
> -D_FORTIFY_SOURCE=2 
> -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables 
> -fstack-clash-protection -flto=160 -g -fPIC  -L../libdm -L../lib 
> -L../libdaemon/client -L../daemons/dmeventd -Wl,-z,relro,-z,now -pie -fPIE 
> -Wl,--export-dynamic -o lvm command.o dumpconfig.o formats.o lvchange.o 
> lvconvert.o lvconvert_poll.o lvcreate.o lvdisplay.o lvextend.o lvmcmdline.o 
> lvmdiskscan.o lvreduce.o lvremove.o lvrename.o lvresize.o lvscan.o 
> polldaemon.o pvchange.o pvck.o pvcreate.o pvdisplay.o pvmove.o pvmove_poll.o 
> pvremove.o pvresize.o pvscan.o reporter.o segtypes.o tags.o toollib.o 
> vgcfgbackup.o vgcfgrestore.o vgchange.o vgck.o vgcreate.o vgconvert.o 
> vgdisplay.o vgexport.o vgextend.o vgimport.o vgmerge.o vgmknodes.o lvpoll.o 
> vgimportclone.o vgreduce.o vgremove.o vgrename.o vgscan.o vgsplit.o  lvm.o \
> [   30s]  -llvm-internal -ldevmapper-event -ldaemonclient  -ludev -ldl 
> -lblkid 
> -ldevmapper -laio -lreadline 
> [   30s] 
> /usr/lib64/gcc/x86_64-suse-linux/9/../../../../x86_64-suse-linux/bin/ld: 
> /tmp/lvm.vKOJS8.ltrans0.ltrans.o: in function `_async_wait':
> [   30s] 
> /home/abuild/rpmbuild/BUILD/LVM2.2.02.180/tools/device/bcache.c:268: 
> undefined reference to `io_getevents'
> [   30s] 
> /usr/lib64/gcc/x86_64-suse-linux/9/../../../../x86_64-suse-linux/bin/ld: 
> /usr/lib64/gcc/x86_64-suse-linux/9/../../../../lib64/libaio.so: undefined 
> reference to `io_cancel'
> [   30s] collect2: error: ld returned 1 exit status
> [   30s] make[1]: *** [Makefile:143: lvm] Error 1
> [   30s] make[1]: Leaving directory 
> '/home/abuild/rpmbuild/BUILD/LVM2.2.02.180/tools'
> [   30s] make[1]: *** Waiting for unfinished jobs
> [   30s] make[1]: Entering directory 
> '/home/abuild/rpmbuild/BUILD/LVM2.2.02.180/tools'
> [   30s] [CC] liblvm2cmd.so
> 
> About LTO information, you can look at the links as below,
> https://lists.opensuse.org/opensuse-factory/2019-04/msg00320.html 
> https://en.opensuse.org/openSUSE:LTO 
> 
> 
> Thanks
> Gang
> 
> 
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/




[linux-lvm] Does LVM2 support building with LTO enablement?

2019-05-09 Thread Gang He
Hello List,

Our build team wants to build LVM2 with LTO enabled, but the build fails.
The error message looks like:

[   30s] gcc -fmessage-length=0 -grecord-gcc-switches -O2 -Wall 
-D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables 
-fasynchronous-unwind-tables -fstack-clash-protection -flto=160 -g -fPIC  
-L../libdm -L../lib -L../libdaemon/client -L../daemons/dmeventd 
-Wl,-z,relro,-z,now -pie -fPIE -Wl,--export-dynamic -o lvm command.o 
dumpconfig.o formats.o lvchange.o lvconvert.o lvconvert_poll.o lvcreate.o 
lvdisplay.o lvextend.o lvmcmdline.o lvmdiskscan.o lvreduce.o lvremove.o 
lvrename.o lvresize.o lvscan.o polldaemon.o pvchange.o pvck.o pvcreate.o 
pvdisplay.o pvmove.o pvmove_poll.o pvremove.o pvresize.o pvscan.o reporter.o 
segtypes.o tags.o toollib.o vgcfgbackup.o vgcfgrestore.o vgchange.o vgck.o 
vgcreate.o vgconvert.o vgdisplay.o vgexport.o vgextend.o vgimport.o vgmerge.o 
vgmknodes.o lvpoll.o vgimportclone.o vgreduce.o vgremove.o vgrename.o vgscan.o 
vgsplit.o  lvm.o \
[   30s]-llvm-internal -ldevmapper-event -ldaemonclient  -ludev -ldl 
-lblkid -ldevmapper -laio -lreadline 
[   30s] 
/usr/lib64/gcc/x86_64-suse-linux/9/../../../../x86_64-suse-linux/bin/ld: 
/tmp/lvm.vKOJS8.ltrans0.ltrans.o: in function `_async_wait':
[   30s] /home/abuild/rpmbuild/BUILD/LVM2.2.02.180/tools/device/bcache.c:268: 
undefined reference to `io_getevents'
[   30s] 
/usr/lib64/gcc/x86_64-suse-linux/9/../../../../x86_64-suse-linux/bin/ld: 
/usr/lib64/gcc/x86_64-suse-linux/9/../../../../lib64/libaio.so: undefined 
reference to `io_cancel'
[   30s] collect2: error: ld returned 1 exit status
[   30s] make[1]: *** [Makefile:143: lvm] Error 1
[   30s] make[1]: Leaving directory 
'/home/abuild/rpmbuild/BUILD/LVM2.2.02.180/tools'
[   30s] make[1]: *** Waiting for unfinished jobs
[   30s] make[1]: Entering directory 
'/home/abuild/rpmbuild/BUILD/LVM2.2.02.180/tools'
[   30s] [CC] liblvm2cmd.so

About LTO information, you can look at the links as below,
https://lists.opensuse.org/opensuse-factory/2019-04/msg00320.html 
https://en.opensuse.org/openSUSE:LTO 


Thanks
Gang





Re: [linux-lvm] pvscan: /dev/sdc: open failed: No medium found

2019-04-29 Thread Gang He
Hello David,

Sorry for the delayed reply.
The verbose log (lvm2-2.02.180) looks like this:

#device/dev-cache.c:763   Found dev 11:0 
/dev/disk/by-id/ata-TSSTcorp_DVDWBD_SH-B123L_R84A6GDC10003F - new alias.
#device/dev-cache.c:763   Found dev 11:0 
/dev/disk/by-label/SLE-12-SP4-Server-DVD-x86_640456 - new alias.
#device/dev-cache.c:763   Found dev 11:0 
/dev/disk/by-path/pci-:00:1f.2-ata-2 - new alias.
#device/dev-cache.c:763   Found dev 11:0 
/dev/disk/by-uuid/2018-11-07-14-08-50-00 - new alias.
#device/dev-cache.c:763   Found dev 11:0 /dev/dvd - new alias.
#device/dev-cache.c:763   Found dev 11:0 /dev/dvdrw - new alias.
#cache/lvmetad.c:1420  Asking lvmetad for complete list of known PVs
#device/dev-io.c:609   Opened /dev/sda RO O_DIRECT
#device/dev-io.c:359 /dev/sda: size is 268435456 sectors
#device/dev-io.c:658   Closed /dev/sda
#filters/filter-partitioned.c:37/dev/sda: Skipping: Partition table 
signature found
#filters/filter-type.c:27/dev/cdrom: Skipping: Unrecognised LVM 
device type 11
#device/dev-io.c:609   Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:359 /dev/sda1: size is 4206592 sectors
#device/dev-io.c:658   Closed /dev/sda1
#filters/filter-mpath.c:196   /dev/sda1: Device is a partition, using 
primary device sda for mpath component detection
#device/dev-io.c:336 /dev/sda1: using cached size 4206592 sectors
#filters/filter-persistent.c:346   filter caching good /dev/sda1
#device/dev-io.c:609   Opened /dev/root RO O_DIRECT
#device/dev-io.c:359 /dev/root: size is 264226816 sectors
#device/dev-io.c:658   Closed /dev/root
#filters/filter-mpath.c:196   /dev/root: Device is a partition, using 
primary device sda for mpath component detection
#device/dev-io.c:336 /dev/root: using cached size 264226816 sectors
#filters/filter-persistent.c:346   filter caching good /dev/root
#device/dev-io.c:567 /dev/sdb: open failed: No medium found 
   <<== here
#device/dev-io.c:343   
#filters/filter-usable.c:32/dev/sdb: Skipping: dev_get_size failed
#toollib.c:4377  Processing PVs in VG #orphans_lvm2
#locking/locking.c:331   Dropping cache for #orphans.
#misc/lvm-flock.c:202 Locking /run/lvm/lock/P_orphans RB
#misc/lvm-flock.c:100   _do_flock /run/lvm/lock/P_orphans:aux WB
#misc/lvm-flock.c:47_undo_flock /run/lvm/lock/P_orphans:aux
#misc/lvm-flock.c:100   _do_flock /run/lvm/lock/P_orphans RB
#cache/lvmcache.c:751   lvmcache has no info for vgname "#orphans".
#metadata/metadata.c:3764Reading VG #orphans_lvm2
#locking/locking.c:331   Dropping cache for #orphans.
#misc/lvm-flock.c:70  Unlocking /run/lvm/lock/P_orphans
#misc/lvm-flock.c:47_undo_flock /run/lvm/lock/P_orphans
#cache/lvmcache.c:751   lvmcache has no info for vgname "#orphans".
#locking/locking.c:331   Dropping cache for #orphans.

Thanks
Gang 

>>> On 4/24/2019 at 11:08 pm, in message <20190424150858.ga3...@redhat.com>, 
>>> David
Teigland  wrote:
> On Tue, Apr 23, 2019 at 09:23:29PM -0600, Gang He wrote:
>> Hello Peter and David,
>> 
>> Thank for your quick responses.
>> How do we handle this behavior further?
>> Fix it as an issue, filter this kind of disk silently. 
>> or keep the current error message printing, looking a bit unfriendly, but 
> the logic is not wrong.
> 
> Hi, 
> 
> I'd like to figure out what the old code was doing differently to avoid
> this.  Part of the problem is that I don't have a device that reports
> these same errors.  Could you send me the output of pvscan - so I can
> see which open is causing the error?
> Thanks




Re: [linux-lvm] pvscan: /dev/sdc: open failed: No medium found

2019-04-23 Thread Gang He
Hello Peter and David,

Thanks for your quick responses.
How should we handle this behaviour further?
Fix it as an issue and filter out this kind of disk silently,
or keep printing the current error message, which looks a bit unfriendly but is
not logically wrong?


Thanks
Gang




>>> On 2019/4/23 at 23:24, in message
<9cd91b48-408b-f7a9-c4bc-df05d5537...@redhat.com>, Peter Rajnoha
 wrote:
> On 4/23/19 7:15 AM, Gang He wrote:
>> Hello List,
>> 
>> One user complained this error message.
>> The user has a usb sd card reader with no media present.  When they issue a 
> pvscan under lvm2-2.02.180 the device is opened which results in 'No medium 
> found' being reported. 
>> But lvm2-2.02.120 did not do this (the device appears to get filtered out 
> earlier). The customer views the 'No medium found' message as an issue/bug.
>> Any suggest/comments for this error message?
>> 
>> The detailed information is as below,
>> lvm2 2.02.180-9.4.2
>> OS: SLES12 SP4
>> Kernel 4.12.14-95.3-default
>> Hardware: HP ProLiant DL380 Gen10
>> 
>> After upgrade from sles12SP3 to SP4, customer is reporting the following 
> error message:
>> 
>>  # pvscan
>>  /dev/sdc: open failed: No medium found
>>  PV /dev/sdb   VG Q11vg10 lvm2 [5.24 TiB / 2.00 TiB free]
>>  Total: 1 [5.24 TiB] / in use: 1 [5.24 TiB] / in no VG: 0 [0   ]
>> 
>> 
> 
> See also https://github.com/lvmteam/lvm2/issues/13 
> 
> -- 
> Peter
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/




[linux-lvm] pvscan: /dev/sdc: open failed: No medium found

2019-04-22 Thread Gang He
Hello List,

One user complained about this error message.
The user has a USB SD card reader with no media present. When they issue a pvscan
under lvm2-2.02.180, the device is opened, which results in 'No medium found'
being reported.
But lvm2-2.02.120 did not do this (the device appears to get filtered out
earlier). The customer views the 'No medium found' message as an issue/bug.
Any suggestions/comments about this error message?

The detailed information is as below,
lvm2 2.02.180-9.4.2
OS: SLES12 SP4
Kernel 4.12.14-95.3-default
Hardware: HP ProLiant DL380 Gen10

After upgrade from sles12SP3 to SP4, customer is reporting the following error 
message:

 # pvscan
 /dev/sdc: open failed: No medium found
 PV /dev/sdb   VG Q11vg10 lvm2 [5.24 TiB / 2.00 TiB free]
 Total: 1 [5.24 TiB] / in use: 1 [5.24 TiB] / in no VG: 0 [0   ]
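
For reference, a minimal sketch of a local workaround while this is discussed
(assuming /dev/sdc is the empty card reader on this machine): reject it in
lvm.conf so pvscan never opens it.

devices {
    global_filter = [ "r|^/dev/sdc$|", "a|.*|" ]   # reject the empty reader, accept everything else
}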


Thanks
Gang





[linux-lvm] lvmetad cannot handle md device with big minor number

2018-12-24 Thread Gang He
Hello Guys,

I created an md device with a big minor number and added this md device as a PV disk.
Then pvdisplay cannot display this PV.
But if I disable lvmetad by changing the configuration setting "use_lvmetad = 
1" to "use_lvmetad = 0",
pvdisplay can display this PV, and I can then do the next steps, e.g. create a VG 
on this PV, etc.
It looks like lvmetad cannot handle an md device with a big minor number (e.g. 
/dev/md606099).
If I create an md device named like "/dev/md0", lvmetad works.
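For reference, the workaround to disable lvmetad is roughly the following (the
systemd unit names are from my SLES system and may differ elsewhere):

# /etc/lvm/lvm.conf
global {
    use_lvmetad = 0
}
# then stop the daemon so commands scan devices directly
systemctl stop lvm2-lvmetad.socket lvm2-lvmetad.service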

The whole reproduce steps are as below,
1) # mdadm -C /dev/md606099 -l 1 -n 2 /dev/sdb2 /dev/sda2

2) # pvcreate /dev/md606099
  Physical volume "/dev/md606099" successfully created.
# lsblk
NAME MAJ:MINRM  SIZE RO TYPE  MOUNTPOINT
sda8:0   0  160G  0 disk
├─sda1 8:1   0  120G  0 part
├─sda2 8:2   0   30G  0 part
│ └─md606099   9:606099  0   30G  0 raid1
└─sda3 8:3   0   10G  0 part
sdb8:16  0  500G  0 disk
├─sdb1 8:17  0  120G  0 part
├─sdb2 8:18  0   30G  0 part
│ └─md606099   9:606099  0   30G  0 raid1
└─sdb3 8:19  0  350G  0 part

3) # pvdisplay
  WARNING: Device for PV GIAOmw-SOu7-8ZaH-qWBw-MwfR-Jb6j-wxSM1a not found or 
rejected by a filter. <<== here, the PV
can not be displayed

4) # lvm version
  LVM version: 2.02.180(2) (2018-07-19)
  Library version: 1.03.01 (2018-07-19)
  Driver version:  4.39.0
  Configuration:   ./configure --host=x86_64-suse-linux-gnu 
--build=x86_64-suse-linux-gnu --program-prefix=
--disable-dependency-tracking --prefix=/usr --exec-prefix=/usr 
--bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
--datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 
--libexecdir=/usr/lib --localstatedir=/var
--sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info 
--disable-dependency-tracking
--enable-dmeventd --enable-cmdlib --enable-udev_rules --enable-udev_sync 
--with-udev-prefix=/usr/ --enable-selinux
--enable-pkgconfig --with-usrlibdir=/usr/lib64 --with-usrsbindir=/usr/sbin 
--with-default-dm-run-dir=/run
--with-tmpfilesdir=/usr/lib/tmpfiles.d --with-thin=internal --with-device-gid=6 
--with-device-mode=0640
--with-device-uid=0 --with-dmeventd-path=/usr/sbin/dmeventd 
--with-thin-check=/usr/sbin/thin_check
--with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair 
--enable-applib --enable-blkid_wiping
--enable-cmdlib --enable-lvmetad --enable-lvmpolld --enable-realtime 
--with-cache=internal
--with-default-locking-dir=/run/lock/lvm --with-default-pid-dir=/run 
--with-default-run-dir=/run/lvm --enable-cmirrord

5) # tb1213-nd1:/sys/dev/block # ll
total 0
lrwxrwxrwx 1 root root 0 Dec 25 13:35 11:0 ->
../../devices/pci:00/:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0
lrwxrwxrwx 1 root root 0 Dec 25 13:35 253:0 -> 
../../devices/pci:00/:00:04.0/virtio1/block/vda
lrwxrwxrwx 1 root root 0 Dec 25 13:35 253:1 -> 
../../devices/pci:00/:00:04.0/virtio1/block/vda/vda1
lrwxrwxrwx 1 root root 0 Dec 25 13:35 253:2 -> 
../../devices/pci:00/:00:04.0/virtio1/block/vda/vda2
lrwxrwxrwx 1 root root 0 Dec 25 13:35 253:3 -> 
../../devices/pci:00/:00:04.0/virtio1/block/vda/vda3
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:0 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:1 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:0/block/sda/sda1
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:16 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sdb
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:17 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sdb/sdb1
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:18 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sdb/sdb2
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:19 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sdb/sdb3
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:2 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:0/block/sda/sda2
lrwxrwxrwx 1 root root 0 Dec 25 13:35 8:3 -> 
../../devices/platform/host2/session1/target2:0:0/2:0:0:0/block/sda/sda3
lrwxrwxrwx 1 root root 0 Dec 25 13:35 9:606099 -> 
../../devices/virtual/block/md606099   <<== this md device

Thanks
Gang

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] About --mirrorlog mirrored option in the latest lvm2

2018-10-31 Thread Gang He
Hello List,

As you know, in past versions (e.g. v2.02.120) lvcreate supported
creating a mirror-type LV with the "--mirrorlog mirrored" option,
but in the latest versions (e.g. v2.02.180) lvm2 says "mirrored is a
persistent log that is itself mirrored, but should be avoided. Instead, use
the raid1 type for log redundancy."
My questions are as below:
1) Does the latest lvm2 disallow creating this kind of LV, or is it just not
recommended?
In my test environment, it looks like LVM2 cannot create this kind of LV:
tb0307-nd1:/ # lvcreate --type mirror -m1 --mirrorlog mirrored -L 2G -n
mirr-lv cluster-vg2
  Log type, "mirrored", is unavailable to cluster mirrors.
If I try to create this kind of LV in a local VG (not a cluster VG), will the
operation be supported?

2) If we cannot create this kind of LV, how can we migrate the existing
LVs after we upgrade LVM2 to the latest version (e.g. v2.02.180)?
E.g. we need to convert this kind of LV to a RAID1-type LV; I want to know
if there is a suggested guide for this scenario.
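For context, my understanding (not yet tested here) is that an existing
mirror-type LV can be converted in place with something like the following,
possibly after activating the LV exclusively on one node:

lvconvert --type raid1 cluster-vg2/old-mirror-lv    # LV name is only an example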

Thanks
Gang
___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

2018-10-23 Thread Gang He
Hello David,

I am sorry, I cannot quite follow your reply.

>>> On 2018/10/23 at 23:04, in message <20181023150436.gb8...@redhat.com>, David
Teigland  wrote:
> On Mon, Oct 22, 2018 at 08:19:57PM -0600, Gang He wrote:
>>   Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 
> (code=exited, status=5)
>> 
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device 
> /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV 
> qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because 
> of previous preference.
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome 
> while PVs appear on duplicate devices.
> 
> I'd try disabling lvmetad, I've not been testing these with lvmetad on.
You mean I should let the user disable lvmetad?

> We may need to make pvscan read both the start and end of every disk to
> handle these md 1.0 components, and I'm not sure how to do that yet
> without penalizing every pvscan.
What can we do for now? It looks like more code is needed to implement this logic.

Thanks
Gang

> 
> Dave


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] The node was fenced in the cluster when cmirrord was enabled on LVM2.2.02.120

2018-10-22 Thread Gang He
Hello Guys,

Did you see this problem before?
It looks like a similar problem was reported on the Red Hat website:
https://access.redhat.com/solutions/1421123#.


Thanks
Gang 

>>> On 2018/10/19 at 17:05, in message
<5bc99e5602f90003b...@prv1-mh.provo.novell.com>, "Gang He" 
wrote:
> Hello List,
> 
> I got a bug report from the customer, which said the node was fenced in the 
> cluster when they enabled cmirrord.
> Before the node was fenced, we can see some log printed as below,
> 
> 2018-09-25T12:55:26.555018+02:00 qu1ci11 cmirrord[6253]: cpg_mcast_joined 
> error: 2
> 2018-09-25T12:55:31.604832+02:00 qu1ci11 sbd[2865]:  warning: 
> inquisitor_child: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-0-0-2 
> requested a reset
> 2018-09-25T12:55:31.608112+02:00 qu1ci11 sbd[2865]:emerg: do_exit: 
> Rebooting system: reboot
> 2018-09-25T12:55:33.202189+02:00 qu1ci11 kernel: [ 4750.932328] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93273] - retrying
> 2018-09-25T12:55:35.186091+02:00 qu1ci11 kernel: [ 4752.916268] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [9/93274] - retrying
> 2018-09-25T12:55:41.382129+02:00 qu1ci11 kernel: [ 4759.112231] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93275] - retrying
> 2018-09-25T12:55:41.382157+02:00 qu1ci11 kernel: [ 4759.116237] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93276] - retrying
> 2018-09-25T12:55:41.534092+02:00 qu1ci11 kernel: [ 4759.264201] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93278] - retrying
> 2018-09-25T12:55:41.534117+02:00 qu1ci11 kernel: [ 4759.264274] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93279] - retrying
> 2018-09-25T12:55:41.534119+02:00 qu1ci11 kernel: [ 4759.264278] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93277] - retrying
>  ...
> 
> 2018-09-25T12:56:26.439557+02:00 qu1ci11 lrmd[3795]:  warning: 
> rsc_VG_ASCS_monitor_6 process (PID 4467) timed out
> 2018-09-25T12:56:26.439974+02:00 qu1ci11 lrmd[3795]:  warning: 
> rsc_VG_ASCS_monitor_6:4467 - timed out after 6ms
> 2018-09-25T12:56:26.534104+02:00 qu1ci11 kernel: [ 4804.264240] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93321] - retrying
> 2018-09-25T12:56:26.534122+02:00 qu1ci11 kernel: [ 4804.264287] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93320] - retrying
> 2018-09-25T12:56:26.534124+02:00 qu1ci11 kernel: [ 4804.264311] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93322] - retrying
> 
> Did you guys encounter the similar issue before? I can find the similar bug 
> report at 
> http://lists.linux-ha.org/pipermail/linux-ha/2014-December/048427.html 
> 
> If you know the root cause, please let me know. 
> 
> 
> Thanks
> Gang
>  
>   
>
> 
> 
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

2018-10-22 Thread Gang He
Hello David,

The user installed the lvm2 (v2.02.180) rpms with the three patches below, but
it looks like there are still some problems on the user's machine.
The feedback from the user is as below:

In a first round I installed lvm2-2.02.180-0.x86_64.rpm 
liblvm2cmd2_02-2.02.180-0.x86_64.rpm and liblvm2app2_2-2.02.180-0.x86_64.rpm - 
but no luck - after reboot still the same problem with ending up in the 
emergency console.
I additionally installed in the next round 
libdevmapper-event1_03-1.02.149-0.x86_64.rpm, 
./libdevmapper1_03-1.02.149-0.x86_64.rpm and 
device-mapper-1.02.149-0.x86_64.rpm, again - ending up in the emergency console
systemctl status lvm2-pvscan@9:126 output: 
lvm2-pvscan@9:126.service - LVM2 PV scan on device 9:126
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor 
preset: disabled)
   Active: failed (Result: exit-code) since Mon 2018-10-22 07:34:56 CEST; 5min 
ago
 Docs: man:pvscan(8)
  Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 
(code=exited, status=5)
 Main PID: 815 (code=exited, status=5)

Oct 22 07:34:55 linux-dnetctw lvm[815]:   WARNING: Autoactivation reading from 
disk instead of lvmetad.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   /dev/sde: open failed: No medium found
Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 
for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of 
previous preference.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome 
while PVs appear on duplicate devices.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   0 logical volume(s) in volume group 
"vghome" now active
Oct 22 07:34:56 linux-dnetctw lvm[815]:   vghome: autoactivation failed.
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Main 
process exited, code=exited, status=5/NOTINSTALLED
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Failed 
with result 'exit-code'.
Oct 22 07:34:56 linux-dnetctw systemd[1]: Failed to start LVM2 PV scan on 
device 9:126.

What should we do next for this case?
Or do we have to accept the situation and modify the related configurations
manually as a workaround?

Thanks
Gang


>>> On 2018/10/19 at 1:59, in message <20181018175923.gc28...@redhat.com>, David
Teigland  wrote:
> On Thu, Oct 18, 2018 at 11:01:59AM -0500, David Teigland wrote:
>> On Thu, Oct 18, 2018 at 02:51:05AM -0600, Gang He wrote:
>> > If I include this patch in lvm2 v2.02.180,
>> > LVM2 can activate LVs on the top of RAID1 automatically? or we still have 
> to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
>> 
>> I didn't need any config changes when testing this myself, but there may
>> be other variables I've not encountered.
> 
> See these three commits:
> d1b652143abc tests: add new test for lvm on md devices
> e7bb50880901 scan: enable full md filter when md 1.0 devices are present
> de2863739f2e scan: use full md filter when md 1.0 devices are present
> 
> at 
> https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-sta 
> ble
> 
> (I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
> this case.)


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] The node was fenced in the cluster when cmirrord was enabled on LVM2.2.02.120

2018-10-19 Thread Gang He
Hello List,

I got a bug report from a customer saying that the node was fenced in the 
cluster when they enabled cmirrord.
Before the node was fenced, we can see some log messages printed as below:

2018-09-25T12:55:26.555018+02:00 qu1ci11 cmirrord[6253]: cpg_mcast_joined 
error: 2
2018-09-25T12:55:31.604832+02:00 qu1ci11 sbd[2865]:  warning: inquisitor_child: 
/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-0-0-2 requested a reset
2018-09-25T12:55:31.608112+02:00 qu1ci11 sbd[2865]:emerg: do_exit: 
Rebooting system: reboot
2018-09-25T12:55:33.202189+02:00 qu1ci11 kernel: [ 4750.932328] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93273] - retrying
2018-09-25T12:55:35.186091+02:00 qu1ci11 kernel: [ 4752.916268] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [9/93274] - retrying
2018-09-25T12:55:41.382129+02:00 qu1ci11 kernel: [ 4759.112231] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93275] - retrying
2018-09-25T12:55:41.382157+02:00 qu1ci11 kernel: [ 4759.116237] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93276] - retrying
2018-09-25T12:55:41.534092+02:00 qu1ci11 kernel: [ 4759.264201] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93278] - retrying
2018-09-25T12:55:41.534117+02:00 qu1ci11 kernel: [ 4759.264274] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93279] - retrying
2018-09-25T12:55:41.534119+02:00 qu1ci11 kernel: [ 4759.264278] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93277] - retrying
 ...

2018-09-25T12:56:26.439557+02:00 qu1ci11 lrmd[3795]:  warning: 
rsc_VG_ASCS_monitor_6 process (PID 4467) timed out
2018-09-25T12:56:26.439974+02:00 qu1ci11 lrmd[3795]:  warning: 
rsc_VG_ASCS_monitor_6:4467 - timed out after 6ms
2018-09-25T12:56:26.534104+02:00 qu1ci11 kernel: [ 4804.264240] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93321] - retrying
2018-09-25T12:56:26.534122+02:00 qu1ci11 kernel: [ 4804.264287] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93320] - retrying
2018-09-25T12:56:26.534124+02:00 qu1ci11 kernel: [ 4804.264311] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93322] - retrying

Did you guys encounter a similar issue before? I found a similar bug 
report at 
http://lists.linux-ha.org/pipermail/linux-ha/2014-December/048427.html 
If you know the root cause, please let me know. 


Thanks
Gang


  



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

2018-10-18 Thread Gang He
Hello David,

Thanks for your help.
If I include this patch in lvm2 v2.02.180,
can LVM2 activate LVs on top of RAID1 automatically, or do we still have to 
set "allow_changes_with_duplicate_pvs=1" in lvm.conf?


Thanks
Gang 

>>> On 2018/10/18 at 2:42, in message <20181017184204.gc14...@redhat.com>, David
Teigland  wrote:
> On Wed, Oct 17, 2018 at 09:10:25AM -0500, David Teigland wrote:
>> Check if the version you are using has this commit:
>> 
> https://sourceware.org/git/?p=lvm2.git;a=commit;h=09fcc8eaa8eb7fa4fcd7c6611bf 
> bfb83f726ae38
> 
> I see that this commit is missing from the stable branch:
> https://sourceware.org/git/?p=lvm2.git;a=commit;h=3fd75d1bcd714b02fb2b843d19 
> 28b2a875402f37
> 
> I'll backport that one.
> 
> Dave


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

2018-10-16 Thread Gang He
Hello David,

>>> On 2018/10/15 at 23:26, in message <20181015152648.gb29...@redhat.com>, 
>>> David
Teigland  wrote:
> On Sun, Oct 14, 2018 at 11:39:20PM -0600, Gang He wrote:
>> >> [  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in 
>> >> VG 
>> > vghome while PVs appear on duplicate devices.
>> > 
>> > Do these warnings only appear from "dracut-initqueue"?  Can you run and
>> > send 'vgs -' from the command line?  If they don't appear from the
>> > command line, then is "dracut-initqueue" using a different lvm.conf?
>> > lvm.conf settings can effect this (filter, md_component_detection,
>> > external_device_info_source).
>> 
>> mdadm --detail --scan -vvv
>> /dev/md/linux:0:
>>Version : 1.0
> 
> It has the old superblock version 1.0 located at the end of the device, so
> lvm will not always see it.  (lvm will look for it when it's writing to
> new devices to ensure it doesn't clobber an md component.)
> 
> (Also keep in mind that this md superblock is no longer recommended:
> raid.wiki.kernel.org/index.php/RAID_superblock_formats)
> 
> There are various ways to make lvm handle this:
> 
> - allow_changes_with_duplicate_pvs=1
> - external_device_info_source="udev"
> - reject sda2, sdb2 in lvm filter
> 
There is some feedback below from our user's environment (since I cannot 
reproduce this problem in my local
environment).

I tested the options in lvm.conf one by one.

The good news - enabling 
- external_device_info_source="udev"
- reject sda2, sdb2 in lvm filter

both work! The system enables the proper lvm raid1 device again.
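(Roughly, the two working changes correspond to something like the following in
/etc/lvm/lvm.conf; the partition names are taken from the logs below:)

devices {
    external_device_info_source = "udev"
    # or, alternatively, hide the raw md member partitions from lvm:
    # filter = [ "r|^/dev/sdb2$|", "r|^/dev/sdc2$|", "a|.*|" ]
}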

The first option does not work.
systemctl status lvm2-pvscan@9:126 results in:

● lvm2-pvscan@9:126.service - LVM2 PV scan on device 9:126
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor 
preset: disabled)
   Active: failed (Result: exit-code) since Tue 2018-10-16 22:53:57 CEST; 3min 
4s ago
 Docs: man:pvscan(8)
  Process: 849 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 
(code=exited, status=5)
 Main PID: 849 (code=exited, status=5)

Oct 16 22:53:57 linux-dnetctw lvm[849]:   WARNING: Not using device /dev/md126 
for PV
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 16 22:53:57 linux-dnetctw lvm[849]:   WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2
because of previous preference.
Oct 16 22:53:57 linux-dnetctw lvm[849]:   WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2
because of previous preference.
Oct 16 22:53:57 linux-dnetctw lvm[849]:   device-mapper: reload ioctl on  
(254:0) failed: Device or resource busy
Oct 16 22:53:57 linux-dnetctw lvm[849]:   device-mapper: reload ioctl on  
(254:0) failed: Device or resource busy
Oct 16 22:53:57 linux-dnetctw lvm[849]:   0 logical volume(s) in volume group 
"vghome" now active
Oct 16 22:53:57 linux-dnetctw lvm[849]:   vghome: autoactivation failed.
Oct 16 22:53:57 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Main 
process exited, code=exited,
status=5/NOTINSTALLED
Oct 16 22:53:57 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Failed 
with result 'exit-code'.
Oct 16 22:53:57 linux-dnetctw systemd[1]: Failed to start LVM2 PV scan on 
device 9:126.

pvs shows:
  /dev/sde: open failed: No medium found
  WARNING: found device with duplicate /dev/sdc2
  WARNING: found device with duplicate /dev/md126
  WARNING: Disabling lvmetad cache which does not support duplicate PVs.
  WARNING: Scan found duplicate PVs.
  WARNING: Not using lvmetad because cache update failed.
  /dev/sde: open failed: No medium found
  WARNING: Not using device /dev/sdc2 for PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
  WARNING: Not using device /dev/md126 for PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
  WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 
because of previous preference.
  WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 
because of previous preference.
  PV VG Fmt  Attr PSize PFree  
  /dev/sdb2  vghome lvm2 a--  1.82t 202.52g

My questions are as follows:
1) Why did solution 1 not work, since that method looks the closest to fixing 
this problem?
2) Could we back-port some code from the v2.02.177 sources to keep 
compatibility, so that we can avoid modifying some items
manually?
   Or do we have to accept this problem from v2.02.180 (maybe 178?) as 
by-design?

Thanks
Gang

>> > It could be, since the new scanning changed how md detection works.  The
>> > md superblock version effects how lvm detects this.  md superblock 1.0 (at
>> > the end of the device) is not detected as easily as newer md versions
>> > (1.1, 1.2) where the superblock is at the beginning.  Do you know which
>> > this is?

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

2018-10-14 Thread Gang He
Hello David,

>>> On 2018/10/8 at 23:00, in message <20181008150016.gb21...@redhat.com>, David
Teigland  wrote:
> On Mon, Oct 08, 2018 at 04:23:27AM -0600, Gang He wrote:
>> Hello List
>> 
>> The system uses lvm based on raid1. 
>> It seems that the PV of the raid1 is found also on the single disks that 
> build the raid1 device:
>> [  147.121725] linux-472a dracut-initqueue[391]: WARNING: PV 
> qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on 
> /dev/md1.
>> [  147.123427] linux-472a dracut-initqueue[391]: WARNING: PV 
> qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on 
> /dev/md1.
>> [  147.369863] linux-472a dracut-initqueue[391]: WARNING: PV 
> qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device 
> size is correct.
>> [  147.370597] linux-472a dracut-initqueue[391]: WARNING: PV 
> qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device 
> size is correct.
>> [  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG 
> vghome while PVs appear on duplicate devices.
> 
> Do these warnings only appear from "dracut-initqueue"?  Can you run and
> send 'vgs -' from the command line?  If they don't appear from the
> command line, then is "dracut-initqueue" using a different lvm.conf?
> lvm.conf settings can effect this (filter, md_component_detection,
> external_device_info_source).

mdadm --detail --scan -vvv
/dev/md/linux:0:
   Version : 1.0
 Creation Time : Sun Jul 22 22:49:21 2012
Raid Level : raid1
Array Size : 513012 (500.99 MiB 525.32 MB)
 Used Dev Size : 513012 (500.99 MiB 525.32 MB)
  Raid Devices : 2
 Total Devices : 2
   Persistence : Superblock is persistent

 Intent Bitmap : Internal

   Update Time : Mon Jul 16 00:29:19 2018
 State : clean 
Active Devices : 2
   Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

Consistency Policy : bitmap

  Name : linux:0
  UUID : 160998c8:7e21bcff:9cea0bbc:46454716
Events : 469

Number   Major   Minor   RaidDevice State
   0   8   170  active sync   /dev/sdb1
   1   8   331  active sync   /dev/sdc1
/dev/md/linux:1:
   Version : 1.0
 Creation Time : Sun Jul 22 22:49:22 2012
Raid Level : raid1
Array Size : 1953000312 (1862.53 GiB 1999.87 GB)
 Used Dev Size : 1953000312 (1862.53 GiB 1999.87 GB)
  Raid Devices : 2
 Total Devices : 2
   Persistence : Superblock is persistent

 Intent Bitmap : Internal

   Update Time : Fri Oct 12 20:16:25 2018
 State : clean 
Active Devices : 2
   Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

Consistency Policy : bitmap

  Name : linux:1
  UUID : 17426969:03d7bfa7:5be33b0b:8171417a
Events : 326248

Number   Major   Minor   RaidDevice State
   0   8   180  active sync   /dev/sdb2
   1   8   341  active sync   /dev/sdc2

Thanks
Gang

> 
>> This is a regression bug? since the user did not encounter this problem with 
> lvm2 v2.02.177.
> 
> It could be, since the new scanning changed how md detection works.  The
> md superblock version effects how lvm detects this.  md superblock 1.0 (at
> the end of the device) is not detected as easily as newer md versions
> (1.1, 1.2) where the superblock is at the beginning.  Do you know which
> this is?


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

2018-10-08 Thread Gang He
Hello List

The system uses lvm based on raid1. 
It seems that the PV of the raid1 is found also on the single disks that build 
the raid1 device:
[  147.121725] linux-472a dracut-initqueue[391]: WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on 
/dev/md1.
[  147.123427] linux-472a dracut-initqueue[391]: WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on 
/dev/md1.
[  147.369863] linux-472a dracut-initqueue[391]: WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device 
size is correct.
[  147.370597] linux-472a dracut-initqueue[391]: WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device 
size is correct.
[  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG 
vghome while PVs appear on duplicate devices.

Is this a regression bug? The user did not encounter this problem with 
lvm2 v2.02.177.


Thanks
Gang



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] lvm2: state is not persisted to disk if a snapshot becomes invalid immediately before reboot

2018-09-18 Thread Gang He
Hello List,

I am using an old LVM2 version, 2.02.98, on kernel 3.0.101, and I encountered 
the following problem.
The detailed steps are as below.
I am doing an update with the following strategy:

make LVM Snapshot of root ("/")  file system partition

Run zypper updates ->

Run a script to check certain stuff is OK ->

On failure of that script (if any of the checks fail, e.g. one rpm update 
failed), merge the LVM snapshot -> 

Reboot (because you have to for an LVM merge if it's the root volume)
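
(Roughly, the commands behind this strategy look like the following; the VG/LV
names and the check script are only placeholders:)

lvcreate -s -n root_snap -L 5G system/root     # snapshot the root LV
zypper up                                      # apply the updates
./post-update-checks.sh || lvconvert --merge system/root_snap
reboot   # the merge of the in-use root LV completes on the next activation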

And then, as the system comes back up, Reiserfs sometimes goes 
read-only / corrupt.

which shows in the dmesg as

[   29.794593] BIOS EDD facility v0.16 2004-Jun-25, 1 devices found
[   30.412919] MCE: In-kernel MCE decoding enabled.
[   35.729081] eth0: no IPv6 routers present
[   35.805434] REISERFS error (device dm-1): PAP-5660 reiserfs_do_truncate: 
wrong result -1 of search for [99955 99957 0xfff DIRECT]
[   35.805445] REISERFS (device dm-1): Remounting filesystem read-only
[397338.410475] show_signal_msg: 21 callbacks suppressed
[397338.410484] esmd[1687]: segfault at 46 ip f7582f15 sp 
ff9b7c70 error 4 in libc-2.11.3.so[f7543000+16c000]


Have you seen this problem before? Has it been fixed in the latest version?

Thanks a lot.
Gang



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] LVM2 commands sometimes were hanged after upgrade to v2.02.180 from v2.02.120

2018-08-09 Thread Gang He
Hello List,

I am using LVM2 v2.02.180 in a clvmd-based cluster (three nodes), but sometimes 
I encounter an lvm2 command hang problem.
When one command hangs, all lvm2-related commands on every node hang as well.
For example,
the first command hung on node 2:
sle12sp4b2-nd2:/ # pvmove -i 5 -v /dev/vdb /dev/vdc
Archiving volume group "cluster-vg2" metadata (seqno 34).
Creating logical volume pvmove1
Moving 2560 extents of logical volume cluster-vg2/test-lv.

sle12sp4b2-nd2:/ # cat /proc/15074/stack
[] unix_stream_read_generic+0x66b/0x870
[] unix_stream_recvmsg+0x45/0x50
[] sock_read_iter+0x86/0xd0
[] __vfs_read+0xd9/0x140
[] vfs_read+0x87/0x130
[] SyS_read+0x42/0x90
[] do_syscall_64+0x74/0x150
[] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[] 0x

Then I ran the "lvs" command on each node; it hung as well, like:
sle12sp4b2-nd2:/ # lvs

sle12sp4b2-nd2:/ # cat /proc/15553/stack
[] dm_consult_userspace+0x1e8/0x490 [dm_log_userspace]
[] userspace_do_request.isra.3+0x53/0x140 [dm_log_userspace]
[] userspace_status+0xa7/0x1c0 [dm_log_userspace]
[] mirror_status+0x1a9/0x370 [dm_mirror]
[] retrieve_status+0xad/0x1c0 [dm_mod]
[] table_status+0x51/0x80 [dm_mod]
[] ctl_ioctl+0x1d8/0x450 [dm_mod]
[] dm_ctl_ioctl+0xa/0x10 [dm_mod]
[] do_vfs_ioctl+0x92/0x5e0
[] SyS_ioctl+0x74/0x80
[] do_syscall_64+0x74/0x150
[] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[] 0x


Since I am trying to upgrade LVM2 from v2.02.120 to v2.02.180, 
I do not know the real cause of this problem.
Maybe it is related to the /etc/lvm/lvm.conf file, since raid_region_size 
= 2048 in the new conf but raid_region_size = 512 in the old conf,
or to some other configuration item?

To work around this problem, I have to make the configuration file consistent 
on each node and reboot all the nodes.
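
(For reference, a quick way to compare the effective configuration across the
nodes, assuming lvmconfig is available in this version, is something like:)

lvmconfig --type diff                    # local changes vs. built-in defaults
lvmconfig activation/raid_region_size    # the single suspect setting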

Thanks
Gang



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] How to upgrade LVM2/CLVM to v2.02.180 from old versions

2018-08-05 Thread Gang He
Hello Marian,

Thanks for your explanation; it is very helpful to me.
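
For the record, the two commands you mention, run on the old installation
before the upgrade, would be roughly (the output file names are just examples):

lvmconfig --type diff > /tmp/lvm-local-changes.txt
lvmconfig --type new --sinceversion 2.02.120 > /tmp/lvm-new-settings.txt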

Thanks
Gang


>>> On 2018/8/3 at 22:05, in message
, Marian Csontos
 wrote:
> On 08/03/2018 11:40 AM, Gang He wrote:
>> Hello List,
>> 
>> I want to upgrade LVM2/CLVM to v2.02.180 from v2.02.120, but there are some 
> problems which I want to confirm with you guys.
>> 1) How to migrate /etc/lvm/lvm.conf? since the existing configuration file 
> does not include some new attributes,
>> could you help to figure out which new key attributes, which should be 
> considered during the upgrade installation?
> 
> Have a look at lvmconfig command.
> 
> `lvmconfig --type diff`[1] can help you figure out what you changed.
> 
> And `lvmconfig --type new --sinceversion 2.02.120`[2] might be what you 
> are looking for.
> 
> [1]: Introduced in 2.02.119 so this should work even on your old 
> installation.
> [2]: New in 2.02.136.
> 
>> 
>> 2) lvmpolld is necessary in LVM2/CLVM v2.02.180? I can find the related 
> files in the installation file list
>> /sbin/lvmpolld
>> /usr/lib/systemd/system/lvm2-lvmpolld.service
>> /usr/lib/systemd/system/lvm2-lvmpolld.socket
>> this daemon should be enable by default? if we disable this daemon, 
> LVM2/CLVM related features (e.g. pvmove) will be affected, or not?
> 
> pvmove should work mostly fine even without lvmpolld, the testsuite runs 
> both with and without lvmpolld.
> 
> I think the manpage of lvmpolld is good start:
> 
> The  purpose of lvmpolld is to reduce
> the number of spawned background pro‐
> cesses  per  otherwise unique polling
> operation. There should be only  one.
> It also eliminates the possibility of
> unsolicited termination of background
> process by external factors.
> 
> Process receiving SIGHUP was one of the "external factors".
> 
>> 
>> 3) any other places (e.g. configuration files, binary files, features, 
> etc.), which should be considered during the upgrade?
> 
> lvm1 and pool format were removed in 2.02.178. There is a branch adding 
> them back if you need them - *2018-05-17-put-format1-and-pool-back*. At 
> least it does apply almost cleanly :-)
> 
> -- Martian
> 
> 
>> 
>> Thanks a lot.
>> Gang
>> 
>> 
>> 
>> ___
>> linux-lvm mailing list
>> linux-lvm@redhat.com 
>> https://www.redhat.com/mailman/listinfo/linux-lvm 
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/ 
>> 
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

[linux-lvm] How to upgrade LVM2/CLVM to v2.02.180 from old versions

2018-08-03 Thread Gang He
Hello List,

I want to upgrade LVM2/CLVM from v2.02.120 to v2.02.180, but there are some 
points which I want to confirm with you guys.
1) How do we migrate /etc/lvm/lvm.conf? Since the existing configuration file 
does not include some new attributes, 
could you help figure out which new key attributes should be 
considered during the upgrade installation?

2) Is lvmpolld necessary in LVM2/CLVM v2.02.180? I can find the related files 
in the installation file list:
/sbin/lvmpolld
/usr/lib/systemd/system/lvm2-lvmpolld.service
/usr/lib/systemd/system/lvm2-lvmpolld.socket
Should this daemon be enabled by default? If we disable this daemon, will 
LVM2/CLVM-related features (e.g. pvmove) be affected or not?

3) Are there any other areas (e.g. configuration files, binary files, features, 
etc.) that should be considered during the upgrade?

Thanks a lot.
Gang  



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] vgchange command can not activate mirrored LV in VG

2018-07-17 Thread Gang He
Hello List,

I encountered a problem with the 2.02.178 source code:
the vgchange command did not activate a mirrored LV in the VG. The error 
messages are as below:

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0 11:01 1024M  0 rom
vda254:00   40G  0 disk
├─vda1 254:104G  0 part [SWAP]
├─vda2 254:20 23.6G  0 part /
└─vda3 254:30 12.4G  0 part /home
vdb254:16   0   40G  0 disk
vdc254:32   0   40G  0 disk
tb0307-nd1:~ # vgchange -aay cluster-vg2
  Error locking on node a431337: Shared cluster mirrors are not available. <<= 
here
  2 logical volume(s) in volume group "cluster-vg2" now active
tb0307-nd1:~ # lsblk
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0   11:01 1024M  0 rom
vda  254:00   40G  0 disk
├─vda1   254:104G  0 part [SWAP]
├─vda2   254:20 23.6G  0 part /
└─vda3   254:30 12.4G  0 part /home
vdb  254:16   0   40G  0 disk
└─cluster--vg2-test--lv3 253:10   10G  0 lvm
vdc  254:32   0   40G  0 disk
└─cluster--vg2-test--lv  253:00   10G  0 lvm

But when I built the lvm2-related rpms from the same code (including the 
configure options),
the vgchange command worked well:
tb0307-nd2:/ # vgchange -aay cluster-vg2
  3 logical volume(s) in volume group "cluster-vg2" now active
tb0307-nd2:/ # lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr011:01  3.9G  0 rom  
/run/media/ghe/openSUSE-Tumbleweed-DVD-x86_6423
vda   254:00   40G  0 disk
├─vda1254:104G  0 part [SWAP]
├─vda2254:20 23.6G  0 part /
└─vda3254:30 12.4G  0 part /home
vdb   254:16   0   40G  0 disk
├─cluster--vg2-test--lv2_mimage_0 253:208G  0 lvm
│ └─cluster--vg2-test--lv2253:408G  0 lvm
└─cluster--vg2-test--lv3  253:50   10G  0 lvm
vdc   254:32   0   40G  0 disk
├─cluster--vg2-test--lv   253:00   10G  0 lvm
├─cluster--vg2-test--lv2_mlog 253:104M  0 lvm
│ └─cluster--vg2-test--lv2253:408G  0 lvm
└─cluster--vg2-test--lv2_mimage_1 253:308G  0 lvm
  └─cluster--vg2-test--lv2253:408G  0 lvm

Do you know the root cause? What could the reason be?
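
(One thing worth double-checking, given the difference between the source build
and the rpm build, is whether this binary was configured with cluster mirror
support and whether cmirrord is running, e.g.:)

lvm version | grep -o -- '--enable-cmirrord'   # was cluster mirror support built in?
pgrep -a cmirrord                              # is the cluster mirror log daemon running?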

Thanks
Gang




___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] Some questions about the upcoming LVM v2_02_178

2018-06-12 Thread Gang He
Hello Joe,


>>> On 2018/6/12 at 22:22, in message <20180612142219.ixpzqxqws3qiwqbm@reti>, 
>>> Joe
Thornber  wrote:
> On Tue, Jun 12, 2018 at 03:01:27AM -0600, Gang He wrote:
>> Hello List,
>> 
>> I saw there was a tag "v2_02_178-rc1" for LVM2, then I have some questions 
> about  the upcoming LVM v2_02_178.
>> 1) Will there be the version v2_02_178 for LVM2? since I saw some text about 
> Version 3.0.0 in the git change logs.
> 
> Yes there will be.  We've had no bug reports for the -rc, so the final
> release will be the same as the -rc.
Between LVM2 v2_02_177 and LVM2 v2_02_178-rc1, 
there will not be any components/features removed, just some bug 
fixes included, right?

Thanks a lot.
Gang 

> 
>> 2) For the next LVM2 version, which components will be affected? since I saw 
> that clvmd related code has been removed.
> 
> We've decided to bump the version number to 3.0.0 for the release
> *after* 2.02.178.  This change in version number indicates the *start*
> of some large changes to lvm.
> 
> Obviously the release notes for v3.0.0 will go into this more.  But,
> initially the most visible changes will be removal of a couple of
> features:
> 
> clvmd
> -
> 
> The locking required to provide this feature was quite pervasive and
> was restricting the adding of new features (for instance, I'd like to
> be able to allocate from any LV not just PVs).  With Dave Teigland's
> lvmlockd I think the vast majority of use cases are covered.  Those
> that are wedded to clvmd can continue to use LVM2.02.*
> 
> Also, testing cluster software is terribly expensive, we just don't
> have the resources to provide two solutions.
> 
> lvmapi
> --
> 
> This library has been deprecated for a while in favour of the dbus api.
> 
> 
> - Joe
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] pvmove does not work at all with version 2.02.177(2)

2018-06-11 Thread Gang He
Hi Marian,

>>> On 2018/6/11 at 21:20, in message
, Marian Csontos
 wrote:
> On 06/11/2018 08:13 AM, Gang He wrote:
>> Hi Martian,
>> 
>>>>> On 2018/5/30 at 18:37, in message
>> <2397dd2b-deef-2bf2-47ca-51fb6f880...@redhat.com>, Marian Csontos
>>  wrote:
>>> On 05/30/2018 11:23 AM, Gang He wrote:
>>>> Hello List,
>>>>
>>>> As you know, I ever reported that lvcreate could not create a mirrored LV,
>>> the root cause is a configure building item "--enable-cmirrord" was missed.
>>>> Now, I encounter another problem, pvmove does not work at all.
>>>> The detailed information/procedure is as below,
>>>> sle-nd1:/ # pvs
>>>> PV VG  Fmt  Attr PSize   PFree
>>>> /dev/sda1  cluster-vg2 lvm2 a--  120.00g 120.00g
>>>> /dev/sda2  cluster-vg2 lvm2 a--   30.00g  20.00g
>>>> /dev/sdb   cluster-vg2 lvm2 a--   40.00g  30.00g
>>>> sle-nd1:/ # vgs
>>>> VG  #PV #LV #SN Attr   VSize   VFree
>>>> cluster-vg2   3   2   0 wz--nc 189.99g 169.99g
>>>> sle-nd1:/ # lvs
>>>> LV   VG  Attr   LSize  Pool Origin Data%  Meta%  Move 
> Log
>>> Cpy%Sync Convert
>>>> test-lv2 cluster-vg2 -wi-a- 10.00g
>>>> sle-nd1:/ # lsblk
>>>> NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
>>>> sda8:00  160G  0 disk
>>>> ├─sda1 8:10  120G  0 part
>>>> ├─sda2 8:20   30G  0 part
>>>> └─sda3 8:30   10G  0 part
>>>> sdb8:16   0   40G  0 disk
>>>> └─cluster--vg2-test--lv2 254:00   10G  0 lvm
>>>> vda  253:00   40G  0 disk
>>>> ├─vda1   253:104G  0 part [SWAP]
>>>> ├─vda2   253:20 23.6G  0 part /
>>>> └─vda3   253:30 12.4G  0 part /home
>>>>
>>>> sle-nd1:/ # pvmove -i 5 -v /dev/sdb /dev/sda1
>>>>   Executing: /sbin/modprobe dm-mirror
>>>>   Executing: /sbin/modprobe dm-log-userspace
>>>>   Wiping internal VG cache
>>>>   Wiping cache of LVM-capable devices
>>>>   Archiving volume group "cluster-vg2" metadata (seqno 19).
>>>>   Creating logical volume pvmove0
>>>>   Moving 2560 extents of logical volume cluster-vg2/test-lv2.
>>>> Increasing mirror region size from 0to 8.00 KiB
>>>> Error locking on node a431232: Device or resource busy
>>>> Failed to activate cluster-vg2/test-lv2
>>>>
>>>> sle-nd1:/ # lvm version
>>>> LVM version: 2.02.177(2) (2017-12-18)
>>>> Library version: 1.03.01 (2017-12-18)
>>>> Driver version:  4.37.0
>>>> Configuration:   ./configure --host=x86_64-suse-linux-gnu
>>> --build=x86_64-suse-linux-gnu --program-prefix= 
> --disable-dependency-tracking
>>> --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin
>>> --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include
>>> --libdir=/usr/lib64 --libexecdir=/usr/lib --localstatedir=/var
>>> --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info
>>> --disable-dependency-tracking --enable-dmeventd --enable-cmdlib
>>> --enable-udev_rules --enable-udev_sync --with-udev-prefix=/usr/
>>> --enable-selinux --enable-pkgconfig --with-usrlibdir=/usr/lib64
>>> --with-usrsbindir=/usr/sbin --with-default-dm-run-dir=/run
>>> --with-tmpfilesdir=/usr/lib/tmpfiles.d --with-thin=internal
>>> --with-device-gid=6 --with-device-mode=0640 --with-device-uid=0
>>> --with-dmeventd-path=/usr/sbin/dmeventd
>>> --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump
>>> --with-thin-repair=/usr/sbin/thin_repair --enable-applib
>>> --enable-blkid_wiping
>>>> --enable-cmdlib --enable-lvmetad --enable-lvmpolld --enable-realtime
>>> --with-default-locking-dir=/run/lock/lvm --with-default-pid-dir=/run
>>> --with-default-run-dir=/run/lvm --with-clvmd=corosync 
> --with-cluster=internal
>>> --enable-cmirrord --enable-lvmlockd-dlm
>>>>
>>>> So, I want to know if this problem is also a configuration problem when
>>> building lvm2? or this problem is caused by the source code?
>>>
>>> Hi Gang, it is an issue with the codebase, where

Re: [linux-lvm] pvmove does not work at all with version 2.02.177(2)

2018-06-11 Thread Gang He
Hi Martian,

>>> On 2018/5/30 at 18:37, in message
<2397dd2b-deef-2bf2-47ca-51fb6f880...@redhat.com>, Marian Csontos
 wrote:
> On 05/30/2018 11:23 AM, Gang He wrote:
>> Hello List,
>> 
>> As you know, I ever reported that lvcreate could not create a mirrored LV, 
> the root cause is a configure building item "--enable-cmirrord" was missed.
>> Now, I encounter another problem, pvmove does not work at all.
>> The detailed information/procedure is as below,
>> sle-nd1:/ # pvs
>>PV VG  Fmt  Attr PSize   PFree
>>/dev/sda1  cluster-vg2 lvm2 a--  120.00g 120.00g
>>/dev/sda2  cluster-vg2 lvm2 a--   30.00g  20.00g
>>/dev/sdb   cluster-vg2 lvm2 a--   40.00g  30.00g
>> sle-nd1:/ # vgs
>>VG  #PV #LV #SN Attr   VSize   VFree
>>cluster-vg2   3   2   0 wz--nc 189.99g 169.99g
>> sle-nd1:/ # lvs
>>LV   VG  Attr   LSize  Pool Origin Data%  Meta%  Move Log 
> Cpy%Sync Convert
>>test-lv2 cluster-vg2 -wi-a- 10.00g
>> sle-nd1:/ # lsblk
>> NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
>> sda8:00  160G  0 disk
>> ├─sda1 8:10  120G  0 part
>> ├─sda2 8:20   30G  0 part
>> └─sda3 8:30   10G  0 part
>> sdb8:16   0   40G  0 disk
>> └─cluster--vg2-test--lv2 254:00   10G  0 lvm
>> vda  253:00   40G  0 disk
>> ├─vda1   253:104G  0 part [SWAP]
>> ├─vda2   253:20 23.6G  0 part /
>> └─vda3   253:30 12.4G  0 part /home
>> 
>> sle-nd1:/ # pvmove -i 5 -v /dev/sdb /dev/sda1
>>  Executing: /sbin/modprobe dm-mirror
>>  Executing: /sbin/modprobe dm-log-userspace
>>  Wiping internal VG cache
>>  Wiping cache of LVM-capable devices
>>  Archiving volume group "cluster-vg2" metadata (seqno 19).
>>  Creating logical volume pvmove0
>>  Moving 2560 extents of logical volume cluster-vg2/test-lv2.
>>Increasing mirror region size from 0to 8.00 KiB
>>Error locking on node a431232: Device or resource busy
>>Failed to activate cluster-vg2/test-lv2
>> 
>> sle-nd1:/ # lvm version
>>LVM version: 2.02.177(2) (2017-12-18)
>>Library version: 1.03.01 (2017-12-18)
>>Driver version:  4.37.0
>>Configuration:   ./configure --host=x86_64-suse-linux-gnu 
> --build=x86_64-suse-linux-gnu --program-prefix= --disable-dependency-tracking 
> --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin 
> --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include 
> --libdir=/usr/lib64 --libexecdir=/usr/lib --localstatedir=/var 
> --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info 
> --disable-dependency-tracking --enable-dmeventd --enable-cmdlib 
> --enable-udev_rules --enable-udev_sync --with-udev-prefix=/usr/ 
> --enable-selinux --enable-pkgconfig --with-usrlibdir=/usr/lib64 
> --with-usrsbindir=/usr/sbin --with-default-dm-run-dir=/run 
> --with-tmpfilesdir=/usr/lib/tmpfiles.d --with-thin=internal 
> --with-device-gid=6 --with-device-mode=0640 --with-device-uid=0 
> --with-dmeventd-path=/usr/sbin/dmeventd 
> --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump 
> --with-thin-repair=/usr/sbin/thin_repair --enable-applib 
> --enable-blkid_wiping
>> --enable-cmdlib --enable-lvmetad --enable-lvmpolld --enable-realtime 
> --with-default-locking-dir=/run/lock/lvm --with-default-pid-dir=/run 
> --with-default-run-dir=/run/lvm --with-clvmd=corosync --with-cluster=internal 
> --enable-cmirrord --enable-lvmlockd-dlm
>> 
>> So, I want to know if this problem is also a configuration problem when 
> building lvm2? or this problem is caused by the source code?
> 
> Hi Gang, it is an issue with the codebase, where exclusive activation 
> was required where it should not.
> 
> You will need to backport some additional patches - see CentOS SRPM. And 
> I should do the same for Fedora.
Could you help paste the links related to this back-port?

Thanks a lot.
Gang 

> 
> -- Martian
> 
>> 
>> Thanks
>> Gang
>> 
>> ___
>> linux-lvm mailing list
>> linux-lvm@redhat.com 
>> https://www.redhat.com/mailman/listinfo/linux-lvm 
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/ 
>> 
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] [RELEASE] 2.02.178

2018-05-30 Thread Gang He
Hi Marian,

Thanks for your explanation; I will try 2.02.178.

Thanks
Gang

>>> On 5/30/2018 at  6:42 pm, in message
<80318f1a-2b2e-9d0f-ae16-3b1eb0af2...@redhat.com>, Marian Csontos
 wrote:
> On 05/25/2018 09:45 AM, Gang He wrote:
>> Hello Joe,
>> 
>> When will the formal LVM 2.02.178 be released?
> 
> Within two weeks. I have pushed the RC1 to rawhide, and unless problems 
> are reported, we will be releasing shortly.
> 
>> In LVM2.02.178, online pvmove can work well under the cluster environment?
> 
> Yes, the clustered pvmove issues are fixed in 2.02.178.
> 
> You need either:
> - exclusive activation,
> - wait for 2.02.178,
> - or patch .177
> 
> -- Marian
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] lvcreate cannot create a mirrored LV with version 2.02.177(2)

2018-05-27 Thread Gang He
Hi Zdenek,

>>> Zdenek Kabelac <zkabe...@redhat.com> 2018/5/25 16:14 >>>
Dne 25.5.2018 v 09:37 Gang He napsal(a):
> Hello List,
> 
> I am using lvm version 2.02.177(2), tried to create a mirrored LV, but failed 
> with the errors,
> tb0307-nd1:~ # pvs
>PV VG  Fmt  Attr PSize  PFree
>/dev/vdb   cluster-vg2 lvm2 a--  40.00g 36.00g
>/dev/vdc   cluster-vg2 lvm2 a--  40.00g 40.00g
> tb0307-nd1:~ # vgs
>VG  #PV #LV #SN Attr   VSize  VFree
>cluster-vg2   2   1   0 wz--nc 79.99g 75.99g
> tb0307-nd1:~ # lvs   <<==  a linear LV can be created
>LV  VG  Attr   LSize Pool Origin Data%  Meta%  Move Log 
> Cpy%Sync Convert
>test-lv cluster-vg2 -wi-a- 4.00g
> tb0307-nd1:~ # lvcreate --type mirror -m1 -L 500m -n my-lv cluster-vg2
>Shared cluster mirrors are not available.  <<== failed to create a 
> mirrored LV


Clustered old mirror needs 'cmirrord' to be installed and running.
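
(A quick way to check and start it, assuming cmirrord is packaged with a
systemd unit named lvm2-cmirrord.service; it may instead be started by the
cluster resource agent:)

systemctl status lvm2-cmirrord.service    # check whether the daemon is running
systemctl start lvm2-cmirrord.service     # start it if needed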


> 
> Could you help to take a look at this problem? from lvcreate man page, the 
> mirrored LV is supported by lvcreate.
> my lvm2 version/configuration is as below,
> tb0307-nd1:~ # lvm version
>LVM version: 2.02.177(2) (2017-12-18)
>Library version: 1.03.01 (2017-12-18)
>Driver version:  4.37.0
>Configuration:   ./configure --host=x86_64-suse-linux-gnu 
> --build=x86_64-suse-linux-gnu --program-prefix= --disable-dependency-tracking 
> --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin 
> --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include 
> --libdir=/usr/lib64 --libexecdir=/usr/lib --localstatedir=/var 
> --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info 
> --disable-dependency-tracking --enable-dmeventd --enable-cmdlib 
> --enable-udev_rules --enable-udev_sync --with-udev-prefix=/usr/ 
> --enable-selinux --enable-pkgconfig --with-usrlibdir=/usr/lib64 
> --with-usrsbindir=/usr/sbin --with-default-dm-run-dir=/run 
> --with-tmpfilesdir=/usr/lib/tmpfiles.d --with-thin=internal 
> --with-device-gid=6 --with-device-mode=0640 --with-device-uid=0 --with-dmeve
>   ntd-path=/usr/sbin/dmeventd --with-thin-check=/usr/sbin/thin_check 
> --with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair 
> --enable-applib --enable-blkid_wiping --enable-cmdlib
>--enable-lvmetad --enable-lvmpolld --enable-realtime --with-cache=internal 
> --with-default-locking-dir=/run/lock/lvm --with-default-pid-dir=/run 
> --with-default-run-dir=/run/lvm
> tb0307-nd1:~ #

--enable-cmirrord
Yes, the configuration error caused my LVM2 build to malfunction.

Thanks a lot.
Gang


Regards

Zdenek



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] [RELEASE] 2.02.178

2018-05-25 Thread Gang He
Hello Joe,

When will the final LVM 2.02.178 be released?
In LVM 2.02.178, will online pvmove work well in a cluster environment?

Thanks
Gang 

>>> Joe Thornber  2018/5/25 15:03 >>>
ftp://sources.redhat.com/pub/lvm2/LVM2.2.02.178-rc1.tgz 


Version 2.02.178


There are going to be some large changes to the lvm2 codebase
over the next year or so.  Starting with this release.  These
changes should be internal rather than having a big effect on
the command line.  Inevitably these changes will increase the
chance of bugs, so please be on the alert.


Remove support for obsolete metadata formats


Support for the GFS pool format, and format used by the
original 1990's version of LVM1 have been removed.

Use asynchronous IO
---

Almost all IO uses libaio now.

Rewrite label scanning
--

Dave Teigland has reworked the label scanning and metadata reading
logic to minimise the amount of IOs issued.  Combined with the aio changes 
this can greatly improve scanning speed for some systems.

./configure options
---

We're going to try and remove as many options from ./configure as we
can.  Each option multiplies the number of possible configurations
that we should test (this testing is currently not occurring).

The first batch to be removed are:

  --enable-testing
  --with-snapshots
  --with-mirrors
  --with-raid
  --with-thin
  --with-cache

Stable targets that are in the upstream kernel will just be supported.

In future optional target flags will be given in two situations:

1) The target is experimental, or not upstream at all (eg, vdo).
2) The target is deprecated and support will be removed at some future date.

This decision could well be contentious, so could distro maintainers feel
free to comment.

___
linux-lvm mailing list
linux-lvm@redhat.com 
https://www.redhat.com/mailman/listinfo/linux-lvm 
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/ 



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] lvcreate cannot create a mirrored LV with version 2.02.177(2)

2018-05-25 Thread Gang He
Hello List,

I am using lvm version 2.02.177(2) and tried to create a mirrored LV, but it 
failed with the following errors:
tb0307-nd1:~ # pvs
  PV VG  Fmt  Attr PSize  PFree
  /dev/vdb   cluster-vg2 lvm2 a--  40.00g 36.00g
  /dev/vdc   cluster-vg2 lvm2 a--  40.00g 40.00g
tb0307-nd1:~ # vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  cluster-vg2   2   1   0 wz--nc 79.99g 75.99g
tb0307-nd1:~ # lvs   <<==  a linear LV can be created
  LV  VG  Attr   LSize Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
  test-lv cluster-vg2 -wi-a- 4.00g
tb0307-nd1:~ # lvcreate --type mirror -m1 -L 500m -n my-lv cluster-vg2
  Shared cluster mirrors are not available.  <<== failed to create a mirrored LV

Could you help take a look at this problem? According to the lvcreate man page, 
mirrored LVs are supported by lvcreate.
My lvm2 version/configuration is as below:
tb0307-nd1:~ # lvm version
  LVM version: 2.02.177(2) (2017-12-18)
  Library version: 1.03.01 (2017-12-18)
  Driver version:  4.37.0
  Configuration:   ./configure --host=x86_64-suse-linux-gnu 
--build=x86_64-suse-linux-gnu --program-prefix= --disable-dependency-tracking 
--prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin 
--sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include 
--libdir=/usr/lib64 --libexecdir=/usr/lib --localstatedir=/var 
--sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info 
--disable-dependency-tracking --enable-dmeventd --enable-cmdlib 
--enable-udev_rules --enable-udev_sync --with-udev-prefix=/usr/ 
--enable-selinux --enable-pkgconfig --with-usrlibdir=/usr/lib64 
--with-usrsbindir=/usr/sbin --with-default-dm-run-dir=/run 
--with-tmpfilesdir=/usr/lib/tmpfiles.d --with-thin=internal --with-device-gid=6 
--with-device-mode=0640 --with-device-uid=0 
--with-dmeventd-path=/usr/sbin/dmeventd --with-thin-check=/usr/sbin/thin_check 
--with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair 
--enable-applib --enable-blkid_wiping --enable-cmdlib
  --enable-lvmetad --enable-lvmpolld --enable-realtime --with-cache=internal 
--with-default-locking-dir=/run/lock/lvm --with-default-pid-dir=/run 
--with-default-run-dir=/run/lvm
tb0307-nd1:~ # 


Thanks
Gang
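
One possible workaround while this is investigated (a hedged sketch only, not
a confirmed fix): try the MD-based raid1 mirror implementation instead of the
legacy mirror segment type, since raid1 does not rely on cmirrord; as far as I
know, in a clustered VG such an LV can only be activated exclusively on one
node.

lvcreate --type raid1 -m1 -L 500m -n my-lv -aey cluster-vg2   # exclusive activation
# if the legacy mirror type is really required, first check that
# cmirrord is installed and running on every node:
ps -C cmirrord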





___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Pvmove can work on cluster with LVM 2.02.120(2) (2015-05-15), or not?

2018-04-27 Thread Gang He
Hello Zdenek,

Thanks for your reply.
As you said some days ago, you will release LVM 2.02.178.
Will that be a stable version, and will it include some pvmove bug fixes?


Thanks
Gang


>>> 
> On 26.4.2018 at 04:07, Gang He wrote:
>> Hello Zdenek,
>> 
>> Do you remember LVM for this version supports PVmove, or not?
>> Since there is a user which is pinging this question.
>> 
> 
> Hi
> 
> lvm2 should be supporting clustered pvmove (in case cmirrord is fully 
> functional on your system) - but I've no idea if you are hitting some old bug 
> 
> or you see some still existing one.
> 
> For reporting bug use some more recent version of tools - nobody is likely 
> really going to hunt bugs in your 3 year old system here...
> 
> 
> Regards
> 
> Zdenek
> 
> 
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Pvmove can work on cluster with LVM 2.02.120(2) (2015-05-15), or not?

2018-04-25 Thread Gang He
Hello Zdenek,

Do you remember whether this LVM version supports pvmove or not?
A user keeps pinging me with this question.

Thanks
Gang


>>> 
> Hello List,
> 
> This is another pvmove problem, the LVM version is 2.02.120(2) (2015-05-15).
> This bug can be reproduced (not each time, but very easy),
> the problem is, online pvmove brings the upper file system hang.
> the environment is a three-node cluster (CLVM+OCFS2).
> 1) create two PV, create one VG, create one LV.
> sles12sp3r1-nd1:/ # pvs
>   PV VG  Fmt  Attr PSize   PFree
>   /dev/sda1  cluster-vg2 lvm2 a--  120.00g 60.00g
>   /dev/sda2  lvm2 ---   30.00g 30.00g
>   /dev/sdb1  cluster-vg2 lvm2 a--  120.00g 60.00g
>   /dev/sdb2  lvm2 ---   30.00g 30.00g
> sles12sp3r1-nd1:/ # vgs
>   VG  #PV #LV #SN Attr   VSize   VFree
>   cluster-vg2   2   2   0 wz--nc 239.99g 119.99g
> sles12sp3r1-nd1:/ # lvs
>   LV   VG  Attr   LSize  Pool Origin Data%  Meta%  Move Log
> Cpy%Sync Convert
>   test-lv  cluster-vg2 -wI-ao 20.00g
> 
> 2) mkfs.ocfs2 test-lv LV, and mount this LV on each node.
> mkfs.ocfs2 -N 4 /dev/cluster-vg2/test-lv (on one node)
> mount /dev/cluster-vg2/test-lv  /mnt/shared (on each node)
> 
> 3) write/truncate some files in /mnt/shared from each node continually.
> 
> 4) run pvmove command on node1 while step 3) is in progress on each node.
> sles12sp3r1-nd1:/ # pvmove -i 5 /dev/sda1 /dev/sdb1
> Pvmove process will enter this stack,
> sles12sp3r1-nd1:/ # cat /proc/12748/stack
> [] hrtimer_nanosleep+0xaf/0x170
> [] SyS_nanosleep+0x56/0x70
> [] entry_SYSCALL_64_fastpath+0x12/0x6d
> [] 0x
> 
> 5)Then, I can encounter ocfs2 file system write/truncate process hang
> problem on each node,
> The root cause is blocked at getting journal lock.
> but the journal lock is being used by ocfs2_commit thread, this thread is
> blocked at flushing journal to the disk (LVM disk).
> sles12sp3r1-nd3:/ # cat /proc/2310/stack
> [] jbd2_log_wait_commit+0x8a/0xf0 [jbd2]
> [] jbd2_journal_flush+0x47/0x180 [jbd2]
> [] ocfs2_commit_thread+0xa1/0x350 [ocfs2]
> [] kthread+0xc7/0xe0
> [] ret_from_fork+0x3f/0x70
> [] kthread+0x0/0xe0
> [] 0x
> 
> So, I want to confirm if online pvmove is supported by LVM 2.02.120(2)
> (2015-05-15)?
> If yes, how to debug this bug? it looks ocfs2 journal thread can not flush
> data to the underlying LVM disk.
> 
> Thanks
> Gang


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] Pvmove can work on cluster with LVM 2.02.120(2) (2015-05-15), or not?

2018-04-25 Thread Gang He
Hello List,

This is another pvmove problem; the LVM version is 2.02.120(2) (2015-05-15).
This bug can be reproduced (not every time, but very easily).
The problem is that online pvmove hangs the file system on top of the LV.
The environment is a three-node cluster (CLVM+OCFS2).
1) create two PV, create one VG, create one LV.
sles12sp3r1-nd1:/ # pvs
  PV VG  Fmt  Attr PSize   PFree
  /dev/sda1  cluster-vg2 lvm2 a--  120.00g 60.00g
  /dev/sda2  lvm2 ---   30.00g 30.00g
  /dev/sdb1  cluster-vg2 lvm2 a--  120.00g 60.00g
  /dev/sdb2  lvm2 ---   30.00g 30.00g
sles12sp3r1-nd1:/ # vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  cluster-vg2   2   2   0 wz--nc 239.99g 119.99g
sles12sp3r1-nd1:/ # lvs
  LV   VG  Attr   LSize  Pool Origin Data%  Meta%  Move Log
Cpy%Sync Convert
  test-lv  cluster-vg2 -wI-ao 20.00g

2) mkfs.ocfs2 test-lv LV, and mount this LV on each node.
mkfs.ocfs2 -N 4 /dev/cluster-vg2/test-lv (on one node)
mount /dev/cluster-vg2/test-lv  /mnt/shared (on each node)

3) write/truncate some files in /mnt/shared from each node continually.

4) run pvmove command on node1 while step 3) is in progress on each node.
sles12sp3r1-nd1:/ # pvmove -i 5 /dev/sda1 /dev/sdb1
Pvmove process will enter this stack,
sles12sp3r1-nd1:/ # cat /proc/12748/stack
[] hrtimer_nanosleep+0xaf/0x170
[] SyS_nanosleep+0x56/0x70
[] entry_SYSCALL_64_fastpath+0x12/0x6d
[] 0x

5) Then I hit an ocfs2 file system write/truncate process hang on each node.
The hung processes are blocked waiting for the journal lock, but the journal
lock is held by the ocfs2_commit thread, which is itself blocked while
flushing the journal to the disk (the LVM disk).
sles12sp3r1-nd3:/ # cat /proc/2310/stack
[] jbd2_log_wait_commit+0x8a/0xf0 [jbd2]
[] jbd2_journal_flush+0x47/0x180 [jbd2]
[] ocfs2_commit_thread+0xa1/0x350 [ocfs2]
[] kthread+0xc7/0xe0
[] ret_from_fork+0x3f/0x70
[] kthread+0x0/0xe0
[] 0x

So, I want to confirm whether online pvmove is supported by LVM 2.02.120(2)
(2015-05-15).
If yes, how should I debug this bug? It looks like the ocfs2 journal thread
cannot flush data to the underlying LVM disk.

Thanks
Gang
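
For anyone trying to reproduce this, a hedged sketch of the extra data I would
collect on the node running pvmove while the hang occurs (command names only,
output will differ; the corosync unit name is an assumption for a
systemd-based cluster):

dmsetup status                              # state of the temporary pvmove mirror
lvs -a -o +devices,move_pv cluster-vg2      # which PV is being moved
ps -C cmirrord                              # is the cluster mirror log daemon alive?
journalctl -u corosync --since "-10 min"    # cluster messaging around the hang
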
___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] PVmove can not work on lvm2-2.02.177

2018-04-04 Thread Gang He
Hi Zdenek,

>>> Zdenek Kabelac <zkabe...@redhat.com> 04/03/18 4:11 PM >>>
On 3.4.2018 at 04:28, Gang He wrote:
> Hello list,
> 
> As you know, pvmove can run on old version (e.g. lvm2-2.02.120 on SLE12SP2),
> But with new version lvm2-2.02.177, I can not run pvmove successfully in the 
> cluster.
> Here, I paste some information from my test,
> if you can know the cause, please help to figure out.
> 
> tb0307-nd1:/ # cat /etc/issue
> Welcome to openSUSE Tumbleweed 20180307 - Kernel \r (\l).
> tb0307-nd1:/ # uname -r
> 4.15.7-1-default
> tb0307-nd1:/ # rpm -qa | grep lvm
> lvm2-cmirrord-2.02.177-4.1.x86_64
> liblvm2app2_2-2.02.177-4.1.x86_64
> liblvm2cmd2_02-2.02.177-4.1.x86_64
> lvm2-clvm-2.02.177-4.1.x86_64
> lvm2-lockd-2.02.177-4.1.x86_64
> lvm2-2.02.177-4.1.x86_64
> lvm2-testsuite-2.02.177-4.1.x86_64
> 
> tb0307-nd1:/ # pvs
>PV VG  Fmt  Attr PSize  PFree
>/dev/vdb   cluster-vg2 lvm2 a--  40.00g 20.00g
>/dev/vdc   cluster-vg2 lvm2 a--  40.00g 40.00g
> tb0307-nd1:/ # vgs
>VG  #PV #LV #SN Attr   VSize  VFree
>cluster-vg2   2   1   0 wz--nc 79.99g 59.99g
> tb0307-nd1:/ # lvs
>LV  VG  Attr   LSize  Pool Origin Data%  Meta%  Move Log 
> Cpy%Sync Convert
>test-lv cluster-vg2 -wi-ao 20.00g
> tb0307-nd1:/ # pvmove -i 5 /dev/vdb /dev/vdc
>    Increasing mirror region size from 0 to 16.00 KiB
>Error locking on node a4311a8: Device or resource busy
>Failed to activate cluster-vg2/test-lv   <<== Failed, but in fact, I can 
> mount cluster-vg2/test-lv with ocfs2 file system in the cluster, and 
> read/write files from each node.

Hi


Yep - it's a work in progress - if you take upstream from git - there are 
already fixes committed.  Hopefully next release (2.02.178) will address most 
regressions cause by newer (hopefully better) code.

Do you have a schedule for the next release (2.02.178)? pvmove is a good
feature, and I hope we can support it in our next SLE release.
By the way, some customers really use this feature, but it did not look very
stable, especially when users ran pvmove while the (clustered) LV was being
used for reading/writing.
Thanks a lot.
Gang

Good to see someone is really using it...

Regards

Zdenek
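
Until 2.02.178 is out, a hedged workaround sketch that avoids activating the
pvmove mirror while the LV is in use (it requires an outage of the file
system, and whether it sidesteps the locking error in 2.02.177 is an
assumption I have not verified):

umount /mnt/shared                   # on every node
lvchange -an cluster-vg2/test-lv     # deactivate the LV cluster-wide
pvmove -i 5 /dev/vdb /dev/vdc        # move extents while the LV is inactive
lvchange -ay cluster-vg2/test-lv     # reactivate (all nodes, under clvmd)
mount /dev/cluster-vg2/test-lv /mnt/shared   # remount on every node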

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/









Re: [linux-lvm] compile lvm2 with --enable-lvmlockd-sanlock on Ubuntu 1710, configure failed. For help, thanks

2018-04-01 Thread Gang He
I can compile the code smoothly on openSUSE Tumbleweed, but I am not sure
whether my configuration is the same as yours.

tb0307-nd3:/usr/src/packages/BUILD/LVM2.2.02.177 # rpm -qa | grep -i san
sane-backends-1.0.27-3.3.x86_64
noto-sans-fonts-20170919-2.1.noarch
adobe-sourcesanspro-fonts-2.020-1.8.noarch
hplip-sane-3.17.9-2.2.x86_64
sanlock-devel-3.6.0-1.2.x86_64
libtsan0-gcc8-8.0.1+r257983-1.1.x86_64
libubsan0-7.3.1+r258025-1.3.x86_64
sane-backends-autoconfig-1.0.27-3.3.x86_64
libsanlock1-3.6.0-1.2.x86_64
ruby2.5-rubygem-rails-html-sanitizer-1.0.3-1.8.x86_64
libasan4-7.3.1+r258025-1.3.x86_64
liblsan0-gcc8-8.0.1+r257983-1.1.x86_64
sanlock-3.6.0-1.2.x86_64


Thanks
Gang


>>> 

> Hi, 
> 
> I want to have a test of lvmlockd, while try to compile the lvm2, which 
> version is LVM2.2.02.177, or 2.2.02.168. 
> 
> With the configure:  
> ./configure --enable-cmirrord --enable-debug --disable-devmapper 
> --enable-lvmetad 
> --enable-lvmpolld --enable-lvmlockd-sanlock --enable-dmeventd 
> --enable-udev_sync 
> --enable-cmdlib
> 
> But failed as below with configure error check: 
> 
> checking for LOCKD_SANLOCK... no
> configure: error: bailing out
> 
> The environment is Ubuntu 1710. 
> 
> And I had install the sanlock as below: 
> root@u610:~/LVM2.2.02.177# dpkg -l |grep sanlock
> ii  libsanlock-client1 3.3.0-2.1  
>  
>  amd64Shared storage lock manager (client library)
> ii  libsanlock-dev 3.3.0-2.1  
>  
>  amd64Shared storage lock manager (development files)
> ii  libsanlock13.3.0-2.1  
>   amd64Shared storage lock manager (shared library)
> ii  libvirt-sanlock3.6.0-1ubuntu5 
>  
>  amd64Sanlock plugin for virtlockd
> ii  python-sanlock 3.3.0-2.1  
>  
>  amd64Python bindings to shared storage lock manager
> ii  sanlock3.3.0-2.1  
>   amd64Shared storage lock manager
> 
> Is there something wrong with it? 
> 
> Thanks
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
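
A quick check of the build environment might help here; a hedged sketch, on
the assumption that the LOCKD_SANLOCK configure test is a pkg-config probe for
libsanlock_client (which is how it looks in the lvm2 sources I have):

pkg-config --modversion libsanlock_client    # should print the sanlock version
dpkg -L libsanlock-dev | grep '\.pc$'        # where did the .pc file land?
# if the .pc file is outside the default search path, point pkg-config at it
# (example path only) and then re-run ./configure with your original options:
export PKG_CONFIG_PATH=/usr/lib/x86_64-linux-gnu/pkgconfig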


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Can't work normally after attaching disk volumes originally in a VG on another machine

2018-03-27 Thread Gang He
Hi Zdenek,


>>> 
> On 27.3.2018 at 07:55, Gang He wrote:
>> Hi Fran,
>> 
>> 
>>>>>
>>> On 26 March 2018 at 08:04, Gang He <g...@suse.com> wrote:
>>>> It looks like each PV includes a copy meta data for VG, but if some PV has
>>> changed (e.g. removed, or moved to another VG),
>>>> the remained PV should have a method to check the integrity when each
>>> startup (activated?), to avoid such inconsistent problem automatically.
>>>
>>> Your workflow is strange. What are you trying to accomplish here?
>> I just reproduced a problem from the customer, since they did virtual disk 
> migration from one virtual machine  to another one.
>> According to your comments, this does not look like a LVM code problem,
>> the problem can be considered as LVM administer misoperation?
>> 
>> Thanks
>> Gang
> 
> 
> Ahh, so welcome Eric's replacement  :)
Yes, thank for your inputs.

> 
> Yes - this use scenario was improper usage of lvm2 - and lvm2 has caught the 
> user before he could ruin his data any further...
> 
> 
> Regards
> 
> 
> Zdenek


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Can't work normally after attaching disk volumes originally in a VG on another machine

2018-03-26 Thread Gang He
Hi Fran,


>>> 
> On 26 March 2018 at 08:04, Gang He <g...@suse.com> wrote:
>> It looks like each PV includes a copy meta data for VG, but if some PV has 
> changed (e.g. removed, or moved to another VG),
>> the remained PV should have a method to check the integrity when each 
> startup (activated?), to avoid such inconsistent problem automatically.
> 
> Your workflow is strange. What are you trying to accomplish here?
I just reproduced a problem from a customer, who migrated a virtual disk
from one virtual machine to another one.
According to your comments, this does not look like an LVM code problem;
can the problem be considered an LVM administrator misoperation?

Thanks
Gang   

> 
> Your steps in 5 should be:
> 
> vgreduce vg01 /dev/vdc /dev/vdd
> pvremove /dev/vdc /dev/vdd
> 
> That way you ensure there's no leftover metadata in the PVs (specially
> if you need to attach those disks to a different system)
> 
> Again a usecase to understand your workflow would be beneficial...
> 
> Cheers
> 
> fran
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
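
To make the detach workflow concrete, a hedged sketch using the device names
from this thread (assuming the VG is called vg01 as in the example above, and
that any data still needed has been moved off these disks first; if the data
must travel with the disks, vgexport/vgimport of a complete VG would be the
usual route instead):

pvmove /dev/vdc                     # only if LVs still have extents there
pvmove /dev/vdd
vgreduce vg01 /dev/vdc /dev/vdd     # drop both PVs from the VG metadata
                                    # (if they are the only PVs, use vgremove vg01)
pvremove /dev/vdc /dev/vdd          # wipe the PV labels
# after attaching the disks to the other machine, re-initialise them there:
pvcreate /dev/vdc /dev/vdd
vgcreate vg2 /dev/vdc /dev/vdd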


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Can't work normally after attaching disk volumes originally in a VG on another machine

2018-03-26 Thread Gang He
Hi Xen,


>>> 
> On 23-03-2018 at 9:30, Gang He wrote:
> 
>> 6) attach disk2 to VM2(tb0307-nd2), the vg on VM2 looks abnormal.
>> tb0307-nd2:~ # pvs
>>   WARNING: Device for PV JJOL4H-kc0j-jyTD-LDwl-71FZ-dHKM-YoFtNV not
>> found or rejected by a filter.
>>   PV VG  Fmt  Attr PSize  PFree
>>   /dev/vdc   vg2 lvm2 a--  20.00g 20.00g
>>   /dev/vdd   vg1 lvm2 a--  20.00g 20.00g
>>   [unknown]  vg1 lvm2 a-m  20.00g 20.00g
> 
> This is normal because /dev/vdd contains metadata for vg1 which includes 
> now missing disk /dev/vdc   as the PV is no longer the same.
> 
> 
> 
> 
>> tb0307-nd2:~ # vgs
>>   WARNING: Device for PV JJOL4H-kc0j-jyTD-LDwl-71FZ-dHKM-YoFtNV not
>> found or rejected by a filter.
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   vg1   2   0   0 wz-pn- 39.99g 39.99g
>>   vg2   1   0   0 wz--n- 20.00g 20.00g
> 
> This is normal because you haven't removed /dev/vdc from vg1 on 
> /dev/vdd, since it was detached while you operated on its vg.
> 
> 
>> 7) reboot VM2, the result looks worse (vdc disk belongs to two vg).
>> tb0307-nd2:/mnt/shared # pvs
>>   PV VG  Fmt  Attr PSize  PFree
>>   /dev/vdc   vg1 lvm2 a--  20.00g 0
>>   /dev/vdc   vg2 lvm2 a--  20.00g 10.00g
>>   /dev/vdd   vg1 lvm2 a--  20.00g  9.99g
> 
> When you removed vdd when it was not attached, the VG1 metadata on vdd 
> was not altered. The metadata resides on both disks, so you had 
> inconsistent metadata between both disks because you operated on the 
> shared volume group while one device was missing.
> 
> You also did not recreate PV on /dev/vdc so it has the same UUID as when 
> it was part of VG1, this is why VG1 when VDD is booted will still try to 
> include /dev/vdc because it was never removed from the volume group on 
> VDD.
> 
> So the state of affairs is:
> 
> /dev/vdc contains volume group info for VG2 and includes only /dev/vdc
> 
> /dev/vdd contains volume group info for VG1, and includes both /dev/vdc 
> and /dev/vdd by UUID for its PV, however, it is a bug that it should 
> include /dev/vdc even though the VG UUID is now different (and the name 
> as well).
It looks like each PV includes a copy of the VG metadata, but if some PV has 
changed (e.g. it was removed, or moved to another VG),
the remaining PVs should have a method to check metadata integrity at each 
startup (activation?), to avoid such inconsistent situations automatically.

Thanks
Gang 

> 
> Regardless, from vdd's perspective /dev/vdc is still part of VG1.
> 
> ___
> linux-lvm mailing list
> linux-lvm@redhat.com 
> https://www.redhat.com/mailman/listinfo/linux-lvm 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
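
For the inconsistent state in step 7, a hedged recovery sketch for VM2 (it
assumes the stale vg1 reference to the reused disk is expendable; check the
metadata backups under /etc/lvm/archive before doing anything destructive):

# temporarily make lvm ignore the reused disk so vg1 sees it as missing,
# e.g. devices { global_filter = [ "r|/dev/vdc|" ] } in /etc/lvm/lvm.conf
vgck vg1                        # report the inconsistency
vgreduce --removemissing vg1    # drop the stale PV record from vg1
# then remove the filter again; /dev/vdc should now belong only to vg2
pvs ; vgs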


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/