Re: [linux-lvm] About online pvmove/lvresize on shared VG

2020-07-08 Thread Gang He

Hi David,

Thanks for your reply.
A few more questions below.

On 7/9/2020 12:05 AM, David Teigland wrote:

> On Wed, Jul 08, 2020 at 03:55:55AM +, Gang He wrote:

>> but I cannot do online LV reduce from one node;
>> the workaround is to switch the VG activation_mode to exclusive and run the
>> lvreduce command on the node where the VG is activated.
>> Is this behaviour by design, or a bug?


> It was intentional since shrinking the cluster fs and LV isn't very common
> (not supported for gfs2).

OK, thanks for the confirmation.




>> For the pvmove command, I cannot do an online pvmove from one node.
>> The workaround is to switch the VG activation_mode to exclusive and run the
>> pvmove command on the node where the VG is activated.
>> Is this behaviour by design? Will there be some enhancements in the future?
>> Is there any workaround to run pvmove under shared activation_mode? e.g. can
>> the --lockopt option help in this situation?


> pvmove is implemented with mirroring, so that mirroring would need to be
> replaced with something that works with concurrent access, e.g. cluster md
> raid1.  I suspect there are better approaches than pvmove to solve the
> broader problem.

Sorry, I am a little confused.
Will we be able to do an online pvmove in the future when the VG is activated
in shared mode? From the man page, I feel these limitations are temporary (or
not yet complete).
By the way, can the --lockopt option help in this situation? I cannot find a
detailed description of this option in the man page.


Thanks
Gang



> Dave






Re: [linux-lvm] About online pvmove/lvresize on shared VG

2020-07-08 Thread David Teigland
On Wed, Jul 08, 2020 at 03:55:55AM +, Gang He wrote:
> but I cannot do online LV reduce from one node;
> the workaround is to switch the VG activation_mode to exclusive and run the
> lvreduce command on the node where the VG is activated.
> Is this behaviour by design, or a bug?

It was intentional since shrinking the cluster fs and LV isn't very common
(not supported for gfs2).

> For the pvmove command, I cannot do an online pvmove from one node.
> The workaround is to switch the VG activation_mode to exclusive and run the
> pvmove command on the node where the VG is activated.
> Is this behaviour by design? Will there be some enhancements in the future?
> Is there any workaround to run pvmove under shared activation_mode? e.g. can
> the --lockopt option help in this situation?

pvmove is implemented with mirroring, so that mirroring would need to be
replaced with something that works with concurrent access, e.g. cluster md
raid1.  I suspect there are better approaches than pvmove to solve the
broader problem.
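
Cluster md raid1 is a mirror with a clustered write-intent bitmap, so all
nodes can write to it concurrently. A minimal sketch of setting one up (the
device names are assumptions, and dlm must be running on every node):

mdadm --create /dev/md0 --level=1 --raid-devices=2 \
  --bitmap=clustered /dev/sdb /dev/sdc        # on one node
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc   # on each other node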

Dave




[linux-lvm] About online pvmove/lvresize on shared VG

2020-07-07 Thread Gang He
Hello List,

I am using lvm2-2.03.05, and I am looking at online pvmove/lvresize on a
shared VG, since there were some problems in the old code.
I have set up a three-node cluster with one shared VG/LV and a cluster file
system on top of the LV.
e.g.
primitive ocfs2-2 Filesystem \
    params device="/dev/vg1/lv1" directory="/mnt/ocfs2" fstype=ocfs2 options=acl \
    op monitor interval=20 timeout=40
primitive vg1 LVM-activate \
    params vgname=vg1 vg_access_mode=lvmlockd activation_mode=shared \
    op start timeout=90s interval=0 \
    op stop timeout=90s interval=0 \
    op monitor interval=30s timeout=90s \
    meta target-role=Started
group base-group dlm lvmlockd vg1 ocfs2-2
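
For reference, a shared VG like this can be created roughly as follows (a
sketch only; the PV names and LV size are assumptions, and mkfs.ocfs2
cluster-stack options are omitted):

# on one node, with dlm and lvmlockd already running
vgcreate --shared vg1 /dev/sdb /dev/sdc
lvcreate -n lv1 -L 100G vg1
mkfs.ocfs2 /dev/vg1/lv1
# on the other nodes, join the VG lockspace
vgchange --lockstart vg1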

Now, I can do an online LV extend from one node (good),
but I cannot do an online LV reduce from one node;
the workaround is to switch the VG activation_mode to exclusive and run the
lvreduce command on the node where the VG is activated.
Is this behaviour by design, or a bug?
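
With raw LVM commands, the exclusive-activation workaround looks roughly like
this (a sketch only; the -10G size is a placeholder, the filesystem on the LV
must be dealt with first, and in this cluster the LVM-activate resource's
activation_mode parameter would be changed rather than calling vgchange by
hand):

# on every node: unmount the fs and deactivate the VG
vgchange -an vg1
# on one node: activate exclusively, then reduce
vgchange -aey vg1
lvreduce -L -10G vg1/lv1
# switch back to shared activation
vgchange -an vg1
vgchange -asy vg1   # then re-activate shared on each node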

For the pvmove command, I cannot do an online pvmove from one node.
The workaround is to switch the VG activation_mode to exclusive and run the
pvmove command on the node where the VG is activated.
Is this behaviour by design? Will there be some enhancements in the future?
Is there any workaround to run pvmove under shared activation_mode? e.g. can
the --lockopt option help in this situation?
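
The pvmove workaround follows the same exclusive-activation pattern (a
sketch; the PV names are assumptions):

vgchange -an vg1          # on every node
vgchange -aey vg1         # on one node
pvmove /dev/sdb /dev/sdc  # move extents off /dev/sdb onto /dev/sdc
vgchange -an vg1
vgchange -asy vg1         # shared again, on each node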

Thanks a lot.
Gang


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/