Re: [linux-lvm] The node was fenced in the cluster when cmirrord was enabled on LVM2.2.02.120

2018-10-22 Thread Gang He
Hello Guys,

Have you seen this problem before? 
It looks like there is a similar problem described on the Red Hat website: 
https://access.redhat.com/solutions/1421123#.
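
(One more data point: cpg_mcast_joined error 2 appears to be CS_ERR_LIBRARY
in corosync's cs_error_t, so cmirrord seems to have lost its corosync
connection before the dm-log-userspace timeouts started.)

As a first check, it may be worth comparing the sbd timeouts against how
long dm-log-userspace keeps retrying. A minimal sketch, assuming a standard
sbd setup (the device path is taken from the log below):

  # Dump the msgwait/watchdog timeouts stored on the sbd device
  sbd -d /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-0-0-2 dump

  # List the watchdog devices sbd can use and their timeouts
  sbd query-watchdog

If the dm-log-userspace retry window outlasts the sbd msgwait/watchdog
timeouts, the fence would fire before the mirror log can recover.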


Thanks
Gang 

>>> On 2018/10/19 at 17:05, in message
<5bc99e5602f90003b...@prv1-mh.provo.novell.com>, "Gang He" wrote:
> Hello List,
> 
> I got a bug report from a customer, who said the node was fenced in the 
> cluster when they enabled cmirrord.
> Before the node was fenced, the following log messages were printed:
> 
> 2018-09-25T12:55:26.555018+02:00 qu1ci11 cmirrord[6253]: cpg_mcast_joined 
> error: 2
> 2018-09-25T12:55:31.604832+02:00 qu1ci11 sbd[2865]:  warning: 
> inquisitor_child: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-0-0-2 
> requested a reset
> 2018-09-25T12:55:31.608112+02:00 qu1ci11 sbd[2865]: emerg: do_exit: 
> Rebooting system: reboot
> 2018-09-25T12:55:33.202189+02:00 qu1ci11 kernel: [ 4750.932328] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93273] - retrying
> 2018-09-25T12:55:35.186091+02:00 qu1ci11 kernel: [ 4752.916268] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [9/93274] - retrying
> 2018-09-25T12:55:41.382129+02:00 qu1ci11 kernel: [ 4759.112231] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93275] - retrying
> 2018-09-25T12:55:41.382157+02:00 qu1ci11 kernel: [ 4759.116237] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93276] - retrying
> 2018-09-25T12:55:41.534092+02:00 qu1ci11 kernel: [ 4759.264201] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93278] - retrying
> 2018-09-25T12:55:41.534117+02:00 qu1ci11 kernel: [ 4759.264274] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93279] - retrying
> 2018-09-25T12:55:41.534119+02:00 qu1ci11 kernel: [ 4759.264278] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93277] - retrying
>  ...
> 
> 2018-09-25T12:56:26.439557+02:00 qu1ci11 lrmd[3795]:  warning: 
> rsc_VG_ASCS_monitor_6 process (PID 4467) timed out
> 2018-09-25T12:56:26.439974+02:00 qu1ci11 lrmd[3795]:  warning: 
> rsc_VG_ASCS_monitor_6:4467 - timed out after 6ms
> 2018-09-25T12:56:26.534104+02:00 qu1ci11 kernel: [ 4804.264240] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93321] - retrying
> 2018-09-25T12:56:26.534122+02:00 qu1ci11 kernel: [ 4804.264287] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93320] - retrying
> 2018-09-25T12:56:26.534124+02:00 qu1ci11 kernel: [ 4804.264311] 
> device-mapper: 
> dm-log-userspace: [LYuPIux2] Request timed out: [15/93322] - retrying
> 
> Did you guys encounter a similar issue before? I found a similar bug 
> report at 
> http://lists.linux-ha.org/pipermail/linux-ha/2014-December/048427.html 
> 
> If you know the root cause, please let me know. 
> 
> 
> Thanks
> Gang




Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180

2018-10-22 Thread Gang He
Hello David,

The user installed the lvm2 (v2.02.180) rpms with the three patches below 
included, but it looks like there are still some problems on the user's 
machine. The user's feedback is as follows:

In a first round I installed lvm2-2.02.180-0.x86_64.rpm, 
liblvm2cmd2_02-2.02.180-0.x86_64.rpm and liblvm2app2_2-2.02.180-0.x86_64.rpm - 
but no luck; after reboot, still the same problem, ending up in the 
emergency console.
In the next round I additionally installed 
libdevmapper-event1_03-1.02.149-0.x86_64.rpm, 
libdevmapper1_03-1.02.149-0.x86_64.rpm and 
device-mapper-1.02.149-0.x86_64.rpm - again ending up in the emergency console.
systemctl status lvm2-pvscan@9:126 output: 
lvm2-pvscan@9:126.service - LVM2 PV scan on device 9:126
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor 
preset: disabled)
   Active: failed (Result: exit-code) since Mon 2018-10-22 07:34:56 CEST; 5min 
ago
 Docs: man:pvscan(8)
  Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 
(code=exited, status=5)
 Main PID: 815 (code=exited, status=5)

Oct 22 07:34:55 linux-dnetctw lvm[815]:   WARNING: Autoactivation reading from 
disk instead of lvmetad.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   /dev/sde: open failed: No medium found
Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 
for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV 
qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of 
previous preference.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome 
while PVs appear on duplicate devices.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   0 logical volume(s) in volume group 
"vghome" now active
Oct 22 07:34:56 linux-dnetctw lvm[815]:   vghome: autoactivation failed.
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Main 
process exited, code=exited, status=5/NOTINSTALLED
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Failed 
with result 'exit-code'.
Oct 22 07:34:56 linux-dnetctw systemd[1]: Failed to start LVM2 PV scan on 
device 9:126.

What should we do next for this case? 
Or do we have to accept it and modify the related configuration manually as 
a workaround?
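
For reference, one possible manual workaround (a sketch only; device names
are taken from the log above and would need adjusting to the real layout):

First confirm the array really uses md 1.0 metadata, i.e. the superblock
sits at the end of the device, which is why the raw components still look
like PVs at their start:

  mdadm --examine /dev/sdb2 | grep -i 'version'

Then filter out the component devices so only the assembled array is
scanned, e.g. in /etc/lvm/lvm.conf:

  devices {
      # accept the assembled MD array, reject the raw component from the log
      # (its mirror partner would need to be rejected as well)
      global_filter = [ "a|^/dev/md126$|", "r|^/dev/sdb2$|" ]
  }

followed by rebuilding the initrd (e.g. mkinitrd or dracut -f) and
re-running the failed scan:

  /usr/sbin/lvm pvscan --cache --activate ay 9:126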

Thanks
Gang


>>> On 2018/10/19 at 1:59, in message <20181018175923.gc28...@redhat.com>, David
Teigland wrote:
> On Thu, Oct 18, 2018 at 11:01:59AM -0500, David Teigland wrote:
>> On Thu, Oct 18, 2018 at 02:51:05AM -0600, Gang He wrote:
>> > If I include this patch in lvm2 v2.02.180,
>> > will LVM2 activate LVs on top of RAID1 automatically, or do we still have
>> > to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
>> 
>> I didn't need any config changes when testing this myself, but there may
>> be other variables I've not encountered.
> 
> See these three commits:
> d1b652143abc tests: add new test for lvm on md devices
> e7bb50880901 scan: enable full md filter when md 1.0 devices are present
> de2863739f2e scan: use full md filter when md 1.0 devices are present
> 
> at 
> https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-stable
> 
> (I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
> this case.)




Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-22 Thread Zdenek Kabelac

On 19. 10. 2018 at 19:00, Ilia Zykov wrote:


>> dm-writecache could be seen as an 'extension' of your page cache that
>> holds a longer list of dirty pages...
>>
>> Zdenek
>
> Does it mean that the dm-writecache is always empty after a reboot?
> Thanks.



No, writecache is journaled - so after a reboot the cached content is 
remembered and read back for use.
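
For illustration, a minimal sketch of setting up a writecache target
directly with dmsetup (the devices are hypothetical - an origin LV
/dev/vg/origin and an SSD cache /dev/sdc1; "s" selects an SSD-backed cache,
4096 is the block size and the trailing 0 means no optional arguments):

  dmsetup create wc --table "0 $(blockdev --getsz /dev/vg/origin) \
      writecache s /dev/vg/origin /dev/sdc1 4096 0"

Recreating the same table after a reboot reattaches the cache, and the
journaled dirty blocks are found and written back to the origin.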



Zdenek



Re: [linux-lvm] [PATCH 0/2] boot to a mapped device

2018-10-22 Thread Helen Koike
Hi all,

Sorry for the delay in my reply.

On 9/27/18 3:31 PM, Mike Snitzer wrote:
> On Thu, Sep 27 2018 at 12:36pm -0400,
> Kees Cook wrote:
> 
>> On Thu, Sep 27, 2018 at 7:23 AM, Mike Snitzer wrote:
>>> On Wed, Sep 26 2018 at  3:16am -0400,
>>> Richard Weinberger wrote:
>>>
 Helen,

 On Wed, Sep 26, 2018 at 7:01 AM Helen Koike wrote:
>
> This series is reviving an old patchwork.
> Booting from a mapped device currently requires an initramfs. This series
> allows device-mapper targets to be configured at boot time for
> use early in the boot process (as the root device or otherwise).

 What is the reason for this patch series?
 Setting up non-trivial root filesystems/storage always requires an
 initramfs; there is nothing
 wrong with that.
>>>
>>> Exactly.  If phones or whatever would benefit from this patchset then
>>> say as much.
>>
>> I think some of the context for the series was lost in commit logs,
>> but yes, both Android and Chrome OS do not use initramfs. The only
>> thing that was needed to do this was being able to configure dm
>> devices on the kernel command line, so the overhead of a full
>> initramfs was seen as a boot time liability, a boot image size
>> liability (e.g. Chrome OS has a limited amount of storage available
>> for the boot image that is covered by the static root of trust
>> signature), and a complexity risk: everything that is needed for boot
>> could be specified on the kernel command line, so better to avoid the
>> whole initramfs dance.
>>
>> So, instead, this plumbs the dm commands directly instead of bringing
>> up a full userspace and performing ioctls.

Sorry about the missing context; I should have added the changelog and
worked a bit more on the cover letter, with a more verbose explanation of
the reasons for this patch series.

Just for reference (I'll describe the changes better in the next version):

v5: https://www.redhat.com/archives/dm-devel/2016-February/msg00112.html
v6: https://www.redhat.com/archives/dm-devel/2017-April/msg00316.html
v7: http://lkml.iu.edu/hypermail/linux/kernel/1705.2/02657.html
v8: https://www.redhat.com/archives/linux-lvm/2017-May/msg00055.html

>>
>>> I will not accept this patchset at this time.
>>>
> Example, the following could be added in the boot parameters.
> dm="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" 
> root=/dev/dm-0

 Hmmm, the new dm= parameter is anything but easy to get right.
>>>
>>> No, it isn't.. exposes way too much potential for users hanging
>>> themselves.
>>
>> IIRC, the changes in syntax were suggested back when I was trying to
>> drive this series:
>> https://www.redhat.com/archives/dm-devel/2016-February/msg00199.html
>>
>> And it matches the "concise" format in dmsetup:
>> https://sourceware.org/git/?p=lvm2.git;a=commit;h=827be01758ec5adb7b9d5ea75b658092adc65534

Exactly, this is the "concise" format from dmsetup. It also makes it easier
for users to copy and paste from "dmsetup --concise". That does not mean
this format is ideal, but IMHO keeping it consistent with dmsetup is a good
thing; please let me know if you have any other ideas.
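
For reference, a sketch of the concise-format fields as I read the lvm2
commit above (name, uuid, minor and flags may be left empty to take
defaults; multiple devices are separated by semicolons):

  dm="<name>,<uuid>,<minor>,<flags>,<table>[,<table>+][;<dev2>...]"

  # the example from the cover letter: name "lroot", empty uuid and minor,
  # read-write, two linear targets concatenating 98:16 and 98:32
  dm="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" root=/dev/dm-0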

>>
>> What do you feel are next steps?
> 
> There is quite a lot of init/ code, to handle parsing the concise DM
> format, that is being proposed for inclusion.  I question why that
> DM-specific code would be located in init/

The main reason was that, taking "md=" and "raid=" as a reference, their
command-line arguments are parsed in init/do_mounts_md.c. I could move the
parsing logic to drivers/md/*, but I was wondering if it wouldn't be better
to stay consistent with init/do_mounts_md.c. What do you think?
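
For comparison, this is the md= syntax that init/do_mounts_md.c parses
(taken from Documentation/admin-guide/md.rst, shown only to illustrate the
precedent):

  md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
  md=<md device no.>,dev0,dev1,...,devn    (arrays with persistent superblocks)

e.g. "md=0,/dev/sda1,/dev/sdb1 raid=noautodetect" assembles /dev/md0 from
the two listed partitions at boot.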

> 
> There also needs to be a careful comparison done between the proposed
> init/ code to support the concise DM format and the userspace lvm2
> equivalent (e.g. lvm2.git commit 827be0175).

Yes, I am taking a deeper look into the lvm2 parsing code, and we can
actually reuse almost the same parsing logic. That seems better because
lvm2 is already using it, it has already had some validation and review,
and it also looks cleaner.
I'll update this in the next version.

> 
> That aside, the DM targets that are allowed to be supported by this dm=
> commandline boot interface must be constrained (there are serious risks
> in allowing activation of certain DM targets without first using
> userspace tools to check the validity of associated metadata, as is done
> by the DM thin and cache targets).  Also, all targets supported must be
> upstream.  "linear", "verity" and "bootcache" DM targets are referenced
> in Documentation, "bootcache" must be a Google target.  I'm not aware of
> it.
> 
> Mike
> 

I see, I can add this constraint and I'll clean up the documentation for
the next version.


Thank you all for your comments and reviews. I am working on the next
version of this patch series, taking your comments into consideration and
cleaning up several parts of the code and documentation.

Please let me