"/dev/sdX", which can pass the filter rule.
A bit more precisely: when running pvcreate (or vgcreate), by the time
the filter is called, "/dev/sdX" has been added to the list of aliases
and thus the device is accepted, whereas in a vgextend run, the list of
aliases has not been populated yet.
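For readers following along: a filter of the kind discussed here is matched against every alias (path) known for a device, so acceptance depends on which aliases have been collected by the time the filter runs. A minimal illustrative lvm.conf fragment (not taken from the thread) accepting only /dev/sdX:

```
devices {
    # A device is accepted if any of its aliases matches an "a" pattern
    # before matching an "r" pattern; here only /dev/sdX is accepted.
    filter = [ "a|^/dev/sdX$|", "r|.*|" ]
}
```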
On Fri, 2019-09-06 at 05:01 +0000, Heming Zhao wrote:
> I just tried to only apply below patch (didn't partly backout commit
> 25b58310e3).
> The attrs of lvs output still have 'a' bit.
>
> ```patch
> +#if 0
> if (!_online_pvscan_one(cmd, dev, NULL,
> complete_vgnames, saved_
Hi David,
On Mon, 2019-09-09 at 09:09 -0500, David Teigland wrote:
> On Mon, Sep 09, 2019 at 11:42:17AM +0000, Heming Zhao wrote:
> > Hello David,
> >
> > You are right. Without calling _online_pvscan_one(), the pv/vg/lv
> > won't be activated.
> > The activation jobs will be done by systemd calli
On Tue, 2019-09-10 at 22:38 +0200, Zdenek Kabelac wrote:
> Dne 10. 09. 19 v 17:20 David Teigland napsal(a):
> > > > > _pvscan_aa
> > > > >   vgchange_activate
> > > > >     _activate_lvs_in_vg
> > > > >       sync_local_dev_names
> > > > >         fs_unlock
> > > > >           dm_udev_wait  <=== this poi
On Wed, 2019-09-11 at 11:13 +0200, Zdenek Kabelac wrote:
> Dne 11. 09. 19 v 9:17 Martin Wilck napsal(a):
> >
> > My idea was not to skip synchronization entirely, but to consider
> > moving it to a separate process / service. I surely don't want to
> > re-
On Wed, 2021-02-17 at 09:49 +0800, heming.z...@suse.com wrote:
> On 2/11/21 7:16 PM, Christian Hesse wrote:
> > From: Christian Hesse
> >
> > Running the scan before udevd finished startup may result in
> > failure.
> > This has been reported for Arch Linux [0] and proper ordering fixes
> > the i
On Wed, 2021-02-17 at 13:03 +0100, Christian Hesse wrote:
>
> Let's keep this in mind. Now let's have a look at udevd startup: It
> signals
> being ready by calling sd_notifyf(), but it loads rules and applies
> permissions before doing so [0].
> Even before we have some code about handling events
On Wed, 2021-02-17 at 14:38 +0100, Oleksandr Natalenko wrote:
> Hi.
>
Thanks for the logs!
> I'm not sure this issue is reproducible with any kind of LVM layout.
> What I have is thin-LVM-on-LUKS-on-LVM:
I saw MD in your other logs...?
More comments below.
> With regard to the journal, here i
On Thu, 2021-02-18 at 16:30 +0100, Oleksandr Natalenko wrote:
> >
> > So what's timing out here is the attempt to _stop_ pvscan. That's
> > curious. It looks like a problem in pvscan to me, not having
> > reacted to
> > a TERM signal for 30s.
> >
> > It's also worth noting that the parallel pvsca
On Fri, 2021-02-19 at 10:37 -0600, David Teigland wrote:
> On Thu, Feb 18, 2021 at 04:19:01PM +0100, Martin Wilck wrote:
> > > Feb 10 17:24:26 archlinux lvm[643]: pvscan[643] VG sys run
> > > autoactivation.
> > > Feb 10 17:24:26 archlinux lvm[643]: /usr/b
On Fri, 2021-02-19 at 23:47 +0100, Zdenek Kabelac wrote:
>
> Right time is when switch is finished and we have rootfs with /usr
> available - should be ensured by lvm2-monitor.service and its
> dependencies.
While we're at it - I'm wondering why dmeventd is started so early. dm-
event.service on
On So, 2021-06-06 at 11:35 -0500, Roger Heflin wrote:
> This might be a simpler way to control the number of threads at the
> same time.
>
> On large machines (CPU-wise, memory-wise and disk-wise), I have
> only seen lvm time out when udev_children is set to the default. The
> default seems to be s
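For reference, the udev worker-count knob being discussed can be capped with `udev.children_max=` on the kernel command line or `children_max=` in udev.conf (option names per systemd-udevd documentation; verify against your systemd version):

```
# /etc/udev/udev.conf -- cap the number of parallel udev workers
children_max=8
```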
On So, 2021-06-06 at 14:15 +0800, heming.z...@suse.com wrote:
>
> 1. During boot phase, lvm2 automatically switches to direct activation
> mode
> ("event_activation = 0"). After booted, switch back to the event
> activation mode.
>
> Booting phase is a special stage. *During boot*, we could "prete
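The setting being toggled in this proposal is `event_activation` in the `global` section of lvm.conf (present in the lvm2 2.03.x series); a fragment forcing direct activation would look like:

```
global {
    # 0 = direct activation via the lvm2-activation generator services,
    # 1 = event-based autoactivation via pvscan from udev rules
    event_activation = 0
}
```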
On Mo, 2021-06-07 at 23:30 +0800, heming.z...@suse.com wrote:
> On 6/7/21 6:27 PM, Martin Wilck wrote:
> > On So, 2021-06-06 at 11:35 -0500, Roger Heflin wrote:
> > > This might be a simpler way to control the number of threads at
> > > the
> > > same time.
>
On Mo, 2021-06-07 at 16:30 -0500, David Teigland wrote:
> On Mon, Jun 07, 2021 at 10:27:20AM +0000, Martin Wilck wrote:
> > Most importantly, this was about LVM2 scanning of physical volumes.
> > The
> > number of udev workers has very little influence on PV scanning,
>
On Di, 2021-06-08 at 14:29 +0200, Peter Rajnoha wrote:
> On Mon 07 Jun 2021 16:48, David Teigland wrote:
> >
> > If there are say 1000 PVs already present on the system, there
> > could be
> > real savings in having one lvm command process all 1000, and then
> > switch
> > over to processing uevents
On Di, 2021-06-08 at 10:39 -0500, David Teigland wrote:
>
> . Use both native md/mpath detection *and* udev info when it's
> readily
> available (don't wait for it), instead of limiting ourselves to one
> source of info. If either source indicates an md/mpath component,
> then we consider i
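A sketch of the "either source suffices" logic David describes above (hypothetical Python, not lvm2 code; the udev property names are illustrative assumptions based on common md/multipath rules):

```python
def is_md_or_mpath_component(dev, native_check, udev_props):
    """Return True if the device looks like an md/mpath component.

    native_check: callable(dev) -> bool, the tool's own native detection.
    udev_props:   dict of udev properties, or None when udev info is not
                  readily available -- in that case we do not wait for it.
    Either source indicating a component is enough.
    """
    if native_check(dev):
        return True
    if udev_props is not None:
        # Property names are assumptions for illustration; e.g. blkid-based
        # rules set ID_FS_TYPE=linux_raid_member for md members, and
        # multipath rules mark component paths.
        if udev_props.get("ID_FS_TYPE") == "linux_raid_member":
            return True
        if udev_props.get("DM_MULTIPATH_DEVICE_PATH") == "1":
            return True
    return False
```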
On Di, 2021-06-08 at 18:02 +0200, Zdenek Kabelac wrote:
> >
> > > A third related improvement that could follow is to add stronger
> > > native
> > > mpath detection, in which lvm uses /etc/multipath/wwids,
> > > directly or
> > > through a multipath library, to identify mpath components. Th
On Di, 2021-06-08 at 17:19 +0200, Peter Rajnoha wrote:
> On Tue 08 Jun 2021 14:48, Martin Wilck wrote:
> >
> > Recent multipath-tools ships the "libmpathvalid" library that
> > could be used for this purpose, to make the logic comply with what
> > multipath
On Di, 2021-06-08 at 15:56 +0200, Peter Rajnoha wrote:
>
> The issue is that we're relying now on udev db records that contain
> info about mpath and MD components - without this, the detection (and
> hence filtering) could fail in certain cases. So if we go without
> checking
> udev db, that'll be a
On Di, 2021-06-08 at 11:03 -0500, David Teigland wrote:
> On Tue, Jun 08, 2021 at 03:47:39PM +0000, Martin Wilck wrote:
> > Hm. You can boot with "multipath=off" which udev would take into
> > account. What would you do in that case? Native mpath detection
> > would
On Fr, 2021-07-02 at 16:09 -0500, David Teigland wrote:
> On Sun, Jun 06, 2021 at 02:15:23PM +0800, heming.z...@suse.com wrote:
> > dev_cache_scan            // order: O(n^2)
> >   + _insert_dirs          // O(n)
> >   |   if obtain_device_list_from_udev() true
> >   |     _insert_udev_dir  // O(n)
> >   |
> >   + dev_cache_index_
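A generic illustration of the O(n^2) pattern this call tree describes (plain Python, not lvm2 code): inserting n entries while each insert does a linear scan of what is already present costs n * O(n) = O(n^2) overall, while a hashed lookup keeps each step O(1) on average.

```python
def insert_all_linear(names):
    """O(n^2): each membership test scans the whole list so far."""
    cache = []
    for name in names:          # n iterations
        if name not in cache:   # O(n) list scan per iteration
            cache.append(name)
    return cache

def insert_all_hashed(names):
    """O(n): set membership/insert is O(1) on average."""
    cache = set()
    for name in names:
        cache.add(name)
    return cache
```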
On Thu, 2021-09-09 at 14:44 -0500, David Teigland wrote:
> On Tue, Jun 08, 2021 at 01:23:33PM +0000, Martin Wilck wrote:
> > On Di, 2021-06-08 at 14:29 +0200, Peter Rajnoha wrote:
> > > On Mon 07 Jun 2021 16:48, David Teigland wrote:
> > > >
> > > > If
On Tue, 2021-09-28 at 17:16 +0200, Martin Wilck wrote:
> On Tue, 2021-09-28 at 09:42 -0500, David Teigland wrote:
>
> >
> > Firstly, with the new devices file, only the actual md/mpath device
> > will
> > be in the devices file, the components will not be, so lvm
Hello David and Peter,
On Mon, 2021-09-27 at 10:38 -0500, David Teigland wrote:
> On Mon, Sep 27, 2021 at 12:00:32PM +0200, Peter Rajnoha wrote:
> > > - We could use the new lvm-activate-* services to replace the
> > > activation
> > > generator when lvm.conf event_activation=0. This would be don
On Tue, 2021-09-28 at 09:42 -0500, David Teigland wrote:
> On Tue, Sep 28, 2021 at 06:34:06AM +0000, Martin Wilck wrote:
> > Hello David and Peter,
> >
> > On Mon, 2021-09-27 at 10:38 -0500, David Teigland wrote:
> > > On Mon, Sep 27, 2021 at 12:00:32PM +0200, Peter
On Tue, 2021-09-28 at 12:42 -0500, Benjamin Marzinski wrote:
> On Tue, Sep 28, 2021 at 03:16:08PM +0000, Martin Wilck wrote:
> > On Tue, 2021-09-28 at 09:42 -0500, David Teigland wrote:
> >
> >
> > I have pondered this quite a bit, but I can't say I have a conc
On Wed, 2021-09-29 at 23:39 +0200, Peter Rajnoha wrote:
> On Mon 27 Sep 2021 10:38, David Teigland wrote:
> > On Mon, Sep 27, 2021 at 12:00:32PM +0200, Peter Rajnoha wrote:
> > > > - We could use the new lvm-activate-* services to replace the
> > > > activation
> > > > generator when lvm.conf event
On Wed, 2021-09-29 at 23:53 +0200, Peter Rajnoha wrote:
> On Tue 28 Sep 2021 06:34, Martin Wilck wrote:
> >
> > You said it should wait for multipathd, which in turn waits for
> > udev
> > settle. And indeed it makes some sense. After all: the idea was to
> >
On Thu, 2021-09-30 at 16:07 +0800, heming.z...@suse.com wrote:
> On 9/30/21 3:51 PM, Martin Wilck wrote:
>
>
> Another performance story:
> With legacy lvm2 (2.02.xx) and the lvmetad daemon, event-activation
> mode is very likely to time out on a large number of PVs.
> When cus
On Thu, 2021-09-30 at 00:06 +0200, Peter Rajnoha wrote:
> On Tue 28 Sep 2021 12:42, Benjamin Marzinski wrote:
> > On Tue, Sep 28, 2021 at 03:16:08PM +0000, Martin Wilck wrote:
> > > I have pondered this quite a bit, but I can't say I have a
> > > concrete
>
On Thu, 2021-09-30 at 23:32 +0800, heming.z...@suse.com wrote:
> > I just want to say that some of the issues might simply be
> > regressions/issues with systemd/udev that could be fixed. We as
> > providers of block device abstractions where we need to handle,
> > sometimes, thousands of devices,
On Thu, 2021-09-30 at 09:41 -0500, Benjamin Marzinski wrote:
> On Thu, Sep 30, 2021 at 07:51:08AM +0000, Martin Wilck wrote:
>
>
> For multipathd, we don't even need to care when all the block devices
> have been processed. We only need to care about devices that are
> cu
On Thu, 2021-09-30 at 10:55 -0500, David Teigland wrote:
> On Wed, Sep 29, 2021 at 11:39:52PM +0200, Peter Rajnoha wrote:
>
> >
> > - For event-based activation, we need to be sure that we use
> > "RUN"
> > rule, not any of "IMPORT{program}" or "PROGRAM" rule. The
> > difference
> > is
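The distinction Peter draws here is that `PROGRAM` and `IMPORT{program}` execute synchronously while the rules for an event are being processed (their output can feed properties back into rule evaluation, but they block the udev worker), whereas `RUN` programs are only queued and executed after all rules have been applied. Illustrative rule fragments (the helper name is an assumption, not the actual lvm2 rules):

```
# Runs during rule processing; its stdout (KEY=value lines) becomes
# properties usable by later rules -- but it blocks the udev worker.
ACTION=="add", SUBSYSTEM=="block", IMPORT{program}="lvm_pvscan_helper $devnode"

# Queued and run only after all rules for the event are processed;
# cannot influence the remaining rule evaluation.
ACTION=="add", SUBSYSTEM=="block", RUN+="lvm_pvscan_helper $devnode"
```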
On Wed, 2021-10-20 at 09:50 -0500, David Teigland wrote:
>
> I was just providing some background history after you and Peter both
> mentioned the idea of using RUN instead of IMPORT. That is, I gave
> up
> trying to use RUN many months ago because it wouldn't work, while
> IMPORT
> actually does
On Mon, 2021-10-18 at 10:04 -0500, David Teigland wrote:
>
> I began trying to use RUN several
> months ago and I think I gave up trying to find a way to pass values
> from
> the RUN program back into the udev rule (possibly by writing values
> to a
> temp file and then doing IMPORT{file}).
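For context, `IMPORT{file}` reads KEY=value pairs from a text file into the event's properties, but the round trip described cannot work within a single event: the `RUN` program only executes after all rules, including any `IMPORT{file}`, have already been processed, so the file is written too late. Illustrative fragment (paths are assumptions):

```
# Rule order is irrelevant here: the RUN program fires only after rule
# processing finishes, so IMPORT{file} in the same event sees a stale
# or missing file.
ACTION=="add", SUBSYSTEM=="block", RUN+="/usr/sbin/lvm_helper --out /run/lvm_helper.props $devnode"
ACTION=="add", SUBSYSTEM=="block", IMPORT{file}="/run/lvm_helper.props"
```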
Tha
On Thu, 2019-02-07 at 19:13 +0100, suscrici...@gmail.com wrote:
> El Thu, 7 Feb 2019 11:18:40 +0100
>
> There's been a reply in Arch Linux forums and at least I can apply
> some
> contingency measures. If it happens again I will provide more info
> following your advice.
The log shows clearly tha