Re: [linux-lvm] lvm2-testsuite stability

2023-06-20 Thread Scott Moser
>
>Missed to actually answer this embedded question about the current git tree
>for lvm2. We are still at sourceware:
>
>https://sourceware.org/git/?p=lvm2.git

Yeah, sorry. I was aware; I was just being lazy in pointing at the GitHub
mirror. It'd be nice for that to be up to date, but it isn't really important
here.

>So likely github has some out-of-sync mirror ATM and gitlab even looks like
>some Bastian's  fork ???  (which I'm finding seriously confusing for lvm2
>users - Debian should be giving different/distinct names here for their forked
>projects instead i.e. lvm2-debian  if there exists any real need for that...)

I'm not certain of this, but I think the branch is maintained the way
many Debian packages are maintained now: rather than carrying a set of
patches against an upstream tarball, they use git to maintain the delta
and git merge to sync changes. See https://wiki.debian.org/PackagingWithGit
for more information.

The upstream (lvm2) code is tagged in the repo, so you can see the
differences between upstream/2.03.16 and debian/2.03.16-2 at:

https://salsa.debian.org/lvm-team/lvm2/-/compare/debian%2F2.03.16-2...upstream%2F2.03.16?from_project_id=24161&straight=true

That will give you some idea of the changes in Debian. You can get a
diff without the debian/ directory with:

 git diff upstream/2.03.16..debian/2.03.16-2 -- ':!debian'

or

 git diff upstream/2.03.16..master -- ':!debian'

Unfortunately, that doesn't get you a whole lot more information. The
delta isn't huge, but what is there is not annotated particularly
well.  If I had to guess, I'd assume some of it is no longer
necessary.
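If it helps, the commit-level view of that delta can be pulled with a
tiny helper like this (`show_delta` is just an illustrative name; the tag
names are the ones used above):

```shell
# Illustrative helper ("show_delta" is a made-up name): list the commits and a
# diffstat between an upstream tag and a Debian packaging tag, excluding debian/.
show_delta() {
    from="$1"; to="$2"
    git --no-pager log --oneline "$from..$to" -- ':!debian'
    git --no-pager diff --stat "$from..$to" -- ':!debian'
}

# e.g.: show_delta upstream/2.03.16 debian/2.03.16-2
```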

Dropping that delta would be a good example of something that is made
much easier with the integration test that autopkgtest provides.

>>> The gist at https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
>>> describes how I am doing this.  It is "standard" package build and autopkgtest
>>> on debian/ubuntu.  The autopkgtest VM does not use LVM for the system
>>> so we don't have to worry about interaction with that.
>
>So leads to a question was this testing actually tried against upstream git
>main/HEAD branch ?
>Is this test testing Debian packaged version of lvm2 which is not equivalent
>to upstream lvm2 ?

I was intentionally testing the debian package version here, as that is
the ultimate goal of package tests.  I've not tried with upstream.
Sorry if that wasn't clear.

___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



Re: [linux-lvm] lvm2-testsuite stability

2023-06-19 Thread Scott Moser
Hi, thanks for your response.

> Yep - some tests are failing
>
> > expected-fail  api/dbustest.sh
>
> We do have them even split to individual tests;
> api/dbus_test_cache_lv_create.sh
> api/dbus_test_log_file_option.sh

That is not available upstream, right?
I just saw the single 'dbustest.sh' in
[main/test](https://github.com/lvmteam/lvm2/tree/master/test/api).
Is there another branch I should be looking at?

> I'd likely need to get access/see  to the logs of such machines
> (or you would need to provide as some downloadable image of you Qemu machine
> installation)

The gist at https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
describes how I am doing this.  It is "standard" package build and autopkgtest
on debian/ubuntu.  The autopkgtest VM does not use LVM for the system
so we don't have to worry about interaction with that.

I could provide a vm image if you were interested.

> > Do others run this test-suite in automation and get reliable results ?
> >
>
> We surely do run these tests on regular basis on VM - so those are usually
> slightly modified to avoid collisions with tests.  There is also no
> strict rule to not break some 'tests' - so occasionally some tests can
> be failing for a while if they are seen 'less important' over some other
> bugs...

Identifying in git the set of tests that are allowed to fail, and
gating pull requests on a successful pass, would be wonderful.  Without
some expected-working list, it is hard for me as a downstream user to
separate signal from noise.
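One low-tech way to express such a list would be a plain file of
known-bad test names that the runner consults.  This is purely
illustrative (`run_filtered` and the skiplist file are hypothetical, not
an existing upstream mechanism):

```shell
# Purely illustrative: filter a list of test names against a skiplist file.
# run_filtered and the skiplist file are hypothetical names.
run_filtered() {
    skiplist="$1"; shift
    for t in "$@"; do
        # -x: whole-line match, -F: fixed string (test names, not patterns)
        if grep -qxF "$t" "$skiplist"; then
            echo "skipping: $t"
        else
            echo "would run: $t"
        fi
    done
}
```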

Would upstream be open to pull requests that added test suite runs
via GitHub Actions?  Is there some other preferred mechanism for such a thing?

The test suite is really well done. I was surprised at how well it insulates
itself from the system and how easy it was to use.  Running it in a
distro would give the distro developer a *huge* boost in confidence when
integrating a new LVM release.

>
> We would need to think much harder if the test should be running with
> some daemons or autoactivation on the system that could see and could
> interact with our devices generated during the test run (one of the
> reasons machine for tests need some local modification - we may provide
> some Ansible-like testing script eventually.

Autopkgtest will:
 * start a new VM for each run of the tests,
 * install the packages listed as dependencies of the test, and
 * run the test "entrypoint" (debian/test/testsuite).

I think I have debian/test/testsuite correctly shutting down/masking
the necessary system services before invoking the tests, as suggested
in TESTING.

> But anyway - the easiest is to give us access to your test results so we
> could see whether there is something wrong with our test environment,
> lvm2 bug, or system setup - it's not always trivial to guess...

If you are willing to help, I can post a vm image somewhere. I suspect
you're not working with debian or ubuntu on a daily basis.  If you had
access to a debian or ubuntu system it would probably be easiest to
just let autopkgtest do the running. Autopkgtest provides `--shell`
and `--shell-fail` parameters to put you into a root shell
after the tests.
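Concretely, the invocation I have in mind looks roughly like this (the
.dsc and image names are placeholders, not real artifacts):

```shell
# Hypothetical wrapper around the invocation I mean: run the package's
# autopkgtest in a qemu testbed, dropping to a root shell in the VM on failure.
run_lvm2_autopkgtest() {
    dsc="$1"; image="$2"
    autopkgtest --shell-fail "$dsc" -- qemu "$image"
}

# e.g.: run_lvm2_autopkgtest lvm2_2.03.16-3.dsc autopkgtest-base.img
```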

My ultimate goal is to give a distro confidence that the lvm2
package it is integrating works correctly.  I'm OK with skipping
tests that produce noisy results.  Even then, having *some*
reliable test is a huge improvement.

Thanks,
Scott

On Mon, Jun 19, 2023 at 8:26 AM Zdenek Kabelac  wrote:
>
> Dne 15. 06. 23 v 20:02 Scott Moser napsal(a):
> > Hi,
> > [sorry for duplicate post, re-sending from a subscribed address]
> >
> > I'm looking to enable the lvm2 testsuite as an autopkgtest [1] to run
> > in debian and ubuntu. I have a merge request up at [2].  The general
> > idea is just to a.) package 'lvm2-testsuite' as an installable package
> > b.) run the testsuite as part of the autopkgtest.
> >
> > The version I'm testing on Ubuntu 22.04 is 2.03.16-3 from debian
> > (rebuilt for 22.04). I'm running udev-vanilla  in a 2 cpu/4GB VM, and
> > stopping/masking  the following services: dm-event lvm2-lvmpolld
> > lvm2-monitor lvm2-lvmdbusd .
> >
> > I'm seeing some failures when running the test.  Some seem expected
> > due to size limitations, some seem to fail every time, and some see
> > transient failures.
> >
> > Here is the list of tests that I'm seeing fail and my initial
> > categorization.  I've seen this across say half a dozen runs:
> >
>
> Yep - some tests are failing
>
> > expected-fail  api/dbustest.sh
>
> We do have them even split to individual tests;
>
> api/dbus_test_cache_lv

[linux-lvm] lvm2-testsuite stability

2023-06-15 Thread Scott Moser
Hi,

I'm looking to enable the lvm2 testsuite as an autopkgtest [1] to run
in debian and ubuntu. I have a merge request up at [2].  The general
idea is just to a.) package 'lvm2-testsuite' as an installable package
b.) run the testsuite as part of the autopkgtest.

The version I'm testing on Ubuntu 22.04 is 2.03.16-3 from debian
(rebuilt for 22.04). I'm running udev-vanilla  in a 2 cpu/4GB VM, and
stopping/masking  the following services: dm-event lvm2-lvmpolld
lvm2-monitor lvm2-lvmdbusd .

I'm seeing some failures when running the test.  Some seem expected
due to size limitations, some seem to fail every time, and some see
transient failures.

Here is the list of tests that I'm seeing fail and my initial
categorization.  I've seen this across say half a dozen runs:

expected-fail  api/dbustest.sh
expected-fail  shell/lvconvert-repair-thin.sh
space-req  shell/lvcreate-large-raid.sh
space-req  shell/lvcreate-thin-limits.sh
expected-fail  shell/lvm-conf-error.sh
expected-fail  shell/lvresize-full.sh
timeout        shell/pvmove-abort-all.sh
space-req  shell/pvmove-basic.sh
expected-fail  shell/pvscan-autoactivation-polling.sh
expected-fail  shell/snapshot-merge.sh
space-req  shell/thin-large.sh
racy   shell/writecache-cache-blocksize.sh

expected-fail fails nearly every time. timeout seems to work sometimes.
space-req I think is just a space-requirement issue (I'll just skip
those tests).

The full output from the test run can be seen at [3] in the
testsuite-stdout.txt and testsuite-stderr.txt files.

Do others run this test-suite in automation and get reliable results ?

Thanks in advance for any help.

--
[1] https://wiki.ubuntu.com/ProposedMigration#autopkgtests
[2] https://salsa.debian.org/lvm-team/lvm2/-/merge_requests/6
[3] https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04


[linux-lvm] [PATCH] man: document that 2 times poolmetadatasize is allocated for lvmthin

2020-10-26 Thread Scott Moser
diff --git a/man/lvmthin.7_main b/man/lvmthin.7_main
index ce2343183..e21e516c9 100644
--- a/man/lvmthin.7_main
+++ b/man/lvmthin.7_main
@@ -1101,6 +1101,11 @@ specified with the --poolmetadatasize option.  When this option is not
 given, LVM automatically chooses a size based on the data size and chunk
 size.

+The space allocated by creation of a thinpool will include
+a spare metadata LV by default (see "Spare metadata LV").  As a result,
+the VG must have enough space for the --size option plus twice
+the specified (or calculated) --poolmetadata value.
+
 It can be hard to predict the amount of metadata space that will be
 needed, so it is recommended to start with a size of 1GiB which should be
 enough for all practical purposes.  A thin pool metadata LV can later be




[linux-lvm] Why does thinpool take 2*poolmetadatasize space?

2020-10-20 Thread Scott Moser
When I create an lvmthinpool with size S and poolmetadatasize P,
it reduces the available freespace by S+2P. I expected that to
be S+P. Where did the extra poolmetadatasize get used?

See below for example.
Before lvcreate we had 255868m free; after, we had 254588m.
The difference is 1280 (1024 + 2*128).
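The arithmetic, spelled out:

```shell
# data LV + tmeta + the hidden [lvol0_pmspare] (visible in the lvs output below)
size_mib=1024
pmeta_mib=128
echo "$(( size_mib + 2 * pmeta_mib ))"   # prints 1280
```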

# lvm vgcreate --metadatasize=128m myvg0 /dev/vda
  Physical volume "/dev/vda" successfully created.
  Volume group "myvg0" successfully created

# pvs --unit=m
  PV VGFmt  Attr PSize  PFree
  /dev/vda   myvg0 lvm2 a--  255868.00m 255868.00m

# vgs --unit=m
  VG#PV #LV #SN Attr   VSize  VFree
  myvg0   1   0   0 wz--n- 255868.00m 255868.00m

# lvm lvcreate --ignoremonitoring --yes --activate=y \
   --setactivationskip=n --size=1024m --poolmetadatasize=128m \
   --thinpool=mythinpool myvg0
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "mythinpool" created.

# vgs --unit=m
  VG#PV #LV #SN Attr   VSize  VFree
  myvg0   1   1   0 wz--n- 255868.00m 254588.00m

# lvs --all --unit=m
  LV                 VG    Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]    myvg0 ewi---      128.00m
  mythinpool         myvg0 twi-a-tz-- 1024.00m             0.00   10.03
  [mythinpool_tdata] myvg0 Twi-ao     1024.00m
  [mythinpool_tmeta] myvg0 ewi-ao      128.00m
