Re: [linux-lvm] lvm2-testsuite stability

2023-06-20 Thread Scott Moser
>
>Missed to actually answer this embedded question about the current git tree
>for lvm2. We are still at sourceware:
>
>https://sourceware.org/git/?p=lvm2.git

Yeah, sorry. I was aware; I just pointed to GitHub out of laziness.
It'd be nice for that to be up to date, but it isn't really important
here.

>So likely github has some out-of-sync mirror ATM and gitlab even looks like
>some Bastian's  fork ???  (which I'm finding seriously confusing for lvm2
>users - Debian should be giving different/distinct names here for their forked
>projects instead i.e. lvm2-debian  if there exists any real need for that...)

I'm not certain of this, but I think that the branch is maintained the
way many Debian packages are maintained now.  Rather than keeping a set of
patches against an upstream tarball, they use git to maintain the delta and git
merge to sync changes. See https://wiki.debian.org/PackagingWithGit for
more information.

The upstream (lvm2) code is tagged in the repo, so you can see the
differences between upstream/2.03.16 and debian/2.03.16-2 at:

https://salsa.debian.org/lvm-team/lvm2/-/compare/debian%2F2.03.16-2...upstream%2F2.03.16?from_project_id=24161=true

That will give you some idea of the changes in Debian. You can get a
diff excluding the debian/ directory with:

 git diff upstream/2.03.16..debian/2.03.16-2 -- ':!debian'

or

 git diff upstream/2.03.16..master -- ':!debian'
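
As an aside, here is a tiny self-contained demo of the ':!<dir>' pathspec
exclusion those commands rely on (a throwaway repo; all file names here are
made up for illustration):

```shell
# Build a scratch repo with one "upstream" commit and one packaging
# commit that adds both a code change and a debian/ directory.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m upstream
echo 'core change' > main.c
mkdir debian
echo 'rules' > debian/rules
git add .
git -c user.email=demo@example.com -c user.name=demo commit -q -m packaging
# The ':!debian' pathspec hides packaging-only files from the diff:
git diff --name-only HEAD~1..HEAD -- ':!debian'
```

which prints only main.c, while debian/rules stays out of the diff.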

Unfortunately, those diffs don't give you a whole lot more information. There
isn't a huge delta, but the delta that is there is not annotated particularly
well.  If I had to guess, I'd assume that some of it is no longer
necessary.

Dropping that delta would be a good example of something that is made
much easier by the integration testing that autopkgtest provides.

>>> The gist at https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
>>> describes how I am doing this.  It is "standard" package build and 
>>> autopkgtest
>>> on debian/ubuntu.  The autopkgtest VM does not use LVM for the system
>>> so we don't have to worry about interaction with that.
>
>So leads to a question was this testing actually tried against upstream git
>main/HEAD branch ?
>Is this test testing Debian packaged version of lvm2 which is not equivalent
>to upstream lvm2 ?

I was intentionally testing the debian package version here, as that is
the ultimate goal of package tests.  I've not tried with upstream.
Sorry if that wasn't clear.

___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



Re: [linux-lvm] lvm2-testsuite stability

2023-06-20 Thread Zdenek Kabelac

Dne 19. 06. 23 v 20:22 Scott Moser napsal(a):

> Hi, thanks for your response.
>
>> Yep - some tests are failing
>>
>>> expected-fail  api/dbustest.sh
>>
>> We do have them even split to individual tests;
>> api/dbus_test_cache_lv_create.sh
>> api/dbus_test_log_file_option.sh
>
> That is not available upstream, right?
> I just saw the single 'dbustest.sh' in
> [main/test](https://github.com/lvmteam/lvm2/tree/master/test/api).
> Is there another branch I should be looking at?


Correct - that's a local 'mod' for some test machines - but I'd like to get it
merged to upstream - although made in a different way.


>> I'd likely need to get access/see the logs of such machines
>> (or you would need to provide some downloadable image of your Qemu machine
>> installation)
>
> The gist at https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
> describes how I am doing this.  It is "standard" package build and autopkgtest
> on debian/ubuntu.  The autopkgtest VM does not use LVM for the system
> so we don't have to worry about interaction with that.
>
> I could provide a vm image if you were interested.



The tricky part with lvm2 is its dependency on proper 'udev' rule
processing. Unfortunately the Debian distro somewhat changes those rules in its
packaging without deeper consultation with upstream, and there were a few more
differences that upstream lvm2 doesn't consider valid modifications
(though I haven't checked the recent state).



>>> Do others run this test-suite in automation and get reliable results ?


Yes our VM machines do give reliable results for properly configured boxes.
Although as said before - there are some 'failing' tests we know about.



> Identifying the set of tests that were allowed to fail in git
> and gating pull requests on successful pass would be wonderful.  Without
> some expected-working list, it is hard for me as a downstream user to
> separate signal from noise.


There are no 'tests' allowed to fail.

There are either 'broken' tests or broken lvm2 code - but it's just not always
easy to fix some bugs, and there are not enough hands to fix all issues quickly.
So all failing tests do present some real problem from class a) or b) and
should be fixed - it may just have lower priority among other tasks.




> Would upstream be open to pull requests that added test suite running
> via github actions?  Is there some other preferred mechanism for such a thing?
>
> The test suite is really well done. I was surprised how well it insulates
> itself from the system and how easy it was to use.  Running it in a
> distro would give the distro developer a *huge* boost in confidence when
> attempting to integrate a new LVM release into the distro.


Basically we are at a decision point to move either to GitHub or GitLab and add
these CI capabilities - but definitely some extra hands here might be helpful.



>> We would need to think much harder if the test should be running with
>> some daemons or autoactivation on the system that could see and could
>> interact with our devices generated during the test run (one of the
>> reasons machines for tests need some local modification - we may provide
>> some Ansible-like testing script eventually).


> Autopkgtest will
>   * start a new vm for each run of the tests
>   * install the packages listed as dependencies of the test.
>   * run the test "entrypoint" (debian/test/testsuite).
>
> I think that I have debian/test/testsuite correctly shutting
> down/masking the necessary system services before invoking the tests, as
> suggested in TESTING.


I'm not sure what the state of the current udev rules is - these may impact
some tests and possibly add some unexpected randomness.

Another aspect of our test suite is the 'try-out' of various 'race' moments,
which may eventually need further tuning on even faster hardware to hit the
race - but that might possibly be harder to 'set up' if the VMs are without
'ssh' access for a developer to enhance testing (it might be somewhat annoying
trying to fix this with individual git commits).



> If you are willing to help, I can post a vm image somewhere. I suspect


For at least initial diagnostics it should be sufficient to just expose
somewhere the results from the failing tests (the content of the failing tests'
subdirectory, basically).



> you're not working with debian or ubuntu on a daily basis.  If you had
> access to a debian or ubuntu system it would probably be easiest to
> just let autopkgtest do the running. autopkgtest does provide a
> `--shell` and `--shell-fail` parameter to put you into a root shell
> after the tests.
>
> My ultimate goal is to provide a distro with confidence that the lvm2
> package they're integrating is working correctly.  I'm ok to skip
> tests that provide noisy results.  In this case, having *some*
> reliable test is a huge improvement.


We were kind of trying to get some 'strange' deviations of the Debian package
fixed in the past - however it seemed to lead nowhere...
(Ideally all the 'needed' changes should only be set via configure options and
there should be no need of any extra patch on

Re: [linux-lvm] lvm2-testsuite stability

2023-06-19 Thread Scott Moser
Hi, thanks for your response.

> Yep - some tests are failing
>
> > expected-fail  api/dbustest.sh
>
> We do have them even split to individual tests;
> api/dbus_test_cache_lv_create.sh
> api/dbus_test_log_file_option.sh

That is not available upstream, right?
I just saw the single 'dbustest.sh' in
[main/test](https://github.com/lvmteam/lvm2/tree/master/test/api).
Is there another branch I should be looking at?

> I'd likely need to get access/see the logs of such machines
> (or you would need to provide some downloadable image of your Qemu machine
> installation)

The gist at https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
describes how I am doing this.  It is "standard" package build and autopkgtest
on debian/ubuntu.  The autopkgtest VM does not use LVM for the system
so we don't have to worry about interaction with that.

I could provide a vm image if you were interested.

> > Do others run this test-suite in automation and get reliable results ?
> >
>
> We surely do run these tests on regular basis on VM - so those are usually
> slightly modified to avoid collisions with tests.  There is also no
> strict rule to not break some 'tests' - so occasionally some tests can
> be failing for a while if they are seen 'less important' over some other
> bugs...

Identifying the set of tests that were allowed to fail in git
and gating pull requests on successful pass would be wonderful.  Without
some expected-working list, it is hard for me as a downstream user to
separate signal from noise.

Would upstream be open to pull requests that added test suite running
via github actions?  Is there some other preferred mechanism for such a thing?

The test suite is really well done. I was surprised how well it insulates
itself from the system and how easy it was to use.  Running it in a
distro would give the distro developer a *huge* boost in confidence when
attempting to integrate a new LVM release into the distro.

>
> We would need to think much harder if the test should be running with
> some daemons or autoactivation on the system that could see and could
> interact with our devices generated during the test run (one of the
> reasons machine for tests need some local modification - we may provide
> some Ansible-like testing script eventually.

Autopkgtest will
 * start a new vm for each run of the tests
 * install the packages listed as dependencies of the test.
 * run the test "entrypoint" (debian/test/testsuite).

I think that I have debian/test/testsuite correctly shutting
down/masking the necessary system services before invoking the tests, as
suggested in TESTING.
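
For reference, that pre-test shutdown amounts to something like the sketch
below. The unit names are the ones listed earlier in the thread; the
`.socket` handling and the exact set a given entrypoint stops are assumptions,
not a copy of debian/test/testsuite:

```shell
# Stop and mask the lvm2-related services/sockets so they cannot react
# to the loop devices the test suite creates.  Errors are ignored so
# the loop also succeeds where a unit (or systemctl) is not present.
for svc in dm-event lvm2-lvmpolld lvm2-monitor lvm2-lvmdbusd; do
    systemctl stop "$svc.service" "$svc.socket" 2>/dev/null || true
    systemctl mask "$svc.service" 2>/dev/null || true
done
```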

> But anyway - the easiest is to give us access to your test results so we
> could see whether there is something wrong with our test environment,
> lvm2 bug, or system setup - it's not always trivial to guess...

If you are willing to help, I can post a vm image somewhere. I suspect
you're not working with debian or ubuntu on a daily basis.  If you had
access to a debian or ubuntu system it would probably be easiest to
just let autopkgtest do the running. autopkgtest does provide a
`--shell` and `--shell-fail` parameter to put you into a root shell
after the tests.

My ultimate goal is to provide a distro with confidence that the lvm2
package they're integrating is working correctly.  I'm ok to skip
tests that provide noisy results.  In this case, having *some*
reliable test is a huge improvement.

Thanks,
Scott

On Mon, Jun 19, 2023 at 8:26 AM Zdenek Kabelac  wrote:
>
> Dne 15. 06. 23 v 20:02 Scott Moser napsal(a):
> > Hi,
> > [sorry for duplicate post, re-sending from a subscribed address]
> >
> > I'm looking to enable the lvm2 testsuite as an autopkgtest [1] to run
> > in debian and ubuntu. I have a merge request up at [2].  The general
> > idea is just to a.) package 'lvm2-testsuite' as an installable package
> > b.) run the testsuite as part of the autopkgtest.
> >
> > The version I'm testing on Ubuntu 22.04 is 2.03.16-3 from debian
> > (rebuilt for 22.04). I'm running udev-vanilla  in a 2 cpu/4GB VM, and
> > stopping/masking  the following services: dm-event lvm2-lvmpolld
> > lvm2-monitor lvm2-lvmdbusd .
> >
> > I'm seeing some failures when running the test.  Some seem expected
> > due to size limitations, some seem to fail every time, and some see
> > transient failures.
> >
> > Here is the list of tests that I'm seeing fail and my initial
> > categorization.  I've seen this across say half a dozen runs:
> >
>
> Yep - some tests are failing
>
> > expected-fail  api/dbustest.sh
>
> We do have them even split to individual tests;
>
> api/dbus_test_cache_lv_create.sh
> api/dbus_test_copy_signature.sh
> api/dbus_test_external_event.sh
> api/dbus_test_log_file_option.sh
> api/dbus_test_wipefs.sh
> api/dbus_test_z_sigint.sh
>
> these need to be fixed and resolved.
>
> > expected-fail  shell/lvconvert-repair-thin.sh
>
>
>
> > space-req  shell/lvcreate-large-raid.sh
> > space-req  shell/lvcreate-thin-limits.sh
> > 

Re: [linux-lvm] lvm2-testsuite stability

2023-06-19 Thread Zdenek Kabelac

Dne 15. 06. 23 v 20:02 Scott Moser napsal(a):

> Hi,
> [sorry for duplicate post, re-sending from a subscribed address]
>
> I'm looking to enable the lvm2 testsuite as an autopkgtest [1] to run
> in debian and ubuntu. I have a merge request up at [2].  The general
> idea is just to a.) package 'lvm2-testsuite' as an installable package
> b.) run the testsuite as part of the autopkgtest.
>
> The version I'm testing on Ubuntu 22.04 is 2.03.16-3 from debian
> (rebuilt for 22.04). I'm running udev-vanilla in a 2 cpu/4GB VM, and
> stopping/masking the following services: dm-event lvm2-lvmpolld
> lvm2-monitor lvm2-lvmdbusd.
>
> I'm seeing some failures when running the test.  Some seem expected
> due to size limitations, some seem to fail every time, and some see
> transient failures.
>
> Here is the list of tests that I'm seeing fail and my initial
> categorization.  I've seen this across say half a dozen runs:



Yep - some tests are failing


> expected-fail  api/dbustest.sh


We do have them even split to individual tests;

api/dbus_test_cache_lv_create.sh
api/dbus_test_copy_signature.sh
api/dbus_test_external_event.sh
api/dbus_test_log_file_option.sh
api/dbus_test_wipefs.sh
api/dbus_test_z_sigint.sh

these need to be fixed and resolved.


> expected-fail  shell/lvconvert-repair-thin.sh
>
> space-req  shell/lvcreate-large-raid.sh
> space-req  shell/lvcreate-thin-limits.sh
> expected-fail  shell/lvm-conf-error.sh
> expected-fail  shell/lvresize-full.sh
> timeout  shell/pvmove-abort-all.sh
> space-req  shell/pvmove-basic.sh
> expected-fail  shell/pvscan-autoactivation-polling.sh
> expected-fail  shell/snapshot-merge.sh
> space-req  shell/thin-large.sh
> racy   shell/writecache-cache-blocksize.sh


These are individual - we have some of those testing on some machines.
They may need some 'extra' care.



> expected-fail fails most every time. timeout seems to work sometimes,
> space-req i think is just space requirement issue (i'll just skip
> those tests).



I'd likely need to get access/see the logs of such machines
(or you would need to provide some downloadable image of your Qemu machine
installation)




> The full output from the test run can be seen at [3] in the
> testsuite-stdout.txt and testsuite-stderr.txt files.
>
> Do others run this test-suite in automation and get reliable results ?



We surely do run these tests on a regular basis on VMs - those are usually
slightly modified to avoid collisions with tests.
There is also no strict rule to not break some 'tests' - so occasionally some
tests can be failing for a while if they are seen as 'less important' next to
some other bugs...


We would need to think much harder about whether the test should be running
with some daemons or autoactivation on the system that could see and could
interact with our devices generated during the test run (one of the reasons
machines for tests need some local modification - we may provide some
Ansible-like testing script eventually).


But anyway - the easiest is to give us access to your test results so we could
see whether there is something wrong with our test environment, an lvm2 bug, or
the system setup - it's not always trivial to guess...



Regards

Zdenek





[linux-lvm] lvm2-testsuite stability

2023-06-15 Thread Scott Moser
Hi,
[sorry for duplicate post, re-sending from a subscribed address]

I'm looking to enable the lvm2 testsuite as an autopkgtest [1] to run
in debian and ubuntu. I have a merge request up at [2].  The general
idea is just to a.) package 'lvm2-testsuite' as an installable package
b.) run the testsuite as part of the autopkgtest.

The version I'm testing on Ubuntu 22.04 is 2.03.16-3 from debian
(rebuilt for 22.04). I'm running udev-vanilla  in a 2 cpu/4GB VM, and
stopping/masking  the following services: dm-event lvm2-lvmpolld
lvm2-monitor lvm2-lvmdbusd .

I'm seeing some failures when running the test.  Some seem expected
due to size limitations, some seem to fail every time, and some see
transient failures.

Here is the list of tests that I'm seeing fail and my initial
categorization.  I've seen this across say half a dozen runs:

expected-fail  api/dbustest.sh
expected-fail  shell/lvconvert-repair-thin.sh
space-req  shell/lvcreate-large-raid.sh
space-req  shell/lvcreate-thin-limits.sh
expected-fail  shell/lvm-conf-error.sh
expected-fail  shell/lvresize-full.sh
timeout  shell/pvmove-abort-all.sh
space-req  shell/pvmove-basic.sh
expected-fail  shell/pvscan-autoactivation-polling.sh
expected-fail  shell/snapshot-merge.sh
space-req  shell/thin-large.sh
racy   shell/writecache-cache-blocksize.sh

expected-fail fails most every time. timeout seems to work sometimes,
space-req i think is just space requirement issue (i'll just skip
those tests).

The full output from the test run can be seen at [3] in the
testsuite-stdout.txt and testsuite-stderr.txt files.

Do others run this test-suite in automation and get reliable results ?

Thanks in advance for any help.

--
[1] https://wiki.ubuntu.com/ProposedMigration#autopkgtests
[2] https://salsa.debian.org/lvm-team/lvm2/-/merge_requests/6
[3] https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
