[ovirt-devel] Re: cloning LVM partitions

2021-06-21 Thread Nir Soffer
On Mon, Jun 21, 2021 at 10:29 PM  wrote:
>
> Hi,
> What are the internal commands used to clone LVM partitions?

Are you sure this is the right mailing list?

> (you can simply point me to the source code where this is done)

If you mean how we copy LVM-based disks (e.g. on FC/iSCSI storage domains),
we use this:
https://github.com/oVirt/vdsm/blob/4f728377f6cd6950035a7739014737789a4d6f14/lib/vdsm/storage/qemuimg.py#L232

Most flows use this API for copying disks:
https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/sdm/api/copy_data.py
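
For illustration only, here is a minimal sketch (an assumption, not vdsm's
actual code) of the kind of qemu-img convert invocation the qemuimg helper
wraps when copying the contents of one LV to another; the LV paths and
formats below are made up:

import subprocess

def copy_volume(src, dst, src_format="raw", dst_format="raw"):
    # A sketch of the command line; vdsm adds locking, progress reporting
    # and more options around the same basic call.
    cmd = [
        "qemu-img", "convert",
        "-p",                        # report progress
        "-f", src_format,            # source format
        "-O", dst_format,            # destination format
        "-t", "none", "-T", "none",  # bypass the host page cache on both sides
        src, dst,
    ]
    subprocess.run(cmd, check=True)

copy_volume("/dev/vg_example/src_lv", "/dev/vg_example/dst_lv")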

Nir


[ovirt-devel] Re: AV 8.4 for CentOS Linux

2021-06-21 Thread Nir Soffer
On Mon, Jun 21, 2021 at 8:39 PM Marcin Sobczyk  wrote:
>
>
>
> On 6/20/21 12:23 PM, Dana Elfassy wrote:
> > Hi,
> > I'm getting package conflicts when trying to upgrade my CentOS 8.4 and
> > CentOS Stream hosts.
> > (CentOS Stream was installed from ISO, then I
> > installed ovirt-release-master.rpm and deployed the host.)
> > The details below are the output for CentOS Stream.
> > * The package conflicts also occur on OST -
> > https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/7211/console
> > 
> >
> > Do you know what could've caused this and how it can be fixed?
> Yes, libvirt 7.4.0 + qemu-kvm 6.0.0 is currently broken and has bugs
> filed on it.
> We're trying to avoid these packages by excluding them at the vdsm spec
> level [1]
> and downgrading to older versions (7.0.0 and 5.2.0 respectively) that
> work in OST [2].
> Unfortunately, somewhere around late Friday a new version of qemu-kvm
> was published, which makes the downgrade go from 6.0.0-19 to 6.0.0-18
> instead of to the 5.2.0 that works. We don't have a reasonable resolution
> for OST yet.
>
> If you manage your host manually, simply run 'dnf downgrade qemu-kvm' until
> you get version 5.2.0,
> or download and install all the older RPMs manually.

This was true this morning, but we reverted the patch conflicting with
libvirt 7.4.0. Please use the latest vdsm from master.

You can build vdsm locally or use this repo:
https://jenkins.ovirt.org/job/vdsm_standard-on-merge/3659/artifact/build-artifacts.build-py3.el8stream.x86_64/

Nir

> Regards, Marcin
>
> [1] https://gerrit.ovirt.org/#/c/vdsm/+/115193/
> [2] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115194/
>
> > Thanks,
> > Dana
> >
> > [root@localhost ~]# rpm -q vdsm
> > vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
> >
> > [root@localhost ~]# dnf module list virt
> > Last metadata expiration check: 1:09:54 ago on Sun 20 Jun 2021 05:09:50 AM EDT.
> > CentOS Stream 8 - AppStream
> > Name  Stream       Profiles    Summary
> > virt  rhel [d][e]  common [d]  Virtualization module
> >
> > The error:
> > [root@localhost ~]# dnf update
> > Last metadata expiration check: 1:08:13 ago on Sun 20 Jun 2021
> > 05:09:50 AM EDT.
> > Error:
> >  Problem 1: package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires
> > (libvirt-daemon-kvm >= 7.0.0-14 and libvirt-daemon-kvm < 7.4.0-1), but
> > none of the providers can be installed
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-6.0.0-35.module_el8.5.0+746+bbd5d70c.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-6.0.0-36.module_el8.5.0+821+97472045.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-5.6.0-10.el8.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-6.0.0-17.el8.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-6.0.0-25.2.el8.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-6.6.0-13.el8.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-6.6.0-7.1.el8.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-6.6.0-7.3.el8.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-7.0.0-13.el8s.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
> > libvirt-daemon-kvm-7.0.0-14.el8s.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.0.0-13.el8s.x86_64 and
> > libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.0.0-14.el8s.x86_64 and
> > libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
> >   - cannot install both libvirt-daemon-kvm-7.0.0-9.el8s.x86_64 and
> > libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
> >   - cannot install the best update candidate for package
> > vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
> >   - cannot install the best update candidate for package
> > libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
> >  Problem 2: problem with installed package
> > vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
> >   - package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires
> > (qemu-kvm >= 15:5.2.0 and qemu-kvm < 15:6.0.0), but none of the
> > providers can be installed
> >   - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
> > qemu-kvm-15:5.2.0-16.el8s.x86_64
> >   - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
> > qemu-kvm-15:4.2.0-48.module_el8.5.0+746+bbd5d70c.x86_64
> >   - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 

[ovirt-devel] Re: OST verifed -1 broken, fails for infra issue in OST

2021-06-21 Thread Nir Soffer
On Mon, Jun 21, 2021 at 7:30 PM Michal Skrivanek wrote:
>
>
>
> > On 14. 6. 2021, at 13:14, Nir Soffer  wrote:
> >
> > I got this wrong review from OST, which looks like an infra issue in OST:
> >
> > Patch:
> > https://gerrit.ovirt.org/c/vdsm/+/115232
> >
> > Error:
> > https://gerrit.ovirt.org/c/vdsm/+/115232#message-46ad5e75_ed543485
> >
> > Failing code:
> >
> > Package(*line.split()) for res in results.values() > for line in
> > _filter_results(res['stdout'].splitlines()) ] E TypeError: __new__()
> > missing 2 required positional arguments: 'version' and 'repo'
> > ost_utils/ost_utils/deployment_utils/package_mgmt.py:177: TypeError
> >
> > I hope someone working on OST can take a look soon.
>
> it’s from a week ago, is that still relevant or you pasted a wrong patch?

Yes, this was sent 7 days ago, and Marcin already answered on the same day.

For some reason the mailing list sent this mail to subscribers only today.


[ovirt-devel] Re: OST verifed -1 broken, fails for infra issue in OST

2021-06-21 Thread Michal Skrivanek


> On 14. 6. 2021, at 13:14, Nir Soffer  wrote:
> 
> I got this wrong review from OST, which looks like an infra issue in OST:
> 
> Patch:
> https://gerrit.ovirt.org/c/vdsm/+/115232
> 
> Error:
> https://gerrit.ovirt.org/c/vdsm/+/115232#message-46ad5e75_ed543485
> 
> Failing code:
> 
> Package(*line.split()) for res in results.values() > for line in
> _filter_results(res['stdout'].splitlines()) ] E TypeError: __new__()
> missing 2 required positional arguments: 'version' and 'repo'
> ost_utils/ost_utils/deployment_utils/package_mgmt.py:177: TypeError
> 
> I hope someone working on OST can take a look soon.

it’s from a week ago, is that still relevant or you pasted a wrong patch?

Specifically, this issue has been fixed by https://gerrit.ovirt.org/115254 on
June 15th.

Thanks,
michal
> 
> Nir


[ovirt-devel] Re: hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Yedidyah Bar David
On Thu, Jun 17, 2021 at 6:27 PM Marcin Sobczyk  wrote:
>
>
>
> On 6/17/21 1:44 PM, Yedidyah Bar David wrote:
> > On Wed, Jun 16, 2021 at 1:23 PM Yedidyah Bar David  wrote:
> >> Hi,
> >>
> >> I now tried running locally hc-basic-suite-master with a patched OST,
> >> and it failed due to $subject. I checked and see that this also
> >> happened on CI, e.g. [1], before it started failing due to an unrelated
> >> reason later:
> >>
> >> E   TASK [gluster.infra/roles/firewall_config : Add/Delete
> >> services to firewalld rules] ***
> >> E   failed: [lago-hc-basic-suite-master-host-0]
> >> (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
> >> "item": "glusterfs", "msg": "ERROR: Exception caught:
> >> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
> >> not among existing services Permanent and Non-Permanent(immediate)
> >> operation, Services are defined by port/tcp relationship and named as
> >> they are in /etc/services (on most systems)"}
> >> E   failed: [lago-hc-basic-suite-master-host-2]
> >> (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
> >> "item": "glusterfs", "msg": "ERROR: Exception caught:
> >> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
> >> not among existing services Permanent and Non-Permanent(immediate)
> >> operation, Services are defined by port/tcp relationship and named as
> >> they are in /etc/services (on most systems)"}
> >> E   failed: [lago-hc-basic-suite-master-host-1]
> >> (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
> >> "item": "glusterfs", "msg": "ERROR: Exception caught:
> >> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
> >> not among existing services Permanent and Non-Permanent(immediate)
> >> operation, Services are defined by port/tcp relationship and named as
> >> they are in /etc/services (on most systems)"}
> >>
> >> This seems similar to [2], and indeed I can't see the package
> >> 'glusterfs-server' installed locally on host-0. Any idea?
> > I think I understand:
> >
> > It seems like the deployment of hc relied on the order of running the deploy
> > scripts as written in lagoinitfile. With the new deploy code, all of them 
> > run
> > in parallel. Does this make sense?
> The scripts run in parallel as in "on all VMs at the same time", but
> sequentially
> as in "one script at a time on each VM" - this is the same behavior we
> had with lago deployment.

Well, I do not think it works as intended, then. When running locally,
I logged into host-0, and after it failed, I had:

# dnf history
ID | Command line                                                                            | Date and time    | Action(s) | Altered
-------------------------------------------------------------------------------------------------------------------------------------
 4 | install -y --nogpgcheck ansible gluster-ansible-roles ovirt-hosted-engine-setup ovirt-ansible-hosted-engine-setup ovirt-ansible-reposit | 2021-06-17 11:54 | I, U      |       8
 3 | -y --nogpgcheck install ovirt-host python3-coverage vdsm-hook-vhostmd                   | 2021-06-08 02:15 | Install   |  493 EE
 2 | install -y dnf-utils https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm  | 2021-06-08 02:14 | Install   |       1
 1 |                                                                                         | 2021-06-08 02:06 | Install   |  511 EE

Meaning, it already ran setup_first_host.sh (and failed there), but
didn't run hc_setup_host.sh, although it appears before it.

If you check [1], which is a build that failed due to this reason
(unlike the later ones), you see there:

-- Captured log setup --
2021-06-07 01:58:38+,594 INFO
[ost_utils.pytest.fixtures.deployment] Waiting for SSH on the VMs
(deployment:40)
2021-06-07 01:59:11+,947 INFO
[ost_utils.deployment_utils.package_mgmt] oVirt packages used on VMs:
(package_mgmt:133)
2021-06-07 01:59:11+,948 INFO
[ost_utils.deployment_utils.package_mgmt]
vdsm-4.40.70.2-1.git34cdc8884.el8.x86_64 (package_mgmt:135)
2021-06-07 01:59:11+,950 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-1 (scripts:36)
2021-06-07 01:59:11+,950 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-2 (scripts:36)
2021-06-07 01:59:11+,952 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-0 (scripts:36)
2021-06-07 01:59:13+,260 INFO
[ost_utils.deployment_utils.scripts] Running

[ovirt-devel] CI failures due to oVirt proxy getting forbidden access to CentOS mirrors

2021-06-21 Thread Sandro Bonazzola
Hi,
we are experiencing build failures due to

*08:49:10* # /usr/bin/yum --installroot /var/lib/mock/epel-8-x86_64-6c17b26f3795335ce1ac45b0dcdbc9d5-bootstrap-8304/root/ --releasever 8 install dnf dnf-plugins-core distribution-gpg-keys --setopt=tsflags=nocontexts
*08:49:10* Failed to set locale, defaulting to C
*08:49:10* http://mirror.centos.org/centos-8/8/AppStream/x86_64/os/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden
*08:49:10* Trying other mirror.

...

*08:49:10* failure: repodata/repomd.xml from centos-appstream-el8: [Errno 256] No more mirrors to try.
*08:49:10* http://mirror.centos.org/centos-8/8/AppStream/x86_64/os/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden


(see 
https://jenkins.ovirt.org/job/ovirt-appliance_master_build-artifacts-el8-x86_64/708/console)


I checked our squid logs on proxy01.phx.ovirt.org and I see:

1623395369.584  1 38.145.50.116 TCP_DENIED/403 4148 GET
http://mirror.centos.org/centos-8/8/AppStream/x86_64/os/repodata/repomd.xml
- HIER_NONE/- text/html


Apparently there's no option in the oVirt CI system to turn off the proxy,
since mock_runner is always called with the --try-proxy option.


Anyone that can look into this?


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-devel] Re: libvirtError: internal error: unknown feature amd-sev-es

2021-06-21 Thread Nir Soffer
I'm using RHEL AV 8.5.0 nightly repos without any issue.

$ rpm -q libvirt-daemon vdsm
libvirt-daemon-7.3.0-1.module+el8.5.0+11004+f4810536.x86_64
vdsm-4.40.70.3-202106131544.git9b5c96716.el8.x86_64

Nir


[ovirt-devel] Re: Moving #vdsm to #ovirt?

2021-06-21 Thread Vojtech Juranek
On Monday, 21 June 2021 14:36:43 CEST Nir Soffer wrote:
> We had mostly dead #vdsm channel in freenode[1].
> 
> Recently there was a hostile takeover of freenode, and old freenode
> folks created
> libera[2] network. Most (all?) projects moved to this network.
> 
> We can move #vdsm to libera, but I think we have a better option, use
> #ovirt channel
> in oftc[3], which is pretty lively.

+1

> Having vdsm developers in #ovirt channel is good for the project and
> will make it easier
> to reach developers.
> 
> Moving to libera requires registration work. Moving to #ovirt requires no
> change. In both cases we need to update vdsm readme and ovirt.org.
> 
> What do you think?
> 
> [1] https://freenode.net/
> [2] https://libera.chat/
> [3] https://www.oftc.net/
> 
> Nir





[ovirt-devel] Re: ovirt-engine CI failing after the recent translation update

2021-06-21 Thread Liran Rotenberg
On Wed, Jun 9, 2021 at 4:10 PM Martin Perina  wrote:

>
>
> On Wed, Jun 9, 2021 at 2:44 PM Milan Zamazal  wrote:
>
>> Hi Scott,
>>
>> ovirt-engine CI is failing, see
>>
>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-engine_standard-check-patch/detail/ovirt-engine_standard-check-patch/12582/pipeline
>>
>> The error is:
>>
>>   UIMessagesTest.doTest:15 approveCertificateTrust does not match the
>> number of parameters in UIMessages_cs_CZ.properties
>>
>
> Here is the fix, which restores new functionality:
>
> https://gerrit.ovirt.org/c/ovirt-engine/+/115170
>
Thanks Martin!
I commented on the translation patch, as it did pass CI:

Building UI Utils Compatibility (for UICommon)

...

[INFO] --- maven-surefire-plugin:2.22.0:test (default-test) @ uicompat ---
[INFO] Tests are skipped.

IMO this is related to the flags we use in CI:
make -j1 BUILD_GWT=1 BUILD_ALL_USER_AGENTS=0 BUILD_LOCALES=0 BUILD_UT=0
BUILD_VALIDATION=0

Should we change the make flags in the CI?

>
>   ...
>>
>> Indeed, it seems the recently merged translation patch has dropped "{3}"
>> from approveCertificateTrust.  Could you please fix it so that
>> ovirt-engine CI works again?
>>
>> Thanks,
>> Milan
>>
>
>
> --
> Martin Perina
> Manager, Software Engineering
> Red Hat Czech s.r.o.


[ovirt-devel] Re: Purging inactive maintainers from vdsm-master-maintainers

2021-06-21 Thread Nir Soffer
On Mon, Jun 21, 2021 at 12:09 PM Milan Zamazal  wrote:
>
> Nir Soffer  writes:
>
> > On Mon, Jun 21, 2021 at 11:35 AM Milan Zamazal  wrote:
> >>
> >> Edward Haas  writes:
> >
> >>
> >> > On Sun, Jun 20, 2021 at 11:29 PM Nir Soffer  wrote:
> >> >
> >> >> On Mon, Dec 2, 2019 at 4:27 PM Adam Litke  wrote:
> >> >>
> >> >>> I also agree with the proposal.  It's sad to turn in my keys but I'm
> >> >>> likely unable to perform many duties expected of a maintainer at this
> >> >>> point.  I know that people can still find me via the git history :)
> >> >>>
> >> >>> On Thu, Nov 28, 2019 at 3:37 AM Milan Zamazal 
> >> >>> wrote:
> >> >>>
> >>  Dan Kenigsberg  writes:
> >> 
> >>  > On Wed, Nov 27, 2019 at 4:33 PM Francesco Romani 
> >>  > 
> >>  wrote:
> >>  >>
> >>  >> On 11/27/19 3:25 PM, Nir Soffer wrote:
> >>  >
> >>  >> > I want to remove inactive contributors from 
> >>  >> > vdsm-master-maintainers.
> >>  >> >
> >>  >> > I suggest the simple rule of 2 years of inactivity for removing 
> >>  >> > from
> >>  >> > this group,
> >>  >> > based on git log.
> >>  >> >
> >>  >> > See the list below for current status:
> >>  >> > https://gerrit.ovirt.org/#/admin/groups/106,members
> >>  >>
> >>  >>
> >>  >> No objections, keeping the list minimal and current is a good idea.
> >>  >
> >>  >
> >>  > I love removing dead code; I feel a bit different about removing old
> >>  > colleagues. Maybe I'm just being nostalgic.
> >>  >
> >>  > If we introduce this policy (which I understand is healthy), let us
> >>  > give a long warning period (6 months?) before we apply the policy to
> >>  > existing dormant maintainers. We should also make sure that we
> >>  > actively try to contact a person before he or she is dropped.
> >> 
> >>  I think this is a reasonable proposal.
> >> 
> >>  Regards,
> >>  Milan
> >> 
> >> >>>
> >> >> I forgot about this, and another year passed.
> >> >>
> >> >> Sending again, this time I added all past maintainers that may not watch
> >> >> this list.
> >> >>
> >> >
> >> > Very sad, but it makes total sense. +1
> >> > Note that other projects move past maintainers to a special group named
> >> > "emeritus_*".
> >>
> >> Not a bad idea, I think we could have such a group in Vdsm too.
> >
> > It would be nice but not part of gerrit permission configuration.
> >
> > We have an AUTHORS file, last updated in 2013. We can use this file
> > to give credit to past maintainers.
>
> AUTHORS is a better place to give credits, but the group could be also
> useful as a more reliable tracking past maintainers and in case of
> restoring maintainer rights, if such a need ever occurs.  (Yes, no way
> necessary for that but maybe nice to have.)

Gerrit has an audit log:
https://gerrit.ovirt.org/admin/groups/106,audit-log

If you don't trust it, we can add a file with this info in the project.

If we look at other projects, qemu has this file:
https://github.com/qemu/qemu/blob/master/MAINTAINERS


[ovirt-devel] Re: Purging inactive maintainers from vdsm-master-maintainers

2021-06-21 Thread Nir Soffer
On Mon, Dec 2, 2019 at 4:27 PM Adam Litke  wrote:

> I also agree with the proposal.  It's sad to turn in my keys but I'm
> likely unable to perform many duties expected of a maintainer at this
> point.  I know that people can still find me via the git history :)
>
> On Thu, Nov 28, 2019 at 3:37 AM Milan Zamazal  wrote:
>
>> Dan Kenigsberg  writes:
>>
>> > On Wed, Nov 27, 2019 at 4:33 PM Francesco Romani 
>> wrote:
>> >>
>> >> On 11/27/19 3:25 PM, Nir Soffer wrote:
>> >
>> >> > I want to remove inactive contributors from vdsm-master-maintainers.
>> >> >
>> >> > I suggest the simple rule of 2 years of inactivity for removing from
>> >> > this group,
>> >> > based on git log.
>> >> >
>> >> > See the list below for current status:
>> >> > https://gerrit.ovirt.org/#/admin/groups/106,members
>> >>
>> >>
>> >> No objections, keeping the list minimal and current is a good idea.
>> >
>> >
>> > I love removing dead code; I feel a bit different about removing old
>> > colleagues. Maybe I'm just being nostalgic.
>> >
>> > If we introduce this policy (which I understand is healthy), let us
>> > give a long warning period (6 months?) before we apply the policy to
>> > existing dormant maintainers. We should also make sure that we
>> > actively try to contact a person before he or she is dropped.
>>
>> I think this is a reasonable proposal.
>>
>> Regards,
>> Milan
>>
>
I forgot about this, and another year passed.

Sending again, this time I added all past maintainers that may not watch
this list.

Nir


[ovirt-devel] Moving #vdsm to #ovirt?

2021-06-21 Thread Nir Soffer
We had a mostly dead #vdsm channel on freenode [1].

Recently there was a hostile takeover of freenode, and the old freenode
folks created the libera [2] network. Most (all?) projects moved to this network.

We can move #vdsm to libera, but I think we have a better option: use the
#ovirt channel on oftc [3], which is pretty lively.

Having vdsm developers in the #ovirt channel is good for the project and
will make it easier to reach developers.

Moving to libera requires registration work. Moving to #ovirt requires no change.
In both cases we need to update the vdsm readme and ovirt.org.

What do you think?

[1] https://freenode.net/
[2] https://libera.chat/
[3] https://www.oftc.net/

Nir


[ovirt-devel] test_hotplug_memory fails basic and he-basic

2021-06-21 Thread Yedidyah Bar David
Hi all,

On Sun, Jun 20, 2021 at 7:18 AM  wrote:
>
> Project: 
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
> Build: 
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2060/
> Build Number: 2060

basic-suite-master started failing on test_hotplug_memory at:

https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/1247/

Run 1246 failed on a later test (due to some other (temporary) reason?);
all other runs since then have failed on test_hotplug_memory.

Didn't try to bisect engine/vdsm.

On an OST check-patch run from this afternoon, basic-suite did pass,
so it's not always failing:

https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17364/

Any idea?

Thanks and best regards,
--
Didi


[ovirt-devel] Re: OST verifed -1 broken, fails for infra issue in OST

2021-06-21 Thread Marcin Sobczyk

Hi,

On 6/14/21 1:14 PM, Nir Soffer wrote:

I got this wrong review from OST, which looks like an infra issue in OST:

Patch:
https://gerrit.ovirt.org/c/vdsm/+/115232

Error:
https://gerrit.ovirt.org/c/vdsm/+/115232#message-46ad5e75_ed543485

Failing code:

        Package(*line.split())
        for res in results.values()
>       for line in _filter_results(res['stdout'].splitlines())
    ]
E   TypeError: __new__() missing 2 required positional arguments: 'version' and 'repo'

ost_utils/ost_utils/deployment_utils/package_mgmt.py:177: TypeError

I hope someone working on OST can take a look soon.

Sure, the fix is merged already:

https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115249/
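
For context, here is a minimal sketch (an assumption, not the actual OST code)
of why that comprehension blew up: Package behaves like a namedtuple, so an
output line with fewer whitespace-separated fields than expected leaves
required fields unfilled.

from collections import namedtuple

# Assumed stand-in for the Package class used in package_mgmt.py.
Package = namedtuple("Package", ["name", "version", "repo"])

good = "vdsm-4.40.70.2 4.40.70.2 ovirt-master"   # three fields: fine
bad = "vdsm-4.40.70.2"                           # e.g. a header or wrapped line

print(Package(*good.split()))
try:
    Package(*bad.split())
except TypeError as e:
    # __new__() missing 2 required positional arguments: 'version' and 'repo'
    print(e)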

Regards, Marcin



Nir




[ovirt-devel] Re: Purging inactive maintainers from vdsm-master-maintainers

2021-06-21 Thread Milan Zamazal
Edward Haas  writes:

> On Sun, Jun 20, 2021 at 11:29 PM Nir Soffer  wrote:
>
>> On Mon, Dec 2, 2019 at 4:27 PM Adam Litke  wrote:
>>
>>> I also agree with the proposal.  It's sad to turn in my keys but I'm
>>> likely unable to perform many duties expected of a maintainer at this
>>> point.  I know that people can still find me via the git history :)
>>>
>>> On Thu, Nov 28, 2019 at 3:37 AM Milan Zamazal 
>>> wrote:
>>>
 Dan Kenigsberg  writes:

 > On Wed, Nov 27, 2019 at 4:33 PM Francesco Romani 
 wrote:
 >>
 >> On 11/27/19 3:25 PM, Nir Soffer wrote:
 >
 >> > I want to remove inactive contributors from vdsm-master-maintainers.
 >> >
 >> > I suggest the simple rule of 2 years of inactivity for removing from
 >> > this group,
 >> > based on git log.
 >> >
 >> > See the list below for current status:
 >> > https://gerrit.ovirt.org/#/admin/groups/106,members
 >>
 >>
 >> No objections, keeping the list minimal and current is a good idea.
 >
 >
 > I love removing dead code; I feel a bit different about removing old
 > colleagues. Maybe I'm just being nostalgic.
 >
 > If we introduce this policy (which I understand is healthy), let us
 > give a long warning period (6 months?) before we apply the policy to
 > existing dormant maintainers. We should also make sure that we
 > actively try to contact a person before he or she is dropped.

 I think this is a reasonable proposal.

 Regards,
 Milan

>>>
>> I forgot about this, and another year passed.
>>
>> Sending again, this time I added all past maintainers that may not watch
>> this list.
>>
>
> Very sad, but it makes total sense. +1
> Note that other projects move past maintainers to a special group named
> "emeritus_*".

Not a bad idea, I think we could have such a group in Vdsm too.

Regards,
Milan


[ovirt-devel] Re: Moving #vdsm to #ovirt?

2021-06-21 Thread Milan Zamazal
Nir Soffer  writes:

> We had mostly dead #vdsm channel in freenode[1].
>
> Recently there was a hostile takeover of freenode, and old freenode
> folks created
> libera[2] network. Most (all?) projects moved to this network.
>
> We can move #vdsm to libera, but I think we have a better option, use
> #ovirt channel
> in oftc[3], which is pretty lively.
>
> Having vdsm developers in #ovirt channel is good for the project and
> will make it easier
> to reach developers.

Yes, makes sense.

> Moving to libera requires registration work. Moving to #ovirt requires no 
> change.
> In both cases we need to update vdsm readme and ovirt.org.
>
> What do you think?
>
> [1] https://freenode.net/
> [2] https://libera.chat/
> [3] https://www.oftc.net/
>
> Nir


[ovirt-devel] Re: hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Marcin Sobczyk



On 6/17/21 6:59 PM, Yedidyah Bar David wrote:

On Thu, Jun 17, 2021 at 6:27 PM Marcin Sobczyk  wrote:



On 6/17/21 1:44 PM, Yedidyah Bar David wrote:

On Wed, Jun 16, 2021 at 1:23 PM Yedidyah Bar David  wrote:

Hi,

I now tried running locally hc-basic-suite-master with a patched OST,
and it failed due to $subject. I checked and see that this also
happened on CI, e.g. [1], before it started failing due to an unrelated
reason later:

E   TASK [gluster.infra/roles/firewall_config : Add/Delete
services to firewalld rules] ***
E   failed: [lago-hc-basic-suite-master-host-0]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-2]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-1]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}

This seems similar to [2], and indeed I can't see the package
'glusterfs-server' installed locally on host-0. Any idea?

I think I understand:

It seems like the deployment of hc relied on the order of running the deploy
scripts as written in lagoinitfile. With the new deploy code, all of them run
in parallel. Does this make sense?

The scripts run in parallel as in "on all VMs at the same time", but
sequentially
as in "one script at a time on each VM" - this is the same behavior we
had with lago deployment.
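
As an illustration only (an assumption about the model described above, not
OST's actual code), "parallel across VMs, sequential per VM" boils down to
something like this:

from concurrent.futures import ThreadPoolExecutor

def run_script(host, script):
    # Placeholder for the real SSH invocation OST performs.
    print(f"running {script} on {host}")

def deploy_host(host, scripts):
    for script in scripts:               # sequential on each VM
        run_script(host, script)

def deploy(deployments):
    with ThreadPoolExecutor() as pool:   # parallel across VMs
        futures = [pool.submit(deploy_host, h, s) for h, s in deployments.items()]
        for f in futures:
            f.result()                   # propagate any failure

deploy({
    "host-0": ["setup_host.sh", "hc_setup_host.sh", "setup_first_host.sh"],
    "host-1": ["setup_host.sh", "hc_setup_host.sh"],
})

If that model holds, a host running setup_first_host.sh before hc_setup_host.sh
would point at the per-host script list being built in the wrong order rather
than at cross-host parallelism.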

Well, I do not think it works as intended, then. When running locally,
I logged into host-0, and after it failed, I had:

# dnf history
ID | Command line                                                                            | Date and time    | Action(s) | Altered
-------------------------------------------------------------------------------------------------------------------------------------
 4 | install -y --nogpgcheck ansible gluster-ansible-roles ovirt-hosted-engine-setup ovirt-ansible-hosted-engine-setup ovirt-ansible-reposit | 2021-06-17 11:54 | I, U      |       8
 3 | -y --nogpgcheck install ovirt-host python3-coverage vdsm-hook-vhostmd                   | 2021-06-08 02:15 | Install   |  493 EE
 2 | install -y dnf-utils https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm  | 2021-06-08 02:14 | Install   |       1
 1 |                                                                                         | 2021-06-08 02:06 | Install   |  511 EE

Meaning, it already ran setup_first_host.sh (and failed there), but
didn't run hc_setup_host.sh, although it appears before it.

If you check [1], which is a build that failed due to this reason
(unlike the later ones), you see there:

-- Captured log setup --
2021-06-07 01:58:38+,594 INFO
[ost_utils.pytest.fixtures.deployment] Waiting for SSH on the VMs
(deployment:40)
2021-06-07 01:59:11+,947 INFO
[ost_utils.deployment_utils.package_mgmt] oVirt packages used on VMs:
(package_mgmt:133)
2021-06-07 01:59:11+,948 INFO
[ost_utils.deployment_utils.package_mgmt]
vdsm-4.40.70.2-1.git34cdc8884.el8.x86_64 (package_mgmt:135)
2021-06-07 01:59:11+,950 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-1 (scripts:36)
2021-06-07 01:59:11+,950 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-2 (scripts:36)
2021-06-07 01:59:11+,952 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-0 (scripts:36)
2021-06-07 01:59:13+,260 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/hc_setup_host.sh
on lago-hc-basic-suite-master-host-1 

[ovirt-devel] cloning LVM partitions

2021-06-21 Thread duparchy
Hi, 
What are the internal commands used to clone LVM partitions?

(you can simply point me to the source code where this is done)

Thanks


[ovirt-devel] libvirtError: internal error: unknown feature amd-sev-es

2021-06-21 Thread Vojtech Juranek
Hi,
I moved to CentOS Stream on the hosts as I need a recent sanlock package due to
the vdsm dependency on it.
After moving to Stream and updating packages, all my hosts fail with the exception
below. Below are also the
libvirt and vdsm versions and the VM XML dump.

Yesterday I tried a more recent libvirt, but IIRC got an SELinux exception when
connecting to libvirt.

Do you know how to fix this issue, or basically how to create a working env where
I would be able to install a recent vdsm?

Thanks
Vojta

2021-06-16 05:23:01,275-0400 ERROR (jsonrpc/5) [root] Error while getting domain capabilities (machinetype:92)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/common/cache.py", line 41, in __call__
    return self.cache[args]
KeyError: ()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/machinetype.py", line 90, in _get_domain_capabilities
    domcaps = conn.getDomainCapabilities(None, arch, None, virt_type, 0)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 4493, in getDomainCapabilities
    raise libvirtError('virConnectGetDomainCapabilities() failed')
libvirt.libvirtError: internal error: unknown feature amd-sev-es
2021-06-16 05:23:01,277-0400 ERROR (jsonrpc/5) [root] Error while getting CPU features: no domain capabilities found (machinetype:188)
2021-06-16 05:23:01,278-0400 ERROR (jsonrpc/5) [root] Error while getting domain capabilities (machinetype:92)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/common/cache.py", line 41, in __call__
    return self.cache[args]
KeyError: ()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/machinetype.py", line 90, in _get_domain_capabilities
    domcaps = conn.getDomainCapabilities(None, arch, None, virt_type, 0)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 4493, in getDomainCapabilities
    raise libvirtError('virConnectGetDomainCapabilities() failed')
libvirt.libvirtError: internal error: unknown feature amd-sev-es
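
For reference, a minimal sketch (an assumption, not part of the original
report) that reproduces the same libvirt call outside vdsm; the architecture
and virt type passed here are assumptions:

import libvirt

conn = libvirt.open("qemu:///system")
try:
    # vdsm calls this per architecture and virt type.
    caps = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    print(caps[:200])
except libvirt.libvirtError as e:
    print("getDomainCapabilities failed:", e)
finally:
    conn.close()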



[root@localhost ~]# rpm -qa|grep libvirt
libvirt-daemon-driver-storage-logical-7.0.0-14.1.el8.x86_64
python3-libvirt-7.0.0-1.el8.x86_64
libvirt-daemon-config-nwfilter-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-interface-7.0.0-14.1.el8.x86_64
libvirt-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-disk-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-iscsi-7.0.0-14.1.el8.x86_64
libvirt-client-7.0.0-14.1.el8.x86_64
libvirt-admin-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-network-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-secret-7.0.0-14.1.el8.x86_64
libvirt-lock-sanlock-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-gluster-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-rbd-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-7.0.0-14.1.el8.x86_64
libvirt-daemon-7.0.0-14.1.el8.x86_64
libvirt-daemon-config-network-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-iscsi-direct-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-scsi-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-nodedev-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-qemu-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-core-7.0.0-14.1.el8.x86_64
libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
libvirt-bash-completion-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-storage-mpath-7.0.0-14.1.el8.x86_64
libvirt-libs-7.0.0-14.1.el8.x86_64
libvirt-daemon-driver-nwfilter-7.0.0-14.1.el8.x86_64


[root@localhost ~]# rpm -qa|grep vdsm
vdsm-http-4.40.60.7-1.el8.noarch
vdsm-hook-fcoe-4.40.60.7-1.el8.noarch
vdsm-network-4.40.60.7-1.el8.x86_64
vdsm-yajsonrpc-4.40.60.7-1.el8.noarch
vdsm-api-4.40.60.7-1.el8.noarch
vdsm-hook-openstacknet-4.40.60.7-1.el8.noarch
vdsm-python-4.40.60.7-1.el8.noarch
vdsm-hook-vhostmd-4.40.60.7-1.el8.noarch
vdsm-hook-vmfex-dev-4.40.60.7-1.el8.noarch
vdsm-common-4.40.60.7-1.el8.noarch
vdsm-client-4.40.60.7-1.el8.noarch
vdsm-4.40.60.7-1.el8.x86_64
vdsm-hook-ethtool-options-4.40.60.7-1.el8.noarch
vdsm-jsonrpc-4.40.60.7-1.el8.noarch


Host VM XML dump (the XML tags were stripped by the mailing list archive and
the dump is truncated; the values that survive are: domain name
centos82-host2-mig, UUID e1ee2430-37af-4ce1-b74a-7981895b5789, libosinfo
metadata pointing at http://centos.org/centos/8, memory and currentMemory of
2097152, 2 vCPUs, resource partition /machine, OS type hvm, and
on_poweroff/on_reboot/on_crash set to destroy/restart/destroy).

[ovirt-devel] Re: Moving #vdsm to #ovirt?

2021-06-21 Thread Sandro Bonazzola
On Mon, Jun 21, 2021 at 2:37 PM Nir Soffer wrote:

> We had mostly dead #vdsm channel in freenode[1].
>
> Recently there was a hostile takeover of freenode, and old freenode
> folks created
> libera[2] network. Most (all?) projects moved to this network.
>
> We can move #vdsm to libera, but I think we have a better option, use
> #ovirt channel
> in oftc[3], which is pretty lively.
>
> Having vdsm developers in #ovirt channel is good for the project and
> will make it easier
> to reach developers.
>
> Moving to libera requires registration work. Moving to #ovirt requires no
> change.
> In both cases we need to update vdsm readme and ovirt.org.
>
> What do you think?
>

+1 for using #ovirt. I wasn't even aware we had #vdsm on freenode.



>
> [1] https://freenode.net/
> [2] https://libera.chat/
> [3] https://www.oftc.net/
>
> Nir
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-devel] Re: Purging inactive maintainers from vdsm-master-maintainers

2021-06-21 Thread Milan Zamazal
Nir Soffer  writes:

> On Mon, Jun 21, 2021 at 12:09 PM Milan Zamazal  wrote:
>>
>> Nir Soffer  writes:
>
>>
>> > On Mon, Jun 21, 2021 at 11:35 AM Milan Zamazal  wrote:
>> >>
>> >> Edward Haas  writes:
>> >
>> >>
>> >> > On Sun, Jun 20, 2021 at 11:29 PM Nir Soffer  wrote:
>> >> >
>> >> >> On Mon, Dec 2, 2019 at 4:27 PM Adam Litke  wrote:
>> >> >>
>> >> >>> I also agree with the proposal.  It's sad to turn in my keys but I'm
>> >> >>> likely unable to perform many duties expected of a maintainer at this
>> >> >>> point.  I know that people can still find me via the git history :)
>> >> >>>
>> >> >>> On Thu, Nov 28, 2019 at 3:37 AM Milan Zamazal 
>> >> >>> wrote:
>> >> >>>
>> >>  Dan Kenigsberg  writes:
>> >> 
>> >>  > On Wed, Nov 27, 2019 at 4:33 PM Francesco Romani 
>> >>  > 
>> >>  wrote:
>> >>  >>
>> >>  >> On 11/27/19 3:25 PM, Nir Soffer wrote:
>> >>  >
>> >>  >> > I want to remove inactive contributors from 
>> >>  >> > vdsm-master-maintainers.
>> >>  >> >
>> >>  >> > I suggest the simple rule of 2 years of inactivity for removing 
>> >>  >> > from
>> >>  >> > this group,
>> >>  >> > based on git log.
>> >>  >> >
>> >>  >> > See the list below for current status:
>> >>  >> > https://gerrit.ovirt.org/#/admin/groups/106,members
>> >>  >>
>> >>  >>
>> >>  >> No objections, keeping the list minimal and current is a good 
>> >>  >> idea.
>> >>  >
>> >>  >
>> >>  > I love removing dead code; I feel a bit different about removing 
>> >>  > old
>> >>  > colleagues. Maybe I'm just being nostalgic.
>> >>  >
>> >>  > If we introduce this policy (which I understand is healthy), let us
>> >>  > give a long warning period (6 months?) before we apply the policy 
>> >>  > to
>> >>  > existing dormant maintainers. We should also make sure that we
>> >>  > actively try to contact a person before he or she is dropped.
>> >> 
>> >>  I think this is a reasonable proposal.
>> >> 
>> >>  Regards,
>> >>  Milan
>> >> 
>> >> >>>
>> >> >> I forgot about this, and another year passed.
>> >> >>
>> >> >> Sending again, this time I added all past maintainers that may not 
>> >> >> watch
>> >> >> this list.
>> >> >>
>> >> >
>> >> > Very sad, but it makes total sense. +1
>> >> > Note that other projects move past maintainers to a special group named
>> >> > "emeritus_*".
>> >>
>> >> Not a bad idea, I think we could have such a group in Vdsm too.
>> >
>> > It would be nice but not part of gerrit permission configuration.
>> >
>> > We have an AUTHORS file, last updated in 2013. We can use this file
>> > to give credit to past maintainers.
>>
>> AUTHORS is a better place to give credits, but the group could be also
>> useful as a more reliable tracking past maintainers and in case of
>> restoring maintainer rights, if such a need ever occurs.  (Yes, no way
>> necessary for that but maybe nice to have.)
>
> Gerrit has an audit log:
> https://gerrit.ovirt.org/admin/groups/106,audit-log

Ah, that looks good enough, I don't think anything more is needed.

> If you don't trust it, we can add a file with this info in the project.
>
> If we look at other projects, qemu has this file:
> https://github.com/qemu/qemu/blob/master/MAINTAINERS


[ovirt-devel] Re: OST gating failed on - test_import_vm1

2021-06-21 Thread Michal Skrivanek


> On 16. 6. 2021, at 10:03, Eyal Shenitzky  wrote:
> 
> Thanks for looking into it Michal.
> 
> Actually, my patch is related to incremental backup, so nothing changed 
> around the snapshot area, and I believe the failure isn't related to it.
> 
> I re-ran OST for this change - 
> https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/6795/
>  
> .
> 
> Let's see if it works fine.

it still needs to be investigated and fixed

> 
> On Tue, 15 Jun 2021 at 14:00, Michal Skrivanek  > wrote:
> 
> 
>> On 15. 6. 2021, at 12:00, Eyal Shenitzky > > wrote:
>> 
>> Hi All,
>> 
>> As part of OST gating verification, the verification failed with the 
>> following message - 
>> 
>> gating2 (43) : OST build 6687 failed with: test_import_vm1 failed:
>> 
>> engine = 
>> event_id = [1165], timeout = 600
>> 
>> @contextlib.contextmanager
>> def wait_for_event(engine, event_id, timeout=assertions.LONG_TIMEOUT):
>> '''
>> event_id could either be an int - a single
>> event ID or a list - multiple event IDs
>> that all will be checked
>> '''
>> events = engine.events_service()
>> last_event = int(events.list(max=2)[0].id)
>> try:
>> >   yield
>> 
>> ost_utils/ost_utils/engine_utils.py:36: 
>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
>> _ _ 
>> 
>> engine = 
>> correlation_id = 'test_validate_ova_import_vm', vm_name = 'imported_vm'
>> imported_url = 'ova:///var/tmp/ova_vm.ova <>', storage_domain = 'iscsi'
>> cluster_name = 'test-cluster'
>> 
>> def _import_ova(engine, correlation_id, vm_name, imported_url, 
>> storage_domain, cluster_name):
>> sd = 
>> engine.storage_domains_service().list(search='name={}'.format(storage_domain))[0]
>> cluster = 
>> engine.clusters_service().list(search='name={}'.format(cluster_name))[0]
>> imports_service = engine.external_vm_imports_service()
>> host = test_utils.get_first_active_host_by_name(engine)
>> 
>> with engine_utils.wait_for_event(engine, 1165): # 
>> IMPORTEXPORT_STARTING_IMPORT_VM
>> imports_service.add(
>> types.ExternalVmImport(
>> name=vm_name,
>> provider=types.ExternalVmProviderType.KVM,
>> url=imported_url,
>> cluster=types.Cluster(
>> id=cluster.id 
>> ),
>> storage_domain=types.StorageDomain(
>> id=sd.id 
>> ),
>> host=types.Host(
>> id=host.id 
>> ),
>> sparse=True
>> >   ), async_=True, query={'correlation_id': correlation_id}
>> )
>> 
>> basic-suite-master/test-scenarios/test_004_basic_sanity.py:935: 
>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
>> _ _ 
>> 
>> self = 
>> import_ = 
>> headers = None, query = {'correlation_id': 'test_validate_ova_import_vm'}
>> wait = True, kwargs = {'async_': True}
>> 
>> def add(
>> self,
>> import_,
>> headers=None,
>> query=None,
>> wait=True,
>> **kwargs
>> ):
>> """
>> This operation is used to import a virtual machine from external hypervisor, 
>> such as KVM, XEN or VMware.
>> For example import of a virtual machine from VMware can be facilitated using 
>> the following request:
>> [source]
>> 
>> POST /externalvmimports
>> 
>> With request body of type <>, for 
>> example:
>> [source,xml]
>> 
>> 
>> 
>> my_vm
>> 
>> 
>> 
>> vm_name_as_is_in_vmware
>> true
>> vmware_user
>> 123456
>> VMWARE
>> vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1
>>  <>
>> 
>> 
>> 
>> 
>> 
>> """
>> # Check the types of the parameters:
>> Service._check_types([
>> ('import_', import_, types.ExternalVmImport),
>> ])
>> 
>> # Build the URL:
>> 
>> Patch set 4:Verified -1
>> 
>> 
>> 
>> The OST run as part of verification for patch - 
>> https://gerrit.ovirt.org/#/c/ovirt-engine/+/115192/ 
>> 
>> 
>> Can someone from Virt/OST team have a look?
> 
> you should be able to review logs in generic way
> 
> you can ee
> 2021-06-15 11:08:37,515+02 ERROR 
> [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand] 
> (default task-2) [test_validate_ova_import_vm] Exception: 
> java.lang.NullPointerException
>   at 
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand$ExternalVmImporter.performImport(ImportVmFromExternalUrlCommand.java:116)
>   at 
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand.executeCommand(ImportVmFromExternalUrlCommand.java:65)
>   at 
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1174)
>   at 
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1332)
>   at 
> 

[ovirt-devel] Re: Gerrit hook adds unrelated patches to bugs

2021-06-21 Thread Eyal Shenitzky
+Dusan Fodor 

On Mon, 21 Jun 2021 at 13:32, Nir Soffer  wrote:

> The Gerrit hook is wrongly looking for https://bugzilla.redhat.com/ URLs
> anywhere in the commit message, and adding the patch to the bug.
>
> Example patch:
> https://gerrit.ovirt.org/c/vdsm/+/115339
>
> I had to clean up the bug after the broken hook (see screenshot).
>
> The hook should really look only at the single URL in each of the (one or
> more) Bug-Url headers:
>
> Bug-Url: https://bugzilla.redhat.com/
>
> I reported this years ago (I think for Related-To:), and I remember we had
> a patch fixing this issue, but for some reason it was lost.
>
> Nir
>
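
For context, a minimal sketch (an assumption, not the actual hook code) of
matching only explicit Bug-Url headers instead of any bugzilla URL found in
the commit message:

import re

# Match whole "Bug-Url: https://bugzilla.redhat.com/..." lines; both the plain
# and the show_bug.cgi?id= URL forms are assumed here.
BUG_URL = re.compile(
    r"^Bug-Url:\s*https://bugzilla\.redhat\.com/(?:show_bug\.cgi\?id=)?(\d+)\s*$",
    re.MULTILINE,
)

def bug_ids(commit_message):
    return [int(m.group(1)) for m in BUG_URL.finditer(commit_message)]

msg = """vdsm: fix something

See https://bugzilla.redhat.com/1961558 for background.

Bug-Url: https://bugzilla.redhat.com/1234567
"""
print(bug_ids(msg))  # [1234567] - the URL in the body text is ignored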


-- 
Regards,
Eyal Shenitzky


[ovirt-devel] hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Yedidyah Bar David
Hi,

I now tried running locally hc-basic-suite-master with a patched OST,
and it failed due to $subject. I checked and see that this also
happened on CI, e.g. [1], before it started failing due to an unrelated
reason later:

E   TASK [gluster.infra/roles/firewall_config : Add/Delete
services to firewalld rules] ***
E   failed: [lago-hc-basic-suite-master-host-0]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-2]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-1]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}

This seems similar to [2], and indeed I can't see the package
'glusterfs-server' installed locally on host-0. Any idea?
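
As a side note, a minimal sketch (an assumption, not OST or gluster-ansible
code) of checking for the firewalld service that the error above complains
about:

import subprocess

def firewalld_has_service(name):
    # firewall-cmd --get-services prints the space-separated list of known services.
    out = subprocess.run(
        ["firewall-cmd", "--get-services"],
        capture_output=True, text=True, check=True,
    ).stdout
    return name in out.split()

# The glusterfs service definition is expected to come with glusterfs-server,
# so this returning False would match the missing-package observation above.
print(firewalld_has_service("glusterfs"))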

Thanks and best regards,

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/2088/

[2] https://github.com/oVirt/ovirt-ansible/issues/124
-- 
Didi


[ovirt-devel] Re: oVirt CI mirror for centos stream advancedvirt-common repo

2021-06-21 Thread Sandro Bonazzola
On Fri, Jun 4, 2021 at 7:44 PM Nir Soffer wrote:

> I updated the vdsm libvirt requirement to 7.0.0-14. The package exists in
> CentOS
> Stream and RHEL, so the change is correct, but the install check fails in
> the CI.
>
> I found that adding the repo to check-patch.repos works:
> https://gerrit.ovirt.org/c/vdsm/+/115039/4/automation/check-patch.repos
>
> But depending on mirror.centos.org does not feel like the right way. I
> think
> keeping a local mirror is the right way.
>

Please note that although we are pointing to CentOS mirrors, CI is running
behind a proxy, so we are already caching the RPMs we consume in our datacenter.
That said, we can mirror the advanced virtualization repos as well, and the
local mirror will be picked up automatically.

I see we are already mirroring the test repo for CentOS Linux:
./data/mirrors-reposync.conf:
   70 : [ovirt-master-centos-advanced-virtualization-el8]
   72 : baseurl=
https://buildlogs.centos.org/centos/8/virt/x86_64/advanced-virtualization/

For CentOS Stream we'll need an addition there. Please open a ticket on
infra-supp...@ovirt.org about it.



> It looks like the stdci docs[1] have not been maintained for a while,
> listing mirrors for
> fedora 30 and centos 7.
>

I would suggest opening a ticket for this as well



>
> Looking in the mirrors jobs[2] we have advanced-virtualization for
> centos[3] but
> it holds old versions (6.x).
>
> Can we add a local mirror for this repo?
>
> [1]
> https://ovirt-infra-docs.readthedocs.io/en/latest/CI/List_of_mirrors/index.html
> [2] https://jenkins.ovirt.org/search/?q=system-sync_mirrors
> [3]
> https://buildlogs.centos.org/centos/8/virt/x86_64/advanced-virtualization/Packages/l/
>
> Nir
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UU3ZOUS7425IQKCGMYOVVOWD5VFYI5DI/


[ovirt-devel] Re: libvirtError: internal error: unknown feature amd-sev-es

2021-06-21 Thread vjuranek
Btw. BZ #1961558 [1] says that it happens for ,
but for me it also fails for , which
worked for hosts on CentOS 8.2.
Vojta

[1] https://bugzilla.redhat.com/1961558
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CEQ23CYWVBGX35OGSRMK5FDS72UTH6A6/


[ovirt-devel] Re: CI failures due to oVirt proxy getting forbidden access to CentOS mirrors

2021-06-21 Thread Sandro Bonazzola
Tracked on https://ovirt-jira.atlassian.net/browse/OVIRT-3097

Il giorno ven 11 giu 2021 alle ore 09:40 Sandro Bonazzola <
sbona...@redhat.com> ha scritto:

> Hi,
> we are experiencing build failures due to
>
> *08:49:10* # /usr/bin/yum --installroot 
> /var/lib/mock/epel-8-x86_64-6c17b26f3795335ce1ac45b0dcdbc9d5-bootstrap-8304/root/
>  --releasever 8 install dnf dnf-plugins-core distribution-gpg-keys 
> --setopt=tsflags=nocontexts
>
>  
> *08:49:10*
>  Failed to set locale, defaulting to C
>
>  
> *08:49:10*
>  http://mirror.centos.org/centos-8/8/AppStream/x86_64/os/repodata/repomd.xml: 
> [Errno 14] HTTP Error 403 - Forbidden
>
>  
> *08:49:10*
>  Trying other mirror.
>
> ...
>
> *08:49:10* failure: repodata/repomd.xml from centos-appstream-el8: [Errno 
> 256] No more mirrors to try.
>
>  
> *08:49:10*
>  http://mirror.centos.org/centos-8/8/AppStream/x86_64/os/repodata/repomd.xml: 
> [Errno 14] HTTP Error 403 - Forbidden
>
>
> (see 
> https://jenkins.ovirt.org/job/ovirt-appliance_master_build-artifacts-el8-x86_64/708/console)
>
>
> I checked our squid logs on proxy01.phx.ovirt.org and I see:
>
> 1623395369.584  1 38.145.50.116 TCP_DENIED/403 4148 GET 
> http://mirror.centos.org/centos-8/8/AppStream/x86_64/os/repodata/repomd.xml - 
> HIER_NONE/- text/html
>
>
> Apparently there's no option in the oVirt CI system to turn off the proxy, since 
> mock_runner is always called with the --try-proxy option.
>
>
> Anyone that can look into this?
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/P4QYC4ZTSSOQLN4CHPRRZS7W6VPV4C6P/


[ovirt-devel] OST gating failed on - test_import_vm1

2021-06-21 Thread Eyal Shenitzky
Hi All,

As part of OST gating verification, the verification failed with the
following message -

gating2 (43) : OST build 6687 failed with: test_import_vm1 failed:

engine = 
event_id = [1165], timeout = 600

    @contextlib.contextmanager
    def wait_for_event(engine, event_id, timeout=assertions.LONG_TIMEOUT):
        '''
        event_id could either be an int - a single
        event ID or a list - multiple event IDs
        that all will be checked
        '''
        events = engine.events_service()
        last_event = int(events.list(max=2)[0].id)
        try:
>           yield

ost_utils/ost_utils/engine_utils.py:36:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _

engine = 
correlation_id = 'test_validate_ova_import_vm', vm_name = 'imported_vm'
imported_url = 'ova:///var/tmp/ova_vm.ova', storage_domain = 'iscsi'
cluster_name = 'test-cluster'

    def _import_ova(engine, correlation_id, vm_name, imported_url,
                    storage_domain, cluster_name):
        sd = engine.storage_domains_service().list(search='name={}'.format(storage_domain))[0]
        cluster = engine.clusters_service().list(search='name={}'.format(cluster_name))[0]
        imports_service = engine.external_vm_imports_service()
        host = test_utils.get_first_active_host_by_name(engine)

        with engine_utils.wait_for_event(engine, 1165):  # IMPORTEXPORT_STARTING_IMPORT_VM
            imports_service.add(
                types.ExternalVmImport(
                    name=vm_name,
                    provider=types.ExternalVmProviderType.KVM,
                    url=imported_url,
                    cluster=types.Cluster(
                        id=cluster.id
                    ),
                    storage_domain=types.StorageDomain(
                        id=sd.id
                    ),
                    host=types.Host(
                        id=host.id
                    ),
                    sparse=True
>               ), async_=True, query={'correlation_id': correlation_id}
            )

basic-suite-master/test-scenarios/test_004_basic_sanity.py:935:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _

self = 
import_ = 
headers = None, query = {'correlation_id': 'test_validate_ova_import_vm'}
wait = True, kwargs = {'async_': True}

def add(
self,
import_,
headers=None,
query=None,
wait=True,
**kwargs
):
"""
This operation is used to import a virtual machine from external
hypervisor, such as KVM, XEN or VMware.
For example import of a virtual machine from VMware can be facilitated
using the following request:
[source]

POST /externalvmimports

With request body of type <>,
for example:
[source,xml]



my_vm



vm_name_as_is_in_vmware
true
vmware_user
123456
VMWARE
vpx://wmware_user@vcenter-host
/DataCenter/Cluster/esxi-host?no_verify=1





"""
# Check the types of the parameters:
Service._check_types([
('import_', import_, types.ExternalVmImport),
])

# Build the URL:

Patch set 4:Verified -1


The OST run as part of verification for patch -
https://gerrit.ovirt.org/#/c/ovirt-engine/+/115192/

Can someone from Virt/OST team have a look?


-- 
Regards,
Eyal Shenitzky
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XTRIJPHK6MDOBDKKJGYS7PMOW57IKSKM/


[ovirt-devel] ovirt-engine has been tagged (ovirt-engine-4.4.7.4)

2021-06-21 Thread Tal Nisan

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VYLOSDTN5X4FCM6YEJ5KNFQXVDMK66ES/


[ovirt-devel] Re: Moving #vdsm to #ovirt?

2021-06-21 Thread Ales Musil
On Mon, Jun 21, 2021 at 2:37 PM Nir Soffer  wrote:

> We had a mostly dead #vdsm channel on freenode[1].
>
> Recently there was a hostile takeover of freenode, and the old freenode
> folks created the libera[2] network. Most (all?) projects moved to this
> network.
>
> We can move #vdsm to libera, but I think we have a better option: use
> the #ovirt channel in oftc[3], which is pretty lively.
>

 +1


>
> Having vdsm developers in the #ovirt channel is good for the project and
> will make it easier to reach developers.
>

I think most of the developers are already on both channels so in this case
it should not be an issue.


>
> Moving to libera requires registration work. Moving to #ovirt requires no
> change.
> In both cases we need to update the vdsm README and ovirt.org.
>
> What do you think?
>
> [1] https://freenode.net/
> [2] https://libera.chat/
> [3] https://www.oftc.net/
>
> Nir
>
>

Thanks,
Ales

-- 

Ales Musil

Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/A2PJKWOXII5P2KB7ZLNCULHMMMSLMIWG/


[ovirt-devel] Re: Purging inactive maintainers from vdsm-master-maintainers

2021-06-21 Thread Nir Soffer
On Mon, Jun 21, 2021 at 11:35 AM Milan Zamazal  wrote:
>
> Edward Haas  writes:
>
> > On Sun, Jun 20, 2021 at 11:29 PM Nir Soffer  wrote:
> >
> >> On Mon, Dec 2, 2019 at 4:27 PM Adam Litke  wrote:
> >>
> >>> I also agree with the proposal.  It's sad to turn in my keys but I'm
> >>> likely unable to perform many duties expected of a maintainer at this
> >>> point.  I know that people can still find me via the git history :)
> >>>
> >>> On Thu, Nov 28, 2019 at 3:37 AM Milan Zamazal 
> >>> wrote:
> >>>
>  Dan Kenigsberg  writes:
> 
>  > On Wed, Nov 27, 2019 at 4:33 PM Francesco Romani 
>  wrote:
>  >>
>  >> On 11/27/19 3:25 PM, Nir Soffer wrote:
>  >
>  >> > I want to remove inactive contributors from vdsm-master-maintainers.
>  >> >
>  >> > I suggest the simple rule of 2 years of inactivity for removing from
>  >> > this group,
>  >> > based on git log.
>  >> >
>  >> > See the list below for current status:
>  >> > https://gerrit.ovirt.org/#/admin/groups/106,members
>  >>
>  >>
>  >> No objections, keeping the list minimal and current is a good idea.
>  >
>  >
>  > I love removing dead code; I feel a bit different about removing old
>  > colleagues. Maybe I'm just being nostalgic.
>  >
>  > If we introduce this policy (which I understand is healthy), let us
>  > give a long warning period (6 months?) before we apply the policy to
>  > existing dormant maintainers. We should also make sure that we
>  > actively try to contact a person before he or she is dropped.
> 
>  I think this is a reasonable proposal.
> 
>  Regards,
>  Milan
> 
> >>>
> >> I forgot about this, and another year passed.
> >>
> >> Sending again, this time I added all past maintainers that may not watch
> >> this list.
> >>
> >
> > Very sad, but it makes total sense. +1
> > Note that other projects move past maintainers to a special group named
> > "emeritus_*".
>
> Not a bad idea, I think we could have such a group in Vdsm too.

It would be nice but not part of gerrit permission configuration.

We have an AUTHORS file, last updated in 2013. We can use this file
to give credit to past maintainers.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YLAXQA7RUHWNR6N5HVAGTFQ7BG3MKBAP/


[ovirt-devel] basic-suite-master is broken

2021-06-21 Thread Yedidyah Bar David
Failed last 5 runs:

https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/

Known issue?
-- 
Didi
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6TGEYYGCBFNPPRFRY7NGLKEMMNOWB4CO/


[ovirt-devel] Re: Purging inactive maintainers from vdsm-master-maintainers

2021-06-21 Thread Milan Zamazal
Nir Soffer  writes:

> On Mon, Jun 21, 2021 at 11:35 AM Milan Zamazal  wrote:
>>
>> Edward Haas  writes:
>
>>
>> > On Sun, Jun 20, 2021 at 11:29 PM Nir Soffer  wrote:
>> >
>> >> On Mon, Dec 2, 2019 at 4:27 PM Adam Litke  wrote:
>> >>
>> >>> I also agree with the proposal.  It's sad to turn in my keys but I'm
>> >>> likely unable to perform many duties expected of a maintainer at this
>> >>> point.  I know that people can still find me via the git history :)
>> >>>
>> >>> On Thu, Nov 28, 2019 at 3:37 AM Milan Zamazal 
>> >>> wrote:
>> >>>
>>  Dan Kenigsberg  writes:
>> 
>>  > On Wed, Nov 27, 2019 at 4:33 PM Francesco Romani 
>>  wrote:
>>  >>
>>  >> On 11/27/19 3:25 PM, Nir Soffer wrote:
>>  >
>>  >> > I want to remove inactive contributors from 
>>  >> > vdsm-master-maintainers.
>>  >> >
>>  >> > I suggest the simple rule of 2 years of inactivity for removing 
>>  >> > from
>>  >> > this group,
>>  >> > based on git log.
>>  >> >
>>  >> > See the list below for current status:
>>  >> > https://gerrit.ovirt.org/#/admin/groups/106,members
>>  >>
>>  >>
>>  >> No objections, keeping the list minimal and current is a good idea.
>>  >
>>  >
>>  > I love removing dead code; I feel a bit different about removing old
>>  > colleagues. Maybe I'm just being nostalgic.
>>  >
>>  > If we introduce this policy (which I understand is healthy), let us
>>  > give a long warning period (6 months?) before we apply the policy to
>>  > existing dormant maintainers. We should also make sure that we
>>  > actively try to contact a person before he or she is dropped.
>> 
>>  I think this is a reasonable proposal.
>> 
>>  Regards,
>>  Milan
>> 
>> >>>
>> >> I forgot about this, and another year passed.
>> >>
>> >> Sending again, this time I added all past maintainers that may not watch
>> >> this list.
>> >>
>> >
>> > Very sad, but it makes total sense. +1
>> > Note that other projects move past maintainers to a special group named
>> > "emeritus_*".
>>
>> Not a bad idea, I think we could have such a group in Vdsm too.
>
> It would be nice but not part of gerrit permission configuration.
>
> We have an AUTHORS file, last updated in 2013. We can use this file
> to give credit to past maintainers.

AUTHORS is a better place to give credit, but the group could also be
useful as a more reliable way of tracking past maintainers, and in case
maintainer rights ever need to be restored.  (Yes, by no means
necessary, but maybe nice to have.)
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6XDTLKXGAMDW5CZCVW6QPVRIOYVAXOIZ/


[ovirt-devel] Re: AV 8.4 for CentOS Linux

2021-06-21 Thread Dana Elfassy
Hi,
I'm getting packages conflicts when trying to upgrade my Centos8.4 and
Centos-Stream hosts.
(Centos Stream was installed from iso, then I
installed ovirt-release-master.rpm and deployed the host)
The details below are the output for Centos-Stream
* The packages conflicts occur also on OST -
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/7211/console

Do you know what could've caused this and how it can be fixed?
Thanks,
Dana

[root@localhost ~]# rpm -q vdsm
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64

[root@localhost ~]# dnf module list virt
Last metadata expiration check: 1:09:54 ago on Sun 20 Jun 2021 05:09:50 AM
EDT.
CentOS Stream 8 - AppStream
Name  Stream
ProfilesSummary

virt  rhel [d][e]
 common [d]  Virtualization module

The error:
[root@localhost ~]# dnf update
Last metadata expiration check: 1:08:13 ago on Sun 20 Jun 2021 05:09:50 AM
EDT.
Error:
 Problem 1: package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires
(libvirt-daemon-kvm >= 7.0.0-14 and libvirt-daemon-kvm < 7.4.0-1), but none
of the providers can be installed
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-6.0.0-35.module_el8.5.0+746+bbd5d70c.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-6.0.0-36.module_el8.5.0+821+97472045.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-5.6.0-10.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-6.0.0-17.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-6.0.0-25.2.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-6.6.0-13.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-6.6.0-7.1.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-6.6.0-7.3.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-7.0.0-13.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and
libvirt-daemon-kvm-7.0.0-14.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-13.el8s.x86_64 and
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-14.el8s.x86_64 and
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-9.el8s.x86_64 and
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install the best update candidate for package
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
  - cannot install the best update candidate for package
libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
 Problem 2: problem with installed package
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
  - package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires (qemu-kvm >=
15:5.2.0 and qemu-kvm < 15:6.0.0), but none of the providers can be
installed
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:5.2.0-16.el8s.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:4.2.0-48.module_el8.5.0+746+bbd5d70c.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:4.2.0-51.module_el8.5.0+821+97472045.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:4.1.0-23.el8.1.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:4.2.0-19.el8.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:4.2.0-29.el8.3.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:4.2.0-29.el8.6.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:5.1.0-14.el8.1.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:5.1.0-20.el8.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and
qemu-kvm-15:5.2.0-16.el8.x86_64
  - cannot install both qemu-kvm-15:5.2.0-11.el8s.x86_64 and
qemu-kvm-15:6.0.0-19.el8s.x86_64
  - cannot install both qemu-kvm-15:5.2.0-16.el8s.x86_64 and
qemu-kvm-15:6.0.0-19.el8s.x86_64
  - cannot install the best update candidate for package
qemu-kvm-15:5.2.0-16.el8s.x86_64
 Problem 3: package
ovirt-provider-ovn-driver-1.2.34-0.20201207083749.git75016ed.el8.noarch
requires vdsm, but none of the providers can be installed
  - package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires
(libvirt-daemon-kvm >= 7.0.0-14 and libvirt-daemon-kvm < 7.4.0-1), but none
of the providers can be installed
  - package vdsm-4.40.70.4-1.git5dbeaece0.el8.x86_64 requires vdsm-http =
4.40.70.4-1.git5dbeaece0.el8, but none of the providers can be installed
  - package vdsm-4.40.70.4-3.git77ad4d4ea.el8.x86_64 requires vdsm-http =

[ovirt-devel] Re: OST gating failed on - test_import_vm1

2021-06-21 Thread Eyal Shenitzky
Thanks for looking into it Michal.

Actually, my patch is related to incremental backup, so nothing changed
around the snapshot area and I believe the failure isn't related to it.

I re-ran OST for this change -
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/6795/
.

Let's see if it works fine.

On Tue, 15 Jun 2021 at 14:00, Michal Skrivanek  wrote:

>
>
> On 15. 6. 2021, at 12:00, Eyal Shenitzky  wrote:
>
> Hi All,
>
> As part of OST gating verification, the verification failed with the
> following message -
>
> gating2 (43) : OST build 6687 failed with: test_import_vm1 failed:
>
> engine = 
> event_id = [1165], timeout = 600
>
> @contextlib.contextmanager
> def wait_for_event(engine, event_id, timeout=assertions.LONG_TIMEOUT):
> '''
> event_id could either be an int - a single
> event ID or a list - multiple event IDs
> that all will be checked
> '''
> events = engine.events_service()
> last_event = int(events.list(max=2)[0].id)
> try:
> > yield
>
> ost_utils/ost_utils/engine_utils.py:36:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> _ _ _
>
> engine = 
> correlation_id = 'test_validate_ova_import_vm', vm_name = 'imported_vm'
> imported_url = 'ova:///var/tmp/ova_vm.ova', storage_domain = 'iscsi'
> cluster_name = 'test-cluster'
>
> def _import_ova(engine, correlation_id, vm_name, imported_url,
> storage_domain, cluster_name):
> sd =
> engine.storage_domains_service().list(search='name={}'.format(storage_domain))[0]
> cluster =
> engine.clusters_service().list(search='name={}'.format(cluster_name))[0]
> imports_service = engine.external_vm_imports_service()
> host = test_utils.get_first_active_host_by_name(engine)
>
> with engine_utils.wait_for_event(engine, 1165): #
> IMPORTEXPORT_STARTING_IMPORT_VM
> imports_service.add(
> types.ExternalVmImport(
> name=vm_name,
> provider=types.ExternalVmProviderType.KVM,
> url=imported_url,
> cluster=types.Cluster(
> id=cluster.id
> ),
> storage_domain=types.StorageDomain(
> id=sd.id
> ),
> host=types.Host(
> id=host.id
> ),
> sparse=True
> > ), async_=True, query={'correlation_id': correlation_id}
> )
>
> basic-suite-master/test-scenarios/test_004_basic_sanity.py:935:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> _ _ _
>
> self =  0x7f9129d24860>
> import_ = 
> headers = None, query = {'correlation_id': 'test_validate_ova_import_vm'}
> wait = True, kwargs = {'async_': True}
>
> def add(
> self,
> import_,
> headers=None,
> query=None,
> wait=True,
> **kwargs
> ):
> """
> This operation is used to import a virtual machine from external
> hypervisor, such as KVM, XEN or VMware.
> For example import of a virtual machine from VMware can be facilitated
> using the following request:
> [source]
> 
> POST /externalvmimports
> 
> With request body of type <>,
> for example:
> [source,xml]
> 
> 
> 
> my_vm
> 
> 
> 
> vm_name_as_is_in_vmware
> true
> vmware_user
> 123456
> VMWARE
> 
> vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1
> 
> 
> 
>
>
> """
> # Check the types of the parameters:
> Service._check_types([
> ('import_', import_, types.ExternalVmImport),
> ])
>
> # Build the URL:
>
> Patch set 4:Verified -1
>
>
> The OST run as part of verification for patch -
> https://gerrit.ovirt.org/#/c/ovirt-engine/+/115192/
>
> Can someone from Virt/OST team have a look?
>
>
> you should be able to review the logs in a generic way
>
> you can see
> 2021-06-15 11:08:37,515+02 ERROR
> [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand]
> (default task-2) [test_validate_ova_import_vm] Exception:
> java.lang.NullPointerException
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand$ExternalVmImporter.performImport(ImportVmFromExternalUrlCommand.java:116)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand.executeCommand(ImportVmFromExternalUrlCommand.java:65)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1174)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1332)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2008)
>
> likely caused by
> 2021-06-15 11:08:37,513+02 ERROR
> [org.ovirt.engine.core.bll.GetVmFromOvaQuery] (default task-2)
> [test_validate_ova_import_vm] Exception:
> org.ovirt.engine.core.common.utils.ansible.AnsibleRunnerCallException: Task
> Run query script failed to execute. Please check logs for more details:
> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20210615110831-lago-basic-suite-master-host-0-test_validate_ova_import_vm.log
>
> then seeing the following error in the ansible log:
> 2021-06-15 11:08:37 CEST - fatal: [lago-basic-suite-master-host-0]:
> FAILED! => {"changed": true, "msg": 

[ovirt-devel] Re: hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Yedidyah Bar David
On Wed, Jun 16, 2021 at 1:23 PM Yedidyah Bar David  wrote:
>
> Hi,
>
> I now tried running locally hc-basic-suite-master with a patched OST,
> and it failed due to $subject. I checked and see that this also
> happened on CI, e.g. [1], before it started failing due to an unrelated
> reason later:
>
> E   TASK [gluster.infra/roles/firewall_config : Add/Delete
> services to firewalld rules] ***
> E   failed: [lago-hc-basic-suite-master-host-0]
> (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
> "item": "glusterfs", "msg": "ERROR: Exception caught:
> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
> not among existing services Permanent and Non-Permanent(immediate)
> operation, Services are defined by port/tcp relationship and named as
> they are in /etc/services (on most systems)"}
> E   failed: [lago-hc-basic-suite-master-host-2]
> (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
> "item": "glusterfs", "msg": "ERROR: Exception caught:
> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
> not among existing services Permanent and Non-Permanent(immediate)
> operation, Services are defined by port/tcp relationship and named as
> they are in /etc/services (on most systems)"}
> E   failed: [lago-hc-basic-suite-master-host-1]
> (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
> "item": "glusterfs", "msg": "ERROR: Exception caught:
> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
> not among existing services Permanent and Non-Permanent(immediate)
> operation, Services are defined by port/tcp relationship and named as
> they are in /etc/services (on most systems)"}
>
> This seems similar to [2], and indeed I can't see the package
> 'glusterfs-server' installed locally on host-0. Any idea?

I think I understand:

It seems like the deployment of hc relied on the order of running the deploy
scripts as written in lagoinitfile. With the new deploy code, all of them run
in parallel. Does this make sense?


>
> Thanks and best regards,
>
> [1] 
> https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/2088/
>
> [2] https://github.com/oVirt/ovirt-ansible/issues/124
> --
> Didi



--
Didi
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HQ37ENFRGYJK4H3INGAAR5FYWK33WAH4/


[ovirt-devel] Re: hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Marcin Sobczyk



On 6/17/21 1:44 PM, Yedidyah Bar David wrote:

On Wed, Jun 16, 2021 at 1:23 PM Yedidyah Bar David  wrote:

Hi,

I now tried running locally hc-basic-suite-master with a patched OST,
and it failed due to $subject. I checked and see that this also
happened on CI, e.g. [1], before it started failing due to an unrelated
reason later:

E   TASK [gluster.infra/roles/firewall_config : Add/Delete
services to firewalld rules] ***
E   failed: [lago-hc-basic-suite-master-host-0]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-2]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-1]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}

This seems similar to [2], and indeed I can't see the package
'glusterfs-server' installed locally on host-0. Any idea?

I think I understand:

It seems like the deployment of hc relied on the order of running the deploy
scripts as written in lagoinitfile. With the new deploy code, all of them run
in parallel. Does this make sense?
The scripts run in parallel as in "on all VMs at the same time", but 
sequentially
as in "one script at a time on each VM" - this is the same behavior we 
had with lago deployment.
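
For illustration, a minimal sketch of these semantics (the helper names and
the script lists below are made up for this sketch and are not the actual
OST deployment code):

    from concurrent.futures import ThreadPoolExecutor

    def run_script(vm, script):
        # Placeholder: in OST this would copy the script to the VM and run it.
        print(f"{vm}: running {script}")

    def deploy(vm, scripts):
        # One script at a time, in order, on a given VM.
        for script in scripts:
            run_script(vm, script)

    def deploy_all(vm_to_scripts):
        # All VMs are deployed in parallel with each other.
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(deploy, vm, scripts)
                       for vm, scripts in vm_to_scripts.items()]
            for future in futures:
                future.result()

    deploy_all({
        "host-0": ["setup_host.sh", "hc_setup_host.sh", "setup_first_host.sh"],
        "host-1": ["setup_host.sh", "hc_setup_host.sh"],
    })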


Regards, Marcin




Thanks and best regards,

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/2088/

[2] https://github.com/oVirt/ovirt-ansible/issues/124
--
Didi



--
Didi


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DVNFOM2NHSZO6G4CR2MA6YXKZ26Q6UJU/


[ovirt-devel] Re: AV 8.4 for CentOS Linux

2021-06-21 Thread Marcin Sobczyk



On 6/20/21 12:23 PM, Dana Elfassy wrote:

Hi,
I'm getting packages conflicts when trying to upgrade my Centos8.4 and 
Centos-Stream hosts.
(Centos Stream was installed from iso, then I 
installed ovirt-release-master.rpm and deployed the host)

The details below are the output for Centos-Stream
* The packages conflicts occur also on OST - 
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/7211/console 



Do you know what could've caused this and how it can be fixed?
Yes, libvirt 7.4.0 + qemu-kvm 6.0.0 is currently broken and has bugs 
filed on it.
We're trying to avoid these packages by excluding them on vdsm's spec 
level [1]
and downgrading to older versions (7.0.0 and 5.2.0 respectively) that 
work in OST [2].

Unfortunately somewhere around late Friday a new version of qemu-kvm
was published, which makes the downgrade process go from 6.0.0-19 to 
6.0.0-18
and not the 5.2.0 that works. We don't have a reasonable resolution for 
OST yet.


If you manage your host manually simply 'dnf downgrade qemu-kvm' until 
you get version 5.2.0

or download and install all the older RPMs manually.

Regards, Marcin

[1] https://gerrit.ovirt.org/#/c/vdsm/+/115193/
[2] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115194/


Thanks,
Dana

[root@localhost ~]# rpm -q vdsm
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64

[root@localhost ~]# dnf module list virt
Last metadata expiration check: 1:09:54 ago on Sun 20 Jun 2021 
05:09:50 AM EDT.

CentOS Stream 8 - AppStream
Name                              Stream               Profiles  Summary
virt                              rhel [d][e]                common 
[d]  Virtualization module


The error:
[root@localhost ~]# dnf update
Last metadata expiration check: 1:08:13 ago on Sun 20 Jun 2021 
05:09:50 AM EDT.

Error:
 Problem 1: package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires 
(libvirt-daemon-kvm >= 7.0.0-14 and libvirt-daemon-kvm < 7.4.0-1), but 
none of the providers can be installed
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-35.module_el8.5.0+746+bbd5d70c.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-36.module_el8.5.0+821+97472045.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-5.6.0-10.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-17.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-25.2.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.6.0-13.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.6.0-7.1.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.6.0-7.3.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-7.0.0-13.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-7.0.0-14.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-13.el8s.x86_64 and 
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-14.el8s.x86_64 and 
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-9.el8s.x86_64 and 
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install the best update candidate for package 
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
  - cannot install the best update candidate for package 
libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
 Problem 2: problem with installed package 
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
  - package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires 
(qemu-kvm >= 15:5.2.0 and qemu-kvm < 15:6.0.0), but none of the 
providers can be installed
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:5.2.0-16.el8s.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-48.module_el8.5.0+746+bbd5d70c.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-51.module_el8.5.0+821+97472045.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.1.0-23.el8.1.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-19.el8.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-29.el8.3.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-29.el8.6.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:5.1.0-14.el8.1.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:5.1.0-20.el8.x86_64
  

[ovirt-devel] ovirt-engine has been tagged (ovirt-engine-4.4.7.3)

2021-06-21 Thread Tal Nisan

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/53TLA2YBFEEBVOVE4S7UFI6SKWV37L7W/


[ovirt-devel] Re: Moving #vdsm to #ovirt?

2021-06-21 Thread Marcin Sobczyk



On 6/21/21 2:36 PM, Nir Soffer wrote:

We had a mostly dead #vdsm channel on freenode[1].

Recently there was a hostile takeover of freenode, and the old freenode
folks created the libera[2] network. Most (all?) projects moved to this
network.

We can move #vdsm to libera, but I think we have a better option: use
the #ovirt channel in oftc[3], which is pretty lively.

Having vdsm developers in the #ovirt channel is good for the project and
will make it easier to reach developers.

Moving to libera requires registration work. Moving to #ovirt requires no change.
In both cases we need to update the vdsm README and ovirt.org.

What do you think?

+1



[1] https://freenode.net/
[2] https://libera.chat/
[3] https://www.oftc.net/

Nir


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/53EQIGW4EZ7OAVHIFBEW4F5BDTPLXOA7/


[ovirt-devel] Gerrit hook adds unrelated patches to bugs

2021-06-21 Thread Nir Soffer
The Gerrit hook wrongly looks for https://bugzilla.redhat.com/ URLs anywhere
in the commit message and adds the patch to the bug.

Example patch:
https://gerrit.ovirt.org/c/vdsm/+/115339

I had to clean up the bug after the broken hook (see screenshot).

The hook should really look only at the URL in each of the (one or more)
Bug-Url headers:

Bug-Url: https://bugzilla.redhat.com/
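
For illustration, a minimal sketch of the intended matching; Bug-Url is the
existing header convention, but the regex and the function name here are only
an assumption of how the hook could restrict itself to it:

    import re

    # Only "Bug-Url:" header lines are considered; bare bugzilla URLs
    # elsewhere in the commit message are ignored.
    BUG_URL_RE = re.compile(
        r"^Bug-Url:\s+(https://bugzilla\.redhat\.com/"
        r"(?:show_bug\.cgi\?id=)?\d+)\s*$",
        re.MULTILINE,
    )

    def bug_urls(commit_message):
        return BUG_URL_RE.findall(commit_message)

    msg = (
        "vdsm: fix something\n"
        "\n"
        "See https://bugzilla.redhat.com/1961558 for background.\n"
        "\n"
        "Bug-Url: https://bugzilla.redhat.com/1234567\n"  # example bug id
    )
    print(bug_urls(msg))  # ['https://bugzilla.redhat.com/1234567']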

I reported this years ago (I think for Related-To:), and I remember we had
a patch fixing this issue, but for some reason it was lost.

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K2JGCYVATHI3CC6Z57JTXL4MR62UDY2W/


[ovirt-devel] Re: hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Yedidyah Bar David
On Fri, Jun 18, 2021 at 10:18 AM Marcin Sobczyk  wrote:
>
>
>
> On 6/17/21 6:59 PM, Yedidyah Bar David wrote:
> > On Thu, Jun 17, 2021 at 6:27 PM Marcin Sobczyk  wrote:
> >>
> >>
> >> On 6/17/21 1:44 PM, Yedidyah Bar David wrote:
> >>> On Wed, Jun 16, 2021 at 1:23 PM Yedidyah Bar David  
> >>> wrote:
>  Hi,
> 
>  I now tried running locally hc-basic-suite-master with a patched OST,
>  and it failed due to $subject. I checked and see that this also
>  happened on CI, e.g. [1], before it started failing due to an unrelated
>  reason later:
> 
>  E   TASK [gluster.infra/roles/firewall_config : Add/Delete
>  services to firewalld rules] ***
>  E   failed: [lago-hc-basic-suite-master-host-0]
>  (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
>  "item": "glusterfs", "msg": "ERROR: Exception caught:
>  org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>  not among existing services Permanent and Non-Permanent(immediate)
>  operation, Services are defined by port/tcp relationship and named as
>  they are in /etc/services (on most systems)"}
>  E   failed: [lago-hc-basic-suite-master-host-2]
>  (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
>  "item": "glusterfs", "msg": "ERROR: Exception caught:
>  org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>  not among existing services Permanent and Non-Permanent(immediate)
>  operation, Services are defined by port/tcp relationship and named as
>  they are in /etc/services (on most systems)"}
>  E   failed: [lago-hc-basic-suite-master-host-1]
>  (item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
>  "item": "glusterfs", "msg": "ERROR: Exception caught:
>  org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>  not among existing services Permanent and Non-Permanent(immediate)
>  operation, Services are defined by port/tcp relationship and named as
>  they are in /etc/services (on most systems)"}
> 
>  This seems similar to [2], and indeed I can't see the package
>  'glusterfs-server' installed locally on host-0. Any idea?
> >>> I think I understand:
> >>>
> >>> It seems like the deployment of hc relied on the order of running the 
> >>> deploy
> >>> scripts as written in lagoinitfile. With the new deploy code, all of them 
> >>> run
> >>> in parallel. Does this make sense?
> >> The scripts run in parallel as in "on all VMs at the same time", but
> >> sequentially
> >> as in "one script at a time on each VM" - this is the same behavior we
> >> had with lago deployment.
> > Well, I do not think it works as intended, then. When running locally,
> > I logged into host-0, and after it failed, I had:
> >
> > # dnf history
> > ID | Command line
> >
> > | Date and time| Action(s)  | Altered
> > --
> >   4 | install -y --nogpgcheck ansible gluster-ansible-roles
> > ovirt-hosted-engine-setup ovirt-ansible-hosted-engine-setup
> > ovirt-ansible-reposit | 2021-06-17 11:54 | I, U   |8
> >   3 | -y --nogpgcheck install ovirt-host python3-coverage
> > vdsm-hook-vhostmd
> >   | 2021-06-08 02:15 | Install|  493 EE
> >   2 | install -y dnf-utils
> > https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
> >  | 2021-06-08 02:14 |
> > Install|1
> >   1 |
> >
> > | 2021-06-08 02:06 | Install|  511 EE
> >
> > Meaning, it already ran setup_first_host.sh (and failed there), but
> > didn't run hc_setup_host.sh, although it appears before it.
> >
> > If you check [1], which is a build that failed due to this reason
> > (unlike the later ones), you see there:
> >
> > -- Captured log setup 
> > --
> > 2021-06-07 01:58:38+,594 INFO
> > [ost_utils.pytest.fixtures.deployment] Waiting for SSH on the VMs
> > (deployment:40)
> > 2021-06-07 01:59:11+,947 INFO
> > [ost_utils.deployment_utils.package_mgmt] oVirt packages used on VMs:
> > (package_mgmt:133)
> > 2021-06-07 01:59:11+,948 INFO
> > [ost_utils.deployment_utils.package_mgmt]
> > vdsm-4.40.70.2-1.git34cdc8884.el8.x86_64 (package_mgmt:135)
> > 2021-06-07 01:59:11+,950 INFO
> > [ost_utils.deployment_utils.scripts] Running
> > /home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
> > on lago-hc-basic-suite-master-host-1 (scripts:36)
> > 2021-06-07 01:59:11+,950 INFO
> > [ost_utils.deployment_utils.scripts] Running
> > 

[ovirt-devel] Re: OST gating failed on - test_import_vm1

2021-06-21 Thread Michal Skrivanek


> On 15. 6. 2021, at 12:00, Eyal Shenitzky  wrote:
> 
> Hi All,
> 
> As part of OST gating verification, the verification failed with the 
> following message - 
> 
> gating2 (43) : OST build 6687 failed with: test_import_vm1 failed:
> 
> engine = 
> event_id = [1165], timeout = 600
> 
> @contextlib.contextmanager
> def wait_for_event(engine, event_id, timeout=assertions.LONG_TIMEOUT):
> '''
> event_id could either be an int - a single
> event ID or a list - multiple event IDs
> that all will be checked
> '''
> events = engine.events_service()
> last_event = int(events.list(max=2)[0].id)
> try:
> >   yield
> 
> ost_utils/ost_utils/engine_utils.py:36: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> 
> engine = 
> correlation_id = 'test_validate_ova_import_vm', vm_name = 'imported_vm'
> imported_url = 'ova:///var/tmp/ova_vm.ova', storage_domain = 'iscsi'
> cluster_name = 'test-cluster'
> 
> def _import_ova(engine, correlation_id, vm_name, imported_url, 
> storage_domain, cluster_name):
> sd = 
> engine.storage_domains_service().list(search='name={}'.format(storage_domain))[0]
> cluster = 
> engine.clusters_service().list(search='name={}'.format(cluster_name))[0]
> imports_service = engine.external_vm_imports_service()
> host = test_utils.get_first_active_host_by_name(engine)
> 
> with engine_utils.wait_for_event(engine, 1165): # 
> IMPORTEXPORT_STARTING_IMPORT_VM
> imports_service.add(
> types.ExternalVmImport(
> name=vm_name,
> provider=types.ExternalVmProviderType.KVM,
> url=imported_url,
> cluster=types.Cluster(
> id=cluster.id 
> ),
> storage_domain=types.StorageDomain(
> id=sd.id 
> ),
> host=types.Host(
> id=host.id 
> ),
> sparse=True
> >   ), async_=True, query={'correlation_id': correlation_id}
> )
> 
> basic-suite-master/test-scenarios/test_004_basic_sanity.py:935: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> 
> self = 
> import_ = 
> headers = None, query = {'correlation_id': 'test_validate_ova_import_vm'}
> wait = True, kwargs = {'async_': True}
> 
> def add(
> self,
> import_,
> headers=None,
> query=None,
> wait=True,
> **kwargs
> ):
> """
> This operation is used to import a virtual machine from external hypervisor, 
> such as KVM, XEN or VMware.
> For example import of a virtual machine from VMware can be facilitated using 
> the following request:
> [source]
> 
> POST /externalvmimports
> 
> With request body of type <>, for 
> example:
> [source,xml]
> 
> 
> 
> my_vm
> 
> 
> 
> vm_name_as_is_in_vmware
> true
> vmware_user
> 123456
> VMWARE
> vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1
> 
> 
> 
> 
> 
> """
> # Check the types of the parameters:
> Service._check_types([
> ('import_', import_, types.ExternalVmImport),
> ])
> 
> # Build the URL:
> 
> Patch set 4:Verified -1
> 
> 
> 
> The OST run as part of verification for patch - 
> https://gerrit.ovirt.org/#/c/ovirt-engine/+/115192/ 
> 
> 
> Can someone from Virt/OST team have a look?

you should be able to review the logs in a generic way

you can see
2021-06-15 11:08:37,515+02 ERROR 
[org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand] 
(default task-2) [test_validate_ova_import_vm] Exception: 
java.lang.NullPointerException
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand$ExternalVmImporter.performImport(ImportVmFromExternalUrlCommand.java:116)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand.executeCommand(ImportVmFromExternalUrlCommand.java:65)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1174)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1332)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2008)

likely caused by
2021-06-15 11:08:37,513+02 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] 
(default task-2) [test_validate_ova_import_vm] Exception: 
org.ovirt.engine.core.common.utils.ansible.AnsibleRunnerCallException: Task Run 
query script failed to execute. Please check logs for more details: 
/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20210615110831-lago-basic-suite-master-host-0-test_validate_ova_import_vm.log

then seeing the following error in the ansible log:
2021-06-15 11:08:37 CEST - fatal: [lago-basic-suite-master-host-0]: FAILED! => 
{"changed": true, "msg": "non-zero return code", "rc": 1, "stderr": "Shared 
connection to lago-basic-suite-master-host-0 closed.\r\n", "stderr_lines": 
["Shared connection to lago-basic-suite-master-host-0 closed."], "stdout": 
"Traceback (most recent call last):\r\n  File 

[ovirt-devel] Re: Purging inactive maintainers from vdsm-master-maintainers

2021-06-21 Thread Edward Haas
On Sun, Jun 20, 2021 at 11:29 PM Nir Soffer  wrote:

> On Mon, Dec 2, 2019 at 4:27 PM Adam Litke  wrote:
>
>> I also agree with the proposal.  It's sad to turn in my keys but I'm
>> likely unable to perform many duties expected of a maintainer at this
>> point.  I know that people can still find me via the git history :)
>>
>> On Thu, Nov 28, 2019 at 3:37 AM Milan Zamazal 
>> wrote:
>>
>>> Dan Kenigsberg  writes:
>>>
>>> > On Wed, Nov 27, 2019 at 4:33 PM Francesco Romani 
>>> wrote:
>>> >>
>>> >> On 11/27/19 3:25 PM, Nir Soffer wrote:
>>> >
>>> >> > I want to remove inactive contributors from vdsm-master-maintainers.
>>> >> >
>>> >> > I suggest the simple rule of 2 years of inactivity for removing from
>>> >> > this group,
>>> >> > based on git log.
>>> >> >
>>> >> > See the list below for current status:
>>> >> > https://gerrit.ovirt.org/#/admin/groups/106,members
>>> >>
>>> >>
>>> >> No objections, keeping the list minimal and current is a good idea.
>>> >
>>> >
>>> > I love removing dead code; I feel a bit different about removing old
>>> > colleagues. Maybe I'm just being nostalgic.
>>> >
>>> > If we introduce this policy (which I understand is healthy), let us
>>> > give a long warning period (6 months?) before we apply the policy to
>>> > existing dormant maintainers. We should also make sure that we
>>> > actively try to contact a person before he or she is dropped.
>>>
>>> I think this is a reasonable proposal.
>>>
>>> Regards,
>>> Milan
>>>
>>
> I forgot about this, and another year passed.
>
> Sending again, this time I added all past maintainers that may not watch
> this list.
>

Very sad, but it makes total sense. +1
Note that other projects move past maintainers to a special group named
"emeritus_*".


> Nir
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KRXVKDMF4NOZ7MXEPVBFXLEPABH3LQVO/


[ovirt-devel] OST verifed -1 broken, fails for infra issue in OST

2021-06-21 Thread Nir Soffer
I got this wrong review from OST, which looks like an infra issue in OST:

Patch:
https://gerrit.ovirt.org/c/vdsm/+/115232

Error:
https://gerrit.ovirt.org/c/vdsm/+/115232#message-46ad5e75_ed543485

Failing code:

        Package(*line.split())
        for res in results.values()
>       for line in _filter_results(res['stdout'].splitlines())
    ]
E   TypeError: __new__() missing 2 required positional arguments: 'version' and 'repo'

ost_utils/ost_utils/deployment_utils/package_mgmt.py:177: TypeError
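
For illustration, a minimal reproduction of that failure mode, assuming
Package is a namedtuple with the fields name, version and repo (the real
definition in package_mgmt.py may differ):

    from collections import namedtuple

    # Assumed shape of the record built in
    # ost_utils/ost_utils/deployment_utils/package_mgmt.py.
    Package = namedtuple("Package", ["name", "version", "repo"])

    good_line = "vdsm 4.40.70.4-5.git73fbe23cd.el8 ovirt-master-snapshot"
    bad_line = "vdsm"  # e.g. a wrapped or otherwise partial output line

    print(Package(*good_line.split()))  # three fields, works
    try:
        Package(*bad_line.split())      # only one field
    except TypeError as e:
        # __new__() missing 2 required positional arguments: 'version' and 'repo'
        print(e)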

I hope someone working on OST can take a look soon.

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EDLFMHDYR37FFNJBN7FLTBALURZYEC7V/


[ovirt-devel] importing fixtures in ovirt-system-tests

2021-06-21 Thread Yedidyah Bar David
Hi all,

We have several different styles of where/how to import fixtures in
ovirt-system-tests:

- import directly inside the test code
- import in conftest.py
- import in fixtures modules
- For all of the above, both 'import *' and importing specific fixtures

I think we should try to agree on a specific style and then follow it.

One drawback of importing directly in test/fixtures code is that it's
then impossible to override them in conftest.py.

A drawback of importing '*' and/or doing this in conftest.py is that
you might inadvertently import more than you want (or this might
happen eventually, after more stuff is added), that it makes it
harder to find what uses what, and that it risks unintended name
collisions - as opposed to intended overrides.

A related issue is having to update many places if you add/change something.

If there is some kind of "best practices" document somewhere that
people are happy with, perhaps we should follow it. Otherwise, we
should come up with our own.

Personally, I think I'd like to have a single file with all the
imports of specific fixtures (not '*'), and import this file from the
conftest.py of all the suites. I didn't actually try this and have no
idea what complications it might bring.
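
For illustration, a minimal sketch of that layout; the module path and the
fixture names below are made up for the example and are not the actual OST
ones:

    # ost_utils/ost_utils/pytest/fixtures/all_fixtures.py (hypothetical module)
    import pytest

    @pytest.fixture
    def engine_ip():            # hypothetical fixture, for illustration only
        return "192.168.200.4"

    @pytest.fixture
    def engine_api(engine_ip):  # hypothetical fixture, for illustration only
        return "https://{}/ovirt-engine/api".format(engine_ip)

    # Explicit export list: adding a fixture means touching only this module.
    __all__ = ["engine_ip", "engine_api"]

    # In each suite's conftest.py a single import (controlled by __all__)
    # would then pull in the shared set, and a suite could still override
    # one fixture by redefining it after the import:
    #
    #     from ost_utils.pytest.fixtures.all_fixtures import *  # noqa: F401,F403
    #
    #     @pytest.fixture
    #     def engine_ip():
    #         return "192.168.201.4"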

Comments/ideas/opinions/decisions are welcome :-)

Best regards,
-- 
Didi
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VAOD5Q7WU2FC66KF5GS3KXWPLCZ2KYS6/