[ovirt-devel] Re: oVirt appliance on master is now based on CentOS Stream

2020-12-16 Thread Yuval Turgeman
Wow, very nice !!

On Wed, Dec 16, 2020 at 12:07 PM Sandro Bonazzola 
wrote:

> Hi,
> just a heads-up that the oVirt Appliance is now based on CentOS Stream,
> targeting the upcoming 4.4.5 release.
> The build should be available on master repos tomorrow morning.
> Here's the list of changes within the Appliance content:
>
> --- centos-linux-based-appliance-manifest-rpm 2020-12-16
> 09:50:03.899293856 +0100
> +++ centos-stream-based-appliance-manifest-rpm 2020-12-16
> 09:49:41.763036950 +0100
> @@ -2,4 +2,4 @@
> -NetworkManager-1.26.2-1.el8.x86_64
> -NetworkManager-libnm-1.26.2-1.el8.x86_64
> -NetworkManager-team-1.26.2-1.el8.x86_64
> -NetworkManager-tui-1.26.2-1.el8.x86_64
> +NetworkManager-1.30.0-0.3.el8.x86_64
> +NetworkManager-libnm-1.30.0-0.3.el8.x86_64
> +NetworkManager-team-1.30.0-0.3.el8.x86_64
> +NetworkManager-tui-1.30.0-0.3.el8.x86_64
> @@ -9,0 +10,2 @@
> +adwaita-cursor-theme-3.28.0-2.el8.noarch
> +adwaita-icon-theme-3.28.0-2.el8.noarch
> @@ -11 +13 @@
> -alsa-lib-1.2.3.2-1.el8.x86_64
> +alsa-lib-1.2.4-3.el8.x86_64
> @@ -27,0 +30,3 @@
> +at-spi2-atk-2.26.2-1.el8.x86_64
> +at-spi2-core-2.28.0-1.el8.x86_64
> +atk-2.28.1-1.el8.x86_64
> @@ -30,4 +35,4 @@
> -authselect-1.2.1-2.el8.x86_64
> -authselect-compat-1.2.1-2.el8.x86_64
> -authselect-libs-1.2.1-2.el8.x86_64
> -avahi-libs-0.7-19.el8.x86_64
> +authselect-1.2.2-1.el8.x86_64
> +authselect-compat-1.2.2-1.el8.x86_64
> +authselect-libs-1.2.2-1.el8.x86_64
> +avahi-libs-0.7-20.el8.x86_64
> @@ -37,6 +42,6 @@
> -bind-export-libs-9.11.20-5.el8.x86_64
> -bind-libs-9.11.20-5.el8.x86_64
> -bind-libs-lite-9.11.20-5.el8.x86_64
> -bind-license-9.11.20-5.el8.noarch
> -bind-utils-9.11.20-5.el8.x86_64
> -binutils-2.30-79.el8.x86_64
> +bind-export-libs-9.11.20-6.el8.x86_64
> +bind-libs-9.11.20-6.el8.x86_64
> +bind-libs-lite-9.11.20-6.el8.x86_64
> +bind-license-9.11.20-6.el8.noarch
> +bind-utils-9.11.20-6.el8.x86_64
> +binutils-2.30-85.el8.x86_64
> @@ -51,2 +55,0 @@
> -centos-linux-release-8.3-1.2011.el8.noarch
> -centos-linux-repos-8-2.el8.noarch
> @@ -54,0 +58,2 @@
> +centos-stream-release-8.4-1.el8.noarch
> +centos-stream-repos-8-2.el8.noarch
> @@ -59 +64 @@
> -cloud-init-19.4-11.el8.noarch
> +cloud-init-20.3-5.el8.noarch
> @@ -63,2 +68,2 @@
> -cockpit-dashboard-224.2-1.el8.noarch
> -cockpit-packagekit-224.2-1.el8.noarch
> +cockpit-dashboard-233.1-1.el8.noarch
> +cockpit-packagekit-233.1-1.el8.noarch
> @@ -71,0 +77 @@
> +colord-libs-1.4.2-1.el8.x86_64
> @@ -75 +81 @@
> -cpio-2.12-8.el8.x86_64
> +cpio-2.12-9.el8.x86_64
> @@ -85 +91 @@
> -curl-7.61.1-14.el8.x86_64
> +curl-7.61.1-17.el8.x86_64
> @@ -87,3 +93,3 @@
> -dbus-1.12.8-11.el8.x86_64
> -dbus-common-1.12.8-11.el8.noarch
> -dbus-daemon-1.12.8-11.el8.x86_64
> +dbus-1.12.8-12.el8.x86_64
> +dbus-common-1.12.8-12.el8.noarch
> +dbus-daemon-1.12.8-12.el8.x86_64
> @@ -91,2 +97,3 @@
> -dbus-libs-1.12.8-11.el8.x86_64
> -dbus-tools-1.12.8-11.el8.x86_64
> +dbus-libs-1.12.8-12.el8.x86_64
> +dbus-tools-1.12.8-12.el8.x86_64
> +dconf-0.28.0-4.el8.x86_64
> @@ -95,4 +102,4 @@
> -device-mapper-1.02.171-5.el8.x86_64
> -device-mapper-event-1.02.171-5.el8.x86_64
> -device-mapper-event-libs-1.02.171-5.el8.x86_64
> -device-mapper-libs-1.02.171-5.el8.x86_64
> +device-mapper-1.02.175-0.2.20201103git8801a86.el8.x86_64
> +device-mapper-event-1.02.175-0.2.20201103git8801a86.el8.x86_64
> +device-mapper-event-libs-1.02.175-0.2.20201103git8801a86.el8.x86_64
> +device-mapper-libs-1.02.175-0.2.20201103git8801a86.el8.x86_64
> @@ -100,3 +107,3 @@
> -dhcp-client-4.3.6-41.el8.x86_64
> -dhcp-common-4.3.6-41.el8.noarch
> -dhcp-libs-4.3.6-41.el8.x86_64
> +dhcp-client-4.3.6-42.el8.x86_64
> +dhcp-common-4.3.6-42.el8.noarch
> +dhcp-libs-4.3.6-42.el8.x86_64
> @@ -105,3 +112,4 @@
> -dnf-4.2.23-4.el8.noarch
> -dnf-data-4.2.23-4.el8.noarch
> -dnf-plugins-core-4.0.17-5.el8.noarch
> +dnf-4.4.2-2.el8.noarch
> +dnf-data-4.4.2-2.el8.noarch
> +dnf-plugin-subscription-manager-1.28.5-1.el8.x86_64
> +dnf-plugins-core-4.0.18-1.el8.noarch
> @@ -117,6 +125,6 @@
> -efi-srpm-macros-3-2.el8.noarch
> -elfutils-0.180-1.el8.x86_64
> -elfutils-debuginfod-client-0.180-1.el8.x86_64
> -elfutils-default-yama-scope-0.180-1.el8.noarch
> -elfutils-libelf-0.180-1.el8.x86_64
> -elfutils-libs-0.180-1.el8.x86_64
> +efi-srpm-macros-3-3.el8.noarch
> +elfutils-0.182-2.el8.x86_64
> +elfutils-debuginfod-client-0.182-2.el8.x86_64
> +elfutils-default-yama-scope-0.182-2.el8.noarch
> +elfutils-libelf-0.182-2.el8.x86_64
> +elfutils-libs-0.182-2.el8.x86_64
> @@ -124 +132 @@
> -ethtool-5.0-2.el8.x86_64
> +ethtool-5.8-5.el8.x86_64
> @@ -128 +136 @@
> -filesystem-3.8-3.el8.x86_64
> +filesystem-3.8-4.el8.x86_64
> @@ -130,2 +138,2 @@
> -firewalld-0.8.2-2.el8.noarch
> -firewalld-filesystem-0.8.2-2.el8.noarch
> +firewalld-0.8.2-3.el8.noarch
> +firewalld-filesystem-0.8.2-3.el8.noarch
> @@ -134 +142,2 @@
> -freetype-2.9.1-4.el8_3.1.x86_64
> +freetype-2.9.1-5.el8.x86_64
> +fribidi-1.0.4-8.el8.x86_64
> @@ -136 +145 @@
> -gawk-4.2.1-1.el8.x86_64
> +gawk-4.2.1-2.el8.x86_64
> @@ -138 +147 @@
> -gdb-headless-8.2-12.el8
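A manifest diff like the one above is presumably produced along these lines; a minimal sketch (the file names and two-package manifests here are illustrative stand-ins, so the diff step can be shown end to end):

```shell
# On each appliance build, capture a sorted package manifest
# (on a real host: rpm -qa | sort > manifest.txt).
printf '%s\n' \
    'NetworkManager-1.26.2-1.el8.x86_64' \
    'alsa-lib-1.2.3.2-1.el8.x86_64' > centos-linux-manifest.txt
printf '%s\n' \
    'NetworkManager-1.30.0-0.3.el8.x86_64' \
    'alsa-lib-1.2.4-3.el8.x86_64' > centos-stream-manifest.txt

# -U0 drops context lines, which yields compact @@ hunks like the ones quoted.
diff -U0 centos-linux-manifest.txt centos-stream-manifest.txt || true
```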

[ovirt-devel] Re: Why is vdsm enabled by default?

2020-04-02 Thread Yuval Turgeman
Marcin's work looks great in my (manual) tests. In addition, we disabled
ovirt-vmconsole-host-sshd.service [1] in NGN, since it fails to start due to
a missing host key until the host is added to the engine (which also enables
the service).

[1] https://gerrit.ovirt.org/#/c/108173/

On Thu, Apr 2, 2020 at 4:50 PM Marcin Sobczyk  wrote:

>
>
> On 2/3/20 3:11 PM, Martin Perina wrote:
>
>
>
> On Sun, Feb 2, 2020 at 9:11 AM Yedidyah Bar David  wrote:
>
>> On Sat, Feb 1, 2020 at 11:26 PM Nir Soffer  wrote:
>> >
>> > On Thu, Jan 30, 2020 at 12:19 PM Dan Kenigsberg 
>> wrote:
>> >>
>> >> On Thu, Jan 30, 2020 at 9:57 AM Yedidyah Bar David 
>> wrote:
>> >> >
>> >> > On Tue, Jan 28, 2020 at 1:20 PM Amit Bawer 
>> wrote:
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Tue, Jan 28, 2020 at 12:40 PM Yedidyah Bar David <
>> d...@redhat.com> wrote:
>> >> >>>
>> >> >>> On Tue, Jan 28, 2020 at 12:11 PM Amit Bawer 
>> wrote:
>> >> 
>> >>  From my limited experience, the usual flow for most users is
>> deploying/upgrading a host and installing vdsm from the engine UI on the
>> hypervisor machine.
>> >> >>>
>> >> >>>
>> >> >>> You are right, for non-hosted-engine hosts. For hosted-engine, at
>> least the first host, you first install stuff on it (including vdsm), then
>> deploy, and only then have an engine. If for any reason you reboot in the
>> middle, you might run into unneeded problems, due to vdsm starting at boot.
>> >> >>>
>> >> 
>> >>  In case of manual installations by non-users, it is customary to
>> run "vdsm-tool configure --force" after step 3 and then reboot.
>> >> >>>
>> >> >>>
>> >> >>> I didn't know that, sorry, but would not want to do that either,
>> for hosted-engine. I'd rather hosted-engine deploy to do that, at the right
>> point. Which it does :-)
>> >> >>>
>> >> 
>> >>  Having a host on which vdsm is not running by default renders it
>> useless for ovirt, unless it is explicitly set to be down from UI under
>> particular circumstances.
>> >> >>>
>> >> >>>
>> >> >>> Obviously, for an active host. If it's not active, and is
>> rebooted, not sure we need vdsm to start - even if it's already
>> added/configured/etc (but e.g. put in maintenance). But that's not my
>> question - I don't mind enabling vdsmd as part of host-deploy, so that vdsm
>> would start if a host in maintenance is rebooted. I only ask why it should
>> be enabled by the rpm installation.
>> >> >>
>> >> >>
>> >> >> Hard to tell, this dates back to commit
>> d45e6827f38d36730ec468d31d905f21878c7250 and commit
>> c01a733ce81edc2c51ed3426f1424c93917bb106 before that, in which both did not
>> specify a reason.
>> >> >
>> >> >
>> >> > Adding Dan. Dan - was it enabled by default in sysv? I think not.
>> Was there an explicit requirement/decision to enable it on the move to
>> systemd? If not, is it ok to keep it disabled by default and enable when
>> needed (host-deploy)?
>> >>
>> >> Oh dear, I have only very vague memories right now. I do believe that
>> >> we have always had (the equivalent of) vdsm enabled. At one point we
>> >> moved that to an rpm preset per explicit request from Fedora. But my
>> >> gut feeling is that there was not a very good reason to have it that
>> >> way. It might have been only a case of contagiousness: old versions of
>> >> ovirt-host-deploy do not have the logic to enable vdsm, so vdsm had to
>> >> have it itself, so nobody bothered to fix ovirt-host-deploy for the
>> >> next version, and here we are 5 years later.
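For context on the rpm-preset mechanism Dan mentions (a sketch, not vdsm's actual packaging): whether a freshly installed unit comes up enabled is governed by systemd preset files, which `systemctl preset` consults when the standard `%systemd_post` scriptlet runs on first install. A hypothetical preset making vdsmd disabled by default could look like:

```
# /usr/lib/systemd/system-preset/90-vdsm.preset  (hypothetical path and name)
# Units not matched by any preset fall back to the distribution default,
# which on Fedora is "disable".
disable vdsmd.service
```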
>> >
>> >
>> > It does not make sense to enable vdsm unless it was configured,
>>
>> Indeed
>>
>> > and we certainly don't
>> > want to configure it automatically,
>>
>> We do, currently :-((
>>
>> > so vdsm should not be enabled by default.
>>
>> :-)
>>
>> >
>> > But someone needs to update host deploy code to enable vdsm before we
>> can change
>> > vdsm deployment.
>>
>> We always did, in otopi ovirt-host-deploy. A quick grep in the ansible
>> code does not find this for me.
>>
>> I am glad we managed to reach a consensus, Thanks :-)
>>
>> Filed these now:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1797284 [RFE] enable vdsm
>> services during deploy
>> https://bugzilla.redhat.com/show_bug.cgi?id=1797287 [RFE] vdsm should
>> be disabled by default
>>
>>
> I vaguely remember that in the past VDSM needed to be enabled by default
> due to NGN image creation.
> Yuval/Sandro, is it still needed?
>
> If not, of course we can change VDSM packaging and host deploy flow ...
>
> I posted https://gerrit.ovirt.org/#/c/108098/ to disable vdsm's autostart
> after installation.
> Actually, due to recent issues with NGN image creation, the rest of the
> whole topic should be tested:
>
>
> https://gerrit.ovirt.org/#/q/topic:remove-non-socket-activation-libvirt-support+(status:open+OR+status:merged)
>
>
>
> >
>> >> >> But the rpm post installation should also configure vdsm, at least
>> on a fresh install [1], so it makes sense (at least to me) that it is okay
>> to enable it by default since y

[ovirt-devel] Re: Why is vdsm enabled by default?

2020-01-30 Thread Yuval Turgeman
Well, if vdsm wasn't enabled by default, we wouldn't hog the CPU when
vdsm-tool configure fails.

On Thu, Jan 30, 2020 at 10:25 AM Yedidyah Bar David  wrote:

> On Thu, Jan 30, 2020 at 10:14 AM Yuval Turgeman 
> wrote:
>
>> Another issue (in 4.4) as long as vdsm is not configured,
>> vdsmd_init_common in ExecPre fails, and systemd keeps trying to start the
>> service which is really annoying
>>
>
> That's not really related, and is probably just a bug. %posttrans runs
> 'vdsm-tool configure --force' for you. It might fail, obviously.
>
> You'll have exactly the same problem if you run 'vdsm-tool configure
> --force' manually and then enable. The point is, that if you run it
> manually, and enable manually, or both inside some script, you should, and
> can, check if things are ok. Enabling automatically does not allow you to
> check and decide by yourself (or inside a script).
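Didi's point, configure first and only then decide whether to enable, can be sketched as a tiny script. The two functions below are runnable stand-ins for the real host commands (`vdsm-tool configure --force` and `systemctl enable --now vdsmd`), which only make sense on an actual host:

```shell
# Stubs for the real host commands, so the control flow is runnable anywhere.
configure_vdsm() { echo "configuring"; }   # stand-in: vdsm-tool configure --force
enable_vdsmd()  { echo "enabling"; }       # stand-in: systemctl enable --now vdsmd

# Enable only after a successful configure; a script (unlike an rpm
# scriptlet) gets to inspect the failure and stop here.
if configure_vdsm; then
    enable_vdsmd
else
    echo "configure failed; leaving vdsmd disabled" >&2
    exit 1
fi
```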
>
>
>>
>> On Thu, Jan 30, 2020 at 10:01 AM Yedidyah Bar David 
>> wrote:
>>
>>> On Tue, Jan 28, 2020 at 1:20 PM Amit Bawer  wrote:
>>>
>>>>
>>>>
>>>> On Tue, Jan 28, 2020 at 12:40 PM Yedidyah Bar David 
>>>> wrote:
>>>>
>>>>> On Tue, Jan 28, 2020 at 12:11 PM Amit Bawer  wrote:
>>>>>
>>>>>> From my limited experience, the usual flow for most users is
>>>>>> deploying/upgrading a host and installing vdsm from the engine UI on the
>>>>>> hypervisor machine.
>>>>>>
>>>>>
>>>>> You are right, for non-hosted-engine hosts. For hosted-engine, at
>>>>> least the first host, you first install stuff on it (including vdsm), then
>>>>> deploy, and only then have an engine. If for any reason you reboot in the
>>>>> middle, you might run into unneeded problems, due to vdsm starting at 
>>>>> boot.
>>>>>
>>>>>
>>>>>> In case of manual installations by non-users, it is customary to run
>>>>>> "vdsm-tool configure --force" after step 3 and then reboot.
>>>>>>
>>>>>
>>>>> I didn't know that, sorry, but would not want to do that either, for
>>>>> hosted-engine. I'd rather hosted-engine deploy to do that, at the right
>>>>> point. Which it does :-)
>>>>>
>>>>>
>>>>>> Having a host on which vdsm is not running by default renders it
>>>>>> useless for ovirt, unless it is explicitly set to be down from UI under
>>>>>> particular circumstances.
>>>>>>
>>>>>
>>>>> Obviously, for an active host. If it's not active, and is rebooted,
>>>>> not sure we need vdsm to start - even if it's already added/configured/etc
>>>>> (but e.g. put in maintenance). But that's not my question - I don't mind
>>>>> enabling vdsmd as part of host-deploy, so that vdsm would start if a host
>>>>> in maintenance is rebooted. I only ask why it should be enabled by the rpm
>>>>> installation.
>>>>>
>>>>
>>>> Hard to tell, this dates back to commit
>>>> d45e6827f38d36730ec468d31d905f21878c7250 and commit
>>>> c01a733ce81edc2c51ed3426f1424c93917bb106 before that, in which both did not
>>>> specify a reason.
>>>>
>>>
>>> Adding Dan. Dan - was it enabled by default in sysv? I think not. Was
>>> there an explicit requirement/decision to enable it on the move to systemd?
>>> If not, is it ok to keep it disabled by default and enable when needed
>>> (host-deploy)?
>>>
>>>
>>>> But the rpm post installation should also configure vdsm, at least on a
>>>> fresh install [1], so it makes sense (at least to me) that it is okay to
>>>> enable it by default since you have all setup for a regular usage.
>>>>
>>>> [1]
>>>> https://github.com/oVirt/vdsm/blob/b0c338b717ff300575c1ff690d9efa256fcd2164/vdsm.spec.in#L955
>>>>
>>>
>>> I do not agree.
>>>
>>> I think most sensible sysadmins would expect a 'yum install package; yum
>>> remove package' to leave their system mostly unchanged. Also, 'yum install
>>> package; reboot; yum remove package'. I guess most sysadmins know that
>>> there are %pre* and %post* and that package maintainers do all kinds of
>>> stuff there, but do not expect, IMHO, the amount of 

[ovirt-devel] Re: Why is vdsm enabled by default?

2020-01-30 Thread Yuval Turgeman
Another issue (in 4.4) as long as vdsm is not configured, vdsmd_init_common
in ExecPre fails, and systemd keeps trying to start the service which is
really annoying

On Thu, Jan 30, 2020 at 10:01 AM Yedidyah Bar David  wrote:

> On Tue, Jan 28, 2020 at 1:20 PM Amit Bawer  wrote:
>
>>
>>
>> On Tue, Jan 28, 2020 at 12:40 PM Yedidyah Bar David 
>> wrote:
>>
>>> On Tue, Jan 28, 2020 at 12:11 PM Amit Bawer  wrote:
>>>
 From my limited experience, the usual flow for most users is
 deploying/upgrading a host and installing vdsm from the engine UI on the
 hypervisor machine.

>>>
>>> You are right, for non-hosted-engine hosts. For hosted-engine, at least
>>> the first host, you first install stuff on it (including vdsm), then
>>> deploy, and only then have an engine. If for any reason you reboot in the
>>> middle, you might run into unneeded problems, due to vdsm starting at boot.
>>>
>>>
 In case of manual installations by non-users, it is customary to run
 "vdsm-tool configure --force" after step 3 and then reboot.

>>>
>>> I didn't know that, sorry, but would not want to do that either, for
>>> hosted-engine. I'd rather hosted-engine deploy to do that, at the right
>>> point. Which it does :-)
>>>
>>>
 Having a host on which vdsm is not running by default renders it
 useless for ovirt, unless it is explicitly set to be down from UI under
 particular circumstances.

>>>
>>> Obviously, for an active host. If it's not active, and is rebooted, not
>>> sure we need vdsm to start - even if it's already added/configured/etc (but
>>> e.g. put in maintenance). But that's not my question - I don't mind
>>> enabling vdsmd as part of host-deploy, so that vdsm would start if a host
>>> in maintenance is rebooted. I only ask why it should be enabled by the rpm
>>> installation.
>>>
>>
>> Hard to tell, this dates back to commit
>> d45e6827f38d36730ec468d31d905f21878c7250 and commit
>> c01a733ce81edc2c51ed3426f1424c93917bb106 before that, in which both did not
>> specify a reason.
>>
>
> Adding Dan. Dan - was it enabled by default in sysv? I think not. Was
> there an explicit requirement/decision to enable it on the move to systemd?
> If not, is it ok to keep it disabled by default and enable when needed
> (host-deploy)?
>
>
>> But the rpm post installation should also configure vdsm, at least on a
>> fresh install [1], so it makes sense (at least to me) that it is okay to
>> enable it by default since you have all setup for a regular usage.
>>
>> [1]
>> https://github.com/oVirt/vdsm/blob/b0c338b717ff300575c1ff690d9efa256fcd2164/vdsm.spec.in#L955
>>
>
> I do not agree.
>
> I think most sensible sysadmins would expect a 'yum install package; yum
> remove package' to leave their system mostly unchanged. Also, 'yum install
> package; reboot; yum remove package'. I guess most sysadmins know that
> there are %pre* and %post* and that package maintainers do all kinds of
> stuff there, but do not expect, IMHO, the amount of changes that we do in
> vdsm-tool.
>
>
>>
>>
>>>
>>> Thanks!
>>>
>>>

 On Tue, Jan 28, 2020 at 11:47 AM Yedidyah Bar David 
 wrote:

> If I do e.g.:
>
> 1. Install CentOS
> 2. yum install ovirt-releaseSOMETHING
> 3. yum install vdsm
>
> Then reboot the machine, vdsm starts, and for this, it does all kinds
> of things to the system (such as configure various services using 
> vdsm-tool
> etc.). Are we sure we want/need this? Why would we want vdsm
> configured/running at all at this stage, before being added to an engine?
>
> In particular, if (especially during development) we have a bug in
> this configuration process, and then fix it, it might not be enough to
> upgrade vdsm - the tooling will then also have to fix the changes done by
> the buggy previous version, or require a full machine reinstall.
>
> Thanks and best regards,
> --
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3YHWLO3DFU2PLPGL44DBIBG25QYGOQL7/
>

>>>
>>> --
>>> Didi
>>>
>>
>
> --
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JYB6N2PJ7YUQBLOREQ5SHQ4YG6UF74M5/
>

[ovirt-devel] Re: EL8 builds fail to mount loop device - kernel too old?

2019-10-09 Thread Yuval Turgeman
You may be running out of loop devices. If that's the case, you need to
mknod them manually; see
https://github.com/oVirt/ovirt-node-ng-image/blob/56a2797b5ef84bd56ab95fdb0cbfb908b4bc8ec1/automation/build-artifacts.sh#L24
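A rough equivalent of what the linked script does (loop devices are block devices with major number 7 and minor = device index; creating them needs root on a real slave, and unprivileged runs just report the failure):

```shell
ensure_loop_devices() {
    dir=${1:-/dev}
    for i in 0 1 2 3 4 5 6 7; do
        dev="$dir/loop$i"
        if [ ! -e "$dev" ]; then
            # block device, major 7, minor $i -- needs CAP_MKNOD (root)
            mknod "$dev" b 7 "$i" 2>/dev/null ||
                echo "could not create $dev (need root)"
        fi
    done
}

ensure_loop_devices
```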

On Thursday, October 10, 2019, Nir Soffer  wrote:

> On Wed, Oct 9, 2019 at 11:56 PM Nir Soffer  wrote:
>
>> I'm trying to run imageio tests on el8 mock, and the tests fail early
>> when trying to create storage
>> for testing:
>>
>> [userstorage] INFOCreating filesystem 
>> /var/tmp/imageio-storage/file-512-ext4-mount
>> Suggestion: Use Linux kernel >= 3.18 for improved stability of the metadata 
>> and journal checksum features.
>> [userstorage] INFOCreating file 
>> /var/tmp/imageio-storage/file-512-ext4-mount/file
>> [userstorage] INFOCreating backing file 
>> /var/tmp/imageio-storage/file-512-xfs-backing
>> [userstorage] INFOCreating loop device 
>> /var/tmp/imageio-storage/file-512-xfs-loop
>> [userstorage] INFOCreating filesystem 
>> /var/tmp/imageio-storage/file-512-xfs-mount
>> mount: /var/tmp/imageio-storage/file-512-xfs-mount: wrong fs type, bad 
>> option, bad superblock on /dev/loop4, missing codepage or helper program, or 
>> other error.
>> Traceback (most recent call last):
>>   File "/usr/local/bin/userstorage", line 10, in 
>> sys.exit(main())
>>   File "/usr/local/lib/python3.6/site-packages/userstorage/__main__.py", 
>> line 42, in main
>> create(cfg)
>>   File "/usr/local/lib/python3.6/site-packages/userstorage/__main__.py", 
>> line 52, in create
>> b.create()
>>   File "/usr/local/lib/python3.6/site-packages/userstorage/file.py", line 
>> 47, in create
>> self._mount.create()
>>   File "/usr/local/lib/python3.6/site-packages/userstorage/mount.py", line 
>> 53, in create
>> self._mount_loop()
>>   File "/usr/local/lib/python3.6/site-packages/userstorage/mount.py", line 
>> 94, in _mount_loop
>> ["sudo", "mount", "-t", self.fstype, self._loop.path, self.path])
>>   File "/usr/lib64/python3.6/subprocess.py", line 311, in check_call
>> raise CalledProcessError(retcode, cmd)
>> subprocess.CalledProcessError: Command '['sudo', 'mount', '-t', 'xfs', 
>> '/var/tmp/imageio-storage/file-512-xfs-loop', 
>> '/var/tmp/imageio-storage/file-512-xfs-mount']' returned non-zero exit 
>> status 32.
>>
>> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/1593//artifact/check-patch.el8.ppc64le/mock_logs/script/stdout_stderr.log
>>
>>
>> Same code runs fine in Travis:
>> https://travis-ci.org/nirs/ovirt-imageio/jobs/595794863
>>
>>
>> And also locally on Fedora 29:
>> $ ../jenkins/mock_configs/mock_runner.sh -C ../jenkins/mock_configs -p
>> el8
>> ...
>> ## Wed Oct  9 23:37:22 IDT 2019 Finished env: el8:epel-8-x86_64
>> ##  took 85 seconds
>> ##  rc = 0
>>
>>
>> My guess is that we run el8 jobs on el7 hosts with old kernels
>> (Suggestion: Use Linux kernel >= 3.18 for improved stability of the
>> metadata and journal checksum features.)
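A cheap guard for suites like this (a sketch, not actual userstorage code) is to compare the running kernel release against a minimum before attempting the loop-mount setup, e.g. using GNU sort's version ordering:

```shell
# Return success iff the kernel release is >= the wanted version.
# $1 = minimum version, $2 = release string (defaults to uname -r).
kernel_at_least() {
    want=$1
    have=${2:-$(uname -r)}
    have=${have%%-*}   # "3.10.0-957.12.1.el7.x86_64" -> "3.10.0"
    # the minimum sorts first (or equal) exactly when have >= want
    [ "$(printf '%s\n%s\n' "$want" "$have" | sort -V | head -n1)" = "$want" ]
}

kernel_at_least 3.18 "3.10.0-957.12.1.el7.x86_64" ||
    echo "kernel too old: skip loop-mount tests"
```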
>>
>
> Here is info from failed builds:
>
> DEBUG buildroot.py:503: kernel version == 3.10.0-693.11.6.el7.ppc64le
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/1593/artifact/check-patch.el8.ppc64le/mock_logs/init/root.log
>
> DEBUG buildroot.py:503:  kernel version == 3.10.0-957.12.1.el7.x86_64
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/1593/artifact/check-patch.el8.x86_64/mock_logs/init/root.log
>
> and successful builds:
>
> DEBUG buildroot.py:503:  kernel version == 5.1.18-200.fc29.x86_64
> My laptop
>
> Runtime kernel version: 4.15.0-1032-gcp
> https://travis-ci.org/nirs/ovirt-imageio/jobs/595794863
>
>
> This issue will affect vdsm, which uses similar code to create storage for
>> testing.
>>
>> For vdsm the issue is more tricky, since it requires same host/distro:
>>
>>  49 - tests-py37:
>>  50 runtime-requirements:
>>  51   host-distro: same
>>
>> So I think we need:
>> - make slaves with newer kernel (fc29? fc30?) with the el8 mock env
>> - add el8 slaves for running el8 mock with "host-distro: same"
>>
>> If we don't have a solution we need to disable el8 tests, or continue
>> testing without
>> storage, which will skip about 200 tests.
>>
>> Nir
>>
>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/D3NQK5HZ52NLDBN6ACZZW5XR4P4O3F5P/


[ovirt-devel] Re: ovirt-node-ng-image_4.3_build-artifacts-fc28-x86_64 #39 stuck for 2 days

2018-12-30 Thread Yuval Turgeman
Looks like livemedia-creator installed the VM correctly, but failed to
build the final image file for some reason (disk issues?).  Stdci failed
the job on timeout, but probably can't kill the hanging process.  Is it
possible to take a look at the slave somehow?

On Sun, Dec 30, 2018, 19:57 Nir Soffer wrote:

> Started 2 days 11 hr ago
> Build has been executing for 2 days 11 hr on vm0038.workers-phx.ovirt.or
> 
>
>
> https://jenkins.ovirt.org/job/ovirt-node-ng-image_4.3_build-artifacts-fc28-x86_64/39/
>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FXGWNLV3ZLZSNC4MEXGBAMFP5GBAP7IY/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/46JUANKQHKPK7AQN5PD3MJGAESVBMFCC/


[ovirt-devel] Re: [VDSM] Fedora 28 support - updates

2018-11-28 Thread Yuval Turgeman
Adding Gal, I think he fixed some of those issues (around add host)

On Wed, Nov 28, 2018 at 5:05 PM Nir Soffer  wrote:

> If you want to add a host running Fedora 28, you need to do a few manual steps:
>
> 1. Add host fails to configure the firewall
>
> Do not check the "Configure firewall" checkbox.
>
> Adding host will fail because engine cannot communicate with vdsm.
>
> Fix - disable the firewall on the host.
> $ iptables -F
>
> There is probably a better way, but I did not find it yet :-)
>
> Sandro, do we have an update about this?
>
> 2. Engine fails in Host.getCapabilities
>
> Libvirt changed the location and format of the cpu_map.xml file. This will
> cause Host.getCapabilities to fail, and the host will become "Unassigned".
>
> This is a new issue revealed by updating our virt-preview repo this week.
>
> Fix - install old cpu_map.xml
> $ wget
> https://raw.githubusercontent.com/libvirt/libvirt/18cab54c3a0bc72390f29300684396690a7ecf51/src/cpu/cpu_map.xml
> -O /usr/share/libvirt/cpu_map.xml
>
> Engine should retry and succeed after that.
>
> 3. Sanlock fails to write to its pid file, so connecting to storage fails
>
> Fix - set selinux to permissive mode
> $ setenforce 0
>
> To make this persistent edit /etc/selinux/config
> SELINUX=permissive
>
> We have a bug for this.
>
> After that you should have a working system.
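Taken together, Nir's three workarounds amount to the following host-side fragment (run as root; the cpu_map.xml URL is the one quoted above, and all of this is a temporary workaround rather than a supported configuration):

```shell
# 1. Firewall blocks engine<->vdsm traffic; flush it for now.
iptables -F

# 2. Newer libvirt moved/split cpu_map.xml; restore the old one.
wget https://raw.githubusercontent.com/libvirt/libvirt/18cab54c3a0bc72390f29300684396690a7ecf51/src/cpu/cpu_map.xml \
    -O /usr/share/libvirt/cpu_map.xml

# 3. SELinux denies sanlock's pid file; go permissive (persist via
#    SELINUX=permissive in /etc/selinux/config).
setenforce 0
```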
>
> Nir
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RTKM4Y2OSPZO4CEWZVRBUALNGMGHIQ4L/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FIFOWMUL3XXPTTEDKOIT6VLO3GJ5R33H/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 15-11-2018 ] [ check_snapshot_with_memory ]

2018-11-18 Thread Yuval Turgeman
imgbased is installed in ovirt-node-ng only, so it doesn't have anything to
do with the basic suite.

On Sun, Nov 18, 2018 at 2:59 PM Eitan Raviv  wrote:

> this has happened in the not so distant past:
> [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (imgbased) ] [ 09-08-2018
> ] [ 004_basic_sanity.check_snapshot_with_memory ]
>
> On Fri, Nov 16, 2018 at 3:54 PM Dafna Ron  wrote:
>
>> Other projects are passing, so it cannot be OST.
>> We have an ovirt-engine 4.2 change that passed, though:
>> https://gerrit.ovirt.org/#/c/95447/
>>
>> logging a ticket to look into these failures.
>>
>> On Fri, Nov 16, 2018 at 1:30 PM Dafna Ron  wrote:
>>
>>> I am seeing a second project to fail on this:
>>>
>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/3495/
>>>
>>> I am not seeing any changes in OST merged in the last 2 days that could
>>> have caused this and I am also not seeing any infrastructure commonality
>>> (not running on same hosts).
>>>
>>> However, something has changed to cause the iscsi domain to be filled
>>> and fail on run vm.
>>>
>>> Milan, can you please double check that this change is not causing any
>>> leftovers in the iscsi domain?
>>> https://gerrit.ovirt.org/#/c/95333/
>>>
>>> Thanks,
>>> Dafna
>>>
>>>
>>>
>>> On Fri, Nov 16, 2018 at 11:32 AM Greg Sheremeta 
>>> wrote:
>>>
 It's definitely not that patch - the patch is UI / GWT only and isn't
 tested outside of the selenium tests in 008. Also, this is a backport and
 we didn't see any issues on master.

 Greg

 On Fri, Nov 16, 2018, 6:24 AM Dafna Ron wrote:
> Hi,
> we have a failure on basic suite for test check_snapshot_with.
> _memory.
>
> I am actually not seeing any reason the patch would cause that
> specific issue, but it's consistently failing on this change.
> I am seeing a VM's memory saved to the iscsi domain, but aside from that
> nothing is failing beforehand to cause a cleanup issue or create low space
> on the storage.
> can someone please take a look to see if anything in the patch is
> causing this?
>
>
> Link and headline of suspected patches:
>
> https://gerrit.ovirt.org/#/c/95436/1 -
> webadmin: network operation in progress - sync host interfaces
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3487/
>
> Link to all logs:
>
>
> https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3487/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-4.2/post-004_basic_sanity.py/
>
> (Relevant) error snippet from the log:
>
> 
>
> 2018-11-15 08:19:33,409-05 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-5) [dce28fd0-bebc-4aba-80d5-ffc081b1f591] method:
> runVdsCommand, params: [IsVmDuringInitiating
> ,
> IsVmDuringInitiatingVDSCommandParameters:{vmId='9144eb88-8f0a-4e77-9d70-3b761e48ecb4'}],
> timeElapsed: 1ms
> 2018-11-15 08:19:33,430-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-5) [dce28fd0-bebc-4aba-80d5-ffc081b1f591] EVENT_ID:
> USER_FAILED_RUN_VM(54), Failed to run VM
> vm0 due to a failed validation: [Cannot run VM. Low disk space on
> Storage Domain iscsi.] (User: admin@internal-authz).
> 2018-11-15 08:19:33,430-05 WARN
> [org.ovirt.engine.core.bll.RunVmCommand] (default task-5)
> [dce28fd0-bebc-4aba-80d5-ffc081b1f591] Validation of action 'RunVm' failed
> for user admin@internal-authz. Reasons: VAR__
> ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_DISK_SPACE_LOW_ON_STORAGE_DOMAIN,$storageName
> iscsi
> 2018-11-15 08:19:33,431-05 INFO
> [org.ovirt.engine.core.bll.RunVmCommand] (default task-5)
> [dce28fd0-bebc-4aba-80d5-ffc081b1f591] Lock freed to object
> 'EngineLock:{exclusiveLocks='[9144eb88-8f0a-4e77-9d70-3b761e
> 48ecb4=VM]', sharedLocks=''}'
> 2018-11-15 08:19:33,438-05 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-5) [dce28fd0-bebc-4aba-80d5-ffc081b1f591] method: runAction,
> params: [RunVm, RunVmParams:{comm
> andId='69628467-56d3-4d93-aed1-a2807b7dbc45', user='null',
> commandType='Unknown', vmId='9144eb88-8f0a-4e77-9d70-3b761e48ecb4'}],
> timeElapsed: 115ms
> 2018-11-15 08:19:33,441-05 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
> task-5) [] Operation Failed: [Cannot run VM. Low disk space on Storage
> Domain iscsi.]
> 2018-11-15 08:19:34,044-05 DEBUG
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-25) [] START,
> GetAllVmStatsVDSCommand(HostName = lago-basic-su
> ite-4-2-host-0,
> VdsIdVDSCommandParametersBase:{hostId='063dfef9-bc9d-44c3-8ba3-8142e3eb129c'}),
> log id: 65651e2b
> 2018-11-15 08:19

[ovirt-devel] Re: Adding Fedora 28 host fail with "NotImplementedError: Packager install not implemented"

2018-10-03 Thread Yuval Turgeman
Very nice, but will we be able to add el7 hosts to an fc28 engine with this
patch?

On Wed, Oct 3, 2018 at 11:16 AM Martin Perina  wrote:

>
>
> On Wed, Oct 3, 2018 at 8:38 AM Yedidyah Bar David  wrote:
>
>> (Adding Gal)
>>
>> On Tue, Oct 2, 2018 at 10:12 PM Yuval Turgeman 
>> wrote:
>> >
>> > Right, there is no python2 firewalld module on fedora, and since
>> ansible renders the role on a python2 machine, the resulting script will
>> not be able to import firewalld.  I think there are bugs on both issues.
>> >
>> > On Tue, Oct 2, 2018, 21:32 Nir Soffer  wrote:
>> >>
>> >> On Tue, Oct 2, 2018 at 8:48 PM Nir Soffer  wrote:
>> >>>
>> >>> On Tue, Oct 2, 2018 at 8:46 PM Yuval Turgeman 
>> wrote:
>> >>>>
>> >>>> IIRC you need python-dnf for this (2 or 3 depending on the python
>> version otopi uses)
>> >>
>> >>
>> >> Installing python-dnf fixes this issue.
>> >> Do we have a bug for this issue?
>>
>> I do not think so.
>>
>> >>
>> >> However, install fails later in ovirt-host-deploy-ansible:
>> >>
>> >> 2018-10-02 21:18:20,611 p=2294 u=nsoffer |  TASK
>> [ovirt-host-deploy-firewalld : Enable SSH port] ***
>> >> 2018-10-02 21:18:21,063 p=2294 u=nsoffer |  fatal: [
>> voodoo1.tlv.redhat.com]: FAILED! => {
>> >> "changed": false
>> >> }
>> >>
>> >> MSG:
>> >>
>> >> Python Module not found: firewalld and its python module are required
>> for this module, version 0.2.11 or newer required
>> (0.3.9 or newer for offline operations)
>> >>
>> >> 2018-10-02 21:18:21,065 p=2294 u=nsoffer |  PLAY RECAP
>> *
>> >> 2018-10-02 21:18:21,065 p=2294 u=nsoffer |  voodoo1.tlv.redhat.com
>>  : ok=8changed=2unreachable=0failed=1
>> >>
>> >> I have:
>> >>
>> >> $ rpm -qa | grep firewall
>> >> firewalld-0.5.5-1.fc28.noarch
>> >> python3-firewall-0.5.5-1.fc28.noarch
>> >> firewalld-filesystem-0.5.5-1.fc28.noarch
>> >>
>> >> Do we have a bug for this?
>>
>
> We don't, there is only a Jira issue somewhere. I've posted preliminary
> patches to fix issues around add host and Ansible for FC28, but didn't have
> time to verify them yet:
>
> https://gerrit.ovirt.org/93793
>
>
>> Not sure.
>>
>> >>
>> >>>
>> >>> Maybe, but this should be installed by the system when a user add a
>> host.
>> >>>
>> >>>>
>> >>>>
>> >>>> On Tue, Oct 2, 2018, 20:43 Nir Soffer  wrote:
>> >>>>>
>> >>>>> Trying to add Fedora 28 host to engine master, host installation
>> fails immediately with:
>> >>>>>
>> >>>>> 2018-10-02 20:34:45,440+0300 DEBUG
>> otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND
>>  **%EventStart STAGE internal_packages METHOD
>> otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._packages (None)
>> >>>>> 2018-10-02 20:34:45,443+0300 DEBUG otopi.context
>> context._executeMethod:143 method exception
>> >>>>> Traceback (most recent call last):
>> >>>>>   File "/tmp/ovirt-qh2D5sbRmG/pythonlib/otopi/context.py", line
>> 133, in _executeMethod
>> >>>>> method['method']()
>> >>>>>   File
>> "/tmp/ovirt-qh2D5sbRmG/otopi-plugins/ovirt-host-deploy/vdsm/vdsmid.py",
>> line 84, in _packages
>> >>>>> self.packager.install(('dmidecode',))
>> >>>>>   File "/tmp/ovirt-qh2D5sbRmG/pythonlib/otopi/packager.py", line
>> 102, in install
>> >>>>> raise NotImplementedError(_('Packager install not implemented'))
>> >>>>> NotImplementedError: Packager install not implemented
>> >>>>> 2018-10-02 20:34:45,444+0300 ERROR otopi.context
>> context._executeMethod:152 Failed to execute stage 'Environment packages
>> setup': Packager install not implemented
>> >>>>>
>> >>>>> engine version:
>> >>>>> 4.3.0_master (efabde5a36cbf85b43a60ead37e42943c35c2741)
>> >>

[ovirt-devel] Re: Adding Fedora 28 host fail with "NotImplementedError: Packager install not implemented"

2018-10-02 Thread Yuval Turgeman
Right, there is no python2 firewalld module on fedora, and since ansible
renders the role on a python2 machine, the resulting script will not be
able to import firewalld.  I think there are bugs on both issues.
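
The interpreter mismatch described here is easy to probe: ask each interpreter to import the module in a subprocess. A minimal illustrative sketch (not part of Ansible or otopi):

```python
import subprocess
import sys

def can_import(python, module):
    """Return True if the given interpreter can import the named module."""
    try:
        return subprocess.call(
            [python, "-c", "import " + module],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ) == 0
    except OSError:
        # The interpreter binary itself is missing.
        return False

# On Fedora 28 only python3-firewall is packaged, so the firewalld bindings
# import under python3 but not under python2 -- which is why a role rendered
# for python2 cannot import firewalld.
print(can_import(sys.executable, "firewall"))
```

Running the check against "python2" and "python3" on an fc28 host should show exactly the asymmetry described above.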

On Tue, Oct 2, 2018, 21:32 Nir Soffer  wrote:

> On Tue, Oct 2, 2018 at 8:48 PM Nir Soffer  wrote:
>
>> On Tue, Oct 2, 2018 at 8:46 PM Yuval Turgeman 
>> wrote:
>>
>>> IIRC you need python-dnf for this (2 or 3 depending on the python
>>> version otopi uses)
>>>
>>
> Installing python-dnf fixes this issue.
> Do we have a bug for this issue?
>
> However, install fails later in ovirt-host-deploy-ansible:
>
> 2018-10-02 21:18:20,611 p=2294 u=nsoffer |  TASK
> [ovirt-host-deploy-firewalld : Enable SSH port] ***
> 2018-10-02 21:18:21,063 p=2294 u=nsoffer |  fatal: [voodoo1.tlv.redhat.com]:
> FAILED! => {
> "changed": false
> }
>
> MSG:
>
> Python Module not found: firewalld and its python module are required for
> this module, version 0.2.11 or newer required
> (0.3.9 or newer for offline operations)
>
> 2018-10-02 21:18:21,065 p=2294 u=nsoffer |  PLAY RECAP
> *
> 2018-10-02 21:18:21,065 p=2294 u=nsoffer |  voodoo1.tlv.redhat.com :
> ok=8changed=2unreachable=0failed=1
>
> I have:
>
> $ rpm -qa | grep firewall
> firewalld-0.5.5-1.fc28.noarch
> python3-firewall-0.5.5-1.fc28.noarch
> firewalld-filesystem-0.5.5-1.fc28.noarch
>
> Do we have a bug for this?
>
>
>> Maybe, but this should be installed by the system when a user add a host.
>>
>>
>>>
>>> On Tue, Oct 2, 2018, 20:43 Nir Soffer  wrote:
>>>
>>>> Trying to add Fedora 28 host to engine master, host installation fails
>>>> immediately with:
>>>>
>>>> 2018-10-02 20:34:45,440+0300 DEBUG otopi.plugins.otopi.dialog.machine
>>>> dialog.__logString:204 DIALOG:SEND   **%EventStart STAGE
>>>> internal_packages METHOD otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._packages (None)
>>>> 2018-10-02 20:34:45,443+0300 DEBUG otopi.context
>>>> context._executeMethod:143 method exception
>>>> Traceback (most recent call last):
>>>>   File "/tmp/ovirt-qh2D5sbRmG/pythonlib/otopi/context.py", line 133, in
>>>> _executeMethod
>>>> method['method']()
>>>>   File
>>>> "/tmp/ovirt-qh2D5sbRmG/otopi-plugins/ovirt-host-deploy/vdsm/vdsmid.py",
>>>> line 84, in _packages
>>>> self.packager.install(('dmidecode',))
>>>>   File "/tmp/ovirt-qh2D5sbRmG/pythonlib/otopi/packager.py", line 102,
>>>> in install
>>>> raise NotImplementedError(_('Packager install not implemented'))
>>>> NotImplementedError: Packager install not implemented
>>>> 2018-10-02 20:34:45,444+0300 ERROR otopi.context
>>>> context._executeMethod:152 Failed to execute stage 'Environment packages
>>>> setup': Packager install not implemented
>>>>
>>>> engine version:
>>>> 4.3.0_master (efabde5a36cbf85b43a60ead37e42943c35c2741)
>>>>
>>>> Is this a known issue? any workaround?
>>>>
>>>> Nir
>>>>
>>>> ___
>>>> Devel mailing list -- devel@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PFTNHRIYORWSADDE7ZVSNMD3IX5P6VT3/
>>>>
>>>


[ovirt-devel] Re: Adding Fedora 28 host fail with "NotImplementedError: Packager install not implemented"

2018-10-02 Thread Yuval Turgeman
IIRC you need python-dnf for this (2 or 3 depending on the python version
otopi uses)

On Tue, Oct 2, 2018, 20:43 Nir Soffer  wrote:

> Trying to add Fedora 28 host to engine master, host installation fails
> immediately with:
>
> 2018-10-02 20:34:45,440+0300 DEBUG otopi.plugins.otopi.dialog.machine
> dialog.__logString:204 DIALOG:SEND   **%EventStart STAGE
> internal_packages METHOD otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._packages (None)
> 2018-10-02 20:34:45,443+0300 DEBUG otopi.context
> context._executeMethod:143 method exception
> Traceback (most recent call last):
>   File "/tmp/ovirt-qh2D5sbRmG/pythonlib/otopi/context.py", line 133, in
> _executeMethod
> method['method']()
>   File
> "/tmp/ovirt-qh2D5sbRmG/otopi-plugins/ovirt-host-deploy/vdsm/vdsmid.py",
> line 84, in _packages
> self.packager.install(('dmidecode',))
>   File "/tmp/ovirt-qh2D5sbRmG/pythonlib/otopi/packager.py", line 102, in
> install
> raise NotImplementedError(_('Packager install not implemented'))
> NotImplementedError: Packager install not implemented
> 2018-10-02 20:34:45,444+0300 ERROR otopi.context
> context._executeMethod:152 Failed to execute stage 'Environment packages
> setup': Packager install not implemented
>
> engine version:
> 4.3.0_master (efabde5a36cbf85b43a60ead37e42943c35c2741)
>
> Is this a known issue? any workaround?
>
> Nir
>
>


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 14-09-2018 ] [ 002_bootstrap.add_hosts ]

2018-09-19 Thread Yuval Turgeman
That would work too... the end result would be the same; tar pads the last
record with zeros as well.

On Thu, Sep 20, 2018 at 9:51 AM Yedidyah Bar David  wrote:

> On Mon, Sep 17, 2018 at 6:35 PM Yuval Turgeman 
> wrote:
>
>> Ok, regarding the tar issue, there's another solution - since
>> commons-compress hard coded the blocking factor to 1, while the default in
>> tar is 20, we could continue creating the tar as we do today, but add a
>> bunch of zeros to the end of the tarball.
>>
>> [yturgema@piggie ~/aa]$ ls -l ovirt-host-deploy_fc28.tar
>> -rw-r--r--. 1 yturgema yturgema 2166272 Sep  2 17:03
>> ovirt-host-deploy_fc28.tar
>> [yturgema@piggie ~/aa]$ python -c "print 10240-2166272%10240"
>> 4608
>> [yturgema@piggie ~/aa]$ dd if=/dev/zero of=/dev/stdout bs=4608 count=1 |
>> cat ovirt-host-deploy_fc28.tar - > new_host_deploy.tar
>> 1+0 records in
>> 1+0 records out
>> 4608 bytes (4.6 kB, 4.5 KiB) copied, 0.000373488 s, 12.3 MB/s
>> [yturgema@piggie ~/aa]$ ls -l new_host_deploy.tar
>> -rw-rw-r--. 1 yturgema yturgema 2170880 Sep 17 18:16 new_host_deploy.tar
>>
>> This would solve the problem, and not break el7, if this solution is
>> acceptable, I can send a patch.
>>
>
> This will probably work, but if you ask me, is too ugly. If we want to go
> this path, we probably
> have to find a reliable way to find out the blocking factor, or we risk
> another failure in the
> future.
>
> What about simply giving up on all of this and calling the external 'tar'
> utility also for
> creating the archive, instead of using Java?
>
>
>>
>>
>>
>> On Mon, Sep 17, 2018 at 6:02 PM, Martin Perina 
>> wrote:
>>
>>>
>>>
>>> On Mon, 17 Sep 2018, 16:25 Ravi Shankar Nori,  wrote:
>>>
>>>> host-deploy is still broken on master fc28
>>>>
>>>
>>> Yes, there are multiple issues on FC28, but the question is if this
>>> fixed OST on CentOS?
>>>
>>>
>>>> On Mon, Sep 17, 2018 at 8:01 AM, Yuval Turgeman 
>>>> wrote:
>>>>
>>>>> I'm pretty sure I verified this on el7 as well, i'll check again, but
>>>>> thinking about it, tar will stop when it gets to the first empty block, so
>>>>> if the record size on the engine's side is large and the end is filled 
>>>>> with
>>>>> zeros, -b1 will make it stop at the first empty block so the next read on
>>>>> the host's side would get the trailing zeros which is what otopi reads.
>>>>> Btw, it could be a problem with deployed el7 systems as well, if for
>>>>> any reason the default on the host is set to something that is more than 
>>>>> 20
>>>>> blocks (can be set with export TAR_BLOCKING_FACTOR for the root account on
>>>>> the host side).
>>>>> It's ok to revert the patch to fix the regression, but I don't see any
>>>>> other way other than -b1... perhaps add a `cat -` after to just read until
>>>>> EOF or something, or have otopi strip the input.
>>>>>
>>>>> On Mon, Sep 17, 2018 at 2:30 PM, Galit Rosenthal 
>>>>> wrote:
>>>>>
>>>>>> Didi,
>>>>>>
>>>>>> Is this what you are looking for
>>>>>> https://ovirt-jira.atlassian.net/browse/OVIRT-2259
>>>>>> ?
>>>>>> Galit
>>>>>>
>>>>>> On Mon, Sep 17, 2018 at 1:54 PM Dafna Ron  wrote:
>>>>>>
>>>>>>> I think that in ovirt-engine we currently only build to centos.
>>>>>>> since we have not had an engine build for 2 weeks (on master) I
>>>>>>> think we should merge and worry about fc28 once it would be relevant.
>>>>>>>
>>>>>>> the failure we have now could be another regression missed since the
>>>>>>> project has been broken for two weeks.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Dafna
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Sep 17, 2018 at 10:30 AM Yedidyah Bar David 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Mon, Sep 17, 2018 at 11:49 AM Dafna Ron  wrote:
>>>>>>>> >
> >>>>>>>> > Didi, Martin, any update on the patch?
>>>>>>>>
> >>

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 14-09-2018 ] [ 002_bootstrap.add_hosts ]

2018-09-17 Thread Yuval Turgeman
Ok, regarding the tar issue, there's another solution - since
commons-compress hard-codes the blocking factor to 1, while the default in
tar is 20, we could continue creating the tar as we do today, but add a
bunch of zeros to the end of the tarball.

[yturgema@piggie ~/aa]$ ls -l ovirt-host-deploy_fc28.tar
-rw-r--r--. 1 yturgema yturgema 2166272 Sep  2 17:03
ovirt-host-deploy_fc28.tar
[yturgema@piggie ~/aa]$ python -c "print 10240-2166272%10240"
4608
[yturgema@piggie ~/aa]$ dd if=/dev/zero of=/dev/stdout bs=4608 count=1 |
cat ovirt-host-deploy_fc28.tar - > new_host_deploy.tar
1+0 records in
1+0 records out
4608 bytes (4.6 kB, 4.5 KiB) copied, 0.000373488 s, 12.3 MB/s
[yturgema@piggie ~/aa]$ ls -l new_host_deploy.tar
-rw-rw-r--. 1 yturgema yturgema 2170880 Sep 17 18:16 new_host_deploy.tar

This would solve the problem, and not break el7, if this solution is
acceptable, I can send a patch.
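
The padding arithmetic in the shell session above (10240 - size % 10240) can be sketched in Python; an illustrative sketch, not the actual proposed patch:

```python
RECORD_SIZE = 20 * 512  # GNU tar's default blocking factor is 20 -> 10240-byte records

def pad_to_record(data, record_size=RECORD_SIZE):
    """Append NUL bytes so the stream ends on a whole-record boundary,
    like the dd | cat trick above."""
    remainder = len(data) % record_size
    if remainder:
        data += b"\x00" * (record_size - remainder)
    return data

# The sizes from the example: a 2166272-byte tar needs 4608 padding bytes,
# giving 2170880 bytes -- an exact multiple of 10240.
padded = pad_to_record(b"\x00" * 2166272)
print(len(padded))  # 2170880
```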



On Mon, Sep 17, 2018 at 6:02 PM, Martin Perina  wrote:

>
>
> On Mon, 17 Sep 2018, 16:25 Ravi Shankar Nori,  wrote:
>
>> host-deploy is still broken on master fc28
>>
>
> Yes, there are multiple issues on FC28, but the question is if this fixed
> OST on CentOS?
>
>
>> On Mon, Sep 17, 2018 at 8:01 AM, Yuval Turgeman 
>> wrote:
>>
>>> I'm pretty sure I verified this on el7 as well, i'll check again, but
>>> thinking about it, tar will stop when it gets to the first empty block, so
>>> if the record size on the engine's side is large and the end is filled with
>>> zeros, -b1 will make it stop at the first empty block so the next read on
>>> the host's side would get the trailing zeros which is what otopi reads.
>>> Btw, it could be a problem with deployed el7 systems as well, if for any
>>> reason the default on the host is set to something that is more than 20
>>> blocks (can be set with export TAR_BLOCKING_FACTOR for the root account on
>>> the host side).
>>> It's ok to revert the patch to fix the regression, but I don't see any
>>> other way other than -b1... perhaps add a `cat -` after to just read until
>>> EOF or something, or have otopi strip the input.
>>>
>>> On Mon, Sep 17, 2018 at 2:30 PM, Galit Rosenthal 
>>> wrote:
>>>
>>>> Didi,
>>>>
>>>> Is this what you are looking for
>>>> https://ovirt-jira.atlassian.net/browse/OVIRT-2259
>>>> ?
>>>> Galit
>>>>
>>>> On Mon, Sep 17, 2018 at 1:54 PM Dafna Ron  wrote:
>>>>
>>>>> I think that in ovirt-engine we currently only build to centos.
>>>>> since we have not had an engine build for 2 weeks (on master) I think
>>>>> we should merge and worry about fc28 once it would be relevant.
>>>>>
>>>>> the failure we have now could be another regression missed since the
>>>>> project has been broken for two weeks.
>>>>>
>>>>> Thanks,
>>>>> Dafna
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Sep 17, 2018 at 10:30 AM Yedidyah Bar David 
>>>>> wrote:
>>>>>
>>>>>> On Mon, Sep 17, 2018 at 11:49 AM Dafna Ron  wrote:
>>>>>> >
>>>>>> > Didi, Martin, any update on the patch?
>>>>>>
>>>>>> Yes - it passed. Actually failed, but only after host-deploy:
>>>>>>
>>>>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/
>>>>>> job/ovirt-system-tests_manual/3189/
>>>>>>
>>>>>> I'd rather not merge it as-is, because it will break fedora.
>>>>>>
>>>>>> If someone can have a look at the code generating the tar file, and
>>>>>> can see if
>>>>>> it's easy to make it work well for both centos and fedora, perhaps by
>>>>>> explicitly
>>>>>> setting all relevant params to some reasonable values, great.
>>>>>> Otherwise, I guess
>>>>>> we can merge for now, as fedora is still not supported anyway.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> >
>>>>>> >
>>>>>> > On Sun, Sep 16, 2018 at 11:09 AM Yedidyah Bar David <
>>>>>> d...@redhat.com> wrote:
>>>>>> >>
>>>>>> >> On Sun, Sep 16, 2018 at 12:53 PM Yedidyah Bar David <
>>>>>> d...@redhat.com> wrote:
>>>>>> >> >
>>>

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 14-09-2018 ] [ 002_bootstrap.add_hosts ]

2018-09-17 Thread Yuval Turgeman
I'm pretty sure I verified this on el7 as well; I'll check again. But
thinking about it, tar stops when it reaches the first empty block, so if
the record size on the engine's side is large and the end is filled with
zeros, -b1 will make it stop at the first empty block, and the next read on
the host's side will get the trailing zeros, which is what otopi reads.
Btw, it could be a problem on deployed el7 systems as well, if for any
reason the default on the host is set to something larger than 20 blocks
(it can be set with export TAR_BLOCKING_FACTOR for the root account on the
host side).
It's ok to revert the patch to fix the regression, but I don't see any way
other than -b1... perhaps add a `cat -` after it to just read until EOF, or
have otopi strip the input.
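
The "have otopi strip the input" option could be as small as trimming NUL padding from each chunk before parsing it, matching the null-byte-prefixed CONFIRM reply quoted further down this thread. An illustrative sketch, not otopi's actual dialog code:

```python
def strip_padding(chunk):
    """Drop leading/trailing NUL bytes left over from tar record padding."""
    return chunk.strip(b"\x00")

# The reply observed in the host-deploy logs: thousands of NUL bytes of
# record padding followed by the real machine-dialog response.
raw = b"\x00" * 7169 + b"CONFIRM DEPLOY_PROCEED=yes"
print(strip_padding(raw))  # b'CONFIRM DEPLOY_PROCEED=yes'
```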

On Mon, Sep 17, 2018 at 2:30 PM, Galit Rosenthal 
wrote:

> Didi,
>
> Is this what you are looking for
> https://ovirt-jira.atlassian.net/browse/OVIRT-2259
> ?
> Galit
>
> On Mon, Sep 17, 2018 at 1:54 PM Dafna Ron  wrote:
>
>> I think that in ovirt-engine we currently only build to centos.
>> since we have not had an engine build for 2 weeks (on master) I think we
>> should merge and worry about fc28 once it would be relevant.
>>
>> the failure we have now could be another regression missed since the
>> project has been broken for two weeks.
>>
>> Thanks,
>> Dafna
>>
>>
>>
>> On Mon, Sep 17, 2018 at 10:30 AM Yedidyah Bar David 
>> wrote:
>>
>>> On Mon, Sep 17, 2018 at 11:49 AM Dafna Ron  wrote:
>>> >
>>> > Didi, Martin, any update on the patch?
>>>
>>> Yes - it passed. Actually failed, but only after host-deploy:
>>>
>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/
>>> job/ovirt-system-tests_manual/3189/
>>>
>>> I'd rather not merge it as-is, because it will break fedora.
>>>
>>> If someone can have a look at the code generating the tar file, and can
>>> see if
>>> it's easy to make it work well for both centos and fedora, perhaps by
>>> explicitly
>>> setting all relevant params to some reasonable values, great. Otherwise,
>>> I guess
>>> we can merge for now, as fedora is still not supported anyway.
>>>
>>> Thanks,
>>>
>>> >
>>> >
>>> > On Sun, Sep 16, 2018 at 11:09 AM Yedidyah Bar David 
>>> wrote:
>>> >>
>>> >> On Sun, Sep 16, 2018 at 12:53 PM Yedidyah Bar David 
>>> wrote:
>>> >> >
>>> >> > On Fri, Sep 14, 2018 at 6:06 PM Martin Perina 
>>> wrote:
>>> >> > >
>>> >> > >
>>> >> > >
>>> >> > > On Fri, Sep 14, 2018 at 4:51 PM, Ravi Shankar Nori <
>>> rn...@redhat.com> wrote:
>>> >> > >>
>>> >> > >> I see the same errors on my dev env. From the logs attached by
>>> Andrej the response received by otopi has a bunch of null chars before the
>>> actual response CONFIRM DEPLOY_PROCEED=yes
>>> >> > >>
>>> >> > >>
>>> >> > >>
>>> >> > >> 2018-09-14 15:49:23,018+0200 DEBUG 
>>> >> > >> otopi.plugins.otopi.dialog.machine
>>> dialog.__logString:204 DIALOG:SEND   ### Response is CONFIRM
>>> DEPLOY_PROCEED=yes|no or ABORT DEPLOY_PROCEED
>>> >> > >>
>>> >> > >> ^@^@^@^@^@^@^@^@^@CONFIRM DEPLOY_PROCEED=yes
>>> >> > >
>>> >> > >
>>> >> > > Didi/Sandro, could you please take a look? Below error seems like
>>> some issue in otopi, where an error is raised when handling binary input:
>>> >> >
>>> >> > Not sure the issue is "binary input" in general, but simply illegal
>>> >> > input. The prompt expects, as it says, one of these 3 replies:
>>> >> >
>>> >> > CONFIRM DEPLOY_PROCEED=yes
>>> >> > CONFIRM DEPLOY_PROCEED=no
>>> >> > ABORT DEPLOY_PROCEED
>>> >> >
>>> >> > Instead, judging from the file supplied by Andrej, it gets from the
>>> engine:
>>> >> > <7169 null bytes>CONFIRM DEPLOY_PROCEED=yes
>>> >> >
>>> >> > So either the engine now sends, for some reason, 7169 null bytes, in
>>> >> > this response, or there is some low-level change causing this to be
>>> >> > eventually supplied to otopi - a change in apache-sshd, openssh,
>>> some
>>> >> > library, the kernel, no idea.
>>> >> >
>>> >> > Well, thinking a bit, I have a wild guess: Perhaps it's related to
>>> the
>>> >> > patch introduced recently to change the tar blocking?
>>> >>
>>> >> https://gerrit.ovirt.org/94357
>>> >>
>>> >> I am leaving soon, perhaps someone can try the manual job with the
>>> >> result of the check-patch job for above patch, to see if it fixes.
>>> >> Otherwise I'll do this tomorrow.
>>> >>
>>> >> >
>>> >> > >
>>> >> > >
>>> >> > > 2018-09-14 15:49:23,032+0200 DEBUG otopi.context
>>> context._executeMethod:143 method exception
>>> >> > > Traceback (most recent call last):
>>> >> > >   File "/usr/lib/python2.7/site-packages/otopi/context.py", line
>>> 133, in _executeMethod
>>> >> > > method['method']()
>>> >> > >   File 
>>> >> > > "/tmp/ovirt-O6CfS4aUHI/otopi-plugins/ovirt-host-deploy/core/misc.py",
>>> line 87, in _confirm
>>> >> > > prompt=True,
>>> >> > >   File "/tmp/ovirt-O6CfS4aUHI/otopi-plugins/otopi/dialog/machine.py",
>>> line 478, in confirm
>>> >> > > code=opcode,
>>> >> > >
>>> >> > >
>>> >> > >>
>>> >> > >> On Fri, Sep 14

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt 4.2 (imgbased) ] [ 09-08-2018 ] [ 004_basic_sanity.check_snapshot_with_memory ]

2018-08-09 Thread Yuval Turgeman
Imgbased runs on ovirt-node-ng, which is not part of the basic-suite
afaik...

On Thu, Aug 9, 2018, 09:25 Dafna Ron  wrote:

> Hi,
>
> We have a failure in 4.2 which I think may be related to the patch itself.
> Jira opened with additional info:
> https://ovirt-jira.atlassian.net/browse/OVIRT-2418
>
> Link and headline of suspected patches:
> We failed patch https://gerrit.ovirt.org/#/c/93545/ - core: remove lvs
> after a failed upgrade on ovirt-4.2
>
> Link to Job:
> https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2819/
>
> Link to all logs:
> https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2819/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-4.2/post-004_basic_sanity.py/
>
> (Relevant) error snippet from the log:
> Error Message
>
> Fault reason is "Operation Failed". Fault detail is "[Cannot run VM. Low disk 
> space on Storage Domain iscsi.]". HTTP response code is 409.
>
> Stacktrace
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in 
> wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in 
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in 
> wrapper
> prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
>   File 
> "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/004_basic_sanity.py",
>  line 589, in check_snapshot_with_memory
> vm_service.start()
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 
> 30074, in start
> return self._internal_action(action, 'start', None, headers, query, wait)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 299, 
> in _internal_action
> return future.wait() if wait else future
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in 
> wait
> return self._code(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 296, 
> in callback
> self._check_fault(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 134, 
> in _check_fault
> self._raise_error(response, body.fault)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, 
> in _raise_error
> raise error
> Error: Fault reason is "Operation Failed". Fault detail is "[Cannot run VM. 
> Low disk space on Storage Domain iscsi.]". HTTP response code is 409.
>
>
>


[ovirt-devel] Re: storage domain deactivating

2018-07-26 Thread Yuval Turgeman
On Thu, Jul 26, 2018, 21:30 Hetz Ben Hamo  wrote:

> On Thu, Jul 26, 2018 at 9:22 PM, Nir Soffer  wrote:
>
>> On Thu, Jul 26, 2018 at 9:11 PM Hetz Ben Hamo  wrote:
>>
>>> Hi,
>>>
>>
>> Hi Hetz,
>>
>> First I want to thank you for your great oVirt videos:
>> https://www.youtube.com/channel/UCWrtPbXo4iVxO45aRABIJjg
>> Warning: Hebrew content :-)
>>
>>
> You're welcome. Didn't know I was *that* famous at RH IL. Give my regards
> to Shai Revivo, Dudi and Yuval Turgeman ;)
>


Hi Hetz, nice to see you here! +1 for the really great videos, highly
recommended for all Hebrew-speaking oVirt users :)



>
>> I hope you plan another video showing how to upload ISO image
>> directly into a FC or iSCSI domain...
>>
>
> iSCSI - I'll probably make one. FC - no hardware here at home ;)
>
>
>> One of the weird thing that happens is that when the machine boots and it
>>> starts the HE, it mounts the storage domains and everything works. However,
>>> after few moments, 3 of my 4 storage domains (ISO, export, and another
>>> storage domain, but not the hosted_engine storage domain) is being
>>> automatically deactivted, with the following errors:
>>>
>>> VDSM command GetFileStatsVDS failed: Storage domain does not exist:
>>> (u'f241db01-2282-4204-8fe0-e27e36b3a909',)
>>> Refresh image list failed for domain(s): ISO (ISO file type). Please
>>> check domain activity.
>>> Storage Domain ISO (Data Center HetzLabs) was deactivated by system
>>> because it's not visible by any of the hosts.
>>> Storage Domain data-NAS3 (Data Center HetzLabs) was deactivated by
>>> system because it's not visible by any of the hosts.
>>> Storage Domain export (Data Center HetzLabs) was deactivated by system
>>> because it's not visible by any of the hosts.
>>>
>>> However, when I see those message and I'm manually re-activating those
>>> storage domains, all of them getting the status "UP" and there are no
>>> errors and I can see disks, images, etc...
>>>
>>
>> What happens if you wait 10 minutes? I guess the system will activate all
>> the domains
>> automatically.
>>
>
> I didn't wait 10 minutes. Too impatient ;)
>
>
>>
>>
>>> Should I open a bug in Bugzilla about this issue?
>>>
>>
>> Yes, this sounds like incorrect behavior, even if the storage domains are
>> activated
>> automatically later.
>>
>
> Under which product? (
> https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt)
>
>
>>
>> Nir
>>
>
>


[ovirt-devel] Re: oVirt master - Fedora 28

2018-07-23 Thread Yuval Turgeman
Adding an fc28 host to an fc28 engine is a bit of a problem - the ECDSA
issue can be worked around on the host side by commenting out the
/etc/ssh/ssh_host_ecdsa_key host key in sshd_config, but it fails later
on (still checking why).
As for a fc28 host on an el7 engine, it works nicely after applying this
patch [1] and adding the appropriate interpreter in
/etc/sysconfig/ovirt-engine

[1] https://gerrit.ovirt.org/#/c/93232/

On Mon, Jul 23, 2018 at 1:05 PM, Dan Kenigsberg  wrote:

>
>
> On Mon, Jul 23, 2018 at 10:39 AM, Sandro Bonazzola 
> wrote:
>
>> Just an update on this topic, we are now building both oVirt Engine
>> Appliance and oVirt Node based on Fedora 28.
>>
>> You can find them in jenkins:
>> oVirt Node: https://jenkins.ovirt.org/job/ovirt-node-ng_master_bui
>> ld-artifacts-fc28-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/
>> oVirt Engine Appliance: https://jenkins.ovirt.org/job/ovirt-appliance_mas
>> ter_build-artifacts-fc28-x86_64/lastSuccessfulBuild/artifac
>> t/exported-artifacts/
>>
>
> Thanks for the update (and the good work)
> But isn't adding the first onto the second still blocked due to EC ssh
> algorithm
> https://ovirt-jira.atlassian.net/browse/OVIRT-2259 ?
>
>
>>
>> 2018-06-08 16:10 GMT+02:00 Greg Sheremeta :
>>
>>> Thanks Sandro and team for the huge effort to get us back on fedora :)
>>> This is awesome news!
>>>
>>> Greg
>>>
>>> On Fri, Jun 8, 2018 at 10:00 AM, Sandro Bonazzola 
>>> wrote:
>>>
 Hi,
 you can now "dnf install ovirt-engine" on fedora 28 using
 https://resources.ovirt.org/repos/ovirt/tested/master/rpm/fc28/

 ovirt-master-snapshot is still syncing and will probably be aligned in
 1 hour or so.
 Mirrors should be aligned by tomorrow.
 Please note that no test has been done on the build.

 Host side we are currently blocked on a broken dependency on
 Bug 1588471  - nothing
 provides python-rhsm needed by pulp-rpm-handlers-2.15.2-1.fc28.noarch

 And we still miss vdsm build, pending https://gerrit.ovirt.org/91944
 review and merge for supporting fc28.

 A few notes on the engine build:
 - Dropped fluentd and related packages from the dependency tree:
 there's no commitment in supporting fluentd for metrics in 4.3 so we didn't
 invest time in packaging it for fedora.
 - Dropped limitation on novnc < 0.6. Fedora already has 0.6.1 in stable
 repository and we have Bug 1502652
  - [RFE] upgrade
 novnc which should be scoped for 4.3.0 or we should consider dropping novnc
 support.

 This should allow to start developing oVirt Engine appliance based on
 Fedora 28 and then OST suites.

 Thanks,

 --

 SANDRO BONAZZOLA

 ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

 Red Hat EMEA 

 sbona...@redhat.com
 



>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DMYRGAV5KX7SZCQVIK4BHVD3BICGSWSA/
>>
>>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HSDUHPLKUQ7T3T4SI2BUQTUDLTIM46FJ/
>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/57REHTVIZ6EWRGC4OFJEMUAZBAH7XSUA/


[ovirt-devel] Re: Error adding host to ovirt-engine in local development environment

2018-07-11 Thread Yuval Turgeman
It looks like you don't have python3 installed, so otopi falls back to
/bin/python, which is python2, but the python2 otopi module is not installed.
Try installing the python2-otopi rpm and see if it works for you.
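
As a quick host-side sanity check, something along these lines mirrors that fallback; the probe is illustrative, not otopi's actual interpreter-selection code:

```shell
# Rough sketch of the interpreter fallback: prefer python3 when present,
# otherwise fall back to /bin/python, then probe for the otopi module.
if command -v python3 >/dev/null 2>&1; then
    interp="$(command -v python3)"
else
    interp=/bin/python
fi
echo "otopi would run under: ${interp}"
if "${interp}" -c 'import otopi' 2>/dev/null; then
    echo "otopi module present for ${interp}"
else
    echo "otopi module missing for ${interp} (install the matching otopi rpm)"
fi
```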

Thanks,
Yuval.

On Wed, Jul 11, 2018 at 4:13 PM, Kaustav Majumder 
wrote:

> Hi,
>
> The 'pythonlib' directory is empty. Could it have something to do with that?
>
> On Wednesday 11 July 2018 06:30 PM, Kaustav Majumder wrote:
>
> Hi,
>
> The output of sh -x otopi --> https://pastebin.com/WeeCJYdk
>
> tar in the attachment
>
> On Wednesday 11 July 2018 06:14 PM, Yedidyah Bar David wrote:
>
> On Wed, Jul 11, 2018 at 10:21 AM, Kaustav Majumder 
> wrote:
>
>> Hi,
>>
>> Following are the versions
>>
>> otopi-1.8.0-0.0.master.20180704073752.git9eed7fe.fc28
>>
>> ovirt-host-deploy-common-1.8.0-0.0.master.20180624095611.git827d6d1.fc28.noarch
>> python2-ovirt-host-deploy-1.8.0-0.0.master.20180624095611.git827d6d1.fc28.noarch
>> I copied the tar to the host, and when I try to install it I get
>> the following error:
>>
>> [root@dhcp43-133 ~]# ./ovirt-host-deploy
>> ***L:ERROR: Python is required but missing
>>
>
> Can you please try running, from the same directory on the host, 'sh -x
> otopi' ?
>
> Also, can you please share the bundle tar file?
>
> Thanks,
>
>
>>
>>
>>
>>
>> On Wednesday 11 July 2018 11:04 AM, Yedidyah Bar David wrote:
>>
>> On Tue, Jul 10, 2018 at 1:24 PM, Kaustav Majumder 
>> wrote:
>>
>>> Hi,
>>>
>>> I am trying to setup ovirt engine dev environment in my local Fedora 28
>>> machine.
>>>
>>
>> Do you want to use fedora 28 specifically? Or just get a dev env working?
>> If latter, it's currently easier to use el7.
>>
>>
>>> I have followed this guide -> https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob_plain;f=README.adoc;hb=HEAD
>>>
>>> When I try to add a new host (CentOS 7), it fails with the
>>> following error.
>>>
>>>
>>> [35eb76d9] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Installing Host
>>> 10.70.43.157. Connected to host 10.70.43.157 with SSH key fingerprint:
>>> SHA256:rZfUGylVh3PLqfH2Siey0+CA9RUctK2ITQ2UGtV5ggA.
>>> 2018-07-10 15:47:50,447+05 INFO  
>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>> (EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Installation of
>>> 10.70.43.157. Executing command via SSH umask 0077;
>>> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap
>>> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
>>> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
>>> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
>>> DIALOG/customization=bool:True < /home/kaustavmajumder/work/ovirt-engine-builds/07-07/var/cache/ovirt-engine/ovirt-host-deploy.tar
>>> 2018-07-10 15:47:50,447+05 INFO  
>>> [org.ovirt.engine.core.utils.archivers.tar.CachedTar]
>>> (EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Tarball
>>> '/home/kaustavmajumder/work/ovirt-engine-builds/07-07/var/cache/ovirt-engine/ovirt-host-deploy.tar' refresh
>>> 2018-07-10 15:47:50,471+05 INFO  
>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>> (EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] SSH execute '
>>> root@10.70.43.157' 'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}"
>>> mktemp -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" >
>>> /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
>>> --warning=no-timestamp -C "${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-deploy
>>> DIALOG/dialect=str:machine DIALOG/customization=bool:True'
>>> 2018-07-10 15:47:50,676+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (VdsDeploy) [35eb76d9]
>>> EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred
>>> during installation of Host 10.70.43.157: Python is required but
>>> missing.
>>> 2018-07-10 15:47:50,685+05 ERROR 
>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>> (VdsDeploy) [35eb76d9] Error during deploy dialog
>>> 2018-07-10 15:47:50,686+05 ERROR 
>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>> (EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] SSH error
>>> running command root@10.70.43.157:'umask 0077;
>>> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap
>>> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
>>> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
>>> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
>>> DIALOG/customization=bool:True': IOException: Command returned failure
>>> code 1 during SSH session 'root@10.70.43.157'
>>> 2018-07-10 15:47:50,690+05 ERROR 
>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>> (EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Error during
>>> host 10.70.43.157 install
>>> 2018-07-10 15:47:50,697+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-4188)
>>> [35eb76d9] EVENT_ID: VDS_INSTALL_IN_PR

[ovirt-devel] Re: ovirt-system-tests_he-node-ng-suite-master is failing on not enough memory to run VMs

2018-07-10 Thread Yuval Turgeman
Gal took care of this already - https://gerrit.ovirt.org/#/c/92902/ :)

On Tue, Jul 10, 2018, 18:59 Dafna Ron  wrote:

> Host memory is 4720 for both hosts. Since one is running the engine, should
> we increase it?
>
> {%- for i in range(hostCount) %}
>   lago-{{ env.suite_name }}-host-{{ i }}:
> vm-type: ovirt-host
> distro: el7
> service_provider: systemd
> memory: 4720
>
>
>
> On Sun, Jul 8, 2018 at 7:38 AM, Barak Korren  wrote:
>
>>
>>
>> On 6 July 2018 at 11:57, Sandro Bonazzola  wrote:
>>
>>>
>>>
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-node-ng-suite-master/165/testReport/(root)/004_basic_sanity/vm_run/
>>>
>>> Cannot run VM. There is no host that satisfies current scheduling
>>> constraints. See below for details:, The host
>>> lago-he-node-ng-suite-master-host-0 did not satisfy internal filter Memory
>>> because its available memory is too low (656 MB) to run the VM.
>>>
>>>
>>
>> This sounds like something that needs to be fixed in the suite's
>> LagoInitFile.
>>
>>
>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> 
>>>
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TDKLML6YDBATFHS232GFJF7QVRTWUH74/
>>>
>>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/S6F3JGB3C5OSKNWMM3THXMIR2XYYUOGO/
>>
>>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CUT2OW3NUPU26NOLNYMFPQHANC4AYOU2/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JW4GIP4HEAUQSCZG5BX7S66VYIQSJM3X/


[ovirt-devel] Re: CI failing: /usr/sbin/groupadd -g 1000 mock

2018-06-17 Thread Yuval Turgeman
IIRC we hit this issue before when trying to install a mock rpm of a
specific version inside an env that was created with a different version of
mock (the group names changed from mock to mockbuild, or vice versa).

On Jun 17, 2018 22:53, "Nir Soffer"  wrote:

2 Fedora builds failed recently with this error:

19:40:34 ERROR: Command failed:
19:40:34 # /usr/sbin/groupadd -g 1000 mock

See latest builds of this patch:
https://gerrit.ovirt.org/#/c/91834/

I merged the patch regardless, but please take a look.

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/Y3REN743FHTRGOLVSJD66JTX34FJLQ5S/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/W3SESKPKUXHO27TVIS2XD5DQFW3ZIFUP/


Re: [ovirt-devel] ovirt-host-deploy and python3

2018-05-04 Thread Yuval Turgeman
Definitely a bug. We did some work on otopi (mostly packaging) to use
python3 if it exists on the system; most of the otopi code was compatible,
but we didn't get to porting host-deploy just yet. If you could open a
bug for this issue, that would be great.

Thanks,
Yuval

On May 3, 2018 22:59, "Tomáš Golembiovský"  wrote:

Hi,

I'm trying to reinstall a CentOS host (using master-snapshot) and I
noticed otopi is trying to use python3 while the ovirt-host-deploy is
not yet fully python3 compatible:


> 2018-05-03 21:35:56,855+0200 DEBUG otopi.plugins.otopi.system.info
info._init:39 SYSTEM INFORMATION - BEGIN
> 2018-05-03 21:35:56,855+0200 DEBUG otopi.plugins.otopi.system.info
info._init:40 executable /bin/python3
> 2018-05-03 21:35:56,855+0200 DEBUG otopi.plugins.otopi.system.info
info._init:41 python version 3.4.8 (default, Mar 23 2018, 10:04:27)
> [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
> 2018-05-03 21:35:56,855+0200 DEBUG otopi.plugins.otopi.system.info
info._init:42 python /bin/python3
> 2018-05-03 21:35:56,856+0200 DEBUG otopi.plugins.otopi.system.info
info._init:43 platform linux
> 2018-05-03 21:35:56,856+0200 DEBUG otopi.plugins.otopi.system.info
info._init:44 distribution ('CentOS Linux', '7.4.1708', 'Core')
> 2018-05-03 21:35:56,856+0200 DEBUG otopi.plugins.otopi.system.info
info._init:45 host 'ovirt-host2'
> 2018-05-03 21:35:56,856+0200 DEBUG otopi.plugins.otopi.system.info
info._init:51 uid 0 euid 0 gid 0 egid 0
> 2018-05-03 21:35:56,856+0200 DEBUG otopi.plugins.otopi.system.info
info._init:53 SYSTEM INFORMATION - END

and then later:

> 2018-05-03 21:35:56,912+0200 DEBUG otopi.context
context._executeMethod:128 Stage init METHOD
otopi.plugins.ovirt_host_deploy.node.detect.Plugin._init
> 2018-05-03 21:35:56,914+0200 DEBUG otopi.context
context._executeMethod:143 method exception
> Traceback (most recent call last):
>   File "/tmp/ovirt-a0GNITSwX9/pythonlib/otopi/context.py", line 133, in
_executeMethod
> method['method']()
>   File
"/tmp/ovirt-a0GNITSwX9/otopi-plugins/ovirt-host-deploy/node/detect.py",
line 131, in _init
> odeploycons.FileLocations.OVIRT_NODE_VARIANT_VAL)
>   File
"/tmp/ovirt-a0GNITSwX9/otopi-plugins/ovirt-host-deploy/node/detect.py",
line 69, in hasconf
> io.StringIO('[default]\n' + f.read().decode('utf-8'))
> AttributeError: 'str' object has no attribute 'decode'
> 2018-05-03 21:35:56,915+0200 ERROR otopi.context
context._executeMethod:152 Failed to execute stage 'Initializing': 'str'
object has no attribute 'decode'

There is definitely a bug (or at least weirdness) in host-deploy -- why
call .decode() on an object returned by codecs.open()?
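
For reference, a minimal sketch of the failing pattern and the portable fix; the file name and contents here are made up for illustration:

```python
import codecs
import io
import os
import tempfile

# Stand-in for the node variant file read by detect.py (name made up).
path = os.path.join(tempfile.mkdtemp(), "variant.conf")
with open(path, "w") as f:
    f.write("key=value\n")

# codecs.open() with an encoding already returns decoded text: unicode on
# Python 2, str on Python 3. Calling .decode('utf-8') on that value happens
# to work on Python 2 but raises the AttributeError from the traceback
# above on Python 3. The portable fix is to drop the redundant .decode().
with codecs.open(path, "r", encoding="utf-8") as f:
    content = f.read()  # already text on both Python versions

config = io.StringIO("[default]\n" + content)
print(config.read())
```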

But also, should host-deploy be already python3 compatible and is this
otopi behaviour expected?

Tomas



-- 
Tomáš Golembiovský 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Failed To Build ovirt-node following the guide

2018-04-02 Thread Yuval Turgeman
Hi,

You are trying to build the legacy node, while oVirt 4.x supports the
next-generation node (ovirt-node-ng).

Start by cloning ovirt-node-ng (https://gerrit.ovirt.org/ovirt-node-ng),
and follow the README.

Thanks,
Yuval.


On Thu, Mar 29, 2018 at 11:49 AM, sundw  wrote:

> Hello,guys!
>
> I built ovirt-node following this guide (https://www.ovirt.org/develop/projects/node/building/).
> But I failed.
> I got the following Error message after running "make iso publish":
>
> Error creating Live CD : Failed to find package 'ovirt-node-plugin-vdsm' : No package(s) available to install
>
> Could you please give some advice?
>
> *BTW: Is the content of this URL
> "https://www.ovirt.org/develop/projects/node/building/" OUT OF DATE?*
>
> --
> *Sun Dawei (孙大巍)*
> Beijing CoreTek Systems Technology Co., Ltd. (北京科银京成技术有限公司) / New Product R&D Department
> 13378105625
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [VDSM] loop device tests fail again on jenkins

2018-01-11 Thread Yuval Turgeman
from ovirt-node-ng:

build-artifacts.sh:  seq 0 9 | xargs -I {} mknod /dev/loop{} b 7 {} || :
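
Expanded a little, that one-liner amounts to the following sketch; the DEV_DIR indirection is added here for illustration (the real script writes straight to /dev), and block major 7 is the loop driver:

```shell
# Best-effort: pre-create /dev/loop0..9 before the tests run, so that
# `losetup --find` has device nodes to pick from inside the chroot.
# mknod needs root (CAP_MKNOD), hence the trailing || : to stay best-effort.
DEV_DIR="${DEV_DIR:-/dev}"
for minor in $(seq 0 9); do
    [ -e "${DEV_DIR}/loop${minor}" ] || \
        mknod "${DEV_DIR}/loop${minor}" b 7 "${minor}" || :
done
```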


On Thu, Jan 11, 2018 at 8:38 PM, Nir Soffer  wrote:

> On Thu, Jan 11, 2018 at 8:31 PM Barak Korren  wrote:
>
>> I thought I explained this before.
>>
>> Writing to losetup from inside mock gets you a device outside mock that
>> is invisible inside.
>>
>
> How do the tests work 99.9% of the time?
>
>
>> The way to solve this is to run mknod a few times to get the needed
>> device files.
>>
>> You can see examples of this in the ovirt-node build scripts.
>>
>
> "git grep mknod" in ovirt-node source does not show anything.
>
>
>>
>> Barak Korren
>> bkor...@redhat.com
>> RHCE, RHCi, RHV-DevOps Team
>> https://ifireball.wordpress.com/
>>
>> בתאריך 11 בינו׳ 2018 04:35 PM,‏ "Nir Soffer"  כתב:
>>
>> We have random failures of loop device tests on jenkins (see example
>>> below).
>>>
>>> Barak commented that losetup -f /path does not work sometimes in the CI
>>> but I don't recall what the alternative way to get a loop device is.
>>>
>>> We need a reliable way to create a loop device for vdsm tests. What
>>> is the recommended way to do this?
>>>
>>> Until we have a reliable solution I'm going to mark this test as broken
>>> on ovirt CI:
>>> https://gerrit.ovirt.org/#/c/86241/
>>>
>>> Please report if you see other tests fail with this error - typically:
>>>
>>> Error: Command ['losetup', '--find', '--show', '/tmp/tmp17Wqri/file'] 
>>> failed with rc=1 out='' err='losetup: /tmp/tmp17Wqri/file: failed to set up 
>>> loop device: No such file or directory\n'
>>>
>>>
>>> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/20862/consoleFull
>>>
>>> *00:05:48.766* ======================================================================
>>> *00:05:48.766* ERROR: test_attach_detach_manually (loopback_test.TestDevice)
>>> *00:05:48.767* ----------------------------------------------------------------------
>>> *00:05:48.767* Traceback (most recent call last):
>>> *00:05:48.767*   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/testValidation.py", line 191, in wrapper
>>> *00:05:48.768*     return f(*args, **kwargs)
>>> *00:05:48.768*   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/loopback_test.py", line 56, in test_attach_detach_manually
>>> *00:05:48.768*     device.attach()
>>> *00:05:48.768*   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/loopback.py", line 56, in attach
>>> *00:05:48.769*     raise cmdutils.Error(cmd, rc, out, err)
>>> *00:05:48.769* Error: Command ['losetup', '--find', '--show', '/tmp/tmp17Wqri/file'] failed with rc=1 out='' err='losetup: /tmp/tmp17Wqri/file: failed to set up loop device: No such file or directory\n'
>>> *00:05:48.770* >> begin captured logging <<
>>> *00:05:48.770* 2018-01-11 13:42:45,676 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 losetup --find --show /tmp/tmp17Wqri/file (cwd None) (commands:65)
>>> *00:05:48.771* 2018-01-11 13:42:45,733 DEBUG (MainThread) [root] FAILED: err = 'losetup: /tmp/tmp17Wqri/file: failed to set up loop device: No such file or directory\n'; rc = 1 (commands:86)
>>> *00:05:48.772* >> end captured logging <<
>>>
>>>
>>>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.1] [ 31 Oct 2017 ] [ 002_bootstrap.add_dc ]

2017-11-01 Thread Yuval Turgeman
I think you need to change the external bash call from `bash -s` to
`bash -se`.
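
For reference, a standalone illustration of the difference (not the actual OST deploy wrapper): a script piped to `bash -s` never has its `#!/bin/bash -xe` shebang consulted, so a failing command is ignored unless `-e` is passed to the outer bash.

```shell
# Without -e, the piped script keeps going past the failing command and
# the outer bash exits with the status of the last command.
printf 'false\necho reached\n' | bash -s
echo "bash -s  exit status: $?"    # 0: the failing `false` was ignored

# With -e (errexit), execution stops at the first failure.
printf 'false\necho reached\n' | bash -se
echo "bash -se exit status: $?"    # 1: execution stopped at `false`
```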

On Tue, Oct 31, 2017 at 10:03 PM, Barak Korren  wrote:

>
>
> On 31 October 2017 at 17:56, Gal Ben Haim  wrote:
>
>> The thing that bothers me is why the deploy script returned status 0 even
>> though yum failed:
>>
>> 2017-10-31 11:18:14,496::ssh.py::ssh::58::lago.ssh::DEBUG::Running 2fed0ea0 
>> on lago-basic-suite-4-1-host-0: bash -s < "#!/bin/bash -xe
>> yum -y install ovirt-host
>> rm -rf /dev/shm/*.rpm /dev/shm/yum
>> "
>> 2017-10-31 11:18:16,121::ssh.py::ssh::81::lago.ssh::DEBUG::Command 2fed0ea0 
>> on lago-basic-suite-4-1-host-0 returned with 0
>>
>>
> By the look of it, it may be that the script is not running with 'set -e'.
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel