Re: [ovirt-devel] [webadmin] Fixed a bug when pressing Enter while focused on dialog field

2017-10-19 Thread Dan Kenigsberg
On Thu, Oct 19, 2017 at 6:20 PM, Vojtech Szocs  wrote:
> Hi,
>
> When there's a dialog open and you're focused on some input (e.g. text)
> field and then press Enter, the dialog's model is now properly updated with
> the input's value before the dialog View's flush() method runs. See
> https://gerrit.ovirt.org/#/c/82944/ for details.

Thank you for solving it. Also let's thank Michael Burman for filing
it, and Ales Musil for treating it so seriously.

Bug 1485927 - Default vNIC profile not created if new network dialog
is confirmed with keyboard Enter (works via OK button click)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] How to build a particular module of code

2017-10-19 Thread Greg Sheremeta
On Thu, Oct 19, 2017 at 3:08 PM, Roy Golan  wrote:

>
>
> On Thu, 19 Oct 2017 at 20:12 Greg Sheremeta  wrote:
>
>> Since I needed to do a bunch of ovirt-engine recompiles today and I'm
>> disabling checks to make it go faster, I figured I'd share:
>>
>> make install-dev PREFIX=/home/greg/ovirt-engine \
>>   DEV_EXTRA_BUILD_FLAGS="-Danimal.sniffer.skip -Dcheckstyle.skip -Dgwt.compiler.localWorkers=1" \
>>   DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.cssResourceStyle=pretty -Dgwt.userAgent=safari" \
>>   BUILD_UT=0 BUILD_GWT=1
>>
>> make install-dev -- note I left out "clean", so that speeds things up
>>
>>
> Then this is worth a new make rule, 'install-dev-quick' or similar, that
> will have all of those inside. Another option is to add a SKIP_CHECKS=1
> variable that will achieve the same without an extra rule:
>
>   make install-dev SKIP_CHECKS=1
>

Yeah, lol, that would be simpler :)


>
>
>> -Danimal.sniffer.skip -- skips animal sniffer [1], which takes quite some
>> time
>>
>> -Dcheckstyle.skip -- skips checkstyle, which also takes much time
>>
>> -Dgwt.cssResourceStyle=pretty -- doesn't completely obfuscate classes in
>> GWT, which allows you to use Dev Tools to inspect elements and see exactly
>> where they come from [example:  shows me I need to go right to
>> the NetworkFilterParameterEditor class to mess with this widget,
>> specifically the "wrapper" css style]
>>
>> -Dgwt.userAgent=safari -- if building GWT, build only 1 permutation for
>> Chrome/Safari
>>
>> -Dgwt.compiler.localWorkers=1 -- use 1 thread for compiling GWT. Since I
>> only used Safari, it's not necessary to have this here on this particular
>> compile run, but you'll want to use this when doing more than 1
>> permutation/browser. It'll help prevent a crash during GWT compile, which,
>> of course, is the ultimate time waster :)
>>
>> BUILD_UT=0 -- skip unit tests
>>
>> BUILD_GWT=1 -- if you don't need a GWT rebuild, change to 0 for a *huge*
>> speedup :) [there must be a way to have that auto-detected ... hmm ...]
>>
>>
>
>
>> ...
>>
>> Before pushing a final version of a patch, you should enable the checks
>> and make sure they all pass. (They do run in CI, though.)
>>
>> Best wishes,
>> Greg
>>
>>
>> [1] http://www.mojohaus.org/animal-sniffer/
>>
>> On Tue, Oct 17, 2017 at 4:01 PM, shubham dubey 
>> wrote:
>>
>>> Thanks, it worked.
>>>
>>> On Wed, Oct 18, 2017 at 1:24 AM, Roy Golan  wrote:
>>>
 The answer is in the pom.xml of uicommonweb, in its groupId:
   grep parent -A 1 frontend/webadmin/modules/uicommonweb/pom.xml

 So change it to "-pl org.ovirt.engine.ui:uicommonweb"
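
 Putting that fix together with the command from the quoted message below,
 the working invocation would presumably be:

   make install-dev PREFIX="$HOME/ovirt-engine" \
   EXTRA_BUILD_FLAGS="-pl org.ovirt.engine.ui:uicommonweb"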



 On Tue, 17 Oct 2017 at 21:54 shubham dubey  wrote:

> Hi,
> I have tried to build uicommonweb alone using
> make install-dev PREFIX="$HOME/ovirt-engine" EXTRA_BUILD_FLAGS="-pl
> org.ovirt.engine.core:uicommonweb"
>
> but I am getting this error:
> [ERROR] Could not find the selected project in the reactor:
> org.ovirt.engine.core:uicommonweb
>
> Am I doing something wrong?
>
> On Tue, Oct 17, 2017 at 11:50 PM, shubham dubey 
> wrote:
>
>> Thanks,
>> This is exactly what I needed :)
>>
>> On Tue, Oct 17, 2017 at 11:37 PM, Greg Sheremeta  wrote:
>>
>>> I never do it, but
>>>
>>> https://www.ovirt.org/develop/developer-guide/engine/engine-development-environment
>>> has an example:
>>>
>>>
>>> To rebuild a single artifact, for example utils:
>>>
>>>   make clean install-dev PREFIX=$HOME/ovirt-engine \
>>>   EXTRA_BUILD_FLAGS="-pl org.ovirt.engine.core:utils"
>>>
>>>
>>> You can also disable animal sniffer, check style, unit tests, and
>>> GWT to speed things up. Just make sure they actually run before you 
>>> push :)
>>>
>>> Greg
>>>
>>> On Oct 17, 2017 1:56 PM, "shubham dubey" 
>>> wrote:
>>>
 Hi,
 I have a simple query.

 Whenever I do any change in my code I
 run "make install-dev PREFIX="$HOME/ovirt-engine"".
 But it takes a long time to compile.
 I think there is a way to compile only the part of the code
 which I have changed. For example, if I make changes in
 uicommonweb, how do I compile only that part?

 Thanks in advance.

 Shubham

 ___
 Devel mailing list
 Devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel

>>>
>>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel


>>>
>>
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat NA
>>
>> 

Re: [ovirt-devel] How to build a particular module of code

2017-10-19 Thread Roy Golan
On Thu, 19 Oct 2017 at 20:12 Greg Sheremeta  wrote:

> Since I needed to do a bunch of ovirt-engine recompiles today and I'm
> disabling checks to make it go faster, I figured I'd share:
>
> make install-dev PREFIX=/home/greg/ovirt-engine \
>   DEV_EXTRA_BUILD_FLAGS="-Danimal.sniffer.skip -Dcheckstyle.skip -Dgwt.compiler.localWorkers=1" \
>   DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.cssResourceStyle=pretty -Dgwt.userAgent=safari" \
>   BUILD_UT=0 BUILD_GWT=1
>
> make install-dev -- note I left out "clean", so that speeds things up
>
>
Then this is worth a new make rule, 'install-dev-quick' or similar, that
will have all of those inside. Another option is to add a SKIP_CHECKS=1
variable that will achieve the same without an extra rule:

  make install-dev SKIP_CHECKS=1
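
A minimal sketch of how such a knob could be wired up -- a hypothetical
Makefile fragment, not the actual ovirt-engine Makefile (only the
DEV_EXTRA_BUILD_FLAGS and BUILD_UT names are taken from the command above):

  # Hypothetical: fold the check-skipping flags in when SKIP_CHECKS=1
  SKIP_CHECKS ?= 0
  ifeq ($(SKIP_CHECKS),1)
  DEV_EXTRA_BUILD_FLAGS += -Danimal.sniffer.skip -Dcheckstyle.skip
  BUILD_UT = 0
  endif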


> -Danimal.sniffer.skip -- skips animal sniffer [1], which takes quite some
> time
>
> -Dcheckstyle.skip -- skips checkstyle, which also takes much time
>
> -Dgwt.cssResourceStyle=pretty -- doesn't completely obfuscate classes in
> GWT, which allows you to use Dev Tools to inspect elements and see exactly
> where they come from [example: 
> shows me I need to go right to the NetworkFilterParameterEditor class to
> mess with this widget, specifically the "wrapper" css style]
>
> -Dgwt.userAgent=safari -- if building GWT, build only 1 permutation for
> Chrome/Safari
>
> -Dgwt.compiler.localWorkers=1 -- use 1 thread for compiling GWT. Since I
> only used Safari, it's not necessary to have this here on this particular
> compile run, but you'll want to use this when doing more than 1
> permutation/browser. It'll help prevent a crash during GWT compile, which,
> of course, is the ultimate time waster :)
>
> BUILD_UT=0 -- skip unit tests
>
> BUILD_GWT=1 -- if you don't need a GWT rebuild, change to 0 for a *huge*
> speedup :) [there must be a way to have that auto-detected ... hmm ...]
>
>


> ...
>
> Before pushing a final version of a patch, you should enable the checks
> and make sure they all pass. (They do run in CI, though.)
>
> Best wishes,
> Greg
>
>
> [1] http://www.mojohaus.org/animal-sniffer/
>
> On Tue, Oct 17, 2017 at 4:01 PM, shubham dubey 
> wrote:
>
>> Thanks, it worked.
>>
>> On Wed, Oct 18, 2017 at 1:24 AM, Roy Golan  wrote:
>>
>>> The answer is in the pom.xml of uicommonweb, in its groupId:
>>>   grep parent -A 1 frontend/webadmin/modules/uicommonweb/pom.xml
>>>
>>> So change it to "-pl org.ovirt.engine.ui:uicommonweb"
>>>
>>>
>>>
>>> On Tue, 17 Oct 2017 at 21:54 shubham dubey  wrote:
>>>
 Hi,
 I have tried to build uicommonweb alone using
 make install-dev PREFIX="$HOME/ovirt-engine" EXTRA_BUILD_FLAGS="-pl
 org.ovirt.engine.core:uicommonweb"

 but I am getting this error:
 [ERROR] Could not find the selected project in the reactor:
 org.ovirt.engine.core:uicommonweb

 Am I doing something wrong?

 On Tue, Oct 17, 2017 at 11:50 PM, shubham dubey 
 wrote:

> Thanks,
> This is exactly what I needed :)
>
> On Tue, Oct 17, 2017 at 11:37 PM, Greg Sheremeta 
> wrote:
>
>> I never do it, but
>>
>>
>> https://www.ovirt.org/develop/developer-guide/engine/engine-development-environment
>> has an example:
>>
>>
>> To rebuild a single artifact, for example utils:
>>
>>   make clean install-dev PREFIX=$HOME/ovirt-engine \
>>   EXTRA_BUILD_FLAGS="-pl org.ovirt.engine.core:utils"
>>
>>
>> You can also disable animal sniffer, check style, unit tests, and GWT
>> to speed things up. Just make sure they actually run before you push :)
>>
>> Greg
>>
>> On Oct 17, 2017 1:56 PM, "shubham dubey"  wrote:
>>
>>> Hi,
>>> I have a simple query.
>>>
>>> Whenever I do any change in my code I
>>> run "make install-dev PREFIX="$HOME/ovirt-engine"".
>>> But it takes a long time to compile.
>>> I think there is a way to compile only the part of the code
>>> which I have changed. For example, if I make changes in
>>> uicommonweb, how do I compile only that part?
>>>
>>> Thanks in advance.
>>>
>>> Shubham
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>
 ___
 Devel mailing list
 Devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>>
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.com | IRC: gshereme
> 
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] How to build a particular module of code

2017-10-19 Thread Greg Sheremeta
Since I needed to do a bunch of ovirt-engine recompiles today and I'm
disabling checks to make it go faster, I figured I'd share:

make install-dev PREFIX=/home/greg/ovirt-engine \
  DEV_EXTRA_BUILD_FLAGS="-Danimal.sniffer.skip -Dcheckstyle.skip -Dgwt.compiler.localWorkers=1" \
  DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.cssResourceStyle=pretty -Dgwt.userAgent=safari" \
  BUILD_UT=0 BUILD_GWT=1

make install-dev -- note I left out "clean", so that speeds things up

-Danimal.sniffer.skip -- skips animal sniffer [1], which takes quite some
time

-Dcheckstyle.skip -- skips checkstyle, which also takes much time

-Dgwt.cssResourceStyle=pretty -- doesn't completely obfuscate classes in
GWT, which allows you to use Dev Tools to inspect elements and see exactly
where they come from [example: 
shows me I need to go right to the NetworkFilterParameterEditor class to
mess with this widget, specifically the "wrapper" css style]

-Dgwt.userAgent=safari -- if building GWT, build only 1 permutation for
Chrome/Safari

-Dgwt.compiler.localWorkers=1 -- use 1 thread for compiling GWT. Since I
only used Safari, it's not necessary to have this here on this particular
compile run, but you'll want to use this when doing more than 1
permutation/browser. It'll help prevent a crash during GWT compile, which,
of course, is the ultimate time waster :)

BUILD_UT=0 -- skip unit tests

BUILD_GWT=1 -- if you don't need a GWT rebuild, change to 0 for a *huge*
speedup :) [there must be a way to have that auto-detected ... hmm ...]
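
On the auto-detection note, a rough sketch, assuming the tree is a git
checkout (the .last-gwt-build marker file here is made up for illustration,
not an existing mechanism):

  # Rebuild GWT only if anything under frontend/ changed since the
  # commit recorded at the last GWT build
  last=$(cat .last-gwt-build 2>/dev/null || echo HEAD)
  if git diff --quiet "$last" -- frontend/; then
      BUILD_GWT=0
  else
      BUILD_GWT=1
  fi
  make install-dev PREFIX="$HOME/ovirt-engine" BUILD_GWT=$BUILD_GWT
  git rev-parse HEAD > .last-gwt-build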

...

Before pushing a final version of a patch, you should enable the checks and
make sure they all pass. (They do run in CI, though.)

Best wishes,
Greg


[1] http://www.mojohaus.org/animal-sniffer/

On Tue, Oct 17, 2017 at 4:01 PM, shubham dubey  wrote:

> Thanks, it worked.
>
> On Wed, Oct 18, 2017 at 1:24 AM, Roy Golan  wrote:
>
>> The answer is in the pom.xml of uicommonweb, in its groupId:
>>   grep parent -A 1 frontend/webadmin/modules/uicommonweb/pom.xml
>>
>> So change it to "-pl org.ovirt.engine.ui:uicommonweb"
>>
>>
>>
>> On Tue, 17 Oct 2017 at 21:54 shubham dubey  wrote:
>>
>>> Hi,
>>> I have tried to build uicommonweb alone using
>>> make install-dev PREFIX="$HOME/ovirt-engine" EXTRA_BUILD_FLAGS="-pl
>>> org.ovirt.engine.core:uicommonweb"
>>>
>>> but I am getting this error:
>>> [ERROR] Could not find the selected project in the reactor:
>>> org.ovirt.engine.core:uicommonweb
>>>
>>> Am I doing something wrong?
>>>
>>> On Tue, Oct 17, 2017 at 11:50 PM, shubham dubey 
>>> wrote:
>>>
 Thanks,
 This is exactly what I needed :)

 On Tue, Oct 17, 2017 at 11:37 PM, Greg Sheremeta 
 wrote:

> I never do it, but
>
> https://www.ovirt.org/develop/developer-guide/engine/engine-development-environment
> has an example:
>
>
> To rebuild a single artifact, for example utils:
>
>   make clean install-dev PREFIX=$HOME/ovirt-engine \
>   EXTRA_BUILD_FLAGS="-pl org.ovirt.engine.core:utils"
>
>
> You can also disable animal sniffer, check style, unit tests, and GWT
> to speed things up. Just make sure they actually run before you push :)
>
> Greg
>
> On Oct 17, 2017 1:56 PM, "shubham dubey"  wrote:
>
>> Hi,
>> I have a simple query.
>>
>> Whenever I do any change in my code I
>> run "make install-dev PREFIX="$HOME/ovirt-engine"".
>> But it takes a long time to compile.
>> I think there is a way to compile only the part of the code
>> which I have changed. For example, if I make changes in
>> uicommonweb, how do I compile only that part?
>>
>> Thanks in advance.
>>
>> Shubham
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>

>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com | IRC: gshereme

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Allon Mureinik
Not my finest hour.
Thanks, Barak, it's merged now.

On Thu, Oct 19, 2017 at 6:29 PM, Barak Korren  wrote:

>
>
> On 19 October 2017 at 17:48, Allon Mureinik  wrote:
>
>> Fix merged based on Alona and Martin's reviews.
>> It seems to do the trick in my testing on my local engine; let's hope
>> that's really it.
>>
>>
> Umm... It does not seem to be merged yet...
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Barak Korren
On 19 October 2017 at 17:48, Allon Mureinik  wrote:

> Fix merged based on Alona and Martin's reviews.
> It seems to do the trick in my testing on my local engine; let's hope
> that's really it.
>
>
Umm... It does not seem to be merged yet...


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [webadmin] Fixed a bug when pressing Enter while focused on dialog field

2017-10-19 Thread Vojtech Szocs
Hi,

When there's a dialog open and you're focused on some input (e.g. text)
field and then press Enter, the dialog's model is now properly updated with
the input's value before the dialog View's flush() method runs. See
https://gerrit.ovirt.org/#/c/82944/ for details.

Regards,
Vojtech
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Allon Mureinik
Fix merged based on Alona and Martin's reviews.
It seems to do the trick in my testing on my local engine; let's hope
that's really it.

On Thu, Oct 19, 2017 at 4:37 PM, Allon Mureinik  wrote:

> Bloody hell. The original was also completely broken, and worked by
> chance. Damn it.
>
> This should fix it:
> https://gerrit.ovirt.org/#/c/82989/
>
> On Thu, Oct 19, 2017 at 3:49 PM, Martin Perina  wrote:
>
>> So the real issue on adding a host is the same one I described today in
>> [2], and it is most probably caused by [3] (I reverted the engine in my dev
>> env to before this patch and host deploy finished successfully).
>>
>> Allon, do you have time to post a fix? If not I'll try to dig into your
>> change and related networking code to post it ...
>>
>>
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1504005
>> [3] https://gerrit.ovirt.org/#/c/82545/
>>
>> On Thu, Oct 19, 2017 at 11:12 AM, Martin Perina 
>> wrote:
>>
>>>
>>>
>>> On Thu, Oct 19, 2017 at 11:04 AM, Martin Perina 
>>> wrote:
>>>


 On Thu, Oct 19, 2017 at 10:58 AM, Dan Kenigsberg 
 wrote:

> On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina 
> wrote:
> >
> >
> > On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg 
> wrote:
> >>
> >> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky <
> dbele...@redhat.com>
> >> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> The following test is failing: 002_bootstrap.verify_add_hosts
> >>> All logs from failing job
> >>> Only 2 engine patches participated in the test, so the suspected
> >>> patches are:
> >>>
> >>> https://gerrit.ovirt.org/#/c/82542/2
> >>> https://gerrit.ovirt.org/#/c/82545/3
> >>>
> >>> Due to the fact that when this error was first introduced we had
> >>> another error, the CI can't automatically detect the specific patch.
> >>>
> >>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
> >>>
> >>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
> >>> 
> >>> failed: [lago-basic-suite-master-host-0] (item={u'service':
> >>> u'glusterfs'}) => {"changed": false, "failed": true, "item":
> >>> {"service": "glusterfs"}, "msg": "ERROR: Exception caught:
> >>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE:
> >>> 'glusterfs' not among existing services Permanent and
> >>> Non-Permanent(immediate) operation, Services are defined by
> >>> port/tcp relationship and named as they are in /etc/services
> >>> (on most systems)"}
> >>>
> >>>
> >>> Error from HOST 0 firewalld log:
> >>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
> >>>
> >>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
> >>> existing services
> >>
> >>
> >> Ondra, would such an error propagate through the playbook to Engine
> >> and fail the add-host flow? (I think it should!)
> >
> >
> > We didn't do that so far, because of EL 7.3. We need firewalld from
> > 7.4 to have all available services in place (I don't remember, but I
> > think the imageio service was the one delivered only in firewalld
> > from 7.4). So up until now we ignore non-existent firewalld service,
> > but if needed we can turn this on and fail host deploy.
>
> Ok, so for now you're "luckily" off the hook and not the reason of
> failure.
>
> >>
> >>
> >> Do you know which package provides the glusterfs firewalld service,
> >> and why it is missing from the host?
> >
> >
> > So we have used the 'glusterfs' firewalld service per Sahina's
> > recommendation, which is included in the glusterfs-server package from
> > version 3.7.6 [1]. But this package is not installed when installing
> > packages for a cluster with gluster capabilities enabled. So now I'm
> > confused: don't we need the glusterfs-server package? If not, and we
> > need those ports open because they are used by services from different
> > already installed glusterfs packages, shouldn't the firewalld
> > configuration be moved from glusterfs-server to the glusterfs package?
>
> glusterfs-cli.rpm is required to consume gluster storage (virt use
> case), but I don't recall that it needs open ports.
>

 It was there even for IPTables: if gluster support is enabled on the
 cluster, then gluster-specific ports were opened even with IPTables.
 The FirewallD feature continues to use that.


> glusterfs-server.rpm is required to provide gluster storage (gluster
> use case).
> If I recall correctly, firewalld feature has differentiated between
> the two; opening needed ports only when relevant.

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Allon Mureinik
Bloody hell. The original was also completely broken, and worked by chance.
Damn it.

This should fix it:
https://gerrit.ovirt.org/#/c/82989/

On Thu, Oct 19, 2017 at 3:49 PM, Martin Perina  wrote:

> So the real issue on adding a host is the same one I described today in
> [2], and it is most probably caused by [3] (I reverted the engine in my dev
> env to before this patch and host deploy finished successfully).
>
> Allon, do you have time to post a fix? If not I'll try to dig into your
> change and related networking code to post it ...
>
>
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1504005
> [3] https://gerrit.ovirt.org/#/c/82545/
>
> On Thu, Oct 19, 2017 at 11:12 AM, Martin Perina 
> wrote:
>
>>
>>
>> On Thu, Oct 19, 2017 at 11:04 AM, Martin Perina 
>> wrote:
>>
>>>
>>>
>>> On Thu, Oct 19, 2017 at 10:58 AM, Dan Kenigsberg 
>>> wrote:
>>>
 On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina 
 wrote:
 >
 >
 > On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg 
 wrote:
 >>
 >> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
 >> wrote:
 >>>
 >>> Hi all,
 >>>
 >>> The following test is failing: 002_bootstrap.verify_add_hosts
 >>> All logs from failing job
 >>> Only 2 engine patches participated in the test, so the suspected
 >>> patches are:
 >>>
 >>> https://gerrit.ovirt.org/#/c/82542/2
 >>> https://gerrit.ovirt.org/#/c/82545/3
 >>>
 >>> Due to the fact that when this error was first introduced we had another
 >>> error, the CI can't automatically detect the specific patch.
 >>>
 >>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
 >>>
 >>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
 >>> 
 >>> failed: [lago-basic-suite-master-host-0] (item={u'service':
 >>> u'glusterfs'}) => {"changed": false, "failed": true, "item":
 >>> {"service": "glusterfs"}, "msg": "ERROR: Exception caught:
 >>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE:
 >>> 'glusterfs' not among existing services Permanent and
 >>> Non-Permanent(immediate) operation, Services are defined by
 >>> port/tcp relationship and named as they are in /etc/services
 >>> (on most systems)"}
 >>>
 >>>
 >>> Error from HOST 0 firewalld log:
 >>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
 >>>
 >>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
 >>> existing services
 >>
 >>
 >> Ondra, would such an error propagate through the playbook to Engine
 >> and fail the add-host flow? (I think it should!)
 >
 >
 > We didn't do that so far, because of EL 7.3. We need firewalld from 7.4
 > to have all available services in place (I don't remember, but I think
 > the imageio service was the one delivered only in firewalld from 7.4).
 > So up until now we ignore non-existent firewalld service, but if needed
 > we can turn this on and fail host deploy.

 Ok, so for now you're "luckily" off the hook and not the reason of
 failure.

 >>
 >>
 >> Do you know which package provides the glusterfs firewalld service,
 >> and why it is missing from the host?
 >
 >
 > So we have used the 'glusterfs' firewalld service per Sahina's
 > recommendation, which is included in the glusterfs-server package from
 > version 3.7.6 [1]. But this package is not installed when installing
 > packages for a cluster with gluster capabilities enabled. So now I'm
 > confused: don't we need the glusterfs-server package? If not, and we
 > need those ports open because they are used by services from different
 > already installed glusterfs packages, shouldn't the firewalld
 > configuration be moved from glusterfs-server to the glusterfs package?

 glusterfs-cli.rpm is required to consume gluster storage (virt use
 case), but I don't recall that it needs open ports.

>>>
>>> It was there even for IPTables: if gluster support is enabled on the
>>> cluster, then gluster-specific ports were opened even with IPTables.
>>> The FirewallD feature continues to use that.
>>>
>>>
>>>
 glusterfs-server.rpm is required to provide gluster storage (gluster
 use case).
 If I recall correctly, firewalld feature has differentiated between
 the two; opening needed ports only when relevant.

>>>
>>> Right, but if gluster services are configured for firewalld, it means
>>> that the host has been added to the cluster with the gluster feature
>>> enabled, not only virt.
>>>
>>>
>>>

 >
 >
 > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
 >
 >

>>>
>>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Martin Perina
So the real issue on adding a host is the same one I described today in
[2], and it is most probably caused by [3] (I reverted the engine in my dev
env to before this patch and host deploy finished successfully).

Allon, do you have time to post a fix? If not I'll try to dig into your
change and related networking code to post it ...


[2] https://bugzilla.redhat.com/show_bug.cgi?id=1504005
[3] https://gerrit.ovirt.org/#/c/82545/

On Thu, Oct 19, 2017 at 11:12 AM, Martin Perina  wrote:

>
>
> On Thu, Oct 19, 2017 at 11:04 AM, Martin Perina 
> wrote:
>
>>
>>
>> On Thu, Oct 19, 2017 at 10:58 AM, Dan Kenigsberg 
>> wrote:
>>
>>> On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina 
>>> wrote:
>>> >
>>> >
>>> > On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg 
>>> wrote:
>>> >>
>>> >> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
>>> >> wrote:
>>> >>>
>>> >>> Hi all,
>>> >>>
>>> >>> The following test is failing: 002_bootstrap.verify_add_hosts
>>> >>> All logs from failing job
>>> >>> Only 2 engine patches participated in the test, so the suspected
>>> >>> patches are:
>>> >>>
>>> >>> https://gerrit.ovirt.org/#/c/82542/2
>>> >>> https://gerrit.ovirt.org/#/c/82545/3
>>> >>>
>>> >>> Due to the fact that when this error was first introduced we had another
>>> >>> error, the CI can't automatically detect the specific patch.
>>> >>>
>>> >>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
>>> >>>
>>> >>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
>>> >>> 
>>> >>> failed: [lago-basic-suite-master-host-0] (item={u'service':
>>> >>> u'glusterfs'}) => {"changed": false, "failed": true, "item":
>>> >>> {"service": "glusterfs"}, "msg": "ERROR: Exception caught:
>>> >>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE:
>>> >>> 'glusterfs' not among existing services Permanent and
>>> >>> Non-Permanent(immediate) operation, Services are defined by
>>> >>> port/tcp relationship and named as they are in /etc/services
>>> >>> (on most systems)"}
>>> >>>
>>> >>>
>>> >>> Error from HOST 0 firewalld log:
>>> >>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
>>> >>>
>>> >>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
>>> >>> existing services
>>> >>
>>> >>
>>> >> Ondra, would such an error propagate through the playbook to Engine
>>> >> and fail the add-host flow? (I think it should!)
>>> >
>>> >
>>> > We didn't do that so far, because of EL 7.3. We need firewalld from
>>> > 7.4 to have all available services in place (I don't remember, but I
>>> > think the imageio service was the one delivered only in firewalld from
>>> > 7.4). So up until now we ignore non-existent firewalld service, but if
>>> > needed we can turn this on and fail host deploy.
>>>
>>> Ok, so for now you're "luckily" off the hook and not the reason of failure.
>>>
>>> >>
>>> >>
>>> >> Do you know which package provides the glusterfs firewalld service,
>>> >> and why it is missing from the host?
>>> >
>>> >
>>> > So we have used the 'glusterfs' firewalld service per Sahina's
>>> > recommendation, which is included in the glusterfs-server package from
>>> > version 3.7.6 [1]. But this package is not installed when installing
>>> > packages for a cluster with gluster capabilities enabled. So now I'm
>>> > confused: don't we need the glusterfs-server package? If not, and we
>>> > need those ports open because they are used by services from different
>>> > already installed glusterfs packages, shouldn't the firewalld
>>> > configuration be moved from glusterfs-server to the glusterfs package?
>>>
>>> glusterfs-cli.rpm is required to consume gluster storage (virt use
>>> case), but I don't recall that it needs open ports.
>>>
>>
>> It was there even for IPTables: if gluster support is enabled on the
>> cluster, then gluster-specific ports were opened even with IPTables.
>> The FirewallD feature continues to use that.
>>
>>
>>
>>> glusterfs-server.rpm is required to provide gluster storage (gluster use
>>> case).
>>> If I recall correctly, firewalld feature has differentiated between
>>> the two; opening needed ports only when relevant.
>>>
>>
>> Right, but if gluster services are configured for firewalld, it means
>> that the host has been added to the cluster with the gluster feature
>> enabled, not only virt.
>>
>>
>>
>>>
>>> >
>>> >
>>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
>>> >
>>> >
>>>
>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-users] Cockpit oVirt support

2017-10-19 Thread Yaniv Kaul
On Thu, Oct 19, 2017 at 3:06 PM, Roy Golan  wrote:

>
>
> On Thu, 19 Oct 2017 at 14:02 Michal Skrivanek 
> wrote:
>
>>
>> > On 18 Oct 2017, at 11:42, Roy Golan  wrote:
>> >
>> >
>> >
>> > On Wed, 18 Oct 2017 at 10:25 Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> > Hi all,
>> > I’m happy to announce that we finally finished the initial contribution
>> > of oVirt-specific support into the Cockpit management platform
>> > See below for more details
>> >
>> > There is only a limited set of operations you can do at the moment,
>> > but it may already be interesting for troubleshooting and simple admin
>> > actions where you don’t want to launch the full-blown webadmin UI
>> >
>> > Worth noting that if you were ever intimidated by the complexity of the
>> GWT UI of oVirt portals and it held you back from contributing, please take
>> another look!
>> >
>> > Thanks,
>> > michal
>> >
>> >
>> > Congrats Michal, Marek and team, this is very nice! The unified look &
>> feel is such a powerful thing (I didn't realize for a while that you left
>> webadmin).
>>
>> and thanks to this[1] it’s going to be even more seamless when you click
>> in Host view on Host Console button
>>
>>
> +1
> So why don't we integrate that as an optional tab using a UI plugin?
>

I don't think Cockpit looks so good crammed into a tab.
We used to have it in a subtab, which was unusable.
Y.


>
> [1] https://github.com/mareklibra/ovirt-cockpit-sso
>>
>> >> Begin forwarded message:
>> >>
>> >> From: Marek Libra 
>> >> Subject: Re: Cockpit 153 released
>> >> Date: 17 October 2017 at 16:02:59 GMT+2
>> >> To: Development discussion for the Cockpit Project <
>> cockpit-de...@lists.fedorahosted.org>
>> >> Reply-To: Development discussion for the Cockpit Project <
>> cockpit-de...@lists.fedorahosted.org>
>> >>
>> >> Walk-through video for the new "oVirt Machines" page can be found
>> here: https://youtu.be/5i-kshT6c5A
>> >>
>> >> On Tue, Oct 17, 2017 at 12:08 PM, Martin Pitt 
>> wrote:
>> >> http://cockpit-project.org/blog/cockpit-153.html
>> >>
>> >> Cockpit is the modern Linux admin interface. We release regularly. Here
>> >> are the release notes from version 153.
>> >>
>> >>
>> >> Add oVirt package
>> >> -
>> >>
>> >> This version introduces the "oVirt Machines" page on Fedora for
>> controlling
>> >> oVirt virtual machine clusters.  This code was moved into Cockpit as
>> it shares
>> >> a lot of code with the existing "Machines" page, which manages virtual
>> machines
>> >> through libvirt.
>> >>
>> >> This feature is packaged in cockpit-ovirt and when installed it will
>> replace
>> >> the "Machines" page.
>> >>
>> >> Thanks to Marek Libra for working on this!
>> >>
>> >> Screenshot:
>> >>
>> >> http://cockpit-project.org/images/ovirt-overview.png
>> >>
>> >> Change: https://github.com/cockpit-project/cockpit/pull/7139
>> >>
>> >>
>> >> Packaging cleanup
>> >> -
>> >>
>> >> This release fixes a lot of small packaging issues that were spotted by
>> >> rpmlint/lintian.
>> >>
>> >> Get it
>> >> --
>> >>
>> >> You can get Cockpit here:
>> >>
>> >> http://cockpit-project.org/running.html
>> >>
>> >> Cockpit 153 is available in Fedora 27:
>> >>
>> >> https://bodhi.fedoraproject.org/updates/cockpit-153-1.fc27
>> >>
>> >> Or download the tarball here:
>> >>
>> >> https://github.com/cockpit-project/cockpit/releases/tag/153
>> >>
>> >>
>> >> Take care,
>> >>
>> >> Martin Pitt
>> >>
>> >> ___
>> >> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> >> To unsubscribe send an email to cockpit-devel-leave@lists.
>> fedorahosted.org
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> MAREK LIBRA
>> >> SENIOR SOFTWARE ENGINEER
>> >> Red Hat Czech
>> >>
>> >> ___
>> >> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> >> To unsubscribe send an email to cockpit-devel-leave@lists.
>> fedorahosted.org
>> >
>> > ___
>> > Devel mailing list
>> > Devel@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
> ___
> Users mailing list
> us...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Cockpit oVirt support

2017-10-19 Thread Roy Golan
On Thu, 19 Oct 2017 at 14:02 Michal Skrivanek 
wrote:

>
> > On 18 Oct 2017, at 11:42, Roy Golan  wrote:
> >
> >
> >
> > On Wed, 18 Oct 2017 at 10:25 Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
> > Hi all,
> > I’m happy to announce that we finally finished the initial contribution
> > of oVirt-specific support into the Cockpit management platform
> > See below for more details
> >
> > There is only a limited set of operations you can do at the moment,
> > but it may already be interesting for troubleshooting and simple admin
> > actions where you don’t want to launch the full-blown webadmin UI
> >
> > Worth noting that if you were ever intimidated by the complexity of the
> GWT UI of oVirt portals and it held you back from contributing, please take
> another look!
> >
> > Thanks,
> > michal
> >
> >
> > Congrats Michal, Marek and team, this is very nice! The unified look &
> feel is such a powerful thing (I didn't realize for a while that you left
> webadmin).
>
> and thanks to this[1] it’s going to be even more seamless when you click
> in Host view on Host Console button
>
>
+1
So why don't we integrate that as an optional tab using a UI plugin?

[1] https://github.com/mareklibra/ovirt-cockpit-sso
>
> >> Begin forwarded message:
> >>
> >> From: Marek Libra 
> >> Subject: Re: Cockpit 153 released
> >> Date: 17 October 2017 at 16:02:59 GMT+2
> >> To: Development discussion for the Cockpit Project <
> cockpit-de...@lists.fedorahosted.org>
> >> Reply-To: Development discussion for the Cockpit Project <
> cockpit-de...@lists.fedorahosted.org>
> >>
> >> Walk-through video for the new "oVirt Machines" page can be found here:
> https://youtu.be/5i-kshT6c5A
> >>
> >> On Tue, Oct 17, 2017 at 12:08 PM, Martin Pitt  wrote:
> >> http://cockpit-project.org/blog/cockpit-153.html
> >>
> >> Cockpit is the modern Linux admin interface. We release regularly. Here
> >> are the release notes from version 153.
> >>
> >>
> >> Add oVirt package
> >> -
> >>
> >> This version introduces the "oVirt Machines" page on Fedora for
> controlling
> >> oVirt virtual machine clusters.  This code was moved into Cockpit as it
> shares
> >> a lot of code with the existing "Machines" page, which manages virtual
> machines
> >> through libvirt.
> >>
> >> This feature is packaged in cockpit-ovirt and when installed it will
> replace
> >> the "Machines" page.
> >>
> >> Thanks to Marek Libra for working on this!
> >>
> >> Screenshot:
> >>
> >> http://cockpit-project.org/images/ovirt-overview.png
> >>
> >> Change: https://github.com/cockpit-project/cockpit/pull/7139
> >>
> >>
> >> Packaging cleanup
> >> -
> >>
> >> This release fixes a lot of small packaging issues that were spotted by
> >> rpmlint/lintian.
> >>
> >> Get it
> >> --
> >>
> >> You can get Cockpit here:
> >>
> >> http://cockpit-project.org/running.html
> >>
> >> Cockpit 153 is available in Fedora 27:
> >>
> >> https://bodhi.fedoraproject.org/updates/cockpit-153-1.fc27
> >>
> >> Or download the tarball here:
> >>
> >> https://github.com/cockpit-project/cockpit/releases/tag/153
> >>
> >>
> >> Take care,
> >>
> >> Martin Pitt
> >>
> >> ___
> >> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
> >> To unsubscribe send an email to
> cockpit-devel-le...@lists.fedorahosted.org
> >>
> >>
> >>
> >>
> >> --
> >> MAREK LIBRA
> >> SENIOR SOFTWARE ENGINEER
> >> Red Hat Czech
> >>
> >> ___
> >> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
> >> To unsubscribe send an email to
> cockpit-devel-le...@lists.fedorahosted.org
> >
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Cockpit oVirt support

2017-10-19 Thread Michal Skrivanek

> On 18 Oct 2017, at 11:42, Roy Golan  wrote:
> 
> 
> 
> On Wed, 18 Oct 2017 at 10:25 Michal Skrivanek  
> wrote:
> Hi all,
> I’m happy to announce that we finally finished the initial contribution of
> oVirt-specific support into the Cockpit management platform
> See below for more details
> 
> There is only a limited set of operations you can do at the moment, but it
> may already be interesting for troubleshooting and simple admin actions where
> you don’t want to launch the full-blown webadmin UI
> 
> Worth noting that if you were ever intimidated by the complexity of the GWT 
> UI of oVirt portals and it held you back from contributing, please take 
> another look!
> 
> Thanks,
> michal
> 
> 
> Congrats Michal, Marek and team, this is very nice! The unified look & feel 
> is such a powerful thing (I didn't realize for a while that you left 
> webadmin). 

and thanks to this[1] it’s going to be even more seamless when you click in 
Host view on Host Console button

[1] https://github.com/mareklibra/ovirt-cockpit-sso

>> Begin forwarded message:
>> 
>> From: Marek Libra 
>> Subject: Re: Cockpit 153 released
>> Date: 17 October 2017 at 16:02:59 GMT+2
>> To: Development discussion for the Cockpit Project 
>> 
>> Reply-To: Development discussion for the Cockpit Project 
>> 
>> 
>> Walk-through video for the new "oVirt Machines" page can be found here: 
>> https://youtu.be/5i-kshT6c5A
>> 
>> On Tue, Oct 17, 2017 at 12:08 PM, Martin Pitt  wrote:
>> http://cockpit-project.org/blog/cockpit-153.html
>> 
>> Cockpit is the modern Linux admin interface. We release regularly. Here
>> are the release notes from version 153.
>> 
>> 
>> Add oVirt package
>> -
>> 
>> This version introduces the "oVirt Machines" page on Fedora for controlling
>> oVirt virtual machine clusters.  This code was moved into Cockpit as it 
>> shares
>> a lot of code with the existing "Machines" page, which manages virtual 
>> machines
>> through libvirt.
>> 
>> This feature is packaged in cockpit-ovirt and when installed it will replace
>> the "Machines" page.
>> 
>> Thanks to Marek Libra for working on this!
>> 
>> Screenshot:
>> 
>> http://cockpit-project.org/images/ovirt-overview.png
>> 
>> Change: https://github.com/cockpit-project/cockpit/pull/7139
>> 
>> 
>> Packaging cleanup
>> -
>> 
>> This release fixes a lot of small packaging issues that were spotted by
>> rpmlint/lintian.
>> 
>> Get it
>> --
>> 
>> You can get Cockpit here:
>> 
>> http://cockpit-project.org/running.html
>> 
>> Cockpit 153 is available in Fedora 27:
>> 
>> https://bodhi.fedoraproject.org/updates/cockpit-153-1.fc27
>> 
>> Or download the tarball here:
>> 
>> https://github.com/cockpit-project/cockpit/releases/tag/153
>> 
>> 
>> Take care,
>> 
>> Martin Pitt
>> 
>> ___
>> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> To unsubscribe send an email to cockpit-devel-le...@lists.fedorahosted.org
>> 
>> 
>> 
>> 
>> -- 
>> MAREK LIBRA
>> SENIOR SOFTWARE ENGINEER
>> Red Hat Czech
>> 
>> ___
>> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> To unsubscribe send an email to cockpit-devel-le...@lists.fedorahosted.org
> 
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Martin Perina
On Thu, Oct 19, 2017 at 11:04 AM, Martin Perina  wrote:

>
>
> On Thu, Oct 19, 2017 at 10:58 AM, Dan Kenigsberg 
> wrote:
>
>> On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina 
>> wrote:
>> >
>> >
>> > On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg 
>> wrote:
>> >>
>> >> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
>> >> wrote:
>> >>>
>> >>> Hi all,
>> >>>
>> >>> The following test is failing: 002_bootstrap.verify_add_hosts
>> >>> All logs from failing job
>> >>> Only 2 engine patches participated in the test, so the suspected
>> >>> patches are:
>> >>>
>> >>> https://gerrit.ovirt.org/#/c/82542/2
>> >>> https://gerrit.ovirt.org/#/c/82545/3
>> >>>
>> >>> Due to the fact that when this error was first introduced we had another
>> >>> error, the CI can't automatically detect the specific patch.
>> >>>
>> >>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
>> >>>
>> >>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
>> >>> 
>> >>> failed: [lago-basic-suite-master-host-0] (item={u'service':
>> >>> u'glusterfs'}) => {"changed": false, "failed": true, "item":
>> >>> {"service": "glusterfs"}, "msg": "ERROR: Exception caught:
>> >>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>> >>> not among existing services Permanent and Non-Permanent(immediate)
>> >>> operation, Services are defined by port/tcp relationship and named as
>> >>> they are in /etc/services (on most systems)"}
>> >>>
>> >>>
>> >>> Error from HOST 0 firewalld log:
>> >>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
>> >>>
>> >>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
>> >>> existing services
>> >>
>> >>
>> >> Ondra, would such an error propagate through the playbook to Engine and
>> >> fail the add-host flow? (I think it should!)
>> >
>> >
>> > We didn't do that so far, because of EL 7.3. We need firewalld from 7.4
>> > to have all available services in place (I don't remember, but I think
>> > the imageio service was the one delivered only in firewalld from 7.4).
>> > So up until now we ignore non-existent firewalld service, but if needed
>> > we can turn this on and fail host deploy.
>>
>> Ok, so for now you're "luckily" off the hook and not the reason of failure.
>>
>> >>
>> >>
>> >> Do you know which package provides the glusterfs firewalld service,
>> >> and why it is missing from the host?
>> >
>> >
>> > So we have used the 'glusterfs' firewalld service per Sahina's
>> > recommendation, which is included in the glusterfs-server package from
>> > version 3.7.6 [1]. But this package is not installed when installing
>> > packages for a cluster with gluster capabilities enabled. So now I'm
>> > confused: don't we need the glusterfs-server package? If not, and we
>> > need those ports open because they are used by services from different
>> > already installed glusterfs packages, shouldn't the firewalld
>> > configuration be moved from glusterfs-server to the glusterfs package?
>>
>> glusterfs-cli.rpm is required to consume gluster storage (virt use
>> case), but I don't recall that it needs open ports.
>>
>
> It was there even for IPTables: if gluster support is enabled on the
> cluster, then gluster-specific ports were opened even with IPTables.
> The FirewallD feature continues to use that.
>
>
>
>> glusterfs-server.rpm is required to provide gluster storage (gluster use
>> case).
>> If I recall correctly, firewalld feature has differentiated between
>> the two; opening needed ports only when relevant.
>>
>
> Right, but if gluster services are configured for firewalld, it means that
> the host has been added to the cluster with the gluster feature enabled,
> not only virt.
>
>
>
>>
>> >
>> >
>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
>> >
>> >
>>
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Martin Perina
On Thu, Oct 19, 2017 at 10:53 AM, Eyal Edri  wrote:

>
>
> On Thu, Oct 19, 2017 at 11:51 AM, Barak Korren  wrote:
>
>>
>>
>> On 19 October 2017 at 11:43, Eyal Edri  wrote:
>>
>>>
>>>
>>> On Thu, Oct 19, 2017 at 10:50 AM, Allon Mureinik 
>>> wrote:
>>>
 The missing deps issue happened again this morning [1]:

 Traceback (most recent call last):
   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133, in _executeMethod
     method['method']()
   File "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py", line 256, in _packages
     if self._miniyum.buildTransaction():
   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920, in buildTransaction
     raise yum.Errors.YumBaseError(msg)
 YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
 2017-10-19 01:36:37,275-0400 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Package installation': [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']

 We need to fix the missing packages (broken repo?) issue ASAP, as it would
 mask any other real problems we may have there.


>>> We're looking into it now; it's strange that the official qemu-kvm-ev is
>>> requiring a version of ipxe-roms-qemu with git sha
>>> 20170123-1.git4e85b27.el7_4.1.
>>> It looks like the same pkg is coming from centos base, updates and
>>> kvm-commons, and some repos include an older version without the '4.1' suffix.
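
For chasing this kind of mismatch, one option (a sketch, assuming the
yum-utils tools are available on the box) is to ask yum directly:

  # Show every available version of the package and the repo it comes from
  yum --showduplicates list ipxe-roms-qemu
  # Show what qemu-kvm-ev actually requires
  repoquery --requires qemu-kvm-ev | grep -i ipxe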
>>>
>>> However, it's strange that some jobs do pass, e.g. the last finished run
>>> from 1.5 hours ago:
>>>
>>> http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3362/
>>>
>>>
>>
>> There is nothing strange about this.
>> The failure Allon linked to is in the OST check-patch for Daniel's patch [1]
>> that closes the external repos, so it is expected to fail on issues like
>> this. It is not merged, so it doesn't cause any issues for "normal" (e.g.
>> change-queue) OST runs.
>>
>
> I can't see that missing pkg error anymore actually, so whatever it was it
> might be fixed (it also failed on another patch, not just Daniel's).
> There might be a second issue here with the error Daniel sent on
> glusterfs-server and firewalld; I think we should focus on investigating
> that.
>

This is not an issue that should cause jobs to fail (we are ignoring this
error during firewalld setup), because AFAIK we are not doing any
gluster-related tests in basic OST.


Anyway, the discussion about the missing gluster firewalld service continues ...

>
>
>>
>> [1] : https://gerrit.ovirt.org/c/82602/
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Gal Ben Haim
Taken from the ansible-playbook log of host-0:

TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) =>
{"changed": false, "failed": true, "item": {"service": "glusterfs"}, "msg":
"ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception:
INVALID_SERVICE: 'glusterfs' not among existing services Permanent and
Non-Permanent(immediate) operation, Services are defined by port/tcp
relationship and named as they are in /etc/services (on most systems)"}


Shouldn't we fail the playbook on a firewall configuration failure?




On Thu, Oct 19, 2017 at 12:04 PM, Martin Perina  wrote:

>
>
> On Thu, Oct 19, 2017 at 10:58 AM, Dan Kenigsberg 
> wrote:
>
>> On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina 
>> wrote:
>> >
>> >
>> > On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg 
>> wrote:
>> >>
>> >> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
>> >> wrote:
>> >>>
>> >>> Hi all,
>> >>>
>> >>> The following test is failing: 002_bootstrap.verify_add_hosts
>> >>> All logs from failing job
>> >>> Only 2 engine patches participated in the test, so the suspected
>> >>> patches are:
>> >>>
>> >>> https://gerrit.ovirt.org/#/c/82542/2
>> >>> https://gerrit.ovirt.org/#/c/82545/3
>> >>>
>> >>> Due to the fact that when this error was first introduced we had another
>> >>> error, the CI can't automatically detect the specific patch.
>> >>>
>> >>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
>> >>>
>> >>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
>> >>> 
>> >>> failed: [lago-basic-suite-master-host-0] (item={u'service':
>> >>> u'glusterfs'}) => {"changed": false, "failed": true, "item":
>> >>> {"service": "glusterfs"}, "msg": "ERROR: Exception caught:
>> >>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>> >>> not among existing services Permanent and Non-Permanent(immediate)
>> >>> operation, Services are defined by port/tcp relationship and named as
>> >>> they are in /etc/services (on most systems)"}
>> >>>
>> >>>
>> >>> Error from HOST 0 firewalld log:
>> >>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
>> >>>
>> >>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
>> >>> existing services
>> >>
>> >>
>> >> Ondra, would such an error propagate through the playbook to Engine and
>> >> fail the add-host flow? (I think it should!)
>> >
>> >
>> > We didn't do that so far, because of EL 7.3. We need firewalld from 7.4
>> > to have all available services in place (I don't remember, but I think
>> > the imageio service was the one delivered only in firewalld from 7.4).
>> > So up until now we ignore non-existent firewalld service, but if needed
>> > we can turn this on and fail host deploy.
>>
>> Ok, so for now you're "luckily" off the hook and not the reason of failure.
>>
>> >>
>> >>
>> >> Do you know which package provides the glusterfs firewalld service,
>> >> and why it is missing from the host?
>> >
>> >
>> > So we have used the 'glusterfs' firewalld service per Sahina's
>> > recommendation, which is included in the glusterfs-server package from
>> > version 3.7.6 [1]. But this package is not installed when installing
>> > packages for a cluster with gluster capabilities enabled. So now I'm
>> > confused: don't we need the glusterfs-server package? If not, and we
>> > need those ports open because they are used by services from different
>> > already installed glusterfs packages, shouldn't the firewalld
>> > configuration be moved from glusterfs-server to the glusterfs package?
>>
>> glusterfs-cli.rpm is required to consume gluster storage (virt use
>> case), but I don't recall that it needs open ports.
>>
>
> It was there even for IPTables: if gluster support is enabled on the
> cluster, then gluster-specific ports were opened even with IPTables.
> The FirewallD feature continues to use that.
>
>
>
>> glusterfs-server.rpm is required to provide gluster storage (gluster use
>> case).
>> If I recall correctly, firewalld feature has differentiated between
>> the two; opening needed ports only when relevant.
>>
>
> Right, but if gluster services are configured for firewalld, it means that
> the host has been added to the cluster with the gluster feature enabled,
> not only virt.
>
>
>
>>
>> >
>> >
>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
>> >
>> >
>>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
GAL BEN HAIM
RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Martin Perina
On Thu, Oct 19, 2017 at 10:58 AM, Dan Kenigsberg  wrote:

> On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina 
> wrote:
> >
> >
> > On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg 
> wrote:
> >>
> >> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
> >> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> The following test is failing: 002_bootstrap.verify_add_hosts
> >>> All logs from failing job
> >>> Only 2 engine patches participated in the test, so the suspected
> >>> patches are:
> >>>
> >>> https://gerrit.ovirt.org/#/c/82542/2
> >>> https://gerrit.ovirt.org/#/c/82545/3
> >>>
> >>> Due to the fact that when this error was first introduced we had another
> >>> error, the CI can't automatically detect the specific patch.
> >>>
> >>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
> >>>
> >>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
> >>> 
> >>> failed: [lago-basic-suite-master-host-0] (item={u'service':
> >>> u'glusterfs'}) => {"changed": false, "failed": true, "item":
> >>> {"service": "glusterfs"}, "msg": "ERROR: Exception caught:
> >>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
> >>> not among existing services Permanent and Non-Permanent(immediate)
> >>> operation, Services are defined by port/tcp relationship and named as
> >>> they are in /etc/services (on most systems)"}
> >>>
> >>>
> >>> Error from HOST 0 firewalld log:
> >>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
> >>>
> >>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
> >>> existing services
> >>
> >>
> >> Ondra, would such an error propagate through the playbook to Engine and
> >> fail the add-host flow? (I think it should!)
> >
> >
> > We didn't do that so far, because of EL 7.3. We need firewalld from 7.4
> > to have all available services in place (I don't remember, but I think
> > the imageio service was the one delivered only in firewalld from 7.4).
> > So up until now we ignore non-existent firewalld service, but if needed
> > we can turn this on and fail host deploy.
>
> Ok, so for now you're "luckily" off the hook and not the reason of failure.
>
> >>
> >>
> >> Do you know which package provides the glusterfs firewalld service,
> >> and why it is missing from the host?
> >
> >
> > So we have used the 'glusterfs' firewalld service per Sahina's
> > recommendation, which is included in the glusterfs-server package from
> > version 3.7.6 [1]. But this package is not installed when installing
> > packages for a cluster with gluster capabilities enabled. So now I'm
> > confused: don't we need the glusterfs-server package? If not, and we need
> > those ports open because they are used by services from different already
> > installed glusterfs packages, shouldn't the firewalld configuration be
> > moved from glusterfs-server to the glusterfs package?
>
> glusterfs-cli.rpm is required to consume gluster storage (virt use
> case), but I don't recall that it needs open ports.
>

It was there even for IPTables: if gluster support is enabled on the cluster,
then gluster-specific ports were opened even with IPTables. The FirewallD
feature continues to use that.


> glusterfs-server.rpm is required to provide gluster storage (gluster use
> case).
> If I recall correctly, firewalld feature has differentiated between
> the two; opening needed ports only when relevant.
>

Right, but if gluster services are configured for firewalld, it means that
the host has been added to the cluster with the gluster feature enabled, not
only virt.


>
> >
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
> >
> >
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Dan Kenigsberg
On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina  wrote:
>
>
> On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg  wrote:
>>
>> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
>> wrote:
>>>
>>> Hi all,
>>>
>>> The following test is failing: 002_bootstrap.verify_add_hosts
>>> All logs from failing job
>>> Only 2 engine patches participated in the test, so the suspected patches
>>> are:
>>>
>>> https://gerrit.ovirt.org/#/c/82542/2
>>> https://gerrit.ovirt.org/#/c/82545/3
>>>
> >>> Due to the fact that when this error was first introduced we had another
> >>> error, the CI can't automatically detect the specific patch.
>>>
>>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
>>>
>>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
>>> 
>>> failed: [lago-basic-suite-master-host-0] (item={u'service':
>>> u'glusterfs'}) => {"changed": false, "failed": true, "item": {"service":
>>> "glusterfs"}, "msg": "ERROR: Exception caught:
>>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not
>>> among existing services Permanent and Non-Permanent(immediate) operation,
>>> Services are defined by port/tcp relationship and named as they are in
>>> /etc/services (on most systems)"}
>>>
>>>
>>> Error from HOST 0 firewalld log:
>>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
>>>
>>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
>>> existing services
>>
>>
>> Ondra, would such an error propagate through the playbook to Engine and
>> fail the add-host flow? (I think it should!)
>
>
> We didn't do that so far, because of EL 7.3. We need firewalld from 7.4 to
> have all available services in place (I don't remember, but I think the
> imageio service was the one delivered only in firewalld from 7.4). So up
> until now we ignore non-existent firewalld services, but if needed we can
> turn this on and fail host deploy.

Ok, so for now you're "luckily" off the hook and not the reason for the failure.

>>
>>
>> Do you know which package provides the glusterfs firewalld service, and why
>> it is missing from the host?
>
>
> So we have used the 'glusterfs' firewalld service per Sahina's
> recommendation, which is included in the glusterfs-server package from
> version 3.7.6 [1]. But this package is not installed when installing
> packages for a cluster with
> gluster capabilities enabled. So now I'm confused: don't we need
> glusterfs-server package? If not, and we need those ports open because they
> are used by services from different already installed glusterfs packages,
> shouldn't the firewalld configuration be moved from glusterfs-server to
> glusterfs package?

glusterfs-cli.rpm is required to consume gluster storage (virt use
case), but I don't recall that it needs open ports.
glusterfs-server.rpm is required to provide gluster storage (gluster use case).
If I recall correctly, firewalld feature has differentiated between
the two; opening needed ports only when relevant.
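(A sketch of that differentiation, assuming the role derives its service
list from the cluster capabilities; the virt-side service name below is an
assumption for illustration, not the role's actual list.)

    def firewalld_services(virt_enabled, gluster_enabled):
        # open gluster ports only when the cluster actually provides
        # gluster storage; a virt-only cluster requests none of them
        services = []
        if virt_enabled:
            services.append('ovirt-vmconsole')  # assumed virt-side name
        if gluster_enabled:
            services.append('glusterfs')  # shipped by glusterfs-server >= 3.7.6
        return services

    # virt-only cluster: no gluster service is requested
    print(firewalld_services(virt_enabled=True, gluster_enabled=False))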

>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Eyal Edri
On Thu, Oct 19, 2017 at 11:51 AM, Barak Korren  wrote:

>
>
> On 19 October 2017 at 11:43, Eyal Edri  wrote:
>
>>
>>
>> On Thu, Oct 19, 2017 at 10:50 AM, Allon Mureinik 
>> wrote:
>>
>>> The missing deps issue happened again this morning [1]:
>>>
>>> Traceback (most recent call last):
>>>   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133, in 
>>> _executeMethod
>>> method['method']()
>>>   File 
>>> "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py", line 
>>> 256, in _packages
>>> if self._miniyum.buildTransaction():
>>>   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920, in 
>>> buildTransaction
>>> raise yum.Errors.YumBaseError(msg)
>>> YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires 
>>> libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', 
>>> u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 
>>> 20170123-1.git4e85b27.el7_4.1']
>>> 2017-10-19 01:36:37,275-0400 ERROR otopi.context context._executeMethod:152 
>>> Failed to execute stage 'Package installation': 
>>> [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm 
>>> >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires 
>>> ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
>>>
>>> We need to fix the missing packages (broken repo?) issue ASAP, as it would
>>> mask any other real problems we may have there
>>>
>>>
>> We're looking into it now, it's strange that official qemu-kvm-ev is
>> requiring a version of ipxe-roms-qemu with git sha
>> 20170123-1.git4e85b27.el7_4.1.
>> It looks like the same pkg is coming from centos base, updates and
>> kvm-commons, and some repos include an older version without the '4.1' suffix.
>>
>> However, it's strange that some jobs do pass, e.g. the last finished run
>> from 1.5 hours ago:
>>
>> http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovir
>> t-master_change-queue-tester/3362/
>>
>>
>
> There is nothing strange about this.
> The failure Allon linked to is in OST check-patch for Daniel's patch [1]
> that closes the external repos, so it is expected to fail on issues like
> this. It is not merged, so it doesn't cause any issues for "normal" (e.g.
> change-queue) OST runs.
>

I can't see that missing pkg error anymore actually, so whatever it was it
might be fixed (it also failed on another patch, not just Daniel's).
There might be a second issue here with the error Daniel sent about
glusterfs-server and firewalld; I think we should focus on investigating
that.


>
> [1] : https://gerrit.ovirt.org/c/82602/
>
> --
> Barak Korren
> RHV DevOps team, RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>



-- 

Eyal Edri

MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D

Red Hat EMEA
TRIED. TESTED. TRUSTED.
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Barak Korren
On 19 October 2017 at 11:43, Eyal Edri  wrote:

>
>
> On Thu, Oct 19, 2017 at 10:50 AM, Allon Mureinik 
> wrote:
>
>> The missing deps issue happened again this morning [1]:
>>
>> Traceback (most recent call last):
>>   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133, in 
>> _executeMethod
>> method['method']()
>>   File "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py", 
>> line 256, in _packages
>> if self._miniyum.buildTransaction():
>>   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920, in 
>> buildTransaction
>> raise yum.Errors.YumBaseError(msg)
>> YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires 
>> libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', 
>> u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 
>> 20170123-1.git4e85b27.el7_4.1']
>> 2017-10-19 01:36:37,275-0400 ERROR otopi.context context._executeMethod:152 
>> Failed to execute stage 'Package installation': 
>> [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm 
>> >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires 
>> ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
>>
>> We need to fix the missing packages (broken repo?) issue ASAP, as it would
>> mask any other real problems we may have there
>>
>>
> We're looking into it now, it's strange that official qemu-kvm-ev is
> requiring a version of ipxe-roms-qemu with git sha
> 20170123-1.git4e85b27.el7_4.1.
> It looks like the same pkg is coming from centos base, updates and
> kvm-commons, and some repos include an older version without the '4.1' suffix.
>
> However, it's strange that some jobs do pass, e.g. the last finished run
> from 1.5 hours ago:
>
> http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/
> ovirt-master_change-queue-tester/3362/
>
>

There is nothing strange about this.
The failure Allon linked to is in OST check-patch for Daniel's patch [1]
that closes the external repos, so it is expected to fail on issues like
this. It is not merged, so it doesn't cause any issues for "normal" (e.g.
change-queue) OST runs.

[1] : https://gerrit.ovirt.org/c/82602/

-- 
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Barak Korren
On 19 October 2017 at 10:50, Allon Mureinik  wrote:

> The missing deps issue happened again this morning [1]:
>
>
Why are you looking at the OST check-patch job? It has little to do with how
OST runs when it is used to check other projects (for example, it runs all
suites as opposed to just the stable ones, and it does not use our repo
protection mechanisms...). Also, that particular patch plays with repo
configuration, so it is expected to fail on repo issues...

OST is stable ATM for all projects except engine; here are some passing
examples:
-
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3361/
-
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3362/
-
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3349/

I think that is strong enough evidence that the issue is in engine code and
not in OST/repos/other places people like to point to.

-- 
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Eyal Edri
On Thu, Oct 19, 2017 at 10:50 AM, Allon Mureinik 
wrote:

> The missing deps issue happened again this morning [1]:
>
> Traceback (most recent call last):
>   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133, in 
> _executeMethod
> method['method']()
>   File "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py", 
> line 256, in _packages
> if self._miniyum.buildTransaction():
>   File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920, in 
> buildTransaction
> raise yum.Errors.YumBaseError(msg)
> YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires 
> libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', 
> u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 
> 20170123-1.git4e85b27.el7_4.1']
> 2017-10-19 01:36:37,275-0400 ERROR otopi.context context._executeMethod:152 
> Failed to execute stage 'Package installation': 
> [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm 
> >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires 
> ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
>
> We need to fix the missing packages (broken repo?) issue ASAP, as it would
> mask any other real problems we may have there
>
>
We're looking into it now, it's strange that official qemu-kvm-ev is
requiring a version of ipxe-roms-qemu with git sha
20170123-1.git4e85b27.el7_4.1.
It looks like the same pkg is coming from centos base, updates and
kvm-commons, and some repos include an older version without the '4.1' suffix.

However, it's strange that some jobs do pass, e.g. the last finished run from
1.5 hours ago:

http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3362/
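(To spot which enabled repo still carries the older build, something like
this repoquery sketch can help; it assumes yum-utils is installed and is run
on a host with the same repos enabled.)

    import subprocess

    # list every available build of ipxe-roms-qemu together with the repo
    # it comes from, to find repos still missing the el7_4.1 build
    out = subprocess.check_output([
        'repoquery', '--show-duplicates',
        '--qf', '%{repoid} %{name}-%{version}-%{release}',
        'ipxe-roms-qemu',
    ])
    for line in sorted(set(out.decode().splitlines())):
        print(line)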


>
> [1] 
> http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_master_check-patch-el7-x86_64/1977/artifact/exported-artifacts/basic-suite-master__logs/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-20171019013637-lago-basic-suite-master-host-0-4a91e03d.log/*view*/
>
>
> On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina 
> wrote:
>
>>
>>
>> On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg 
>> wrote:
>>
>>> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
>>> wrote:
>>>
 Hi all,

 The following test is failing: 002_bootstrap.verify_add_hosts
 All logs from failing job
 Only 2 engine patches participated in the test, so the suspected
 patches are:

1. https://gerrit.ovirt.org/#/c/82542/2

2. https://gerrit.ovirt.org/#/c/82545/3

 Due to the fact that when this error was first introduced we had another
 error, the CI can't automatically detect the specific patch.

 Error snippet from logs: ovirt-host-deploy-ansible log (Full log)

 TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] 
 
 failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) 
 => {"changed": false, "failed": true, "item": {"service": "glusterfs"}, 
 "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: 
 INVALID_SERVICE: 'glusterfs' not among existing services Permanent and 
 Non-Permanent(immediate) operation, Services are defined by port/tcp 
 relationship and named as they are in /etc/services (on most systems)"}


 Error from HOST 0 firewalld log:
 lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)

 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among existing 
 services


>>> Ondra, would such an error propagate through the playbook to Engine and
>>> fail the add-host flow? (I think it should!)
>>>
>>
>> We didn't do that so far, because of EL 7.3. We need firewalld from 7.4 to
>> have all available services in place (I don't remember, but I think the
>> imageio service was the one delivered only in

Re: [ovirt-devel] Cockpit oVirt support

2017-10-19 Thread Michal Skrivanek

> On 18 Oct 2017, at 13:32, Barak Korren  wrote:
> 
> 
> 
> On 18 October 2017 at 10:24, Michal Skrivanek  wrote:
> Hi all,
> I'm happy to announce that we finally finished the initial contribution of
> oVirt-specific support into the Cockpit management platform.
> See below for more details.
> 
> There is only a limited set of operations you can do at the moment, but it
> may already be interesting for troubleshooting and simple admin actions where
> you don't want to launch the full-blown webadmin UI.
> 
> Worth noting that if you were ever intimidated by the complexity of the GWT 
> UI of oVirt portals and it held you back from contributing, please take 
> another look!
> 
> Thanks,
> michal
> 
> Very nice work!

thanks!
Also note the Cockpit automation testing framework, which now covers
operations on top of a stable oVirt environment.

> 
> Where is this going? Are all WebAdmin features planned to be supported at 
> some point? It's kinda nice to be able to access and manage the systems from 
> any one of the hosts instead of having to know where the engine is…

Note that for anything meaningful it does need an engine API connection;
that's not going to change, really, due to the oVirt architecture.
But who knows how it goes... :) There are pieces of functionality and
configuration which can be done against oVirt VMs at the libvirt level, like
the "shutdown" action Marek mentioned, but mostly it's introspection. With the
parallel "VM XML" effort which gets into 4.2 we do have a complete VM
definition in the form of libvirt XML, so the cockpit code should be able to
figure out a lot of properties from what it can see, and connect them with the
engine's information (e.g. it sees network interfaces in the libvirt XML and
can correlate them to the engine's logical networks).
But for a large part of webadmin functionality it's not really feasible: the
engine does a lot of things, and making changes without visibility into the
whole setup is not a good idea.
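(To make the "needs an engine API connection" point concrete, here is a
minimal sketch in Python rather than Cockpit's JavaScript; it assumes
ovirt-engine-sdk4 is installed, and the URL and credentials are placeholders.)

    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        insecure=True,  # demo only; pass ca_file=... in a real setup
    )
    # listing VMs with their statuses is exactly the kind of data that
    # is only available through the engine
    for vm in connection.system_service().vms_service().list():
        print(vm.name, vm.status)
    connection.close()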

Thanks,
michal

> 
> 
> -- 
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Allon Mureinik
The missing deps issue happened again this morning [1]:

Traceback (most recent call last):
  File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133,
in _executeMethod
method['method']()
  File "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py",
line 256, in _packages
if self._miniyum.buildTransaction():
  File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920,
in buildTransaction
raise yum.Errors.YumBaseError(msg)
YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires
libvirt-daemon-kvm >= 3.2.0-14.el7_4.3',
u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >=
20170123-1.git4e85b27.el7_4.1']
2017-10-19 01:36:37,275-0400 ERROR otopi.context
context._executeMethod:152 Failed to execute stage 'Package
installation': [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64
requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3',
u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >=
20170123-1.git4e85b27.el7_4.1']

We need to fix the missing packages (broken repo?) issue ASAP, as it
would mask any other real problems we may have there


[1] 
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_master_check-patch-el7-x86_64/1977/artifact/exported-artifacts/basic-suite-master__logs/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-20171019013637-lago-basic-suite-master-host-0-4a91e03d.log/*view*/


On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina  wrote:

>
>
> On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg  wrote:
>
>> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
>> wrote:
>>
>>> Hi all,
>>>
>>> The following test is failing: 002_bootstrap.verify_add_hosts
>>> All logs from failing job
>>> Only 2 engine patches participated in the test, so the suspected
>>> patches are:
>>>
>>>1. https://gerrit.ovirt.org/#/c/82542/2
>>>
>>>2. https://gerrit.ovirt.org/#/c/82545/3
>>>
>>> Due to the fact that when this error was first introduced we had another
>>> error, the CI can't automatically detect the specific patch.
>>>
>>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
>>>
>>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] 
>>> 
>>> failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) 
>>> => {"changed": false, "failed": true, "item": {"service": "glusterfs"}, 
>>> "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: 
>>> INVALID_SERVICE: 'glusterfs' not among existing services Permanent and 
>>> Non-Permanent(immediate) operation, Services are defined by port/tcp 
>>> relationship and named as they are in /etc/services (on most systems)"}
>>>
>>>
>>> Error from HOST 0 firewalld log:
>>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
>>>
>>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among existing 
>>> services
>>>
>>>
>> Ondra, would such an error propagate through the playbook to Engine and
>> fail the add-host flow? (I think it should!)
>>
>
> We didn't do that so far, because of EL 7.3. We need firewalld from 7.4 to
> have all available services in place (I don't remember, but I think the
> imageio service was the one delivered only in firewalld from 7.4). So up
> until now we ignore non-existent firewalld services, but if needed we can
> turn this on and fail host deploy.
>
>>
>> Do you know which package provides the glusterfs firewalld service, and
>> why it is missing from the host?
>>
>
> So we have used the 'glusterfs' firewalld service per Sahina's
> recommendation, which is included in the glusterfs-server package from
> version 3.7.6 [1]. But this package is not installed when installing
> packages for a cluster with
> gluster capabilities enabled. So now I'm confused: don't we need
> glusterfs-server package? If not, and we need those ports open because they
> are used by services from different already installed 

Re: [ovirt-devel] [ovirt-users] Cockpit oVirt support

2017-10-19 Thread Michal Skrivanek

> On 19 Oct 2017, at 09:06, Marek Libra  wrote:
> 
> Regarding libvirt, there's a fallback to the Libvirt provider (part of
> cockpit-machines) for VMs which are not managed by oVirt.

It’s layered, so the oVirt specifics are on top of the libvirt-based code. So 
the whole VM management functionality primarily works with libvirt VMs, and 
oVirt is “just” an extension in functionality and data presentation.
We first develop it with pure libvirt in mind, and then extend it with what 
oVirt can give us in addition.
The great thing about it is that every improvement to the generic code is
beneficial for oVirt too, and we can selectively choose for which actions the
engine is required or not.


> For the oVirt ones, the oVirt API handles all the actions.
> 
> It's not yet implemented, but I'm considering falling back to Libvirt for
> selected actions in case the oVirt API can't be reached, like for shut down
> [1].
> 
> Anyway, there's still an open question with the Libvirt connection since
> it's secured on an oVirt host.
> 
> [1] https://github.com/cockpit-project/cockpit/issues/7670 
> 
> 
On Wed, Oct 18, 2017 at 3:18 PM, Ryan Barry  wrote:
> This looks great, guys. Congrats!
> 
> Does this also work with plain libvirt?
> 
> On Wed, Oct 18, 2017 at 3:24 AM, Michal Skrivanek  wrote:
> Hi all,
> I'm happy to announce that we finally finished the initial contribution of
> oVirt-specific support into the Cockpit management platform.
> See below for more details.
> 
> There is only a limited set of operations you can do at the moment, but it
> may already be interesting for troubleshooting and simple admin actions where
> you don't want to launch the full-blown webadmin UI.
> 
> Worth noting that if you were ever intimidated by the complexity of the GWT 
> UI of oVirt portals and it held you back from contributing, please take 
> another look!
> 
> Thanks,
> michal
> 
>> Begin forwarded message:
>> 
>> From: Marek Libra
>> Subject: Re: Cockpit 153 released
>> Date: 17 October 2017 at 16:02:59 GMT+2
>> To: Development discussion for the Cockpit Project
>> Reply-To: Development discussion for the Cockpit Project
>> 
>> Walk-through video for the new "oVirt Machines" page can be found here: 
>> https://youtu.be/5i-kshT6c5A 
>> 
>> On Tue, Oct 17, 2017 at 12:08 PM, Martin Pitt  wrote:
>> http://cockpit-project.org/blog/cockpit-153.html 
>> 
>> 
>> Cockpit is the modern Linux admin interface. We release regularly. Here
>> are the release notes from version 153.
>> 
>> 
>> Add oVirt package
>> -
>> 
>> This version introduces the "oVirt Machines" page on Fedora for controlling
>> oVirt virtual machine clusters.  This code was moved into Cockpit as it 
>> shares
>> a lot of code with the existing "Machines" page, which manages virtual 
>> machines
>> through libvirt.
>> 
>> This feature is packaged in cockpit-ovirt and when installed it will replace
>> the "Machines" page.
>> 
>> Thanks to Marek Libra for working on this!
>> 
>> Screenshot:
>> 
>> http://cockpit-project.org/images/ovirt-overview.png 
>> 
>> 
>> Change: https://github.com/cockpit-project/cockpit/pull/7139 
>> 
>> 
>> 
>> Packaging cleanup
>> -
>> 
>> This release fixes a lot of small packaging issues that were spotted by
>> rpmlint/lintian.
>> 
>> Get it
>> --
>> 
>> You can get Cockpit here:
>> 
>> http://cockpit-project.org/running.html 
>> 
>> 
>> Cockpit 153 is available in Fedora 27:
>> 
>> https://bodhi.fedoraproject.org/updates/cockpit-153-1.fc27 
>> 
>> 
>> Or download the tarball here:
>> 
>> https://github.com/cockpit-project/cockpit/releases/tag/153 
>> 
>> 
>> 
>> Take care,
>> 
>> Martin Pitt
>> 
>> ___
>> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org 
>> 
>> To unsubscribe send an email to cockpit-devel-le...@lists.fedorahosted.org 
>> 
>> 
>> 
>> 
>> 
>> -- 
>> MAREK LIBRA
>> SENIOR SOFTWARE ENGINEER
>> Red Hat Czech
>> 
>> ___
>> cockpit-devel mailing list -- 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Martin Perina
On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg  wrote:

> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky 
> wrote:
>
>> Hi all,
>>
>> The following test is failing: 002_bootstrap.verify_add_hosts
>> All logs from failing job
>> Only 2 engine patches participated in the test, so the suspected patches
>> are:
>>
>>1. https://gerrit.ovirt.org/#/c/82542/2
>>
>>2. https://gerrit.ovirt.org/#/c/82545/3
>>
>> Due to the fact that when this error was first introduced we had another
>> error, the CI can't automatically detect the specific patch.
>>
>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
>>
>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] 
>> 
>> failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) 
>> => {"changed": false, "failed": true, "item": {"service": "glusterfs"}, 
>> "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: 
>> INVALID_SERVICE: 'glusterfs' not among existing services Permanent and 
>> Non-Permanent(immediate) operation, Services are defined by port/tcp 
>> relationship and named as they are in /etc/services (on most systems)"}
>>
>>
>> Error from HOST 0 firewalld log:
>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
>>
>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among existing 
>> services
>>
>>
> Ondra, would such an error propagate through the playbook to Engine and
> fail the add-host flow? (I think it should!)
>

We didn't do that so far, because of EL 7.3. We need firewalld from 7.4 to
have all available services in place (I don't remember, but I think the
imageio service was the one delivered only in firewalld from 7.4). So up
until now we ignore non-existent firewalld services, but if needed we can
turn this on and fail host deploy.

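(A sketch of that lenient behaviour, with the optional strict mode described
above; the firewall-cmd call stands in for what the role does and is not the
actual role code.)

    import subprocess

    def enable_firewalld_service(name, fail_on_missing=False):
        # on EL 7.3 some service definitions simply do not exist yet,
        # so by default a failure here is ignored rather than fatal
        rc = subprocess.call(['firewall-cmd', '--permanent',
                              '--add-service=%s' % name])
        if rc != 0 and fail_on_missing:
            raise RuntimeError("firewalld service '%s' not available" % name)
        return rc == 0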
>
> Do you know which package provides the glusterfs firewalld service, and why
> it is missing from the host?
>

So we have used the 'glusterfs' firewalld service per Sahina's
recommendation, which is included in the glusterfs-server package from
version 3.7.6 [1]. But this package is not installed when installing packages
for a cluster with gluster capabilities enabled. So now I'm confused: don't
we need the glusterfs-server package? If not, and we need those ports open
because they
are used by services from different already installed glusterfs packages,
shouldn't the firewalld configuration be moved from glusterfs-server to
glusterfs package?


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-users] Cockpit oVirt support

2017-10-19 Thread Marek Libra
Regarding libvirt, there's a fallback to the Libvirt provider (part of
cockpit-machines) for VMs which are not managed by oVirt.
For the oVirt ones, the oVirt API handles all the actions.

It's not yet implemented, but I'm considering falling back to Libvirt for
selected actions in case the oVirt API can't be reached, like for shut down
[1].

Anyway, there's still an open question with the Libvirt connection since it's
secured on an oVirt host.

[1] https://github.com/cockpit-project/cockpit/issues/7670
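(Roughly what such a fallback could look like, sketched in Python; it
assumes ovirt-engine-sdk4 and libvirt-python, the engine URL and credentials
are placeholders, and it glosses over the secured-libvirt question above.)

    import libvirt
    import ovirtsdk4 as sdk

    def shutdown(vm_name):
        try:
            conn = sdk.Connection(
                url='https://engine.example.com/ovirt-engine/api',  # placeholder
                username='admin@internal', password='secret', insecure=True)
            vms = conn.system_service().vms_service()
            vm = vms.list(search='name=%s' % vm_name)[0]
            vms.vm_service(vm.id).shutdown()  # graceful shutdown via engine
            conn.close()
        except Exception:
            # engine unreachable: fall back to plain libvirt; note that on
            # an oVirt host this connection is secured (the open question)
            virt = libvirt.open('qemu:///system')
            virt.lookupByName(vm_name).shutdown()
            virt.close()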

On Wed, Oct 18, 2017 at 3:18 PM, Ryan Barry  wrote:

> This looks great, guys. Congrats!
>
> Does this also work with plain libvirt?
>
> On Wed, Oct 18, 2017 at 3:24 AM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>> Hi all,
>> I'm happy to announce that we finally finished the initial contribution of
>> oVirt-specific support into the Cockpit management platform.
>> See below for more details.
>>
>> There is only a limited set of operations you can do at the moment, but
>> it may already be interesting for troubleshooting and simple admin actions
>> where you don't want to launch the full-blown webadmin UI.
>>
>> Worth noting that if you were ever intimidated by the complexity of the
>> GWT UI of oVirt portals and it held you back from contributing, please take
>> another look!
>>
>> Thanks,
>> michal
>>
>> Begin forwarded message:
>>
>> *From: *Marek Libra 
>> *Subject: **Re: Cockpit 153 released*
>> *Date: *17 October 2017 at 16:02:59 GMT+2
>> *To: *Development discussion for the Cockpit Project <
>> cockpit-de...@lists.fedorahosted.org>
>> *Reply-To: *Development discussion for the Cockpit Project <
>> cockpit-de...@lists.fedorahosted.org>
>>
>> Walk-through video for the new "oVirt Machines" page can be found here:
>> https://youtu.be/5i-kshT6c5A
>>
>> On Tue, Oct 17, 2017 at 12:08 PM, Martin Pitt  wrote:
>>
>>> http://cockpit-project.org/blog/cockpit-153.html
>>>
>>> Cockpit is the modern Linux admin interface. We release regularly. Here
>>> are the release notes from version 153.
>>>
>>>
>>> Add oVirt package
>>> -
>>>
>>> This version introduces the "oVirt Machines" page on Fedora for
>>> controlling
>>> oVirt virtual machine clusters.  This code was moved into Cockpit as it
>>> shares
>>> a lot of code with the existing "Machines" page, which manages virtual
>>> machines
>>> through libvirt.
>>>
>>> This feature is packaged in cockpit-ovirt and when installed it will
>>> replace
>>> the "Machines" page.
>>>
>>> Thanks to Marek Libra for working on this!
>>>
>>> Screenshot:
>>>
>>> http://cockpit-project.org/images/ovirt-overview.png
>>>
>>> Change: https://github.com/cockpit-project/cockpit/pull/7139
>>>
>>>
>>> Packaging cleanup
>>> -
>>>
>>> This release fixes a lot of small packaging issues that were spotted by
>>> rpmlint/lintian.
>>>
>>> Get it
>>> --
>>>
>>> You can get Cockpit here:
>>>
>>> http://cockpit-project.org/running.html
>>>
>>> Cockpit 153 is available in Fedora 27:
>>>
>>> https://bodhi.fedoraproject.org/updates/cockpit-153-1.fc27
>>>
>>> Or download the tarball here:
>>>
>>> https://github.com/cockpit-project/cockpit/releases/tag/153
>>>
>>>
>>> Take care,
>>>
>>> Martin Pitt
>>>
>>> ___
>>> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>>> To unsubscribe send an email to cockpit-devel-le...@lists.fedo
>>> rahosted.org
>>>
>>>
>>
>>
>> --
>> Marek Libra
>>
>> senior software engineer
>> Red Hat Czech
>>
>> ___
>> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> To unsubscribe send an email to cockpit-devel-le...@lists.fedo
>> rahosted.org
>>
>>
>>
>> ___
>> Users mailing list
>> us...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> RYAN BARRY
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR
>
> Red Hat NA
>
> rba...@redhat.com   M: +1-651-815-9306   IM: rbarry
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

Marek Libra

senior software engineer

Red Hat Czech


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-19 Thread Dan Kenigsberg
On Thu, Oct 19, 2017 at 8:35 AM, Dan Kenigsberg  wrote:
> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky  wrote:
>>
>> Hi all,
>>
>> The following test is failing: 002_bootstrap.verify_add_hosts
>> All logs from failing job
>> Only 2 engine patches participated in the test, so the suspected patches
>> are:
>>
>> https://gerrit.ovirt.org/#/c/82542/2
>> https://gerrit.ovirt.org/#/c/82545/3
>>
>> Due to the fact that when this error was first introduced we had another
>> error, the CI can't automatically detect the specific patch.
>>
>> Error snippet from logs: ovirt-host-deploy-ansible log (Full log)
>>
>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules]
>> 
>> failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'})
>> => {"changed": false, "failed": true, "item": {"service": "glusterfs"},
>> "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception:
>> INVALID_SERVICE: 'glusterfs' not among existing services Permanent and
>> Non-Permanent(immediate) operation, Services are defined by port/tcp
>> relationship and named as they are in /etc/services (on most systems)"}
>>
>>
>> Error from HOST 0 firewalld log:
>> lago-basic-suite-master-host-0/_var_log/firewalld/ (Full log)
>>
>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among existing
>> services
>
>
> Ondra, would such an error propagate through the playbook to Engine and fail
> the add-host flow? (I think it should!)
>
> Do you know which package provides the glusterfs firewalld service, and why
> it is missing from the host?
>

I see that /usr/lib/firewalld/services/glusterfs.xml is shipped by
glusterfs-server.rpm, but it provides a service named
"glusterfs-static". In any case, a virt-only cluster should not be
installing glusterfs-server.rpm nor requiring any glusterfs firewall
service.
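(That check can be automated; a small sketch, assuming the service
definitions live under /usr/lib/firewalld/services as on EL7.)

    import glob
    import subprocess

    # map every firewalld service definition back to the rpm that ships it
    for xml in sorted(glob.glob('/usr/lib/firewalld/services/*.xml')):
        pkg = subprocess.check_output(['rpm', '-qf', xml]).decode().strip()
        print('%-55s %s' % (xml, pkg))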
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel