[ovirt-devel] Re: PoC - using pre-built VM images in OST

2020-04-27 Thread Barak Korren
On Mon, Apr 27, 2020, 17:15 Marcin Sobczyk <
msobc...@redhat.com> wrote:

> Hi,
>
> recently I've been working on a PoC for OST that replaces the usage
> of lago templates with pre-built, layered VM images packed in RPMs [2][7].
>
>
> What's the motivation?
>
> There are two big pains around OST - the first is that it's slow,
> and the second is that it uses lago, which is unmaintained.
>
>
> How is OST working currently?
>
> Lago launches VMs based on templates. It actually has its own mechanism
> for VM templating - you can find the ones that we currently use here [1].
> How are these templates created? There is a multiple-page doc somewhere
> that describes the process, but few are familiar with it. These templates
> are nothing special really - just an xz-compressed qcow with some metadata
> attached. The proposition here is to replace those templates with RPMs
> that have qcows inside. The RPMs themselves would be built by a CI
> pipeline. An example of such a pipeline can be found here [2].
>
>
> Why RPMs?
>
> It ticks all the boxes really. RPMs provide:
> - tried and well-known mechanisms for packaging, versioning, and
>   distribution instead of lago's custom ones
> - dependencies, which make it possible to layer the VM images in a
>   controllable way
> - we already install RPMs when running OST, so using the new ones is just
>   a matter of adding some dependencies
>
>
> How does the image-building pipeline work? [3] (sketched below)
>
> - we download a DVD iso for installing the distro
> - we use 'virt-install' with the DVD iso + a kickstart file to build a
>   'base' layer qcow image
> - we create another qcow image that has the 'base' image as the backing
>   one. In this image we use 'virt-customize' to run 'dnf upgrade'. This is
>   our 'upgrade' layer.
> - we create two more qcow images that have the 'upgrade' image as the
>   backing one. On one of them we install the 'ovirt-host' package and on
>   the other 'ovirt-engine'. These are our 'host-installed' and
>   'engine-installed' layers.
> - we create 4 RPMs for these qcows:
>   * ost-images-base
>   * ost-images-upgrade
>   * ost-images-host-installed
>   * ost-images-engine-installed
> - we publish the RPMs to the templates.ovirt.org/yum/ DNF repository (not
>   implemented yet)
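>
> A minimal sketch of those steps, with illustrative file names (ks.cfg,
> base.qcow2, ...) and a CentOS 8 DVD iso - the actual PoC pipeline [2] is
> the authoritative version:
>
>   # 'base' layer: unattended install from the DVD iso + kickstart
>   virt-install --name ost-base --memory 2048 --noreboot \
>       --disk path=base.qcow2,size=10,format=qcow2 \
>       --location CentOS-8-x86_64-dvd1.iso \
>       --initrd-inject ks.cfg --extra-args "inst.ks=file:/ks.cfg"
>
>   # 'upgrade' layer: a new qcow backed by 'base', updated in place
>   qemu-img create -f qcow2 -b base.qcow2 upgrade.qcow2
>   virt-customize -a upgrade.qcow2 --run-command 'dnf upgrade -y'
>
>   # '*-installed' layers: two more qcows backed by 'upgrade'
>   qemu-img create -f qcow2 -b upgrade.qcow2 host-installed.qcow2
>   virt-customize -a host-installed.qcow2 --install ovirt-host
>   qemu-img create -f qcow2 -b upgrade.qcow2 engine-installed.qcow2
>   virt-customize -a engine-installed.qcow2 --install ovirt-engine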
>
> Each of those RPMs holds its respective qcow image. They also have proper
> dependencies set up - since the 'upgrade' layer requires the 'base' layer
> to be functional, it has an RPM requirement on that package. The same goes
> for the '*-installed' packages, which depend on the 'upgrade' package.
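>
> In spec-file terms the dependency part could look roughly like this (the
> payload path is an assumption; the package names are the ones above):
>
>   Name:     ost-images-upgrade
>   Requires: ost-images-base
>   ...
>   %files
>   /usr/share/ost-images/upgrade.qcow2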
>
> Since this is only a PoC, there's still a lot of room for improvement
> around the pipeline. The 'base' RPM would actually be built very rarely,
> since it's a bare distro, and the 'upgrade' and '*-installed' RPMs would
> be built nightly. This would allow us to simply type 'dnf upgrade' on any
> machine and have a fresh set of VMs ready to be used with OST.
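>
> In other words, consuming the images would boil down to something like
> this (repo setup omitted, since publishing isn't implemented yet):
>
>   dnf install ost-images-engine-installed  # deps pull in upgrade + base
>   dnf upgrade 'ost-images-*'               # refresh to last night's build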
>
>
> Advantages:
>
> - we have CI for building OST images instead of the current, obscure
>   template-creation process
> - we get rid of lots of unnecessary preparations that are done during each
>   OST run by moving stuff from 'deploy scripts' [4] to the image-building
>   pipeline - this should speed up the runs a lot
> - if the nightly pipeline for building images is not successful, the RPMs
>   won't be published and OST will use the older ones. This makes a nice
>   "early error detection" mechanism and can partially mitigate situations
>   where everything is blocked because of, e.g., some dependency issue
> - it's another step toward removing responsibilities from lago
> - the pre-built VM images can be used for much more than OST - functional
>   testing of vdsm/engine on a VM? We have an image for that
> - we can build images for multiple distros, both u/s and d/s, easily
>
>
> Caveats:
>
> - we have to download the RPMs before running OST, and that takes time,
>   since they're big. This can be handled by having them cached on the CI
>   slaves though.
> - current limitations of CI and lago force us to make a copy of the images
>   after installation so they can be seen both by the processes in the
>   chroot and by libvirt, which is running outside of the chroot. Right now
>   they're placed in '/dev/shm' (which would actually make some sense if
>   they could be shared among all OST runs on the slave, but that's another
>   story). There are some possible workarounds for that problem too (like
>   running pipelines on bare-metal machines with libvirt running inside the
>   chroot)
> - multiple qcow layers can slow down the runs because there's a lot of
>   jumping around. This can be handled by, e.g., introducing a meta package
>   that squashes all the layers into one (see the sketch at the end of this
>   message).
> - we need a way to run OST with custom-built artifacts. There are multiple
>   ways we can approach it:
>   * use the 'upgrade' layer instead of a '*-installed' one
>   * first build your artifacts, then build VM image RPMs that have your
>     artifacts
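>
> For reference, the layer-squashing workaround mentioned above could be as
> simple as (illustrative file names again):
>
>   # reads through the whole backing chain, emits one standalone image
>   qemu-img convert -O qcow2 engine-installed.qcow2 engine-flat.qcow2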
> 

[ovirt-devel] Re: running podman in mock

2020-01-29 Thread Barak Korren
I'm surprised you managed to make podman work at all - we know it is quite
unfriendly toward running inside chroots.
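
A quick way to confirm the d_type problem on the affected slaves, assuming
podman's default storage root, is:

  xfs_info /var/lib/containers/storage | grep ftype   # overlay needs ftype=1

and a possible (if slow) workaround is forcing the vfs storage driver in
/etc/containers/storage.conf:

  [storage]
  driver = "vfs"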

Please consider using our new container backend
<https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
instead of running your own containers via Podman/Docker.

On Wed, 29 Jan 2020 at 11:24, Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

>
>
> On Wed, Jan 29, 2020 at 10:12 AM Galit Rosenthal 
> wrote:
>
>> Hi Miguel,
>>
>> Are you running in an el7 mock env when it fails?
>>
>
> It works when mock creates an el7 env; fails when it creates an el8 / fc30
> env.
>
>
>>
>> Regards,
>> Galit
>>
>>
>> On Wed, Jan 29, 2020 at 10:59 AM Miguel Duarte de Mora Barroso <
>> mdbarr...@redhat.com> wrote:
>>
>>> On Wed, Jan 29, 2020 at 9:53 AM Miguel Duarte de Mora Barroso
>>>  wrote:
>>> >
>>> > Hi,
>>> >
>>> > When attempting to make the ovirt-provider-ovn integration tests run
>>> > in el8 (which requires podman instead of docker), I'm getting trouble
>>> > even running it (it being podman).
>>> >
>>> > It fails with the following error - e.g. on a podman info command:
>>> > Error: could not get runtime: kernel does not support overlay fs:
>>> > overlay: the backing xfs filesystem is formatted without d_type
>>> > support, which leads to incorrect behavior. Reformat the filesystem
>>> > with ftype=1 to enable d_type support. Running without d_type is not
>>> > supported.: driver not supported
>>>
>>> I forgot to add that this happens on the CI environment; locally (on
>>> fc30) I'm able to run it perfectly fine.
>>>
>>> >
>>> > Has anyone faced anything like this / is able to provide some pointers
>>> ?
>>> >
>>> > Thanks in advance,
>>> > Miguel
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6VNVO4YKXVJPAZC6MAZHHDYIWAIEQN2S/
>>>
>>
>>
>> --
>>
>> GALIT ROSENTHAL
>>
>> SOFTWARE ENGINEER
>>
>> Red Hat
>>
>> <https://www.redhat.com/>
>>
>> ga...@redhat.com   T: 972-9-7692230
>> <https://red.ht/sig>
>>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/GPVQ3JEW4IAHZHUMEIXXAMPBQVUPPZPF/


[ovirt-devel] Re: Container-based CI backend is now available for use

2019-12-31 Thread Barak Korren
*Update #2: *We have now merged all the patches that deal with artifact and
log collection, and have updated the documentation
<https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
accordingly.
The container-based backend should now be usable for the vast majority of
the CI use cases.

We do have some more features coming down the line, geared towards more
sophisticated use cases such as running OST suites and integrating with
gating and change-queue flows. Those include:

   1. Supporting the use of privileged containers
   2. Invoking the container-based backend from the gating jobs
   3. Generating and providing the `extra_sources` file
   4. Runtime injection of YUM mirror URLs
   5. Support for storing and using secret data such as passwords and auth
   tokens.

I invite everyone to start moving workloads to the new system and enjoy the
enhanced speed and reliability.

On Sun, 15 Dec 2019 at 14:23, Barak Korren  wrote:

> *Update: *We have now merged the automated cloning support feature; the
> currently merged code should already be applicable to a wide range of uses,
> including running 'check-patch' workloads.
>
> On Thu, 12 Dec 2019 at 09:00, Barak Korren  wrote:
>
>> A little less than a month ago I sent an email to this list telling you
>> all about ongoing work to create a new container-based backend for the
>> oVirt CI system.
>>
>> I'm pleased to announce that we have managed to finally merge an initial
>> set of patches implementing that backend yesterday, and it is now
>> available for general use.
>>
>> *What? Where? How do I use it?*
>>
>> Documentation about how to use the new backend is now available on Read
>> the Docs
>> <https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
>> .
>>
>> *Wait! I needed it to do X which it doesn't!*
>>
>> For the time being the new backend lacks some features that some may
>> consider to be essential, such as automated cloning of patch source code
>> and build artifact collection. We have already implemented patches
>> providing a substantial amount of additional functionality, and hopefully
>> we will be able to merge them soon. Following is a list of those patches
>> and the features they implement:
>>
>>    1. Automated source cloning support:
>>       - 104213 <https://gerrit.ovirt.org/104213>: Implement STDCI DSL
>>         support for initContainers
>>       - 104590 <https://gerrit.ovirt.org/104590>: STDCI DSL: Add the
>>         `decorate` option
>>       - 104668 <https://gerrit.ovirt.org/104668>: Document source
>>         cloning extension for containers
>>    2. Artifact collection support:
>>       - 104690 <https://gerrit.ovirt.org/104690>: Added NFS server
>>         container image
>>       - 104273 <https://gerrit.ovirt.org/104273>: STDCI PODS: Unique UID
>>         for each job build's POD
>>       - 104756 <https://gerrit.ovirt.org/104756>: pipeline-loader:
>>         refactor: separate podspec func
>>       - 104757 <https://gerrit.ovirt.org/104757>: pipeline-loader:
>>         refactor: Use podspec struct def
>>       - 104766 <https://gerrit.ovirt.org/104766>: STDCI PODs: Add
>>         artifact collection logic
>>       - 105522 <https://gerrit.ovirt.org/105522>: Documented artifact
>>         collection in containers
>>    3. Extended log collection:
>>       - 104842 <https://gerrit.ovirt.org/104842>: STDCI PODs: Add POD
>>         log collection
>>       - 105523 <https://gerrit.ovirt.org/105523>: Documented log
>>         collection in containers
>>    4. Privileged container support:
>>       - 104786 <https://gerrit.ovirt.org/104786>: STDCI DSL: Enable
>>         privileged containers
>>    5. Support for using containers in gating jobs:
>>       - 104804 <https://gerrit.ovirt.org/104804>: standard-stage:
>>         refactor: move DSL to a library
>>       - 104811 <https://gerrit.ovirt.org/104811>: gate: Support getting
>>         suits from STDCI DSL
>>    6. Providing the `extra_sources` file to OST suite containers:
>>       - 104843 <https://gerrit.ovirt.org/104843>: stdci_runner: Create
>>         extra_sources for PODs
>>    7. Support for mirror injection and upstream source cloning:
>>       - 104917 <https://gerrit.ovirt.org/104917>: Added a container with
>>         STDCI tools
>>       - 104918 <https://gerrit.ovirt.org/104918>: decorate.py: Add script
>>       - 104989 <https://gerrit.ovirt.org/104989>: STDCI DSL: Use `tools`
>>         container for `decorate`

[ovirt-devel] Re: Container-based CI backend is now available for use

2019-12-15 Thread Barak Korren
*Update: *We have now merged the automated cloning support feature; the
currently merged code should already be applicable to a wide range of uses,
including running 'check-patch' workloads.

On Thu, 12 Dec 2019 at 09:00, Barak Korren  wrote:

> A little less than a month ago I sent an email to this list telling you
> all about ongoing work to create a new container-based backend for the
> oVirt CI system.
>
> I'm pleased to announce that we have managed to finally merge an initial
> set of patches implementing that backend yesterday, and it is now
> available for general use.
>
> *What? Where? How do I use it?*
>
> Documentation about how to use the new backend is now available on Read
> the Docs
> <https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
> .
>
> *Wait! I needed it to do X which it doesn't!*
>
> For the time being the new backend lacks some features that some may
> consider to be essential, such as automated cloning of patch source code
> and build artifact collection. We have already implemented patches
> providing a substantial amount of additional functionality, and hopefully
> we will be able to merge them soon. Following is a list of those patches
> and the features they implement:
>
>    1. Automated source cloning support:
>       - 104213 <https://gerrit.ovirt.org/104213>: Implement STDCI DSL
>         support for initContainers
>       - 104590 <https://gerrit.ovirt.org/104590>: STDCI DSL: Add the
>         `decorate` option
>       - 104668 <https://gerrit.ovirt.org/104668>: Document source cloning
>         extension for containers
>    2. Artifact collection support:
>       - 104690 <https://gerrit.ovirt.org/104690>: Added NFS server
>         container image
>       - 104273 <https://gerrit.ovirt.org/104273>: STDCI PODS: Unique UID
>         for each job build's POD
>       - 104756 <https://gerrit.ovirt.org/104756>: pipeline-loader:
>         refactor: separate podspec func
>       - 104757 <https://gerrit.ovirt.org/104757>: pipeline-loader:
>         refactor: Use podspec struct def
>       - 104766 <https://gerrit.ovirt.org/104766>: STDCI PODs: Add
>         artifact collection logic
>       - 105522 <https://gerrit.ovirt.org/105522>: Documented artifact
>         collection in containers
>    3. Extended log collection:
>       - 104842 <https://gerrit.ovirt.org/104842>: STDCI PODs: Add POD log
>         collection
>       - 105523 <https://gerrit.ovirt.org/105523>: Documented log
>         collection in containers
>    4. Privileged container support:
>       - 104786 <https://gerrit.ovirt.org/104786>: STDCI DSL: Enable
>         privileged containers
>    5. Support for using containers in gating jobs:
>       - 104804 <https://gerrit.ovirt.org/104804>: standard-stage:
>         refactor: move DSL to a library
>       - 104811 <https://gerrit.ovirt.org/104811>: gate: Support getting
>         suits from STDCI DSL
>    6. Providing the `extra_sources` file to OST suite containers:
>       - 104843 <https://gerrit.ovirt.org/104843>: stdci_runner: Create
>         extra_sources for PODs
>    7. Support for mirror injection and upstream source cloning:
>       - 104917 <https://gerrit.ovirt.org/104917>: Added a container with
>         STDCI tools
>       - 104918 <https://gerrit.ovirt.org/104918>: decorate.py: Add script
>       - 104989 <https://gerrit.ovirt.org/104989>: STDCI DSL: Use `tools`
>         container for `decorate`
>       - 104994 <https://gerrit.ovirt.org/104994>: stdci_runner: Inject
>         mirrors in PODs
>
>
> As you can see, we have quite a big pile of reviews to do; as always, help
> is very welcome...
>
> Regards,
> Barak.
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XNZMWHDSTG77LZGP6DNDS6BRCH72JDXH/


[ovirt-devel] Container-based CI backend is now available for use

2019-12-11 Thread Barak Korren
A little less than a month ago I sent an email to this list telling you all
about ongoing work to create a new container-based backend for the oVirt CI
system.

I'm pleased to announce that we have managed to finally merge an initial
set of patches implementing that backend yesterday, and it is now
available for general use.

*What? Where? How do I use it?*

Documentation about how to use the new backend is now available on Read the
Docs
<https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
.

*Wait! I needed it to do X which it doesn't!*

For the time being the new backend lacks some features that some may
consider to be essential, such as automated cloning of patch source code
and build artifact collection. We have already implemented patches
providing a substantial amount of additional functionality, and hopefully
we will be able to merge them soon. Following is a list of those patches
and the features they implement:

   1. Automated source cloning support:
      - 104213 <https://gerrit.ovirt.org/104213>: Implement STDCI DSL
        support for initContainers
      - 104590 <https://gerrit.ovirt.org/104590>: STDCI DSL: Add the
        `decorate` option
      - 104668 <https://gerrit.ovirt.org/104668>: Document source cloning
        extension for containers
   2. Artifact collection support:
      - 104690 <https://gerrit.ovirt.org/104690>: Added NFS server
        container image
      - 104273 <https://gerrit.ovirt.org/104273>: STDCI PODS: Unique UID
        for each job build's POD
      - 104756 <https://gerrit.ovirt.org/104756>: pipeline-loader:
        refactor: separate podspec func
      - 104757 <https://gerrit.ovirt.org/104757>: pipeline-loader:
        refactor: Use podspec struct def
      - 104766 <https://gerrit.ovirt.org/104766>: STDCI PODs: Add artifact
        collection logic
      - 105522 <https://gerrit.ovirt.org/105522>: Documented artifact
        collection in containers
   3. Extended log collection:
      - 104842 <https://gerrit.ovirt.org/104842>: STDCI PODs: Add POD log
        collection
      - 105523 <https://gerrit.ovirt.org/105523>: Documented log collection
        in containers
   4. Privileged container support:
      - 104786 <https://gerrit.ovirt.org/104786>: STDCI DSL: Enable
        privileged containers
   5. Support for using containers in gating jobs:
      - 104804 <https://gerrit.ovirt.org/104804>: standard-stage: refactor:
        move DSL to a library
      - 104811 <https://gerrit.ovirt.org/104811>: gate: Support getting
        suits from STDCI DSL
   6. Providing the `extra_sources` file to OST suite containers:
      - 104843 <https://gerrit.ovirt.org/104843>: stdci_runner: Create
        extra_sources for PODs
   7. Support for mirror injection and upstream source cloning:
      - 104917 <https://gerrit.ovirt.org/104917>: Added a container with
        STDCI tools
      - 104918 <https://gerrit.ovirt.org/104918>: decorate.py: Add script
      - 104989 <https://gerrit.ovirt.org/104989>: STDCI DSL: Use `tools`
        container for `decorate`
      - 104994 <https://gerrit.ovirt.org/104994>: stdci_runner: Inject
        mirrors in PODs


As you can see, we have quite a big pile of reviews to do; as always, help
is very welcome...

Regards,
Barak.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2B4SGYBPK3W7UN4G4PUQJEIAUBFSFQPA/


[ovirt-devel] Re: manual system test runner broken

2019-11-19 Thread Barak Korren
On Tue, 19 Nov 2019 at 12:31, Sandro Bonazzola  wrote:

>
>
> Il giorno mar 19 nov 2019 alle ore 11:02 Barak Korren 
> ha scritto:
>
>> The fallback is defined in the reposync file of the suite - the option in
>> the job that talks about this does nothing AFAIK. If the non-default value
>> is selected, it tries to add the non-existent `experimental` repo to
>> extra_sources. We should probably just drop that option from the job GUI.
>>
>
>
> So we need a way to override the fallback in the reposync file from the
> manual runner job, or it won't really be that useful.
>

Open a ticket...
I can't think of a way to do this that would not be very fragile though;
maybe it's best if it were just passed as an env var to the suite, letting
the suite decide what to do and how to do it.


>
>
>>
>> On Tue, 19 Nov 2019 at 11:49, Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> I just executed the basic suite to test 4.3.7 RC4, and instead of
>>> falling back on the latest released packages it fell back on the latest
>>> tested ones, which are newer than 4.3.7 RC4, making the test useless.
>>> Job execution is here:
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6071/
>>> Who's maintaining the manual OST runner? I have the feeling it's not a
>>> suite bug, but a job bug.
>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> sbona...@redhat.com
>>> <https://www.redhat.com/>*Red Hat respects your work life balance.
>>> Therefore there is no need to answer this email out of your office hours.*
>>> ___
>>> Infra mailing list -- in...@ovirt.org
>>> To unsubscribe send an email to infra-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/in...@ovirt.org/message/4EBDHNB24R7GTE64MIIJOICMZ5LGQ2KQ/
>>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/R4PBN3FFNB2XBCIYWRLYSKDNYVWKAD4A/


[ovirt-devel] Re: manual system test runner broken

2019-11-19 Thread Barak Korren
The fallback is defined in the reposync file of the suite - the option in
the job that talks about this does nothing AFAIK. If the non-default value
is selected, it tries to add the non-existent `experimental` repo to
extra_sources. We should probably just drop that option from the job GUI.

On Tue, 19 Nov 2019 at 11:49, Sandro Bonazzola  wrote:

> Hi,
> I just executed the basic suite to test 4.3.7 RC4, and instead of falling
> back on the latest released packages it fell back on the latest tested
> ones, which are newer than 4.3.7 RC4, making the test useless.
> Job execution is here:
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6071/
> Who's maintaining the manual OST runner? I have the feeling it's not a
> suite bug, but a job bug.
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
> ___
> Infra mailing list -- in...@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/in...@ovirt.org/message/4EBDHNB24R7GTE64MIIJOICMZ5LGQ2KQ/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/H5NZUWPI2RHTYDB76NLIKJOUIAN3F3DA/


[ovirt-devel] Re: workaround for the lack of native el8 slaves in the CI

2019-10-23 Thread Barak Korren
On Wed, 23 Oct 2019 at 16:28, Michal Skrivanek 
wrote:

>
>
> On 22 Oct 2019, at 14:04, Nir Soffer  wrote:
>
> On Wed, Oct 16, 2019 at 11:53 AM Marcin Sobczyk 
> wrote:
>
>> Hi all,
>>
>> we're trying to move vdsm tests to el8.
>>
>> AFAIK we don't have native el8 slaves in the CI yet, so our el8 stages
>> run on el7 with el8 mocked.
>> There are some issues with scripts setting up the env for storage tests -
>> they are refusing to work on older
>> kernel versions.
>>
>
> This is true, but the issue is only xfs. We can enable the tests by
> switching to ext4 temporarily.
>
> We can revert this:
>
> commit b520d0df06129dd9d8d7f76f08425a3007962ab5
> Author: Nir Soffer 
> Date:   Tue Aug 27 18:32:19 2019 +0300
>
> tests: Revert "tests: use ext4 for userstorage"
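>
> Sketch of that revert, assuming it still applies cleanly on a vdsm
> checkout:
>
>   git revert b520d0df06129dd9d8d7f76f08425a3007962ab5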
>
> Then moving to https://github.com/nirs/userstorage will make it possible
> to have an
> optional xfs backend that will be skipped only on Jenkins, but will run
> the xfs tests on
> Travis and when running locally. This is what we do now in imageio:
> https://gerrit.ovirt.org/c/103958/
>
> We can enable xfs again when we have a way to run el8 slaves on fedora >=
> 29.
>
>
> That sounds great, can you go for it? (at least the first part, to unblock
> the Jenkins CI)
>
>
Reminder that we have a one-liner patch that would probably unblock things
as well:

- https://gerrit.ovirt.org/c/104165/


>
> I have an idea for a workaround - until now we were using 'host-distro:
>> same' in our
>> 'stdci.yaml'. I can see that this option can also take a value of
>> 'better/newer' [1], but that won't work
>> for el8. Could we possibly implement something like [2]? That would
>> allow us to e.g. define an el8
>> mocked env with 'host-distro: fc30' enforcing a native fc30 host. I assume
>> this would be a temporary
>> workaround only, until we have native el8 slaves.
>>
>> Regards, Marcin
>>
>> [1]
>> https://github.com/oVirt/jenkins/blob/a796094bfd15f1a85b9f630aaeb25ce6e0dab35d/pipelines/libs/stdci_runner.groovy#L127
>> [2] https://gerrit.ovirt.org/#/c/104091/
>>
>>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/BAA6X7ALSTMVJKXCG3PGFFS35RBV6XQF/


[ovirt-devel] Re: workaround for the lack of native el8 slaves in the CI

2019-10-22 Thread Barak Korren
On Tue, 22 Oct 2019 at 13:54, Eyal Edri  wrote:

>
>
> On Tue, Oct 22, 2019 at 1:50 PM Barak Korren  wrote:
>
>>
>>
>> On Wed, 16 Oct 2019 at 12:21, Vojtech Juranek 
>> wrote:
>>
>>> On Wednesday, October 16, 2019, 10:53:45 CEST, Marcin Sobczyk wrote:
>>> > Hi all,
>>> >
>>> > we're trying to move vdsm tests to el8.
>>> >
>>> > AFAIK we don't have native el8 slaves in the CI yet, so our el8 stages
>>> run
>>> > on el7 with el8 mocked.
>>> > There are some issues with scripts setting up the env for storage
>>> tests -
>>> > they are refusing to work on older
>>> > kernel versions.
>>> >
>>> > I have an idea for a workaround - until now we were using 'host-distro:
>>> > same' in our
>>> > 'stdci.yaml'. I can see that this option can also take a value of
>>> > 'better/newer' [1], but that won't work
>>> > for el8. Could we possibly implement something like [2] ? That would
>>> allow
>>> > us to i.e. define an el8
>>> > mocked env with 'host-distro: fc30' enforcing native fc30 host. I
>>> assume
>>> > this would be a temporary
>>> > workaround only, until we have native el8 slaves.
>>>
>>> if it works, it would be great
>>>
>>
>> Given the rapid pace in which Fedora versions go EOL, I wouldn't want to
>> support this kind of syntax. Instead, I suggest we just make sure that
>> `newer` means fc30+ for el8.
>>
>
> Barak, how soon can we apply this workaround so we can unblock CI for
> projects?
>

Here is a patch that does that:
https://gerrit.ovirt.org/c/104165/

Now it's just a matter of reviews



>
>
>>
>> We're not really planning to ever enable EL8 slaves, since there is a
>> huge number of issues we'll need to solve and test to make that happen, and
>> we're looking to deprecate the static way in which we currently manage
>> slaves. In other words, any workaround you implement now, may not be as
>> temporary as you think.
>>
>>
>>>
>>> > Regards, Marcin
>>> >
>>> > [1]
>>> >
>>> https://github.com/oVirt/jenkins/blob/a796094bfd15f1a85b9f630aaeb25ce6e0dab3
>>> > 5d/pipelines/libs/stdci_runner.groovy#L127 [2]
>>> > https://gerrit.ovirt.org/#/c/104091/
>>>
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/V765YTTZLZV4O7QOCTGUXAOJLKECDFTB/
>>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>> _______
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EDVX2VCPEHDSL2YTBCCN7MO7PXYURKZR/
>>
>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LPTOS5VI4FQ6YHKZNMT5GZKUNHVPURPM/


[ovirt-devel] Re: workaround for the lack of native el8 slaves in the CI

2019-10-22 Thread Barak Korren
On Wed, 16 Oct 2019 at 12:21, Vojtech Juranek  wrote:

> On Wednesday, October 16, 2019, 10:53:45 CEST, Marcin Sobczyk wrote:
> > Hi all,
> >
> > we're trying to move vdsm tests to el8.
> >
> > AFAIK we don't have native el8 slaves in the CI yet, so our el8 stages
> run
> > on el7 with el8 mocked.
> > There are some issues with scripts setting up the env for storage tests -
> > they are refusing to work on older
> > kernel versions.
> >
> > I have an idea for a workaround - until now we were using 'host-distro:
> > same' in our
> > 'stdci.yaml'. I can see that this option can also take a value of
> > 'better/newer' [1], but that won't work
> > for el8. Could we possibly implement something like [2]? That would
> allow
> > us to e.g. define an el8
> > mocked env with 'host-distro: fc30' enforcing a native fc30 host. I assume
> > this would be a temporary
> > workaround only, until we have native el8 slaves.
>
> if it works, it would be great
>

Given the rapid pace in which Fedora versions go EOL, I wouldn't want to
support this kind of syntax. Instead, I suggest we just make sure that
`newer` means fc30+ for el8.
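
If we do that, an el8 project should then be able to ask for a newer host
with a fragment along these lines (`runtime-requirements` and `host-distro`
are the STDCI V2 option names as I recall them; the exact nesting in a
given project's stdci.yaml may differ):

  runtime-requirements:
    host-distro: newer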

We're not really planning to ever enable EL8 slaves, since there is a huge
number of issues we'll need to solve and test to make that happen, and
we're looking to deprecate the static way in which we currently manage
slaves. In other words, any workaround you implement now, may not be as
temporary as you think.


>
> > Regards, Marcin
> >
> > [1]
> >
> https://github.com/oVirt/jenkins/blob/a796094bfd15f1a85b9f630aaeb25ce6e0dab3
> > 5d/pipelines/libs/stdci_runner.groovy#L127 [2]
> > https://gerrit.ovirt.org/#/c/104091/
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/V765YTTZLZV4O7QOCTGUXAOJLKECDFTB/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EDVX2VCPEHDSL2YTBCCN7MO7PXYURKZR/


[ovirt-devel] Re: ovirt-engine fails CQ due to broken engine-setup

2019-10-02 Thread Barak Korren
On Wed, 2 Oct 2019 at 14:22, Eyal Edri  wrote:

>
>
> On Mon, Sep 30, 2019 at 11:19 AM Martin Perina  wrote:
>
>> Hi,
>>
>> as Didi mentioned in the relevant email thread "[ovirt-devel] [ACTION
>> REQUIRED] unicode patches merged, update otopi", both otopi and ovirt-engine
>> must be updated to the latest version. So most probably both otopi and
>> ovirt-engine have to go through CQ at once.
>>
>
> Can you specify which patches for engine and otopi must run together?
> Barak, can you help Dusan with running both patches together? should both
> use 'ci re-merge' together?
>
>

Generally, if the queue is not empty and there are no regressions, doing a
re-merge for both at once would work (btw, if the builds are not too old,
rerunning a queue `add` build is enough, and it can skip building).

if the queue is empty:
1. re-add the 1st one - it will start being tested on its own (and will
fail)
2. re-add the 1st one again (it will wait for the running test to finish)
3. re-add the 2nd one (both will wait in the queue and therefore be tested
together)




>
>> M.
>>
>> On Sat, Sep 28, 2019 at 11:43 PM Dusan Fodor  wrote:
>>
>>> Hello all,
>>> initialize_engine test fails in CQ due to utf8/unicode type mismatch in
>>> the engine-setup answer file:
>>> [ ERROR ] Failed to execute stage 'Clean up': must be unicode, not str
>>>
>>> The suspected change:
>>> https://gerrit.ovirt.org/#/c/102934/
>>>
>>> Failed job example:
>>>
>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/16110
>>> <https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/16110/testReport/junit/(root)/001_upgrade_engine/running_tests___upgrade_from_release_suite_el7_x86_64___test_initialize_engine/>
>>>
>>> Can you please look into this?
>>> Thanks
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CA2OUDAWZPWLSEASQOP6AUZ3MK6PP6JV/
>>>
>>
>>
>> --
>> Martin Perina
>> Manager, Software Engineering
>> Red Hat Czech s.r.o.
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RYG7BQMTZ6CYRND3GQ45EBBAVEJMUTVE/
>>
>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/C5S6K33XBCXOD6QXXX67D7Q2M7YUP4BH/


[ovirt-devel] Re: Patch Gating summary + FAQ

2019-09-27 Thread Barak Korren
On Wed, 25 Sep 2019 at 15:30, Michal Skrivanek 
wrote:

>
>
> On 22 Sep 2019, at 09:33, Barak Korren  wrote:
>
>
>
> On Thu, 19 Sep 2019 at 17:13, Ehud Yonasi  wrote:
>
>> Hey everyone,
>> Following the presentation we did last week [1]
>> <https://bluejeans.com/s/zAjyX/>, I wanted to summarize the new patch
>> gating workflow that will be pushed to oVirt soon and will impact all the
>> developers.
>>
>> Summary:
>>
>> The purpose of the new workflow is to verify patches earlier (shift left),
>> before they are merged, and to provide much faster feedback for developers
>> if their patch fails OST.
>>
>>
>>1. Feedback from OST will now be posted directly to Gerrit instead of
>>requiring human intervention from the infra team to notify developers
>>
>>
>> We expect developers to check why their patch is not passing the gate
>> (OST), debug it, find the root cause and fix it before merging the patch.
>>
>>
>>1. Any concerns regarding the stability of OST should be communicated
>>and addressed ASAP.
>>
>> The status today is that if OST fails post-merge gating, packages are not
>> pushed to 'tested' and QE doesn't get to test them. The change with pre-merge
>> gating is that patches won't be merged if OST fails, so any fragile or
>> flaky tests should be examined by their maintainers and then fixed,
>> skipped or removed.
>>
>>
>>1. FYI, we are not removing the Merge button at this point, so
>>maintainers will still be able to merge patches that they believe 100%
>>are not breaking the build and failing OST tests.
>>
>>
>> Please note that merging patches that break OST will cause it to start
>> failing for all other patches, we urge you to avoid trying to bypass it if
>> at all possible.
>>
>> In the following section, I will explain more on Patch Gating, how to
>> onboard it, etc.
>>
>>
>> FAQ on oVirt’s Gating System and how to onboard your project on it:
>>
>> Q. What is Patch Gating?
>> A. Gating is triggered pre-merge and runs OST as the gating system's
>> tests, unlike today, where post-merge OST runs the patches after the
>> projects are merged. This means developers get early feedback on whether
>> their patches pass OST.
>>
>> Q. What causes the gating process to start?
>> A. Once a patch is verified, has passed CI and has the Code-Review +2
>> label, the gating process will be started. You will receive a message on
>> the patch.
>>
>> Q. How does it report results to my patches?
>> A. A comment will be posted on your patch with the URL of the failed job.
>>
>>
>> Q. How will my patch get merged?
>> A. If the patch has passed the gating (OST), Zuul (The new CI system for
>> patch gating) will merge the patch automatically.
>>
>>
>> Q. How do I onboard my project?
>> A.
>>
>>1. Open a JIRA ticket or mail to infra-supp...@ovirt.org
>>2. Creating a file named 'zuul.yaml' under your project root OR
>>`zuul.d/zuul.yaml` and fill with the following content:
>>
>>
>> - project:
>>     templates:
>>       - ost-gated-project
>>
>>
>> Q. My projects run on STDCI V1, is that ok?
>> A. No, the patch gating logic runs on STDCI V2 only, meaning that you
>> will have to shift your project to V2.
>> If you need help regarding the transition to V2 you can open a JIRA[2]
>> <https://ovirt-jira.atlassian.net/>ticket or mail to
>> infra-supp...@ovirt.org
>> and visit the docs [3]
>> <https://ovirt-infra-docs.readthedocs.io/en/latest/>.
>>
>> Q. What if I want to merge the patch regardless of OST results?
>> A. If you are a maintainer of the project, you can still merge it; we
>> are not removing the merge button option.
>> But merging while failing OST can break your project, so merging on
>> failure is ill-advised.
>>
>> Q. What if my patch is failing because of a dependency on a different
>> project's patch?
>> A. Patch Gating (Zuul) has a mechanism for cross-project dependency! All
>> you need to do is to add to the
>> commit message the patch URL you are dependent on:
>>
>> Depends-On: https://gerrit.ovirt.org/patch_number
>>
>> And they will be tested together.
>>
>> Note: you can have multiple dependencies.
>>
>> Q. How do I debug OST?
>> A. There are various ways of looking in the logs and output for the
>> errors:
>>
>>1.
>>
>>Blue

[ovirt-devel] Re: CI is not triggered for pushed gerrit updates

2019-09-25 Thread Barak Korren
This is a known issue - tracker ticket:

https://ovirt-jira.atlassian.net/browse/OVIRT-2802

Will close the new ticket as a duplicate.

On Wed, 25 Sep 2019 at 13:08, Nir Soffer  wrote:

> On Wed, Sep 25, 2019 at 12:42 PM Amit Bawer  wrote:
>
>> CI has stopped from being triggered for pushed gerrit updates.
>>
>
> Adding infra-suppo...@ovirt.org - this files a bug in the infra bug tracker.
>
> Example at: https://gerrit.ovirt.org/#/c/103320/
>> last PS did not trigger CI tests.
>>
>
> More examples:
> https://gerrit.ovirt.org/c/103490/2
> https://gerrit.ovirt.org/c/103455/4
>
> Last triggered build was seen here:
> https://gerrit.ovirt.org/c/103549/1
> Sep 24 18:42.
>
> "ci test" also did not trigger a build, but I tried now again using "ci
> please test"
> and it worked.
> https://gerrit.ovirt.org/c/103490/2#message-8d40f6de_bd0564ff
>
> Maybe someone changed the pattern?
>
>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/Q245YZZMBQGWHT7BQF53LN32ZJEFNC43/


[ovirt-devel] Re: Patch Gating summary + FAQ

2019-09-22 Thread Barak Korren
>the failed tests and their output and OST maintainers and your team
>leads should be able to assist.
>4.
>
>For further learning on how to debug OST please visit the OST FAQ
>
> <https://drive.google.com/open?id=1Sohq7bdgZS341gs5-lvB9lyS0GXawqaIvuUziOWwoGQ>
>.
>
>
> Q. Will the current infrastructure be able to support all the patches?
>
> A. The CI team has made tremendous work in utilizing the infrastructure.
>
> The OST gating will run inside OpenShift pods unlike before as bare metals
> and we can
>
> gain from that right now approximately 50 pods in parallel to run 
> simultaneously
> and we will review adding more if the need arises.
>
> Q. When I have multiple patches, in which order will they be tested by the
> gating system?
>
> A. The patches will be tested in the order in which they will be merged.
> The gating system knows how to simulate the patches' post-merge state.
>
> Q. What do I do if I think OST failed because of an infra issue and not my
> patch?
>
> A. You can contact the CI team by sending mail to infra-supp...@ovirt.org
> and explain your concerns + sending the patch URL.
>
> Q. Will check-merged scripts be used by the gating system?
>
> A. No, they will keep being used in the current workflow, by the OST
> post-merge gating system called the Change-Queue.
>
> Q. Can I add my own tests to the gating system?
>
> A. The gating system is running OST tests, so if it’s a test that should
> be included in the OST, then yes.
>
> Q. What will happen to the old change-queue system now that we have gating?
>
> A. At this time, the change-queue system will stay and gate post-merge
> jobs until all of oVirt projects will onboard to patch gating.
>
> We might consider using the change-queue for further coverage of tests in
> the future.
>
> Q. How can I re-trigger a failed patch to the gate again?
>
> A. There are 2 options to retrigger:
>
>-
>
>    If you need to fix your patch, just upload a new patchset and set
>    the Code-Review, Verified and CI labels again.
>-
>
>If you want to re-trigger the same patchset again just write a comment
>in Gerrit:
>
> ‘ci emulate-gate please’
>
>
The command here is "*ci gate please*"; the "emulate-gate" command is for
doing a gate dry-run when debugging the gate itself. It should not be used
by oVirt developers.



>
>
>  Q. I usually write a series of related patches that should be merged
> together - can the Gating system test all of them in a single test?
>
> A. No, they will be tested in parallel, one run per patch in the
> series. This is why we've increased our capacity to run OST for this case.
>
>
> Architectural design document [4]
> <https://drive.google.com/open?id=1qV_iNJL6jHARlti7zpnRZRfQM_Q9Z-w8ONvcKEmaAgA>
> would provide you the understanding of the patch gating process with the
> new services we will be using.
>
>
>
> [1]: https://bluejeans.com/s/zAjyX/
>
> [2]: https://ovirt-jira.atlassian.net
>
> [3]: https://ovirt-infra-docs.readthedocs.io/en/latest/
>
> [4]
> https://drive.google.com/open?id=1qV_iNJL6jHARlti7zpnRZRfQM_Q9Z-w8ONvcKEmaAgA
>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KKEHCJHRS645H4LNDPB4PKRBUBZEYLOY/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/557ULYA4WT6ZW7327W2XSPVRICGUZTU4/


[ovirt-devel] Re: Patch Gating summary + FAQ

2019-09-22 Thread Barak Korren
?id=1qV_iNJL6jHARlti7zpnRZRfQM_Q9Z-w8ONvcKEmaAgA>
> would provide you the understanding of the patch gating process with the
> new services we will be using.
>
>
>
> [1]: https://bluejeans.com/s/zAjyX/
>
> [2]: https://ovirt-jira.atlassian.net
>
> [3]: https://ovirt-infra-docs.readthedocs.io/en/latest/
>
> [4]
> https://drive.google.com/open?id=1qV_iNJL6jHARlti7zpnRZRfQM_Q9Z-w8ONvcKEmaAgA
>
>
>
> _______
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KKEHCJHRS645H4LNDPB4PKRBUBZEYLOY/
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KXDJTQ7QSFPSP7HJGEUHLB75Y2N2CZ44/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JGNV63YVKH35CFN6XNSCP6X7X5OF2VH5/


[ovirt-devel] Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-19 Thread Barak Korren
On Thu, 19 Sep 2019 at 16:21, Yedidyah Bar David  wrote:

> On Thu, Sep 19, 2019 at 3:47 PM Barak Korren  wrote:
> >
> > I haven't seen any comments on this thread, so we are going to move
> forward with the change.
>
> I started writing some reply, then realized that the only effect on
> developers is when pushing patches to OST, not to their own project.
> Right? CQ will continue as normal, nightly runs, etc.? So I didn't
> reply...
>

Yeah, this only has to do with the big suites that are listed in $subject;
none of those are used by the CQ ATM.


>
> If so, that's fine for me.
>
> Please document that somewhere. Specifically, how to do the last two
> points in [1]:
>
> >
> > On Mon, 2 Sep 2019 at 09:03, Barak Korren  wrote:
> >>
> >> Adding Evgeny and Shirly who are AFAIK the owners of the metrics suit.
> >>
> >> On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:
> >>>
> >>> If you have been using or monitoring any OST suites recently, you may
> have noticed we've been suffering from long delays in allocating CI
> hardware resources for running OST suites. I'd like to briefly discuss the
> reasons behind this, what we are planning to do to resolve this, and the
> implications of those actions for big suite owners.
> >>>
> >>> As you might know, we have moved a while ago from running OST suites
> each on its own dedicated server to running them inside containers managed
> by OpenShift. That had allowed us to run multiple OST suits on the same
> bare-metal host which in turn increased our overall capacity by 50% while
> still allowing us to free up hardware for accommodating the kubevirt
> project on our CI hardware.
> >>>
> >>> Our infrastructure is currently built in a way where we use the exact
> same POD specification (and therefore resource settings) for all suites.
> Making it more flexible at this point would require significant code
> changes we are not likely to make. What this means is that we need to make
> sure our PODs have enough resources to run the most demanding suits. It
> also means we waste some resources when running less demanding ones.
> >>>
> >>> Given the set of OST suites we have ATM, we sized our PODs to allocate
> 32 GiB of RAM. Given the servers we have, this means we can run 15 suites
> at a time in parallel. This was sufficient for a while, but given increasing
> demand, and the expectation for it to increase further once we introduce
> the patch gating features we've been working on, we must find a way to
> significantly increase our suit running capacity.
> >>>
> >>> We have measured the amount of RAM required by each suite and came to
> the conclusion that for the vast majority of suites, we could settle for
> PODs that allocate only 14 GiB of RAM. If we make that change, we would be
> able to run a total of 40 suites at a time, almost tripling our current
> capacity.
> >>>
> >>> The downside of making this change is that our STDCI V2 infrastructure
> will no longer be able to run suites that require more than 14 GiB of RAM.
> This effectively means it would no longer be possible to run these suites
> from OST's check-patch job or from the OST manual job.
> >>>
> >>> The list of relevant suites that would be affected follows; the suite
> owners, as documented in the CI configuration, have been added as "to"
> recipients of the message:
> >>>
> >>> hc-basic-suite-4.3
> >>> hc-basic-suite-master
> >>> metrics-suite-4.3
> >>>
> >>> Since we're aware people would still like to be able to work with the
> bigger suites, we will leverage the nightly suite invocation jobs to enable
> them to be run in the CI infra. We will support the following use cases:
> >>>
> >>> Periodically running the suite on the latest oVirt packages - this will
> be done by the nightly job like it is done today
> >>> Running the suite to test changes to the suite's code - while currently
> this is done automatically by check-patch, this would have to be done
> manually in the future by triggering the nightly job and setting
> the REFSPEC parameter to point to the examined patch
> >>> Triggering the suite manually - This would be done by triggering the
> suite-specific nightly job (as opposed to the general OST manual job)
>
> [1] ^^
>
> >>>
> >>>  The patches listed below implement the changes outlined above:
> >>>
> >>> 102757 nightly-system-tests: big suits -> big containers
> >>> 102771: stdci: Drop `big` suits from check-patch
> >>>

[ovirt-devel] Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-19 Thread Barak Korren
I haven't seen any comments on this thread, so we are going to move forward
with the change.

On Mon, 2 Sep 2019 at 09:03, Barak Korren  wrote:

> Adding Evgeny and Shirly who are AFAIK the owners of the metrics suit.
>
> On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:
>
>> If you have been using or monitoring any OST suites recently, you may have
>> noticed we've been suffering from long delays in allocating CI hardware
>> resources for running OST suites. I'd like to briefly discuss the reasons
>> behind this, what we are planning to do to resolve this, and the
>> implications of those actions for big suite owners.
>>
>> As you might know, we have moved a while ago from running OST suites each
>> on its own dedicated server to running them inside containers managed by
>> OpenShift. That had allowed us to run multiple OST suits on the same
>> bare-metal host which in turn increased our overall capacity by 50% while
>> still allowing us to free up hardware for accommodating the kubevirt
>> project on our CI hardware.
>>
>> Our infrastructure is currently built in a way where we use the exact
>> same POD specification (and therefore resource settings) for all suites.
>> Making it more flexible at this point would require significant code
>> changes we are not likely to make. What this means is that we need to make
>> sure our PODs have enough resources to run the most demanding suits. It
>> also means we waste some resources when running less demanding ones.
>>
>> Given the set of OST suites we have ATM, we sized our PODs to allocate
>> 32 GiB of RAM. Given the servers we have, this means we can run 15 suites
>> at a time in parallel. This was sufficient for a while, but given increasing
>> demand, and the expectation for it to increase further once we introduce
>> the patch gating features we've been working on, we must find a way to
>> significantly increase our suit running capacity.
>>
>> We have measured the amount of RAM required by each suite and came to the
>> conclusion that for the vast majority of suites, we could settle for PODs
>> that allocate only 14 GiB of RAM. If we make that change, we would be able
>> to run a total of 40 suites at a time, almost tripling our current capacity.
>>
>> The downside of making this change is that our STDCI V2 infrastructure
>> will no longer be able to run suites that require more than 14 GiB of RAM.
>> This effectively means it would no longer be possible to run these suites
>> from OST's check-patch job or from the OST manual job.
>>
>> The list of relevant suites that would be affected follows; the suite
>> owners, as documented in the CI configuration, have been added as "to"
>> recipients of the message:
>>
>>- hc-basic-suite-4.3
>>- hc-basic-suite-master
>>- metrics-suite-4.3
>>
>> Since we're aware people would still like to be able to work with the
>> bigger suites, we will leverage the nightly suite invocation jobs to enable
>> them to be run in the CI infra. We will support the following use cases:
>>
>>- *Periodically running the suite on the latest oVirt packages* - this
>>will be done by the nightly job like it is done today
>>- *Running the suite to test changes to the suite's code* - while
>>currently this is done automatically by check-patch, this would have to be
>>done manually in the future by triggering the nightly job and
>>setting the REFSPEC parameter to point to the examined patch
>>- *Triggering the suite manually* - This would be done by triggering
>>the suite-specific nightly job (as opposed to the general OST manual job)
>>
>>  The patches listed below implement the changes outlined above:
>>
>>- 102757 <https://gerrit.ovirt.org/102757> nightly-system-tests: big
>>suits -> big containers
>>- 102771 <https://gerrit.ovirt.org/102771>: stdci: Drop `big` suits
>>from check-patch
>>
>> We know that making the changes we presented will make things a little
>> less convenient for users and maintainers of the big suites, but we believe
>> the benefits of having vastly increased execution capacity for all other
>> suites outweigh those shortcomings.
>>
>> We would like to hear all relevant comments and questions from the suite
>> owners and other interested parties, especially if you think we should not
>> carry out the changes we propose.
>> Please take the time to respond on this thread, or on the linked patches.
>>
>> Thanks,
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>>

[ovirt-devel] Removing big (hc-* and metrics) suites from OST's check-patch (How to make OST faster in CI)

2019-09-01 Thread Barak Korren
If you have been using or monitoring any OST suites recently, you may have
noticed we've been suffering from long delays in allocating CI hardware
resources for running OST suites. I'd like to briefly discuss the reasons
behind this, what we are planning to do to resolve this, and the implications
of those actions for big suite owners.

As you might know, a while ago we moved from running each OST suite on
its own dedicated server to running them inside containers managed by
OpenShift. That allowed us to run multiple OST suites on the same
bare-metal host, which in turn increased our overall capacity by 50% while
still allowing us to free up hardware for accommodating the kubevirt
project on our CI hardware.

Our infrastructure is currently built in a way where we use the exact same
POD specification (and therefore resource settings) for all suites. Making
it more flexible at this point would require significant code changes we
are not likely to make. What this means is that we need to make sure our
PODs have enough resources to run the most demanding suites. It also means
we waste some resources when running less demanding ones.

Given the set of OST suites we have ATM, we sized our PODs to allocate
32 GiB of RAM. Given the servers we have, this means we can run 15 suites at
a time in parallel. This was sufficient for a while, but given increasing
demand, and the expectation for it to increase further once we introduce
the patch gating features we've been working on, we must find a way to
significantly increase our suite-running capacity.

We have measured the amount of RAM required by each suite and came to the
conclusion that for the vast majority of suites, we could settle for PODs
that allocate only 14 GiB of RAM. If we make that change, we would be able
to run a total of 40 suites at a time, almost tripling our current capacity.

The downside of making this change is that our STDCI V2 infrastructure will
no longer be able to run suites that require more than 14 GiB of RAM. This
effectively means it would no longer be possible to run these suites from
OST's check-patch job or from the OST manual job.

The list of affected suites follows; the suite owners, as documented in the
CI configuration, have been added as "to" recipients of this message:

   - hc-basic-suite-4.3
   - hc-basic-suite-master
   - metrics-suite-4.3

Since we're aware people would still like to be able to work with the
bigger suites, we will leverage the nightly suite invocation jobs to enable
them to be run in the CI infra. We will support the following use cases:

   - *Periodically running the suite on the latest oVirt packages* - this
   will be done by the nightly job, like it is done today
   - *Running the suite to test changes to the suite's code* - while
   currently this is done automatically by check-patch, in the future this
   would have to be done by manually triggering the nightly job and
   setting the REFSPEC parameter to point to the examined patch (see the
   sketch after this list)
   - *Triggering the suite manually* - This would be done by triggering the
   suite-specific nightly job (as opposed to the general OST manual job)
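
For illustration, remotely triggering such a parameterized job boils down to
a POST to its buildWithParameters endpoint. In the sketch below the job name
and credentials are placeholders; only the REFSPEC parameter comes from the
text above:

    import requests

    JENKINS = 'https://jenkins.ovirt.org'
    JOB = 'hc-basic-suite-master_nightly'  # placeholder job name

    requests.post(
        '%s/job/%s/buildWithParameters' % (JENKINS, JOB),
        auth=('username', 'api-token'),                # placeholder credentials
        data={'REFSPEC': 'refs/changes/57/102757/1'},  # the patch to examine
    )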

 The patches listed below implement the changes outlined above:

   - 102757 <https://gerrit.ovirt.org/102757> nightly-system-tests: big
   suits -> big containers
   - 102771 <https://gerrit.ovirt.org/102771>: stdci: Drop `big` suits from
   check-patch

We know that making the changes we presented will make things a little less
convenient for users and maintainers of the big suites, but we believe the
benefits of having vastly increased execution capacity for all other suites
outweigh those shortcomings.

We would like to hear all relevant comments and questions from the suite
owners and other interested parties, especially if you think we should not
carry out the changes we propose.
Please take the time to respond on this thread, or on the linked patches.

Thanks,

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2O3MV7X5VB32DG2KJDMJKDYWWSHNBZ3R/


[ovirt-devel] Re: OST's basic suite UI sanity tests optimization

2019-08-07 Thread Barak Korren
On Wed, 7 Aug 2019 at 14:54, Eyal Edri  wrote:

>
>
> On Wed, Aug 7, 2019 at 1:24 PM Doron Fediuck  wrote:
>
>>
>>
>> On Tue, 12 Mar 2019 at 10:20, Martin Perina  wrote:
>>
>>>
>>>
>>> On Fri, Mar 8, 2019 at 12:36 PM Marcin Sobczyk 
>>> wrote:
>>>
>>>> Greg,
>>>>
>>>> that's a great finding and a very good starting point.
>>>>
>>>> If we want to stick with docker images and Firefox/Chrome testing, I
>>>> still have some ideas that would shorten the running time even more:
>>>>
>>>>- we do something like this:
>>>>
>>>> log("waiting %s sec for grid to initialize..." % GRID_STARTUP_DELAY)
>>>> time.sleep(GRID_STARTUP_DELAY)
>>>>
>>>>   this is very inefficient. We can change that to something like I
>>>> wrote here (_wait_for_selenium_hub):
>>>>
>>>>
>>>> https://gerrit.ovirt.org/#/c/98135/2/common/test-scenarios-files/selenium/navigation/selenium_on_engine.py
>>>>
>>>>   This function probably needs some improvement (e.g., urllib3 spits
>>>> out warnings on an unsuccessful connection attempt, so they would need to
>>>> be silenced), but that's a far better approach than a simple sleep (see
>>>> the sketch after this list).
>>>>
>>>>- parallelize running Firefox and Chrome tests - there's no reason
>>>>not to run them both at the same time. There's something called
>>>>VectorThread in lago.utils. A simple example of usage can be found
>>>>in '004_basic_sanity.py:955' (disk_operations function). This would
>>>>have a nice side effect of getting rid of the ugly global
>>>>ovirt_driver - each thread would have its own.
>>>>
>>>>
>>>>- maybe not a running-time improvement, but I think
>>>>https://gerrit.ovirt.org/#/c/98127/ is still relevant - the way we
>>>>call save_screenshot is ugly and much too verbose
>>>>
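
For illustration, a minimal sketch of the hub-polling idea from the first
item (the names and timeouts here are made up; the real helper is the
_wait_for_selenium_hub function in the patch linked above):

    import time
    import urllib3

    def wait_for_selenium_hub(hub_url, timeout=60, interval=1):
        # Poll the grid's status endpoint instead of sleeping a fixed delay
        http = urllib3.PoolManager()
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                response = http.request('GET', hub_url + '/wd/hub/status')
                if response.status == 200:
                    return
            except urllib3.exceptions.HTTPError:
                pass  # hub not accepting connections yet - retry
            time.sleep(interval)
        raise RuntimeError('Selenium hub not ready after %s sec' % timeout)

And a rough sketch of the parallelization idea from the second item, assuming
the VectorThread start_all()/join_all() usage seen in disk_operations
(run_browser_suite is a hypothetical wrapper around one browser's tests):

    import functools
    from lago import utils

    threads = utils.VectorThread([
        functools.partial(run_browser_suite, 'firefox'),
        functools.partial(run_browser_suite, 'chrome'),
    ])
    threads.start_all()
    threads.join_all()  # both browsers' test runs proceed concurrently
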
>>>> Right now, I have to switch my focus to some important stuff in VDSM -
>>>> the OST patches were a continuation of a hackathon effort and something
>>>> like a "side-project" ;) Still, I don't want the thread to die. I think
>>>> there's a lot of room for improvements. I can rebase/improve some of my
>>>> patches if you find them useful. Please keep me posted with your efforts!
>>>>
>>>> Regards, Marcin
>>>> On 3/7/19 11:10 PM, Greg Sheremeta wrote:
>>>>
>>>> Marcin,
>>>>
>>>> It just dawned on me that the main reason 008's start_grid takes so
>>>> long is that the docker images are freshly pulled every time. Several hundred
>>>> MB, every time (ugh, sorry). We can and should cache them. What do you
>>>> think about trying this before doing anything else? [it would also be a
>>>> good time to update from actinium to the latest, iron.]
>>>>
>>>> @Barak Korren  you once mentioned to me we should
>>>> cache these if they are ok to cache (they are). How do we do that?
>>>>
>>>>
>>> Gal/Gallit/Barak, so is there any way to store those docker
>>> containers within the image of the lago VM which runs OST tests?
>>>
>>>>
>> Reviving this now since we have some containerization support for CI.
>> Can we push this forward?
>>
>
> Adding Barak, Gal and Daniel, IIRC we do have cache for container images
> in STDCI, not sure if/how it works in OST.
>

I think this was already discussed somewhere else a long time ago - the
Selenium images should already be cached on our system at this point.



>
>
>>>> docker.io/selenium/node-chrome-debug    3.9.1-actinium   327adc897d23   13 months ago   *904 MB*
>>>> docker.io/selenium/node-firefox-debug   3.9.1-actinium   88649b420bd5   13 months ago   *814 MB*
>>>>
>>>> Greg
>>>>
>>>>
>>>> On Tue, Mar 5, 2019 at 6:15 AM Greg Sheremeta 
>>>> wrote:
>>>>
>>>>>
>>>>> On Tue, Mar 5, 2019 at 4:55 AM Marcin Sobczyk 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>> On 3/4/19 7:07 PM, Greg Sheremeta wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Thanks for trying to improve the tests!
>>>>>>
>>>>>> I'm reluctant to give u

[ovirt-devel] Re: Weakness of repos in OST

2019-06-25 Thread Barak Korren
On Tue, 25 Jun 2019 at 12:08, Dominik Holler  wrote:

> Hi,
> from my point of view, we are not testing the repos in OST,
> because we manage the packages manually.
> The clean way would be to install something like
> https://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
> But this would download each package multiple times each run.
>

Those repos get created too late for most OST runs that need to test
pre-released code.


> Maybe a way to test the repos would be an OST which bypasses
> lago's repo management.
> What is your view on this?
> Dominik
>

Bypassing Lago's repo management means that you would get unreliable tests
because you allow outside access during the test run.

We had a series of issues in DS CI last week to remind us why this is not a
good idea in the long run...

The right way to do this IMO is to have code in OST that extracts the
repo configuration files from the *-release*.rpm and plugs them into the
existing repo handling code.
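
For illustration, a minimal sketch of that extraction step (a hypothetical
helper, not OST's actual code):

    import subprocess

    def extract_repo_files(release_rpm, dest_dir):
        # Unpack the /etc/yum.repos.d/*.repo files shipped in a *-release
        # RPM into dest_dir, without installing the package
        cpio_archive = subprocess.check_output(['rpm2cpio', release_rpm])
        extractor = subprocess.Popen(
            ['cpio', '-idm', './etc/yum.repos.d/*.repo'],
            stdin=subprocess.PIPE, cwd=dest_dir)
        extractor.communicate(cpio_archive)
        if extractor.returncode != 0:
            raise RuntimeError('cpio failed for %s' % release_rpm)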


> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7STOCII42CP5MEZW3AC3VPTE4XZRGLIM/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EA372W4E72X2PQEATIIUHHNEFQVBTXTQ/


[ovirt-devel] FC28 Blacklisted in oVirt STDCI V2

2019-06-10 Thread Barak Korren
Hi all,

By Sandro's request, as of a few minutes ago FC28 has been blacklisted in
oVirt's STDCI V2 system.

What this means is that if any project has FC28 threads configured in its
STDCI V2 YAML configuration file, those threads will be ignored by the
STDCI system and not invoked.

Threads for building and testing on other distributions will keep working
as before.

It is still recommended that projects with FC28 configuration patch their
configuration files to remove it, to make it easier to understand
what the CI does and not rely on an implicit blacklist.

Please note that this only concerns projects that have made the switch to
STDCI V2. For projects that still use V1 and have FC28 jobs defined, those
jobs will keep working as before. The following patch by Sandro, however,
includes code to remove all those jobs:

https://gerrit.ovirt.org/c/100556/

Thanks,
Barak.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SULMT7OYNNIZO2ONZZYUZZPBHOOQH72F/


[ovirt-devel] Re: Testing sanlock development build with OST

2019-05-22 Thread Barak Korren
On Wed, 22 May 2019 at 21:07, Nir Soffer  wrote:

> On Sun, May 19, 2019 at 8:43 AM Barak Korren  wrote:
>
>>
>>
>> On Sun, 19 May 2019 at 00:01, Nir Soffer  wrote:
>>
>>> Looking in https://jenkins.ovirt.org/job/ovirt-system-tests_manual/build
>>> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/build?delay=0sec>
>>>
>>> CUSTOM_REPOS:
>>>
>>> You can add multiple Jenkins build urls/Yum repos, one per line.
>>> Supported formats are:
>>> * Jenkins Build url:
>>> e.g.,
>>> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-on-demand-el7-x86_64/lastSuccessfulBuild/
>>> * Yum repo: "rec:yum_repo_url"
>>> e.g., rec:
>>> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-on-demand-el7-x86_64/lastSuccessfulBuild/artifact/
>>>
>>>
>> It doesn't actually have to be a yum repo, 'rec:' simply does a recursive
>> HTTP crawl.
>> `repoman` and therefore OST does not actually support reading YUM
>> metadata ATM.
>>
>
> Thanks, testing with latest build now:
> rec:https://cbs.centos.org/kojifiles/packages/sanlock/3.7.3/1.el7/x86_64/
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4764/
>
>
>> It seems that this should work:
>>>
>>> 1. build sanlock rpms from my sanlock tree
>>> 2. copy to some public web server
>>> 3. create yum repo
>>>
>>
>> No need for that, but the web server needs to be browsable.
>>
>>
>>> 4. add rec:http://my.server/sanlock-repo/
>>>
>>> OST will pull sanlock from this repo, right?
>>>
>>> The biggest issue seems to be a public web server; I don't have one. Do
>>> we have something that I can use
>>> in jenkins.ovirt.org or another domain we control?
>>>
>>
>> To be in jenkins it needs to be built by jenkins...
>>
>>
>>> I want to run these tests regularly, to make sure that sanlock always
>>> works with vdsm, without manual
>>> testing.
>>>
>>
>> I think there are a couple of solutions here you could consider:
>>
>>1. Setup a build repo containing automation files for oVirt and have
>>oVirt's CI system run the builds. This will enable full automation for the
>>whole test process
>>
>> Do you mean a project without any source, only stdci.yaml and a build script
> pulling sanlock source from master, and creating rpms?
>

Yes, only you don't actually have to write the logic to pull the other
source yourself; we already have tooling for that which will keep things
reproducible:
https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards/#defining-extra-source-code-dependencies-aka-upstream-sources



> Can we use my sanlock fork on github for this?
> https://github.com/nirs/sanlock
>

Doesn't have to be a fork of the existing sanlock repo, but we can use
GitHub for that, yeah. I'd rather we use something in oVirt's org though, to
save some time setting up credentials and permissions.


>
>>2. Build via copr - in which case copr will provide HTTP hosting for
>>the resulting RPMs.
>>
>> Interesting, but I think I need cbs instead, since we don't have Fedora
> OST yet.
>

Copr can build for CentOS too (it's called EPEL there, but it's essentially
using the same mock we use in our CI for emulating CentOS).


> I think the simplest way would be something like a fedpkg scratch build and
> using the build URL.
>

Putting Koji into the mix never makes things simpler.


> Sandro, what do I need to be able to do scratch builds in cbs?
> https://cbs.centos.org/koji/index
>

I think that would actually be overkill...

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/467Y4OIZKMVX4AAEN4FYOYH6SF36LD26/


[ovirt-devel] Re: Testing sanlock development build with OST

2019-05-18 Thread Barak Korren
On Sun, 19 May 2019 at 00:01, Nir Soffer  wrote:

> Looking in https://jenkins.ovirt.org/job/ovirt-system-tests_manual/build
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/build?delay=0sec>
>
> CUSTOM_REPOS:
>
> You can add multiple Jenkins build urls/Yum repos, one per line.
> Supported formats are:
> * Jenkins Build url:
> e.g.,
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-on-demand-el7-x86_64/lastSuccessfulBuild/
> * Yum repo: "rec:yum_repo_url"
> e.g., rec:
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-on-demand-el7-x86_64/lastSuccessfulBuild/artifact/
>
>
It doesn't actually have to be a yum repo, 'rec:' simply does a recursive
HTTP crawl.
`repoman` and therefore OST does not actually support reading YUM metadata
ATM.
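
For illustration only, a rough sketch of what such a recursive crawl amounts
to (this is not repoman's actual code):

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class _LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hrefs = []

        def handle_starttag(self, tag, attrs):
            if tag == 'a':
                self.hrefs.extend(v for k, v in attrs if k == 'href' and v)

    def crawl_rpms(url, found=None):
        # Recursively list *.rpm links under a browsable HTTP directory
        found = [] if found is None else found
        collector = _LinkCollector()
        collector.feed(urlopen(url).read().decode('utf-8', 'replace'))
        for href in collector.hrefs:
            target = urljoin(url, href)
            if not target.startswith(url):
                continue  # skip parent-directory and external links
            if target.endswith('.rpm'):
                found.append(target)
            elif target.endswith('/'):
                crawl_rpms(target, found)
        return found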


> It seems that this should work:
>
> 1. build sanlock rpms from my sanlock tree
> 2. copy to some public web server
> 3. create yum repo
>

No need for that, but the web server needs to be browsable.


> 4. add rec:http://my.server/sanlock-repo/
>
> OST will pull sanlock from this repo, right?
>
> The biggest issue seems to be a public web server; I don't have one. Do we
> have something that I can use
> in jenkins.ovirt.org or another domain we control?
>

To be in jenkins it needs to be built by jenkins...


> I want to run these tests regularly, to make sure that sanlock always
> works with vdsm, without manual
> testing.
>

I think there are a couple of solutions here you could consider:

   1. Setup a build repo containing automation files for oVirt and have
   oVirt's CI system run the builds. This will enable full automation for the
   whole test process
   2. Build via copr - in which case copr will provide HTTP hosting for the
   resulting RPMs.



>
> Nir
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FOPSIZ7EBMKYEGX7UBPJPDZUXKKN3YKG/


[ovirt-devel] Re: ovirt-engine-sdk-go: Integration with TravisCI to support pushing auto-generated code

2019-02-21 Thread Barak Korren
On Fri, Feb 22, 2019, 6:13, Joey Ma wrote:

>
>
> On Wed, Feb 20, 2019 at 12:41 AM Joey Ma  wrote:
>
>>
>>
>> On Tue, Feb 19, 2019 at 7:23 PM Barak Korren  wrote:
>>
>>>
>>>
>>> On Tue, 19 Feb 2019 at 09:14, Joey Ma  wrote:
>>>
>>>> Hi all,
>>>>
>>>
>>> Hi Joey,
>>>
>>
>>>
>>>>
>>>> With the generous help of several nice guys, currently the Go SDK
>>>> related projects, oVirt/ovirt-engine-sdk-go and oVirt/go-ovirt, are already
>>>> available under oVirt org, and the integration of oVirt/ovirt-engine-sdk-go
>>>> with oVirt STD-CI is also completed [1]. Sincerely thank you to everyone.
>>>>
>>>> There is still an issue left: we need a proper solution to
>>>> integrate oVirt/ovirt-engine-sdk-go with TravisCI which could push the
>>>> auto-generated code into oVirt/go-ovirt. Previously I adopted my
>>>> personal github access token, which is stored encrypted [2], to work it out.
>>>>
>>>> But as it's now under the oVirt community, we need a more regular way to
>>>> do this. As @Evgheni  suggested, maybe a new
>>>> access token from a dedicated github account or via the Jenkins job will
>>>> work?
>>>>
>>>> Could anyone help? Any insights into this would be appreciated, and
>>>> thanks in advance.
>>>>
>>> If you do it from the STDCI script (the Jenkins job), we could make the
>>> credentials for the STDCI GitHub bot available to the script when it runs
>>> (see the section about secrets in the standard CI docs
>>> <https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards/index.html>
>>> for a full explanation of how this works).
>>>
>>
>>>
>> I'd want to understand how the flow works and what its purpose is; it
>>> sounds to me a bit strange that we need to push from one repo to the other.
>>>
>>
>> Hi Barak,
>>
>> Basically the root cause is that the Go SDK code is automatically
>> generated by oVirt/ovirt-engine-sdk-go, but resides in oVirt/go-ovirt.
>>
>> Current workflow is:
>> 1. Once a pull request gets merged in ovirt-engine-sdk-go, TravisCI will
>> run the build phase, which mainly generates the Go SDK code;
>> 2. If the build phase has passed, TravisCI will then trigger the
>> script deploy-codes.sh [1];
>> 3. The deploy-codes.sh [1] pushes the auto-generated Go SDK code to
>> go-ovirt via the preconfigured credentials of a github account (see the
>> sketch after this list);
>> 4. Eventually users are able to utilize the latest SDK by using "
>> github.com/ovirt/go-ovirt" as the import path;
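
For illustration, a minimal sketch of what a deploy step like deploy-codes.sh
amounts to (the GITHUB_TOKEN secret name and the commit message below are
placeholders; the real logic is in the script referenced as [1]):

    import os
    import subprocess

    def push_generated_code(generated_dir):
        # Clone go-ovirt, copy the freshly generated sources in, then
        # commit and push using a bot account's token
        token = os.environ['GITHUB_TOKEN']  # placeholder secret name
        url = 'https://%s@github.com/oVirt/go-ovirt.git' % token
        subprocess.check_call(['git', 'clone', url, 'go-ovirt'])
        subprocess.check_call('cp -r %s/. go-ovirt/' % generated_dir,
                              shell=True)
        subprocess.check_call(['git', 'add', '-A'], cwd='go-ovirt')
        # Note: 'git commit' fails when nothing changed; a real script
        # would check for that first
        subprocess.check_call(
            ['git', 'commit', '-m', 'Update generated code'], cwd='go-ovirt')
        subprocess.check_call(['git', 'push', 'origin', 'master'],
                              cwd='go-ovirt')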
>>
>> The reasons for this are:
>> 1. The auto-generated code is better placed in a dedicated repo,
>> not in its generator repo;
>> 2. The dedicated go-ovirt repo provides a convenient way for users to
>> utilize the SDK package, which is also the common way to import Go
>> packages;
>>
>> From my perspective, there are two projects with a similar purpose:
>> * oVirt/ovirt-engine-sdk: TravisCI will trigger a script [2] to push
>> newly generated docs into its gh-pages branch once new commits are merged.
>> > *This feature will also get implemented in oVirt/ovirt-engine-sdk-go
>> ASAP.*
>> * kubevirt/kubevirt: After new commits are merged, the kubevirt repo would
>> trigger the script [3] to deploy new auto-generated Python client code
>> into kubevirt/client-python. This enables users to easily install the
>> Python client by running `pip install git+
>> https://github.com/kubevirt/client-python.git`
>> <https://github.com/kubevirt/client-python.git>.
>>
>
> Hi Barak,
>
> I was wondering if I made this clear.
>
>
>> Regarding the credentials, the environment variables binding secrets
>> mentioned in the STD-CI doc are effective solutions, and I would definitely
>> prefer the common rules used in the community.
>> *> Also, I have a question that has confused me for a long time: are the
>> two environment variables `encrypted_1fc90f464345_key` and
>> `encrypted_1fc90f464345_iv` used in [2] defined in a STDCI secrets file [4]
>> for oVirt/ovirt-engine-sdk? I could not find where they are defined.*
>>
>
> This definitely was a stupid question. I apologize for not carefully
> reading [1], which contains the answers.
>
> [1] and [2] probably provide us another way to work it out via a deploy key.
> Please let me introduce the working 

[ovirt-devel] Re: ovirt-engine-sdk-go: Integration with TravisCI to support pushing auto-generated code

2019-02-19 Thread Barak Korren
On Tue, 19 Feb 2019 at 09:14, Joey Ma  wrote:

> Hi all,
>

Hi Joey,


>
> With the generous help of several nice guys, currently the Go SDK related
> projects, oVirt/ovirt-engine-sdk-go and oVirt/go-ovirt, are already
> available under oVirt org, and the integration of oVirt/ovirt-engine-sdk-go
> with oVirt STD-CI is also completed [1]. Sincerely thank you to everyone.
>
> There is still an issue left: we need a proper solution to
> integrate oVirt/ovirt-engine-sdk-go with TravisCI which could push the
> auto-generated code into oVirt/go-ovirt. Previously I adopted my
> personal github access token, which is stored encrypted [2], to work it out.
>
> But as it's now under the oVirt community, we need a more regular way to do
> this. As @Evgheni  suggested, maybe a new access
> token from a dedicated github account or via the Jenkins job will work?
>
> Could anyone help? Any insights into this would be appreciated, and thanks
> in advance.
>

If you do it from the STDCI script (the Jenkins job), we could make the
credentials for the STDCI GitHub bot available to the script when it runs
(see the section about secrets in the standard CI docs
<https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards/index.html>
for a full explanation of how this works).

I'd want to understand how the flow works and what its purpose is; it
sounds to me a bit strange that we need to push from one repo to the other.

Regards,
Barak.


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WCOA4PBY5DNXK2NQF2WECETWJQZRRXOV/


[ovirt-devel] Re: vdsm has been tagged (v4.30.9)

2019-02-18 Thread Barak Korren
/vdsm_standard-on-merge/341


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/BKEB5MNBIP2KDGJB223SJ2EW4JDTAGL6/


[ovirt-devel] Re: package versioning problem causing failure on CQ ovirt-master on all projects

2019-02-06 Thread Barak Korren
On Wed, Feb 6, 2019, 12:25, Simone Tiraboschi <stira...@redhat.com> wrote:

>
>
> On Wed, Feb 6, 2019 at 11:17 AM Barak Korren  wrote:
>
>>
>>
>> On Wed, 6 Feb 2019 at 11:57, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Feb 6, 2019 at 10:44 AM Barak Korren  wrote:
>>>
>>>>
>>>>
>>>> On Wed, 6 Feb 2019 at 11:34, Simone Tiraboschi 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Feb 6, 2019 at 10:23 AM Barak Korren 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, 6 Feb 2019 at 11:15, Simone Tiraboschi 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Feb 6, 2019 at 10:00 AM Dan Kenigsberg 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Wed, Feb 6, 2019 at 10:54 AM Simone Tiraboschi <
>>>>>>>> stira...@redhat.com> wrote:
>>>>>>>> >
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > On Wed, Feb 6, 2019 at 9:45 AM Dan Kenigsberg 
>>>>>>>> wrote:
>>>>>>>> >>
>>>>>>>> >> On Wed, Feb 6, 2019 at 10:16 AM Simone Tiraboschi <
>>>>>>>> stira...@redhat.com> wrote:
>>>>>>>> >> >
>>>>>>>> >> >
>>>>>>>> >> >
>>>>>>>> >> > On Tue, Feb 5, 2019 at 7:07 PM Dafna Ron 
>>>>>>>> wrote:
>>>>>>>> >> >>
>>>>>>>> >> >> Hi,
>>>>>>>> >> >>
>>>>>>>> >> >> Please note that ovirt-ansible-hosted-engine-setup has a
>>>>>>>> versioning problem with the package and is causing bootstrap to fail 
>>>>>>>> for
>>>>>>>> upgrade suite [1]
>>>>>>>> >> >>
>>>>>>>> >> >> This is affecting all projects; it's been reported to the
>>>>>>>> developers and should be fixed as soon as possible.
>>>>>>>> >> >>
>>>>>>>> >> >> you can view CQ status here:
>>>>>>>> >> >>
>>>>>>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/
>>>>>>>> >> >>
>>>>>>>> >> >> [1] http://pastebin.test.redhat.com/708086
>>>>>>>> >>
>>>>>>>> >> It is unfair to refer to an internal pastebin here. It is also
>>>>>>>> not
>>>>>>>> >> very sensible, as it is quite short.
>>>>>>>> >>
>>>>>>>> >> 2019-02-05 11:23:51,390-0500 ERROR
>>>>>>>> >> otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85
>>>>>>>> Yum
>>>>>>>> >>
>>>>>>>> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
>>>>>>>> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
>>>>>>>> >> 2019-02-05 11:23:51,390-0500 DEBUG otopi.context
>>>>>>>> >> context._executeMethod:142 method exception
>>>>>>>> >> Traceback (most recent call last):
>>>>>>>> >>   File "/tmp/ovirt-6fV8LBWX5i/pythonlib/otopi/context.py", line
>>>>>>>> 132,
>>>>>>>> >> in _executeMethod
>>>>>>>> >> method['method']()
>>>>>>>> >>   File
>>>>>>>> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
>>>>>>>> >> line 248, in _packages
>>>>>>>> >> self.processTransaction()
>>>>>>>> >>   File
>>>>>>>> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
>>>>>>>> >> line 262, in processTransaction
>>>>>>>> >> if self._miniyum.bu

[ovirt-devel] Re: package versioning problem causing failure on CQ ovirt-master on all projects

2019-02-06 Thread Barak Korren
On Wed, 6 Feb 2019 at 11:57, Simone Tiraboschi  wrote:

>
>
> On Wed, Feb 6, 2019 at 10:44 AM Barak Korren  wrote:
>
>>
>>
>> On Wed, 6 Feb 2019 at 11:34, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Feb 6, 2019 at 10:23 AM Barak Korren  wrote:
>>>
>>>>
>>>>
>>>> On Wed, 6 Feb 2019 at 11:15, Simone Tiraboschi 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Feb 6, 2019 at 10:00 AM Dan Kenigsberg 
>>>>> wrote:
>>>>>
>>>>>> On Wed, Feb 6, 2019 at 10:54 AM Simone Tiraboschi <
>>>>>> stira...@redhat.com> wrote:
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > On Wed, Feb 6, 2019 at 9:45 AM Dan Kenigsberg 
>>>>>> wrote:
>>>>>> >>
>>>>>> >> On Wed, Feb 6, 2019 at 10:16 AM Simone Tiraboschi <
>>>>>> stira...@redhat.com> wrote:
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > On Tue, Feb 5, 2019 at 7:07 PM Dafna Ron 
>>>>>> wrote:
>>>>>> >> >>
>>>>>> >> >> Hi,
>>>>>> >> >>
>>>>>> >> >> Please note that ovirt-ansible-hosted-engine-setup has a
>>>>>> versioning problem with the package and is causing bootstrap to fail for
>>>>>> upgrade suite [1]
>>>>>> >> >>
>>>>>> >> >> This is affecting all projects; it's been reported to the
>>>>>> developers and should be fixed as soon as possible.
>>>>>> >> >>
>>>>>> >> >> you can view CQ status here:
>>>>>> >> >>
>>>>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/
>>>>>> >> >>
>>>>>> >> >> [1] http://pastebin.test.redhat.com/708086
>>>>>> >>
>>>>>> >> It is unfair to refer to an internal pastebin here. It is also not
>>>>>> >> very sensible, as it is quite short.
>>>>>> >>
>>>>>> >> 2019-02-05 11:23:51,390-0500 ERROR
>>>>>> >> otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum
>>>>>> >>
>>>>>> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
>>>>>> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
>>>>>> >> 2019-02-05 11:23:51,390-0500 DEBUG otopi.context
>>>>>> >> context._executeMethod:142 method exception
>>>>>> >> Traceback (most recent call last):
>>>>>> >>   File "/tmp/ovirt-6fV8LBWX5i/pythonlib/otopi/context.py", line
>>>>>> 132,
>>>>>> >> in _executeMethod
>>>>>> >> method['method']()
>>>>>> >>   File
>>>>>> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
>>>>>> >> line 248, in _packages
>>>>>> >> self.processTransaction()
>>>>>> >>   File
>>>>>> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
>>>>>> >> line 262, in processTransaction
>>>>>> >> if self._miniyum.buildTransaction():
>>>>>> >>   File "/tmp/ovirt-6fV8LBWX5i/pythonlib/otopi/miniyum.py", line
>>>>>> 920,
>>>>>> >> in buildTransaction
>>>>>> >> raise yum.Errors.YumBaseError(msg)
>>>>>> >> YumBaseError:
>>>>>> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
>>>>>> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
>>>>>> >> 2019-02-05 11:23:51,391-0500 ERROR otopi.context
>>>>>> >> context._executeMethod:151 Failed to execute stage 'Package
>>>>>> >> installation':
>>>>>> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
>>>>>> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']

[ovirt-devel] Re: package versioning problem causing failure on CQ ovirt-master on all projects

2019-02-06 Thread Barak Korren
On Wed, 6 Feb 2019 at 11:34, Simone Tiraboschi  wrote:

>
>
> On Wed, Feb 6, 2019 at 10:23 AM Barak Korren  wrote:
>
>>
>>
>> On Wed, 6 Feb 2019 at 11:15, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Feb 6, 2019 at 10:00 AM Dan Kenigsberg 
>>> wrote:
>>>
>>>> On Wed, Feb 6, 2019 at 10:54 AM Simone Tiraboschi 
>>>> wrote:
>>>> >
>>>> >
>>>> >
>>>> > On Wed, Feb 6, 2019 at 9:45 AM Dan Kenigsberg 
>>>> wrote:
>>>> >>
>>>> >> On Wed, Feb 6, 2019 at 10:16 AM Simone Tiraboschi <
>>>> stira...@redhat.com> wrote:
>>>> >> >
>>>> >> >
>>>> >> >
>>>> >> > On Tue, Feb 5, 2019 at 7:07 PM Dafna Ron  wrote:
>>>> >> >>
>>>> >> >> Hi,
>>>> >> >>
>>>> >> >> Please note that ovirt-ansible-hosted-engine-setup has a
>>>> versioning problem with the package and is causing bootstrap to fail for
>>>> upgrade suite [1]
>>>> >> >>
>>>> >> >> This is affecting all projects; it's been reported to the
>>>> developers and should be fixed as soon as possible.
>>>> >> >>
>>>> >> >> you can view CQ status here:
>>>> >> >>
>>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/
>>>> >> >>
>>>> >> >> [1] http://pastebin.test.redhat.com/708086
>>>> >>
>>>> >> It is unfair to refer to an internal pastebin here. It is also not
>>>> >> very sensible, as it is quite short.
>>>> >>
>>>> >> 2019-02-05 11:23:51,390-0500 ERROR
>>>> >> otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum
>>>> >>
>>>> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
>>>> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
>>>> >> 2019-02-05 11:23:51,390-0500 DEBUG otopi.context
>>>> >> context._executeMethod:142 method exception
>>>> >> Traceback (most recent call last):
>>>> >>   File "/tmp/ovirt-6fV8LBWX5i/pythonlib/otopi/context.py", line 132,
>>>> >> in _executeMethod
>>>> >> method['method']()
>>>> >>   File
>>>> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
>>>> >> line 248, in _packages
>>>> >> self.processTransaction()
>>>> >>   File
>>>> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
>>>> >> line 262, in processTransaction
>>>> >> if self._miniyum.buildTransaction():
>>>> >>   File "/tmp/ovirt-6fV8LBWX5i/pythonlib/otopi/miniyum.py", line 920,
>>>> >> in buildTransaction
>>>> >> raise yum.Errors.YumBaseError(msg)
>>>> >> YumBaseError:
>>>> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
>>>> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
>>>> >> 2019-02-05 11:23:51,391-0500 ERROR otopi.context
>>>> >> context._executeMethod:151 Failed to execute stage 'Package
>>>> >> installation':
>>>> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
>>>> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
>>>> >> 2019-02-05 11:23:51,413-0500 DEBUG
>>>> >> otopi.plugins.otopi.debug.debug_failure.debug_failure
>>>> >> debug_failure._notification:100 tcp connections:
>>>> >>
>>>> >> >>
>>>> >> >
>>>> >> > The issue is that on github we already have
>>>> >> > VERSION="1.0.10"
>>>> >> > as we can see in
>>>> >> >
>>>> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/build.sh#L3
>>>> >> >
>>>> >> > And this has been bumped before the commit that now is reported as
>>>> broken.
>>>> >> >
>>>> >> > CI instead is still building the package as 1.0.9 

[ovirt-devel] Re: package versioning problem causing failure on CQ ovirt-master on all projects

2019-02-06 Thread Barak Korren
>> > It has been built here once as 1.0.10:
>> >
>> https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-hosted-engine-setup_standard-on-ghpush/91/
>> >
>> > then on the next commit, CI started building it again as 1.0.9, although
>> in the source code we have 1.0.10 - hence this issue.
>>
>> I don't understand the issue yet (that's not surprising, as I do not
>> know what that "ghpush" job is). Which CI job has built the wrong
>> version? Can you share its logs? Who owns it?
>>
>
> In the git log I see:
> commit b5a6c1db135d81d75f3330160e7ef4a84c97fd60 (HEAD -> master,
> upstream/master, origin/master, origin/HEAD, nolog)
> Author: Simone Tiraboschi 
> Date:   Tue Feb 5 10:56:58 2019 +0100
>
> Avoid using no_log when we have to pass back values to otopi
>
> commit 96974fad1ee6aee33f8183e49240f8a2a7a617d4
> Author: Simone Tiraboschi 
> Date:   Thu Jan 31 16:39:58 2019 +0100
>
> use dynamic inclusion to avoid tag inheritance
>
> commit 4a9a23fb8e88acba5af4febed43d9e4b02e7a2c5
> Author: Simone Tiraboschi 
> Date:   Thu Jan 31 15:15:04 2019 +0100
>
> Force facts gathering on partial executions
>
> commit 7428b54a5ba8458379b1a27d116f9504bb830e69
> Author: Simone Tiraboschi 
> Date:   Wed Jan 30 17:01:01 2019 +0100
>
> Use static imports and tags
>
> Fixes
> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/issues/20
> Reuires https://github.com/oVirt/ovirt-ansible-engine-setup/pull/39
>
>
>
> Version has been bumped to 1.0.10 on commit
> 7428b54a5ba8458379b1a27d116f9504bb830e69 since it introduces a backward
> incompatible change and we need to track it.
>
> 7428b54a5ba8458379b1a27d116f9504bb830e69 failed CI tests due to an issue
> on a different package found yesterday.
>
> So 7428b54a5ba8458379b1a27d116f9504bb830e69 got ignored, and now CI is
> building from commit b5a6c1db135d81d75f3330160e7ef4a84c97fd60 (the last
> one) rebased on something before 7428b54a5ba8458379b1a27d116f9504bb830e69,
> which is not what we have in git. So now, after
> b5a6c1db135d81d75f3330160e7ef4a84c97fd60 (the last commit), the package
> builds in CI as 1.0.9 although in the code we have 1.0.10 - hence the issue.
>
>
We never ignore commits, certainly not merged ones...

We can fall back to older builds on system test failures and throw away
newer builds if we suspect they cause the failure, in which case the builds
need to be resubmitted, but this logic happens at the build level, not the
commit level; there is no commit reordering or dropping anywhere.


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/BOHHVQVIMO7VHJY6Z6Y4XMVFYTEULTOJ/


[ovirt-devel] Re: package versioning problem causing failure on CQ ovirt-master on all projects

2019-02-06 Thread Barak Korren
On Wed, 6 Feb 2019 at 11:00, Dan Kenigsberg  wrote:

> On Wed, Feb 6, 2019 at 10:54 AM Simone Tiraboschi 
> wrote:
> >
> >
> >
> > On Wed, Feb 6, 2019 at 9:45 AM Dan Kenigsberg  wrote:
> >>
> >> On Wed, Feb 6, 2019 at 10:16 AM Simone Tiraboschi 
> wrote:
> >> >
> >> >
> >> >
> >> > On Tue, Feb 5, 2019 at 7:07 PM Dafna Ron  wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> Please note that ovirt-ansible-hosted-engine-setup has a versioning
> problem with the package and is causing bootstrap to fail for upgrade suite
> [1]
> >> >>
> >> >> This is affecting all projects; it's been reported to the developers
> and should be fixed as soon as possible.
> >> >>
> >> >> you can view CQ status here:
> >> >>
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/
> >> >>
> >> >> [1] http://pastebin.test.redhat.com/708086
> >>
> >> It is unfair to refer to an internal pastebin here. It is also not
> >> very sensible, as it is quite short.
> >>
> >> 2019-02-05 11:23:51,390-0500 ERROR
> >> otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum
> >>
> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
> >> 2019-02-05 11:23:51,390-0500 DEBUG otopi.context
> >> context._executeMethod:142 method exception
> >> Traceback (most recent call last):
> >>   File "/tmp/ovirt-6fV8LBWX5i/pythonlib/otopi/context.py", line 132,
> >> in _executeMethod
> >> method['method']()
> >>   File
> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
> >> line 248, in _packages
> >> self.processTransaction()
> >>   File
> "/tmp/ovirt-6fV8LBWX5i/otopi-plugins/otopi/packagers/yumpackager.py",
> >> line 262, in processTransaction
> >> if self._miniyum.buildTransaction():
> >>   File "/tmp/ovirt-6fV8LBWX5i/pythonlib/otopi/miniyum.py", line 920,
> >> in buildTransaction
> >> raise yum.Errors.YumBaseError(msg)
> >> YumBaseError:
> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
> >> 2019-02-05 11:23:51,391-0500 ERROR otopi.context
> >> context._executeMethod:151 Failed to execute stage 'Package
> >> installation':
> [u'ovirt-hosted-engine-setup-2.3.5-0.0.master.20190205110929.gitfdbc215.el7.noarch
> >> requires ovirt-ansible-hosted-engine-setup >= 1.0.10']
> >> 2019-02-05 11:23:51,413-0500 DEBUG
> >> otopi.plugins.otopi.debug.debug_failure.debug_failure
> >> debug_failure._notification:100 tcp connections:
> >>
> >> >>
> >> >
> >> > The issue is that on github we already have
> >> > VERSION="1.0.10"
> >> > as we can see in
> >> >
> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/build.sh#L3
> >> >
> >> > And this has been bumped before the commit that now is reported as
> broken.
> >> >
> >> > CI instead is still building the package as 1.0.9 ignoring the commit
> that bumped the version.
> >> > Honestly I don't know how I can fix it if the version value is
> already the desired one in the source code.
> >>
> >> I don't see your ovirt-ansible-hosted-engine-setup-1.0.10, only
> >>
> https://plain.resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/ovirt-ansible-hosted-engine-setup-1.0.9-0.1.master.20190129095419.el7.noarch.rpm
> >> Not even under "tested":
> >>
> https://plain.resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/noarch/ovirt-ansible-hosted-engine-setup-1.0.9-0.1.master.20190129095419.el7.noarch.rpm
> >>
> >> Simone, can you double-check that its artifacts have been built and
> >> have been accepted by the change queue?
> >
> >
> > It has been built here once as 1.0.10:
> >
> https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-hosted-engine-setup_standard-on-ghpush/91/
> >
> > then on the next commit, CI started building it again as 1.0.9 although
> in the source code we have 1.0.10 and so this issue.
>

The next build of this job failed because of an infra issue.

Are you confusing pre- and post-merge builds?

*ghpush job runs post-merge on merged code
*check-pr job runs on PRs

change-queue only looks at builds generated by the *ghpush job, and unless
someone intervenes manually, the *ghpush job should always handle commits
in merge order.


> I don't understand the issue yet (that's not surprising as I do not
> know what is that "ghpush" job). Which CI job has built the wrong
> version? can you share its logs? who owns it?
>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RHWT64HONVMDHNUEDIKRDX2MMFNCOC7N/


[ovirt-devel] Re: Tests failed because global_setup.sh failed

2018-12-25 Thread Barak Korren
On Tue, 25 Dec 2018 at 09:53, Yedidyah Bar David  wrote:

> On Mon, Dec 24, 2018 at 7:49 PM Nir Soffer  wrote:
> >
> > Not sure why global setup failed:
>
> Because of:
>
> + sudo -n systemctl enable postfix
> Failed to execute operation: Connection timed out
> + sudo -n systemctl start postfix
> Failed to start postfix.service: Connection timed out
> See system logs and 'systemctl status postfix.service' for details.
> + failed=true
>
>
Let's have the discussion on the Jira ticket:
https://ovirt-jira.atlassian.net/browse/OVIRT-2636



> Looked a bit and can't find system logs to try and understand why this
> failed.
>
> >
> > + [[ ! -O /home/jenkins/.ssh ]]
> > + [[ ! -G /home/jenkins/.ssh ]]
> > + verify_set_permissions 700 /home/jenkins/.ssh
> > + local target_permissions=700
> > + local path_to_set=/home/jenkins/.ssh
> > ++ stat -c %a /home/jenkins/.ssh
> > + local access=700
> > + [[ 700 != \7\0\0 ]]
> > + return 0
> > + [[ -f /home/jenkins/.ssh/known_hosts ]]
> > + verify_set_ownership /home/jenkins/.ssh/known_hosts
> > + local path_to_set=/home/jenkins/.ssh/known_hosts
> > ++ id -un
> > + local owner=jenkins
> > ++ id -gn
> > + local group=jenkins
> > + [[ ! -O /home/jenkins/.ssh/known_hosts ]]
> > + [[ ! -G /home/jenkins/.ssh/known_hosts ]]
> > + verify_set_permissions 644 /home/jenkins/.ssh/known_hosts
> > + local target_permissions=644
> > + local path_to_set=/home/jenkins/.ssh/known_hosts
> > ++ stat -c %a /home/jenkins/.ssh/known_hosts
> > + local access=644
> > + [[ 644 != \6\4\4 ]]
> > + return 0
> > + return 0
> > + true
> > + log ERROR Aborting.
> >
> > Build:
> >
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_standard-check-patch/runs/1048/nodes/125/steps/479/log/?start=0
>
> I found above in this log, but do not see this log in the artifacts:
>
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/1084/
>
> I (still?) do not know Blue Ocean well enough; so far I have found it hard
> to understand and find stuff there.
>
> CI team: please try to make searches easier. Ideally, I'd like the above
> link (to a specific build of a specific job) to have a search box for that
> build, that searches in everything created by that build - perhaps not
> only artifacts, if the above output from global_setup is not considered an
> artifact. Thanks.
>

You can create an RFE...


>
> Best regards,
> --
> Didi
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZWL7RLJY2L4QJEYD56JQ3HMMOWP7TSAL/


[ovirt-devel] Re: [VDSM] all tests passed, build failed with "tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it"

2018-12-16 Thread Barak Korren
On Mon, 17 Dec 2018 at 08:32, Edward Haas  wrote:

>
>
> On Sun, Dec 16, 2018 at 3:31 PM Barak Korren  wrote:
>
>>
>>
>> On Sun, 16 Dec 2018 at 14:44, Edward Haas  wrote:
>>
>>>
>>>
>>> On Sun, Dec 16, 2018 at 2:40 PM Nir Soffer  wrote:
>>>
>>>> On Sun, Dec 16, 2018 at 2:21 PM Nir Soffer  wrote:
>>>>
>>>>> On Sun, Dec 2, 2018 at 8:18 AM Edward Haas  wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Dec 1, 2018 at 11:10 PM Nir Soffer 
>>>>>> wrote:
>>>>>>
>>>>>>> On Thu, Nov 29, 2018 at 11:21 AM Edward Haas 
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Nov 29, 2018 at 10:41 AM Edward Haas 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Nov 28, 2018 at 8:12 PM Nir Soffer 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> We have this failure that pops randomly:
>>>>>>>>>>
>>>>>>>>>> 1. All tests pass
>>>>>>>>>>
>>>>>>>>>> *00:13:13.284* ___ summary 
>>>>>>>>>> *00:13:13.285*   tests: commands 
>>>>>>>>>> succeeded*00:13:13.286*   storage-py27: commands 
>>>>>>>>>> succeeded*00:13:13.286*   storage-py36: commands 
>>>>>>>>>> succeeded*00:13:13.286*   lib-py27: commands succeeded*00:13:13.287* 
>>>>>>>>>>   lib-py36: commands succeeded*00:13:13.288*   network-py27: 
>>>>>>>>>> commands succeeded*00:13:13.290*   network-py36: commands 
>>>>>>>>>> succeeded*00:13:13.291*   virt-py27: commands 
>>>>>>>>>> succeeded*00:13:13.292*   virt-py36: commands 
>>>>>>>>>> succeeded*00:13:13.293*   congratulations :)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2. But we fail to collect logs at the end
>>>>>>>>>>
>>>>>>>>>> *00:14:35.992* 
>>>>>>>>>> ##*00:14:35.995*
>>>>>>>>>>  ## Wed Nov 28 17:39:50 UTC 2018 Finished env: 
>>>>>>>>>> fc28:fedora-28-x86_64*00:14:35.996* ##  took 764 
>>>>>>>>>> seconds*00:14:35.997* ##  rc = 1*00:14:35.997* 
>>>>>>>>>> ##*00:14:36.009*
>>>>>>>>>>  ##! ERROR v*00:14:36.010* 
>>>>>>>>>> ##! Last 20 log entries: 
>>>>>>>>>> /tmp/mock_logs.Lcop4ZOq/script/stdout_stderr.log*00:14:36.011* 
>>>>>>>>>> ##!*00:14:36.012* 
>>>>>>>>>> journal/b087148aba6d49b9bbef488e52a48752/system.journal*00:14:36.013*
>>>>>>>>>>  tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file 
>>>>>>>>>> changed as we read it*00:14:36.014* 
>>>>>>>>>> journal/b087148aba6d49b9bbef488e52a48752/user-1000.journal*00:14:36.015*
>>>>>>>>>>  lastlog*00:14:36.015* libvirt/*00:14:36.015* 
>>>>>>>>>> libvirt/lxc/*00:14:36.015* libvirt/libxl/*00:14:36.016* 
>>>>>>>>>> libvirt/qemu/*00:14:36.016* 
>>>>>>>>>> libvirt/qemu/LiveOS-f920001d-be4e-47ea-ac26-72480fd5be87.log*00:14:36.017*
>>>>>>>>>>  libvirt/uml/*00:14:36.017* ovirt-guest-agent/*00:14:36.017* 
>>>>>>>>>> ovirt-guest-agent/ovirt-guest-agent.log*00:14:36.017* 
>>>>>>>>>> README*00:14:36.018* samba/*00:14:36.018* samba/old/*00:14:36.018* 
>>>>>>>>>> sssd/*00:14:36.018* tallylog*00:14:36.018* wtmp*00:14:36.018* Took 
>>>>>>>>>> 678 seconds*00:14:36.018* 
>>>>>>>>>> ===*00:14:36.019* ##!*00:14:36.019* 
>>>>>>>>>>

[ovirt-devel] Re: [VDSM] all tests passed, build failed with "tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it"

2018-12-16 Thread Barak Korren
:13:55.923* libvirt/*00:13:55.924* 
>>>>>>>> libvirt/lxc/*00:13:55.926* libvirt/libxl/*00:13:55.927* 
>>>>>>>> libvirt/qemu/*00:13:55.928* 
>>>>>>>> libvirt/qemu/LiveOS-f920001d-be4e-47ea-ac26-72480fd5be87.log*00:13:55.929*
>>>>>>>>  libvirt/uml/*00:13:55.930* ovirt-guest-agent/*00:13:55.930* 
>>>>>>>> ovirt-guest-agent/ovirt-guest-agent.log*00:13:55.932* 
>>>>>>>> README*00:13:55.933* samba/*00:13:55.933* samba/old/*00:13:55.935* 
>>>>>>>> sssd/*00:13:55.935* tallylog*00:13:55.935* wtmp
>>>>>>>>
>>>>>>>>
>>>>>>>> Most if not all are not relevant to vdsm tests, and should not be
>>>>>>>> collected.
>>>>>>>>
>>>>>>>> This was added in:
>>>>>>>>
>>>>>>>> commit 9c9c17297433e5a5a49aa19cde10b206e7db61e9
>>>>>>>> Author: Edward Haas 
>>>>>>>> Date:   Tue Apr 17 10:53:11 2018 +0300
>>>>>>>>
>>>>>>>> automation: Collect logs even when check-patch fails
>>>>>>>>
>>>>>>>> Change-Id: Idfe07ce6fc55473b1db1d7f16754f559cc5c345a
>>>>>>>> Signed-off-by: Edward Haas 
>>>>>>>>
>>>>>>>> Reviewed in:
>>>>>>>> https://gerrit.ovirt.org/c/90370
>>>>>>>>
>>>>>>>> Edward, can you explain why we need to collect logs during
>>>>>>>> check-patch,
>>>>>>>> and why we need to collect all the logs in the system?
>>>>>>>>
>>>>>>>
>>>>>>> check-patch runs unit and integration tests.
>>>>>>> The integration tests are touching the OS and other packages (like
>>>>>>> openvswitch).
>>>>>>> It was added so we can debug why tests failed.
>>>>>>>
>>>>>>> I guess we can now separate the unit and integration tests, but it
>>>>>>> will not solve
>>>>>>> the problem presented here.
>>>>>>> Letting log collection fail silently sounds like a good enough
>>>>>>> solution to me.
>>>>>>>
>>>>>>
>>>>>> Barak suggested to just exclude the journal:
>>>>>> https://gerrit.ovirt.org/#/c/95850/
>>>>>>
>>>>>
>>>>> This is fixed now, thanks!
>>>>>
>>>>
>>> Not fixed yet, we still fail collecting /var/host_log:
>>>
>>> + cd /var/host_log
>>>
>>> + tar --exclude 'journal/*' -czf
>>> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/exported-artifacts/mock_varlogs.tar.gz
>>> btmp faillog glusterfs grubby_prune_debug lastlog libvirt openvswitch swtpm
>>> tallylog vdsm_tests.log wtmp yum.log
>>> + cd /var/host_log
>>> + tar -czf
>>> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/exported-artifacts/host_varlogs.tar.gz
>>> anaconda audit boot.log boot.log-20181210 boot.log-20181211
>>> boot.log-20181212 boot.log-20181213 boot.log-20181214 boot.log-20181215
>>> boot.log-20181216 btmp btmp-20181201 cron cron-20181126 cron-20181202
>>> cron-20181210 cron-20181216 dmesg dmesg.old firewalld glusterfs grubby
>>> grubby_prune_debug httpd journal lastlog libvirt maillog maillog-20181126
>>> maillog-20181202 maillog-20181210 maillog-20181216 messages ntpstats
>>> ovirt-engine ovirt-guest-agent ovirt-imageio-proxy ppp puppet qemu-ga
>>> secure spooler spooler-20181126 spooler-20181202 spooler-20181210
>>> spooler-20181216 tallylog tuned wpa_supplicant.log wtmp yum.log
>>> yum.log-20170101 yum.log-20180101
>>> tar: journal/d2b3276bfc6c7a4e95ce6b2b9b5d0f20/system.journal: file
>>> changed as we read it
>>>
>>>
>>> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_standard-check-patch/runs/825/nodes/119/steps/209/log/?start=0
>>>
>>> Copying the binary logs from journal is not the way to copy logs.
>>>
>>> If we need the host journal, it should be collected by CI infra using
>>> journalctl.
>>>
>>
> But these are collected by the CI, not us. We just take what was already
> collected.
> Gal, Barak, can you explain this?
>

As you can see in the code Nir patched, we don't collect it - you do.


>
>
>> This should fix the issue:
>> https://gerrit.ovirt.org/c/96244/
>>
>>
>>>
>>> Nir
>>>
>>>>
>>>>
>>>>> Any reason why we exclude journal only for /var/log, and do collect
>>>>> the binary journal
>>>>> from /var/host_log? I guess it can fail in the same way.
>>>>>
>>>>
>>>> As far as I know, /var/host_log is
>>>> collected by the CI and placed (copied) there.
>>>> It does not collect everything, therefore we added the second one.
>>>>
>>>>
>>>>> If we collect logs for integration tests, the most important log is
>>>>> the journal, and now
>>>>> we skip it.
>>>>>
>>>>
>>>> I think it appears in /var/host_log.
>>>>
>>>>
>>>>>
>>>>> We can get the journal in a reliable way like this:
>>>>>
>>>>> journalctl --since build-date > /tmp/journal.log
>>>>>
>>>>> Nir
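
For illustration, a hedged sketch of wrapping that journalctl call in a
Python collection script (the since-timestamp below is made up; unlike
tar'ing the binary journal files, exporting with journalctl cannot race
with journald):

    import subprocess

    def collect_journal(since, dest='/tmp/journal.log'):
        # Export the journal as text from the given timestamp onwards
        with open(dest, 'wb') as out:
            subprocess.check_call(['journalctl', '--since', since],
                                  stdout=out)

    collect_journal('2018-12-16 00:00:00')  # e.g., the build start time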
>>>>>
>>>>>>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HOAQDBVVEAQ2G7TSIRP3KZTBP7M3BJQT/


[ovirt-devel] Re: [Ovirt] [CQ weekly status] [14-12-2018]

2018-12-15 Thread Barak Korren
On Fri, 14 Dec 2018 at 18:27, Dafna Ron  wrote:

> Hi,
>
> This mail is to provide the current status of CQ and allow people to
> review status before and after the weekend.
> Please refer to below colour map for further information on the meaning of
> the colours.
>
> *CQ-4.2*:  GREEN (#1)
>
> Last job failure was on Dec 11th.
> Project: ovirt-engine
> Reason: infra related failure
>
> *CQ-Master:* GREEN (#1)
>
> Last job failure was on Dec 13th
> Project: vdsm
> Reason: a failed package build caused CQ to fail to run. The package build
> failure was infra-related and a new package was successfully built and
> tested after this failure.
>
>
Given RHEVINTEG-2677
<https://projects.engineering.redhat.com/browse/RHEVINTEG-2677> had gone
unsolved for 4 days, I'd think DS-master should be red...
That issue basically means vdsm can't go through the DS master CQ.



> Happy week!
> Dafna
>
>
>
> ---
> COLOUR MAP
>
> Green = job has been passing successfully
>
> ** green for more than 3 days may suggest we need a review of our test
> coverage
>
>
>    1. 1-3 days    GREEN (#1)
>    2. 4-7 days    GREEN (#2)
>    3. Over 7 days GREEN (#3)
>
>
> Yellow = intermittent failures for different projects but no lasting or
> current regressions
>
> ** intermittent failures would indicate a healthy project, as we expect a
> number of failures during the week
>
> ** I will not report any of the solved failures or regressions.
>
>
>    1. Solved job failures  YELLOW (#1)
>    2. Solved regressions   YELLOW (#2)
>
>
> Red = job has been failing
>
> ** Active Failures. The colour will change based on the amount of time the
> project/s has been broken. Only active regressions would be reported.
>
>
>    1. 1-3 days    RED (#1)
>    2. 4-7 days    RED (#2)
>    3. Over 7 days RED (#3)
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JBPRLKX4UGSZMAYQ2356T5WBJDBGQD4S/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PNPT5EF2MFCANACGGHIO725SBO46HJTC/


[ovirt-devel] Re: [VDSM] check-merged failing for years - remove it?

2018-12-06 Thread Barak Korren
On Thu, 6 Dec 2018 at 12:10, Marcin Sobczyk  wrote:

> Hi Eyal,
>
> removal of V1 is on the way:
> https://gerrit.ovirt.org/#/c/96031/
> https://gerrit.ovirt.org/#/c/96027/
> https://gerrit.ovirt.org/#/c/95774/
>
> and here's a patch that disables 'check-merged' on master:
>
> https://gerrit.ovirt.org/#/c/96032/
>
> Do we want to also disable it on 'ovirt-4.2'?
>

Either disable it, or make sure it passes



> Marcin
> On 12/6/18 10:50 AM, Eyal Edri wrote:
>
> Guys,
> The check-merged job is causing a lot of noise and failures in CI and CQ.
> Can we drop it ASAP, and continue to discuss offline whether to move
> that functionality to check-patch as part of V2?
>
> Also, if we could drop the V1 jobs, that would be great; it would reduce
> the noise from failures there.
>
> On Thu, Nov 29, 2018 at 8:39 AM Barak Korren  wrote:
>
>>
>>
>> On Thu, 29 Nov 2018 at 00:29, Nir Soffer  wrote:
>>
>>> On Wed, Nov 28, 2018 at 11:30 PM Nir Soffer  wrote:
>>>
>>>> On Wed, Nov 28, 2018 at 12:03 PM Edward Haas 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Nov 28, 2018 at 11:28 AM Marcin Sobczyk 
>>>>> wrote:
>>>>>
>>>>>> How much value does it add compared to check-patch?
>>>>>>
>>>>>> If we can hold for a while with pulling the plug, I can try to split
>>>>>> it into substages in stdci v2 and see if things stabilize a bit.
>>>>>>
>>>>>
>>>>> I would prefer we first work with stdci v2 in order to move the
>>>>> functional tests there (or at least play with it).
>>>>> Then we can remove it.
>>>>>
>>>>
>>>> Turns out that this won't work with stdci v2 - if check-merged fails, the
>>>> change queue
>>>> will reject the patch.
>>>>
>>>> See this mail from in...@ovirt.org mailing list:
>>>>
>>>> Change 95559,13 (vdsm) is probably the reason behind recent system test
>>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>>
>>>>> This change had been removed from the testing queue. Artifacts build
>>>>> from this
>>>>> change will not be released until it is fixed.
>>>>>
>>>>> For further details about the change see:
>>>>> https://gerrit.ovirt.org/#/c/95559/13
>>>>
>>>>
>>>> According to Dafna and Barak, the change queue requires that all jobs pass,
>>>> so we cannot
>>>> have a flaky job in the build.
>>>>
>>>> I hopefully removed it from stdci v2 here:
>>>> https://gerrit.ovirt.org/c/95845/
>>>>
>>>> I don't think we should even enable check-merged again. All tests must
>>>> run *before* we
>>>> merge. We cannot work with a job that will randomly fail after merge.
>>>>
>>>
>>> Here is another failure:
>>>
>>> A system test invoked by the "ovirt-master" change queue including change
>>> 95817,2 (vdsm) failed. However, this change seems not to be the root
>>> cause for
>>> this failure. Change 95559,13 (vdsm) that this change depends on or is
>>> based
>>> on, was detected as the cause of the testing failures.
>>>
>>> This change had been removed from the testing queue. Artifacts built
>>> from this
>>> change will not be released until either change 95559,13 (vdsm) is fixed
>>> and
>>> this change is updated to refer to or rebased on the fixed version, or
>>> this
>>> change is modified to no longer depend on it.
>>>
>>> For further details about the change see:
>>> https://gerrit.ovirt.org/#/c/95817/2
>>>
>>> For further details about the change that seems to be the root cause
>>> behind the
>>> testing failures see:
>>> https://gerrit.ovirt.org/#/c/95559/13
>>>
>>> For failed test results see:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11719/
>>>
>>>
>>> We are going to see more failures, since we merged yesterday several
>>> patches after the stdci v2 patch:
>>>
>>> 8e4df87a5 storage: blocksd_test refactored to use pytest monkeypatching
>>> ede08ad53 storage: blocksd_test refactore to use pytest.xfail
>>> 46aad2375 storage: blocksd_test convertet to pytest
>>> b4f2809c0 storage: Improved SD.create() docstring
>>> 30b1423e0 virt: use log.warning(), not log.warn()
>>> 224ebf092 ci: Added 'stdciv2' configuration file

[ovirt-devel] Re: [VDSM] check-merged failing for years - remove it?

2018-11-28 Thread Barak Korren
On Thu, 29 Nov 2018 at 00:29, Nir Soffer  wrote:

> On Wed, Nov 28, 2018 at 11:30 PM Nir Soffer  wrote:
>
>> On Wed, Nov 28, 2018 at 12:03 PM Edward Haas  wrote:
>>
>>>
>>>
>>> On Wed, Nov 28, 2018 at 11:28 AM Marcin Sobczyk 
>>> wrote:
>>>
>>>> How much value does it add compared to check-patch?
>>>>
>>>> If we can hold for a while with pulling the plug, I can try to split it
>>>> into substages in stdci v2 and see if things stabilize a bit.
>>>>
>>>
>>> I would prefer we first work with stdci v2 in order to move the
>>> functional tests there (or at least play with it).
>>> Then we can remove it.
>>>
>>
>> Turns out that this won't work with stdci v2 - if check-merged fails, the
>> change queue
>> will reject the patch.
>>
>> See this mail from in...@ovirt.org mailing list:
>>
>> Change 95559,13 (vdsm) is probably the reason behind recent system test
>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>
>>> This change had been removed from the testing queue. Artifacts build
>>> from this
>>> change will not be released until it is fixed.
>>>
>>> For further details about the change see:
>>> https://gerrit.ovirt.org/#/c/95559/13
>>
>>
>> According to Dafna and Barak, the change queue requires that all jobs pass, so
>> we cannot
>> have a flaky job in the build.
>>
>> I hopefully removed it from stdci v2 here:
>> https://gerrit.ovirt.org/c/95845/
>>
>> I don't think we should even enable check-merged again. All tests must
>> run *before* we
>> merge. We cannot work with a job that will randomly fail after merge.
>>
>
> Here is another failure:
>
> A system test invoked by the "ovirt-master" change queue including change
> 95817,2 (vdsm) failed. However, this change seems not to be the root cause
> for
> this failure. Change 95559,13 (vdsm) that this change depends on or is
> based
> on, was detected as the cause of the testing failures.
>
> This change had been removed from the testing queue. Artifacts built from
> this
> change will not be released until either change 95559,13 (vdsm) is fixed
> and
> this change is updated to refer to or rebased on the fixed version, or this
> change is modified to no longer depend on it.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/95817/2
>
> For further details about the change that seems to be the root cause
> behind the
> testing failures see:
> https://gerrit.ovirt.org/#/c/95559/13
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11719/
>
>
> We are going to see more failures, since we merged yesterday several
> patches after the stdci v2 patch:
>
> 8e4df87a5 storage: blocksd_test refactored to use pytest monkeypatching
> ede08ad53 storage: blocksd_test refactore to use pytest.xfail
> 46aad2375 storage: blocksd_test convertet to pytest
> b4f2809c0 storage: Improved SD.create() docstring
> 30b1423e0 virt: use log.warning(), not log.warn()
> 224ebf092 ci: Added 'stdciv2' configuration file
>

Hold on, since you also have V1 jobs now, patches are being submitted twice
into the queue, and they are actually passing when submitted by the v1
jobs like they did before:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11713/execution/node/85/log/

So yeah, you should fix check-merged or disable it, but as long as you have
the v1 jobs these failures are not causing any real harm, just noise.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RVTPRQURXCVCPLFZUS7TOUDU3A3BMGEJ/


[ovirt-devel] Re: splitting tests for stdci v2

2018-11-27 Thread Barak Korren
On Tue, 27 Nov 2018 at 11:27, Marcin Sobczyk  wrote:

> Hi,
>
> I have another question about our approach to moving to stdci v2 - after
> some talk with Martin, he suggested making the switch from v1 to v2 just by
> adding 'stdci.yaml' in master and our stable branches and turning off old
> CI. Then, we would have the same old build process just with v2.
>
> That makes sense to me - right now I'm working with refining 'stdci.yaml'
> for master (cleaning things up, splitting into substages, etc.), but the
> problem is that for each patch I post *two* pipelines are being launched -
> the old CI and the new one. IMHO this is abusing the infrastructure.
>
> By enabling v2 first and only then focusing on doing refinements we could
> avoid that.
>
> But the real question is - do we want the new CI to be optimized
> (substages etc.) for stable branches? I don't feel really comfortable with
> messing with stable's build process...
>

Since the stdci.yaml file is committed to the branch you can decide for
yourself which branches you optimize and which you don't...



> Marcin
> On 11/26/18 2:06 PM, Nir Soffer wrote:
>
> On Mon, Nov 26, 2018 at 3:00 PM Marcin Sobczyk 
> wrote:
>
>> Hi,
>>
>> I'm currently working on parallelizing our stdci v2.
>>
>> I've already extracted 'linters' stage, more patches (and more
>> substages) are on the way.
>>
>> This part i.e. :
>>
>> if git diff-tree --no-commit-id --name-only -r HEAD | egrep --quiet
>> 'vdsm.spec.in|Makefile.am|automation' ; then
>>  ./automation/build-artifacts.sh
>> ...
>>
>> seems to be an excellent candidate for extraction to a separate substage.
>>
>> The question is - how should we proceed with tests? I can create
>> substage for each of:
>>
>> tox -e "tests,{storage,lib,network,virt}"
>>
>> But the original 'check-patch' combined the coverage reports into one -
>> we would lose that.
>>
>
> My long term goal is to get rid of all the ugly bash code in the makefile,
> and
> run everything via tox, but as a first step I think we can split the work by
> running:
>
> make tests
>
> In the tests substage, instead of "make check" today.
>
>
> Will do so.
>
>
> Does it change anything about coverage?
>
> We'll get separate coverage reports for py27 x el7 and {py27, py36} x fc28.
>
>
>
> Theoretically we can split also to storage/network/virt/infra jobs but I
> think
> this will consume too many resources and harm other projects sharing
> the slaves.
>
>
>> There is a possibility that we could work on something that gathers
>> coverage data from multiple sources (tests, OST) as a completely
>> separate jenkins job or something, but that will be a bigger effort.
>> What do you think about it?
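If each substage leaves its coverage data file behind as an artifact,
merging them afterwards could be as simple as this sketch (using
coverage.py's 'combine' command; the per-substage file names are
assumptions):

    # Merge the per-substage coverage data into a single report.
    coverage combine .coverage.storage .coverage.lib .coverage.network .coverage.virt
    coverage report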
>>
>> Marcin
>>
>>
>>
>>
>> _______
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TXAML3NLA5FW74DP4LGHZSKB5OPW5O3W/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QH6KMLASTICRKJND63AUKGP5GZNR22HU/


[ovirt-devel] Re: splitting tests for stdci v2

2018-11-26 Thread Barak Korren
On Mon, 26 Nov 2018 at 15:10, Nir Soffer  wrote:

> On Mon, Nov 26, 2018 at 3:00 PM Marcin Sobczyk 
> wrote:
>
>> Hi,
>>
>> I'm currently working on parallelizing our stdci v2.
>>
>> I've already extracted 'linters' stage, more patches (and more
>> substages) are on the way.
>>
>> This part i.e. :
>>
>> if git diff-tree --no-commit-id --name-only -r HEAD | egrep --quiet
>> 'vdsm.spec.in|Makefile.am|automation' ; then
>>  ./automation/build-artifacts.sh
>> ...
>>
>>
Please note that checks like this (about which files were changed by the
patch) can be done via the STDCI V2 'runif' option. So you no longer need
to write scripts like this.
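For illustration, a hypothetical stdci.yaml fragment expressing the same
condition with 'runif' (the exact key spelling below is an assumption, so
check the STDCI V2 documentation before copying it):

    sub-stages:
      - build-artifacts:
          runif:
            file-changed:
              - 'vdsm.spec.in'
              - 'Makefile.am'
              - 'automation/*'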


> seems to be an excellent candidate for extraction to a separate substage.
>>
>> The question is - how should we proceed with tests? I can create
>> substage for each of:
>>
>> tox -e "tests,{storage,lib,network,virt}"
>>
>> But the original 'check-patch' combined the coverage reports into one -
>> we would lose that.
>>
>
> My long term goal is to get rid of all the ugly bash code in the makefile,
> and
> run everything via tox, but as a first step I think we can split the work by
> running:
>
> make tests
>
> In the tests substage, instead of "make check" today.
>
> Does it change anything about coverage?
>
> Theoretically we can split also to storage/network/virt/infra jobs but I
> think
> this will consume too many resources and harm other projects sharing
> the slaves.
>
>
>> There is a possibility that we could work on something that gathers
>> coverage data from multiple sources (tests, OST) as a completely
>> separate jenkins job or something, but that will be a bigger effort.
>> What do you think about it?
>>
>> Marcin
>>
>>
>>
>>
>> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MR4VTN6BZFRCTPP77TT4SG47E36W72KQ/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SVA35XP5HSYFSCFQP4KJ6DEJQX7DK5TC/


[ovirt-devel] Re: UI test failing on a vdsm change?

2018-11-24 Thread Barak Korren
On Sat, Nov 24, 2018 at 17:05, Greg Sheremeta wrote:

>
>
> On Sat, Nov 24, 2018 at 9:49 AM Dan Kenigsberg  wrote:
>
>>
>>
>> On Sat, 24 Nov 2018, 13:50 Greg Sheremeta wrote:
>>> Correct, that vdsm patch is unrelated.
>>>
>>> The docker-based selenium testing infrastructure did not initialize
>>> correctly. Firefox started but chrome did not download correctly.
>>>  [
>>> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11365/testReport/junit/(root)/008_basic_ui_sanity/running_tests___basic_suite_el7_x86_64___start_grid/
>>> ]
>>>
>>> Unable to find image 'selenium/node-chrome-debug:3.9.1-actinium' locally
>>> Trying to pull repository docker.io/selenium/node-chrome-debug ...
>>> 3.9.1-actinium: Pulling from docker.io/selenium/node-chrome-debug
>>> 1be7f2b886e8: Already exists
>>> 6fbc4a21b806: Already exists
>>> c71a6f8e1378: Already exists
>>> 4be3072e5a37: Already exists
>>> 06c6d2f59700: Already exists
>>> edcd5e9f2f91: Already exists
>>> 0eeaf787f757: Already exists
>>> c949dee5af7e: Already exists
>>> df88a49b4162: Already exists
>>> ce3c6f42fd24: Already exists
>>> 6d845a39af3f: Pulling fs layer
>>> 11d16a965e13: Pulling fs layer
>>> 1294e9b42691: Pulling fs layer
>>> 04b0c053828d: Pulling fs layer
>>> cf044f1d0e2a: Pulling fs layer
>>> 8f84ccb3a86a: Pulling fs layer
>>> be9a1d0955bd: Pulling fs layer
>>> 872e5c8a3ad8: Pulling fs layer
>>> 07efee6f27e7: Pulling fs layer
>>> 5c6207de8f09: Pulling fs layer
>>> b932cacc6ddb: Pulling fs layer
>>> c057ca8f4e65: Pulling fs layer
>>> bbe16010d6ab: Pulling fs layer
>>> 645ca3607a4c: Pulling fs layer
>>> cf044f1d0e2a: Waiting
>>> 04b0c053828d: Waiting
>>> 8f84ccb3a86a: Waiting
>>> be9a1d0955bd: Waiting
>>> c057ca8f4e65: Waiting
>>> 5c6207de8f09: Waiting
>>> b932cacc6ddb: Waiting
>>> bbe16010d6ab: Waiting
>>> 645ca3607a4c: Waiting
>>> 07efee6f27e7: Waiting
>>> 872e5c8a3ad8: Waiting
>>> /usr/bin/docker-current: error pulling image configuration: unknown blob.
>>> See '/usr/bin/docker-current run --help'.
>>>
>>>
>>> checking chrome node
>>> executing shell: *curl http://:/wd/hub/static/resource/hub.html
>>> <--- that URL won't work :)*
>>>
>>>   % Total% Received % Xferd  Average Speed   TimeTime Time
>>> Current
>>>  Dload  Upload   Total   SpentLeft
>>> Speed
>>>
>>>   0 00 00 0  0  0 --:--:-- --:--:--
>>> --:--:-- 0
>>> curl: (6) Could not resolve host: ; Unknown error
>>>
>>> checking firefox node
>>> executing shell: curl
>>> http://172.18.0.3:/wd/hub/static/resource/hub.html
>>> 
>>> WebDriver Hub
>>>
>>>
>>> This is the first time I've seen something like this with this test. Did
>>> it happen only the one time?
>>>
>>
>> I have no idea. I didn't even know that such a test existed.
>>
>
> Yep, we make sure the UI loads, user can login and navigate, etc. -- all
> automated.
>
>
>>
>> ovirt CI tries to cache yum repos it pulls from. Do you know if it does
>> so with docker repos?
>>
>
> I don't know. The selenium ones are standard from dockerhub [
> https://hub.docker.com/u/selenium/]
>

We do not have the same elaborate caching for containers that we have for
RPMs, but we can cache containers as long as they are explicitly
whitelisted.

Please create a ticket and we'll do that for the selenium containers, if it
makes sense.



>
>>
>>
>>>
>>> On Sat, Nov 24, 2018 at 2:35 AM Dan Kenigsberg 
>>> wrote:
>>>
 I just noticed that a vdsm change to gluster tests
 https://gerrit.ovirt.org/#/c/95596/ failed in the change queue, on

 WebDriverException in _init_browser connecting to hub


 https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11365/testReport/junit/(root)/008_basic_ui_sanity/running_tests___basic_suite_el7_x86_64___initialize_chrome/

 The failure is clearly unrelated to the patch; maybe one of you can
 explain why the test fails?

>>>
>>>
>>> --
>>>
>>> GREG SHEREMETA
>>>
>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>
>>> Red Hat NA
>>>
>>> 
>>>
>>> gsher...@redhat.comIRC: gshereme
>>> 
>>>
>>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.comIRC: gshereme
> 
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WKJ2Y2SZNZYL37X2I5EX2O3SO5XH7COI/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: 

[ovirt-devel] Re: [vdsm] illegible check-patch failure

2018-11-18 Thread Barak Korren
On Sun, 18 Nov 2018 at 11:55, Dan Kenigsberg  wrote:

> This job
> https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/1965/consoleFull
> fails on
>
> 06:46:24 journal/b087148aba6d49b9bbef488e52a48752/system.journal
> 06:46:27 tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal:
> file changed as we read it
>
> (I think).
>
> Is that true? How can this be avoided?
>


Umm... stop journald while reading?

Maybe we need a different way to get the log data from it

Or maybe just exclude the journal files, because you have the logs as text
files already anyway
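A sketch of the exclusion approach (the archive name and paths are
illustrative):

    # Skip the mutable binary journal when archiving /var/log; the same
    # information is already available in the plain-text log files.
    tar --exclude 'journal/*' -czf exported-artifacts/varlogs.tar.gz \
        -C /var/log .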

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/I6PATLYTBXMZZVEVFYCIQENBLQJOKARP/


[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-15 Thread Barak Korren
 >>>>
>> >> > > > >>>> Is this helpful for you?
>> >> > > > >>>>
>> >> > > > >>>>
>> >> > > > >>>>
>> >> > > > >>>> actually, there are two issues
>> >> > > > >>>> 1) cluster is still 4.3 even after Martin’s revert.
>> >> > > > >>>>
>> >> > > > >>>
>> >> > > > >>> https://gerrit.ovirt.org/#/c/95409/ should align cluster
>> level with dc level
>> >> > > > >>>
>> >> > > > >>
>> >> > > > >> This change aligns the cluster level, but
>> >> > > > >>
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3502/parameters/
>> >> > > > >> consuming build result from
>> >> > > > >>
>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
>> >> > > > >> looks like that this does not solve the issue:
>> >> > > > >>  File
>> "/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
>> line 698, in run_vms
>> >> > > > >>api.vms.get(VM0_NAME).start(start_params)
>> >> > > > >>  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line
>> 31193, in start
>> >> > > > >>headers={"Correlation-Id":correlation_id}
>> >> > > > >>  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line
>> 122, in request
>> >> > > > >>persistent_auth=self.__persistent_auth
>> >> > > > >>  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>> line 79, in do_request
>> >> > > > >>persistent_auth)
>> >> > > > >>  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>> line 162, in __do_request
>> >> > > > >>raise errors.RequestError(response_code, response_reason,
>> response_body)
>> >> > > > >> RequestError:
>> >> > > > >> status: 400
>> >> > > > >> reason: Bad Request
>> >> > > > >>
>> >> > > > >> engine.log:
>> >> > > > >> 2018-11-14 03:10:36,802-05 INFO
>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3)
>> [99e282ea-577a-4dab-857b-285b1df5e6f6] Candidate host
>> 'lago-basic-suite-master-host-0' ('4dbfb937-ac4b-4cef-8ae3-124944829add')
>> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
>> (correlation id: 99e282ea-577a-4dab-857b-285b1df5e6f6)
>> >> > > > >> 2018-11-14 03:10:36,802-05 INFO
>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3)
>> [99e282ea-577a-4dab-857b-285b1df5e6f6] Candidate host
>> 'lago-basic-suite-master-host-1' ('731e5055-706e-4310-a062-045e32ffbfeb')
>> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
>> (correlation id: 99e282ea-577a-4dab-857b-285b1df5e6f6)
>> >> > > > >> 2018-11-14 03:10:36,802-05 ERROR
>> [org.ovirt.engine.core.bll.RunVmCommand] (default task-3)
>> [99e282ea-577a-4dab-857b-285b1df5e6f6] Can't find VDS to run the VM
>> 'dc1e1e92-1e5c-415e-8ac2-b919017adf40' on, so this VM will not be run.
>> >> > > > >>
>> >> > > > >>
>> >> > > > >
>> >> > > > >
>> >> > > > > https://gerrit.ovirt.org/#/c/95283/ results in
>> >> > > > >
>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
>> >> > > > > which is used in
>> >> > > > >
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3504/parameters/
>> >> > > > > results in run_vms succeeding.
>> >> > > > >
>> >> > > > > The next merged change
>> >> > > > > https://gerrit.ovirt.org/#/c/95310/ results in
>> >> > > > >
>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
>> >> > > > > which is used in
>> >> > > > >
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3505/parameters/
>> >> > > > > results in run_vms failing with
>> >> > > > >  File
>> "/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
>> line 698, in run_vms
>> >> > > > >api.vms.get(VM0_NAME).start(start_params)
>> >> > > > >  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line
>> 31193, in start
>> >> > > > >headers={"Correlation-Id":correlation_id}
>> >> > > > >  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line
>> 122, in request
>> >> > > > >persistent_auth=self.__persistent_auth
>> >> > > > >  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>> line 79, in do_request
>> >> > > > >persistent_auth)
>> >> > > > >  File
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>> line 162, in __do_request
>> >> > > > >raise errors.RequestError(response_code, response_reason,
>> response_body)
>> >> > > > > RequestError:
>> >> > > > > status: 400
>> >> > > > > reason: Bad Request
>> >> > > > >
>> >> > > > >
>> >> > > > > So even if the Cluster Level should be 4.2 now,
>> >> > > > > still https://gerrit.ovirt.org/#/c/95310/ seems influence the
>> behavior.
>> >> > > >
>> >> > > > I really do not see how it can affect 4.2.
>> >> > >
>> >> > > Me neither.
>> >> > >
>> >> > > > Are you sure the cluster is really 4.2? Sadly it’s not being
>> logged at all
>> >> > >
>> >> > > screenshot from local execution https://imgur.com/a/yiWBw3c
>> >> > >
>> >> > > > But if it really seem to matter (and since it needs a fix anyway
>> for 4.3) feel free to revert it of course
>> >> > > >
>> >> > >
>> >> > > I will post a revert change and check if this changes the behavior.
>> >> >
>> >> > Dominik, thanks for the research and for Martin's and your
>> >> > reverts/fixes. Finally Engine passes OST
>> >> >
>> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11153/
>> >> > and QE can expect a build tomorrow, after 2 weeks of droughts.
>> >>
>> >> unfortunately, the drought continues.
>> >
>> >
>> > Sorry, I'm missing the context or meaning - what does drought mean?
>>
>> Pardon my flowery language. I mean 2 weeks of no ovirt-engine builds.
>>
>> >
>> >>
>> >> Barak tells me that something is broken in the nightly cron job
>> >> copying the tested repo onto the master-snapshot one.
>> >
>> >
>> > Dafna, can you check this?
>> >
>> >>
>> >>
>> >> +Edri: please make it a priority to have it fixed.
>> >
>> >
>> >
>> > --
>> >
>> > Eyal edri
>> >
>> >
>> > MANAGER
>> >
>> > RHV/CNV DevOps
>> >
> EMEA VIRTUALIZATION R&D
>> >
>> >
>> > Red Hat EMEA
>> >
>> > TRIED. TESTED. TRUSTED.
>> > phone: +972-9-7692018
>> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MMWXTDQD6BBRPCZEY3BC4LYHRVKNXYGZ/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 14-09-2018 ] [ 002_bootstrap.add_hosts ]

2018-09-14 Thread Barak Korren
On Fri, Sep 14, 2018 at 19:03, Ravi Shankar Nori <rn...@redhat.com> wrote:

> Hi Martin,
>
> This is what I did. Checked out jenkins, ovirt-system-tests and ran the
> mock from ovirt-engine dir.
>
> ../jenkins/mock_configs/mock_runner.sh -e
> ../ovirt-system-tests/automation/upgrade-from-release_suite_master.sh el7
>


You should actually run it from the ovirt-system-tests directory. I'm very
much surprised that running it from the engine directory did anything
useful, as it makes no sense...




> On Fri, Sep 14, 2018 at 11:23 AM, Martin Perina 
> wrote:
>
>>
>>
>> On Fri, Sep 14, 2018 at 4:44 PM, Dafna Ron  wrote:
>>
>>> if you run it with mock you would remove any environmental conditions
>>> that can affect the outcome, so I recommend using mock
>>>
>>
>> Out of curiosity, how do you use mock_runner with run_suite? There are no
>> steps on running mock_runner mentioned in the docs [1] (only its
>> installation), and the following command doesn't work:
>>
>> ./jenkins/mock_configs/mock_runner.sh -C ../jenkins/mock_configs -p el7
>> -e run_suite.sh basic-suite-master
>>
>> And I haven't found any mock_runner parameter for passing additional
>> command line options to the executed script.
>>
>>
>> [1]
>> https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/index.html
>>
>>>
>>>
>>> On Fri, Sep 14, 2018 at 3:32 PM, Martin Perina 
>>> wrote:
>>>


 On Fri, Sep 14, 2018 at 3:49 PM, Dafna Ron  wrote:

> did you use mock to reproduce?
>

 No, just run_suite directly under my own user

>
> On Fri, Sep 14, 2018 at 2:39 PM, Martin Perina 
> wrote:
>
>> Hi,
>>
>> the problem is that we haven't fetched the temporary host-deploy log
>> from /tmp directory, so we don't know which string that host-deploy 
>> process
>> sent to engine is causing that issue. I tried to reproduce on my local
>> machine, but I was unable to reproduce it, 002_bootstrap phase finished
>> successfully (other phases are still running).
>>
>> So if anyone is able to reproduce, please try to fetch host-deploy
>> log from /tmp directory after the error is raised and share it.
>>
>> Thanks
>>
>> Martin
>>
>>
>> On Fri, Sep 14, 2018 at 1:52 PM, Dafna Ron  wrote:
>>
>>> Full logs can be found here:
>>>
>>>
>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/10307/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-002_bootstrap.py/
>>>
>>> On Fri, Sep 14, 2018 at 12:48 PM, Dafna Ron  wrote:
>>>
 Hi,

 The previous regression was resolved and we now have a new
 regression.

 I don't think that the reported change is related so can someone
 from ovirt-engine take a look?

 The failure is add host on the upgrade suite.

 Please note that we have not had an ovirt-engine build for over 10
 days due to several consecutive regressions and I would ask you to stop
 merging until we can stabilize the project and have a new package of
 engine.

 error:

 2018-09-14 05:51:07,670-04 INFO
 [org.ovirt.engine.core.uutils.ssh.SSHDialog]
 (EE-ManagedThreadFactory-engine-Thread-1) [5c91fcbd] SSH execute
 'root@lago-upgrade-from-release-suite-master-host-0' 'umask 0077;
 MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap
 "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
 /dev/null 2>&1" 0; tar -b1 --warning=no-timestamp -C "${MYTMP}" -x &&
 "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
 DIALOG/customization=bool:True'
 2018-09-14 05:51:08,550-04 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (VdsDeploy) [5c91fcbd] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), 
 Installing
 Host lago-upgrade-from-release-suite-master-host-0. Stage: 
 Initializing.
 2018-09-14 05:51:08,565-04 INFO
 [org.ovirt.engine.core.utils.transaction.TransactionSupport] 
 (VdsDeploy)
 [5c91fcbd] transaction rolled back
 2018-09-14 05:51:08,574-04 ERROR
 [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) 
 [5c91fcbd]
 Error during deploy dialog
 2018-09-14 05:51:08,578-04 ERROR
 [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
 (EE-ManagedThreadFactory-engine-Thread-1) [5c91fcbd] Error during host
 lago-upgrade-from-release-suite-master-host-0 install
 2018-09-14 05:51:08,586-04 ERROR
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (EE-ManagedThreadFactory-engine-Thread-1) [5c91fcbd] EVENT_ID:
 VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred 

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 11-09-2018 ] [ 002_bootstrap.add_cluster ]

2018-09-13 Thread Barak Korren
On 12 September 2018 at 16:41, Ravi Shankar Nori  wrote:

> Hi Dafna,
>
> works for master too https://jenkins.ovirt.org/job/
> ovirt-system-tests_manual/3170
>

This is testing an engine build from the master branch with oVirt 4.2
packages and test suite. Probably not what you wanted; you need to select
'master' as the oVirt (engine) version when testing master packages.


>
>
> On Wed, Sep 12, 2018 at 9:14 AM, Dafna Ron  wrote:
>
>> Hi Ravi,
>>
>> I looked at the parameters of the manual job, and the job was run on 4.2
>> while the failure is in the master branch.
>>
>>
>> On Wed, Sep 12, 2018 at 2:07 PM, Ravi Shankar Nori 
>> wrote:
>>
>>> Hi Dafna,
>>>
>>> I ran OST upgrade from release on my patch [1] and it seems to pass OST
>>> [2]
>>>
>>> What am I missing?
>>>
>>> Thanks
>>>
>>> Ravi
>>>
>>> [1] https://gerrit.ovirt.org/#/c/94281/
>>> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3168
>>>
>>> On Wed, Sep 12, 2018 at 7:02 AM, Dafna Ron  wrote:
>>>
>>>> Hi Ravi,
>>>> it seems your patch is failing CQ with the same issue:
>>>> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/10230/
>>>>
>>>> can you please take a look?
>>>>
>>>> Thanks,
>>>> Dafna
>>>>
>>>>
>>>> On Wed, Sep 12, 2018 at 8:08 AM, Dafna Ron  wrote:
>>>>
>>>>> Thank you Ravi.
>>>>>
>>>>>
>>>>> On Tue, Sep 11, 2018 at 4:22 PM, Ravi Shankar Nori 
>>>>> wrote:
>>>>>
>>>>>> Hi Dafna,
>>>>>>
>>>>>> Posted a patch to fix the issue [1]. [2] successful OST execution.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Ravi
>>>>>>
>>>>>> [1] https://gerrit.ovirt.org/#/c/94281/
>>>>>> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3165/
>>>>>>
>>>>>> On Tue, Sep 11, 2018 at 4:44 AM, Dafna Ron  wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> we have failures on both upgrade and basic suites for master.
>>>>>>> The patch reported as cause is:
>>>>>>> https://gerrit.ovirt.org/#/c/93345/10 - engine : Add finer grained
>>>>>>> monitoring thresholds for memory consumption on Hypervisors
>>>>>>>
>>>>>>> Ravi, can you please check this issue?
>>>>>>>
>>>>>>> You can see the logs here:
>>>>>>>
>>>>>>> https://jenkins.ovirt.org/job/ovirt-master_change-queue-test
>>>>>>> er/10187/artifact/basic-suite.el7.x86_64/test_logs/basic-sui
>>>>>>> te-master/
>>>>>>>
>>>>>>> Here is the error:
>>>>>>>
>>>>>>> 2018-09-11 02:08:44,511-04 WARN  
>>>>>>> [org.ovirt.engine.core.bll.AddClusterCommand]
>>>>>>> (default task-2) [1362d97c-9d55-40d2-9b03-ebe93ae2fe67] Validation
>>>>>>> of action 'AddCluster' failed for user admin@internal-authz.
>>>>>>> Reasons: VAR__TYPE__CLUSTER,VAR
>>>>>>> __ACTION__CREATE,must be greater than or equal to 1,$groups
>>>>>>> [Ljava.lang.Class;@29a040fc,$message 
>>>>>>> {javax.validation.constraints.Min.message},$payload
>>>>>>> [Ljava.lang.Class;@64094510,$value 
>>>>>>> 1,ACTION_TYPE_FAILED_ATTRIBUTE_PATH,$path
>>>>>>> cluster.logM
>>>>>>> axMemoryUsedThreshold,$validatedValue 0
>>>>>>> 2018-09-11 02:08:44,511-04 DEBUG [org.ovirt.engine.core.common.
>>>>>>> di.interceptor.DebugLoggingInterceptor] (default task-2)
>>>>>>> [1362d97c-9d55-40d2-9b03-ebe93ae2fe67] method: runAction, params:
>>>>>>> [AddCluster, ManagementNetworkOnClusterOperationPara
>>>>>>> meters:{commandId='e5eb6cf6-4e3d-4f50-be6a-9fe18bfc4d97',
>>>>>>> user='null', commandType='Unknown'}], timeElapsed: 56ms
>>>>>>> 2018-09-11 02:08:44,516-04 ERROR [org.ovirt.engine.api.restapi.
>>>>>>> resource.AbstractBackendResource] (default task-2) [] Operation
>>>>>>> Failed: [must be greater than or equal to 1, Attribute:
>>>>>>> cluster.logMaxMemoryUsedThreshold]
>>>>>>> 2018-09-11 02:08:44,533-04 DEBUG 
>>>>>>> [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter]
>>>>>>> (default task-2) [] Entered SsoRestApiAuthFilter
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Dafna
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
> ___
> Infra mailing list -- in...@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
> message/45B2A3OWVYP6FM6KYQWRDGY5RII5S6EC/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RFPO3IDPTWO2UK46C2SNVSHHD72HAU2D/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 11-09-2018 ] [ 002_bootstrap.add_cluster ]

2018-09-13 Thread Barak Korren
On 12 September 2018 at 21:52, Ravi Shankar Nori  wrote:

> ../jenkins/mock_configs/mock_runner.sh -C ../jenkins/mock_configs -p el7
>
> On master with my patches it succeeded.
>
>
This runs check-patch.sh, which is deprecated since we switched OST to
STDCI V2. To run a specific suite you need to run:

../jenkins/mock_configs/mock_runner.sh -e
automation/${suite_type}_suite_${ovirt_version}.sh el7

(Note that the '-C' option is no longer needed in recent enough versions of
mock_runner.sh)
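For example, running the master basic suite from an ovirt-system-tests
checkout would look roughly like this (assuming the usual suite script
naming under automation/):

    cd ovirt-system-tests
    ../jenkins/mock_configs/mock_runner.sh -e automation/basic_suite_master.sh el7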

Regards,
Barak.


>
> On Wed, Sep 12, 2018 at 11:57 AM, Dafna Ron  wrote:
>
>> Hi Ravi,
>>
>> I trust CQ more than the manual job :)
>>
>> I am adding Barak and Daniel because we may need to look at the manual
>> job to see why it would pass there and not in CQ.
>>
>> Can you try to run the fix using mock locally?
>>
>>
>> Thanks,
>> Dafna
>>
>>
>> On Wed, Sep 12, 2018 at 2:41 PM, Ravi Shankar Nori 
>> wrote:
>>
>>> Hi Dafna,
>>>
>>> works for master too https://jenkins.ovirt.org/job/
>>> ovirt-system-tests_manual/3170
>>>
>>> On Wed, Sep 12, 2018 at 9:14 AM, Dafna Ron  wrote:
>>>
>>>> Hi Ravi,
>>>>
>>>> I looked at the parameters of the manual job, and the job was run on 4.2
>>>> while the failure is in the master branch.
>>>>
>>>>
>>>> On Wed, Sep 12, 2018 at 2:07 PM, Ravi Shankar Nori 
>>>> wrote:
>>>>
>>>>> Hi Dafna,
>>>>>
>>>>> I ran OST upgrade from release on my patch [1] and it seems to pass
>>>>> OST [2]
>>>>>
>>>>> What am I missing?
>>>>>
>>>>> Thanks
>>>>>
>>>>> Ravi
>>>>>
>>>>> [1] https://gerrit.ovirt.org/#/c/94281/
>>>>> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3168
>>>>>
>>>>> On Wed, Sep 12, 2018 at 7:02 AM, Dafna Ron  wrote:
>>>>>
>>>>>> Hi Ravi,
>>>>>> it seems your patch is failing CQ with the same issue:
>>>>>> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/10230/
>>>>>>
>>>>>> can you please take a look?
>>>>>>
>>>>>> Thanks,
>>>>>> Dafna
>>>>>>
>>>>>>
>>>>>> On Wed, Sep 12, 2018 at 8:08 AM, Dafna Ron  wrote:
>>>>>>
>>>>>>> Thank you Ravi.
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Sep 11, 2018 at 4:22 PM, Ravi Shankar Nori wrote:
>>>>>>>
>>>>>>>> Hi Dafna,
>>>>>>>>
>>>>>>>> Posted a patch to fix the issue [1]. [2] successful OST execution.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> Ravi
>>>>>>>>
>>>>>>>> [1] https://gerrit.ovirt.org/#/c/94281/
>>>>>>>> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3165/
>>>>>>>>
>>>>>>>> On Tue, Sep 11, 2018 at 4:44 AM, Dafna Ron  wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> we have failures on both upgrade and basic suites for master.
>>>>>>>>> The patch reported as cause is:
>>>>>>>>> https://gerrit.ovirt.org/#/c/93345/10 - engine : Add finer
>>>>>>>>> grained monitoring thresholds for memory consumption on Hypervisors
>>>>>>>>>
>>>>>>>>> Ravi, can you please check this issue?
>>>>>>>>>
>>>>>>>>> You can see the logs here:
>>>>>>>>>
>>>>>>>>> https://jenkins.ovirt.org/job/ovirt-master_change-queue-test
>>>>>>>>> er/10187/artifact/basic-suite.el7.x86_64/test_logs/basic-sui
>>>>>>>>> te-master/
>>>>>>>>>
>>>>>>>>> Here is the error:
>>>>>>>>>
>>>>>>>>> 2018-09-11 02:08:44,511-04 WARN  
>>>>>>>>> [org.ovirt.engine.core.bll.AddClusterCommand]
>>>>>>>>> (default task-2) [1362d97c-9d55-40d2-9b03-ebe93ae2fe67

[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Barak Korren
I found out what happened here in the 1st run.

The build job started at *18:41:26* and finished at *18:48:19*, while
artifacts were archived at *18:48:18*.

The test job started at *18:44:13* and finished at *19:24:10*; it had
reached the point of trying to download the RPMs at *18:46:36*, almost
two minutes before they actually became available.

(All times are in UTC)

Dan, you need to wait for the build job to finish before you can launch the
test job...
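If you want to script that wait, a hypothetical helper polling the Jenkins
JSON API could look like this (assuming 'jq' is available; the job URL is
the one from this thread):

    BUILD_URL='http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44'
    # Poll until Jenkins reports the build has finished; only then is it
    # safe to start the manual OST job that consumes its artifacts.
    while [ "$(curl -s "${BUILD_URL}/api/json" | jq -r '.building')" = "true" ]; do
        sleep 30
    done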



On 9 August 2018 at 12:46, Dan Kenigsberg  wrote:

> On Thu, Aug 9, 2018 at 12:41 PM, Anton Marchukov 
> wrote:
> > Hello Barak, Dan.
> >
> > Repoman indeed expects a link to the jenkins job only and cannot work
> > with a specific artifact path. So I think the last rerun [1] with just
> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-
> demand-el7-x86_64/44/
> > worked on the repoman side; as I see from the lago log, the artifacts were
> > detected and downloaded:
> >
> > 2018-08-08 19:58:14,067::INFO::root::Saving
> > /dev/shm/ost/deployment-network-suite-4.2/default/
> internal_repo/default/el7/x86_64/vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> > 2018-08-08 19:58:14,068::INFO::root::Saving
> > /dev/shm/ost/deployment-network-suite-4.2/default/
> internal_repo/default/el7/noarch/vdsm-api-4.20.36-11.
> git9f9bbcc.el7.noarch.rpm
> > …
> >
> > That matches artifact names produced by the job Dan passed as the
> parameter:
> >
> >
> > vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> > vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
> > ...
> >
> >
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054/
>
> Darn, you are right. The second job did take the correct vdsm. It
> failed due to a production bug that we need to fix.
>
> >
> >
> > On 9 August 2018 at 09:25:40, Dan Kenigsberg (dan...@redhat.com) wrote:
> >> On Thu, Aug 9, 2018 at 8:29 AM, Barak Korren wrote:
> >> >
> >> >
> >> > On 8 August 2018 at 22:53, Dan Kenigsberg wrote:
> >> >>
> >> >> I've executed
> >> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/
> 3053/parameters/
> >> >> using
> >> >> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-
> demand-el7-x86_64/44/artifact/exported-artifacts/
> >> >> as custom repo.
> >> >>
> >> >> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which
> I
> >> >> expected would be pulled onto ost hosts. However
> >> >>
> >> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/
> 3053/artifact/exported-artifacts/tests.test_vm_
> operations/lago-network-suite-4-2-host-0/_var_log/yum.log
> >> >> shows that this was not the case.
> >> >>
> >> >> Any idea why that is?
> >> >
> >> >
> >> >
> >> > I can see the following in lago.log (in the section that includes the
> >> > repoman log):
> >> >
> >> > 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving
> artifact
> >> > source
> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-
> demand-el7-x86_64/44/
> >> > 2018-08-08 18:47:02,493::INFO::repoman.common.sources.jenkins::
> Parsing
> >> > jenkins URL:
> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-
> demand-el7-x86_64/44/
> >> > 2018-08-08 18:47:02,493::WARNING::root:: No artifacts found
> >> > 2018-08-08 18:47:02,493::INFO::root:: Done
> >> >
> >> >
> >> > The fact that the log says 'Parsing jenkins URL' means that repoman
> properly
> >> > detects that it is a URL to a Jenkins build, additionally when I run
> the
> >> > following locally it seems to download the packages just fine:
> >> >
> >> > repoman ~/tmp/repo add
> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-
> demand-el7-x86_64/44/
> >> >
> >> > So this looks like a repoman bug. Adding Anton.
> >> >
> >> > @Dan - can you just retry?
> >>
> >> I did try again, in
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054 which
> >> failed again.
> >> However, this time it has an empty lago.log.
> >>
> >> >
> >> >
> >> >>
> >> >> ___
> >> >> Devel mailing list -- devel@ovirt.org
> >> >> To unsubscribe send an email to devel-le...@ovirt.org

[ovirt-devel] Re: repoman glitch?

2018-08-08 Thread Barak Korren
On 8 August 2018 at 22:53, Dan Kenigsberg  wrote:

> I've executed http://jenkins.ovirt.org/job/ovirt-system-tests_manual/
> 3053/parameters/
> using http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-
> demand-el7-x86_64/44/artifact/exported-artifacts/
> as custom repo.
>
> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which I
> expected would be pulled onto ost hosts. However
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/
> 3053/artifact/exported-artifacts/tests.test_vm_
> operations/lago-network-suite-4-2-host-0/_var_log/yum.log
> shows that this was not the case.
>
> Any idea why that is?
>


I can see the following in lago.log (in the section that includes the
repoman log):

2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving artifact
source 
http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
2018-08-08 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing
jenkins URL: 
http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
2018-08-08 18:47:02,493::WARNING::root::No artifacts found
2018-08-08 18:47:02,493::INFO::root::Done


The fact that the log says 'Parsing jenkins URL' means that repoman
properly detects that it is a URL to a Jenkins build; additionally, when I
run the following locally it seems to download the packages just fine:

repoman ~/tmp/repo add
http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/

So this looks like a repoman bug. Adding Anton.

@Dan - can you just retry?



> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/PQHTXDZ6SLWI53FRHIOE5HDUI5ZBM4Z6/
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2F7XQSVQZDD76WOVEJ3TSHGJY37I6SXG/


[ovirt-devel] Re: who owns ovirt copr?

2018-08-06 Thread Barak Korren
On 6 August 2018 at 09:43, Sandro Bonazzola  wrote:

>
>
> 2018-08-06 7:54 GMT+02:00 Barak Korren :
>
>> While the search for the credentials can continue, I'd like to ask why
>> copr is being used in the first place.
>>
>> Can we switch the build process to STDCI? If you already have a script to
>> generate the SRPM for copr, having STDCI then run rpmbuild should be
>> trivial.
>>
>
> The responsible manager for this package explicitly requested to not use
> oVirt Jenkins at all for this specific package.
>

I'd like to re-review this decision and the reasons behind it.



>
>>
>> On 4 August 2018 at 13:46, Eyal Edri  wrote:
>>
>>> I believe it's possible, though it seems like it's redirecting to me
>>> currently [1], but I don't receive any emails from it.
>>> Duck, can you help?
>>>
>>> [1] from deploy.yaml in the infra-ansible project: ovirt-copr: "{{
>>> eedri_mail }}"
>>>
>>> On Sat, Aug 4, 2018 at 1:27 PM Greg Sheremeta 
>>> wrote:
>>>
>>>> On Sat, Aug 4, 2018 at 6:16 AM Eyal Edri  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Aug 3, 2018 at 11:15 PM Sandro Bonazzola 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Aug 3, 2018 at 02:53, Greg Sheremeta wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'd like to move https://copr.fedorainfrac
>>>>>>> loud.org/coprs/mlibra/ovirt-web-ui/ to https://copr.fedorainfraclo
>>>>>>> ud.org/coprs/ovirt/
>>>>>>>
>>>>>>> Who owns this / can share the password?
>>>>>>>
>>>>>>
>>>>>> It's owned by infra.
>>>>>>
>>>>>
>>>>> I don't recall us ever using it. It might have been temporarily used a
>>>>> few years ago when we tested copr to see if it could be helpful,
>>>>> but we ended up not using it, so I don't think we have any password info
>>>>> on it. Evgheni?
>>>>>
>>>>
>>>> It may be easier to point 'ovirt-c...@ovirt.org' to me -- is that
>>>> possible?
>>>>
>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Best wishes,
>>>>>>> Greg
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> GREG SHEREMETA
>>>>>>>
>>>>>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>>>>>
>>>>>>> Red Hat NA
>>>>>>>
>>>>>>> <https://www.redhat.com/>
>>>>>>>
>>>>>>> gsher...@redhat.comIRC: gshereme
>>>>>>> <https://red.ht/sig>
>>>>>>> ___
>>>>>>> Devel mailing list -- devel@ovirt.org
>>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>>>>>> y/about/community-guidelines/
>>>>>>> List Archives: https://lists.ovirt.org/archiv
>>>>>>> es/list/devel@ovirt.org/message/ZN4C75J7WUBHNAXH5MCD62GBAZM6VDR6/
>>>>>>>
>>>>>> ___
>>>>>> Infra mailing list -- in...@ovirt.org
>>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>>>>> y/about/community-guidelines/
>>>>>> List Archives: https://lists.ovirt.org/archiv
>>>>>> es/list/in...@ovirt.org/message/4JKP5F6NTOZV4KU3AYZHYJZ46UKEPIMN/
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Eyal edri
>>>>>
>>>>>
>>>>> MANAGER
>>>>>
>>>>> RHV DevOps
>>>>>
>>>>> EMEA VIRTUALIZATION R&D
>>>>>
>>>>>
>>>>> Red Hat EMEA <

[ovirt-devel] Re: who owns ovirt copr?

2018-08-05 Thread Barak Korren
While the search for the credentials can continue, I'd like to ask why
copr is being used in the first place.

Can we switch the build process to STDCI? If you already have a script to
generate the SRPM for copr, having STDCI then run rpmbuild should be
trivial.

On 4 August 2018 at 13:46, Eyal Edri  wrote:

> I believe it's possible, though it seems like it's redirecting to me
> currently [1], but I don't receive any emails from it.
> Duck, can you help?
>
> [1] from deploy.yaml in the infra-ansible project: ovirt-copr: "{{
> eedri_mail }}"
>
> On Sat, Aug 4, 2018 at 1:27 PM Greg Sheremeta  wrote:
>
>> On Sat, Aug 4, 2018 at 6:16 AM Eyal Edri  wrote:
>>
>>>
>>>
>>> On Fri, Aug 3, 2018 at 11:15 PM Sandro Bonazzola 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Aug 3, 2018 at 02:53, Greg Sheremeta wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I'd like to move https://copr.fedorainfracloud.org/coprs/
>>>>> mlibra/ovirt-web-ui/ to https://copr.fedorainfracloud.org/coprs/ovirt/
>>>>>
>>>>> Who owns this / can share the password?
>>>>>
>>>>
>>>> It's owned by infra.
>>>>
>>>
>>> I don't recall us ever using it. It might have been temporarily used a
>>> few years ago when we tested copr to see if it could be helpful,
>>> but we ended up not using it, so I don't think we have any password info on
>>> it. Evgheni?
>>>
>>
>> It may be easier to point 'ovirt-c...@ovirt.org' to me -- is that
>> possible?
>>
>>
>>>
>>>
>>>>
>>>>
>>>>
>>>>> Best wishes,
>>>>> Greg
>>>>>
>>>>> --
>>>>>
>>>>> GREG SHEREMETA
>>>>>
>>>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>>>
>>>>> Red Hat NA
>>>>>
>>>>> <https://www.redhat.com/>
>>>>>
>>>>> gsher...@redhat.comIRC: gshereme
>>>>> <https://red.ht/sig>
>>>>> ___
>>>>> Devel mailing list -- devel@ovirt.org
>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct: https://www.ovirt.org/
>>>>> community/about/community-guidelines/
>>>>> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
>>>>> message/ZN4C75J7WUBHNAXH5MCD62GBAZM6VDR6/
>>>>>
>>>> ___
>>>> Infra mailing list -- in...@ovirt.org
>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>>>> guidelines/
>>>> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
>>>> message/4JKP5F6NTOZV4KU3AYZHYJZ46UKEPIMN/
>>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>>
>>> MANAGER
>>>
>>> RHV DevOps
>>>
>>> EMEA VIRTUALIZATION R&D
>>>
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>> <https://red.ht/sig> TRIED. TESTED. TRUSTED.
>>> <https://redhat.com/trusted>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat NA
>>
>> <https://www.redhat.com/>
>>
>> gsher...@redhat.comIRC: gshereme
>> <https://red.ht/sig>
>>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/SLAEXLNK2R7SKCKJYWLPBNBI7LRT7AGT/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7C7EJTO2KEFXBL4SI7MKAARST5ME6KEF/


[ovirt-devel] Re: ovirt.org documentation refresh

2018-07-29 Thread Barak Korren
On 29 July 2018 at 21:25, Greg Sheremeta  wrote:

> Hi all,
>
> Brian Proffitt has kindly offered to refresh the documentation on
> ovirt.org to version 4.2 (it's currently a mix of 4.0 + whatever people
> have updated since.)
>
> Other than the main 'official' documentation, I noticed that there is a
> lot of documentation on separate sites. For example, imageio, mom, all of
> the OST stuff, etc. For now there are no plans for any of this extra stuff,
> but in the future it should probably all look the same.
>
> Can you share ovirt documentation sites that you know of?
>

http://ovirt-infra-docs.readthedocs.io/en/latest/


> Best wishes,
> Greg
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> <https://www.redhat.com/>
>
> gsher...@redhat.com    IRC: gshereme
> <https://red.ht/sig>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/GJVWN5QAKJFLEQ4IARV3NJN7I35OCHD6/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XFFSC2L56TTKPJXTVPKNONEQWVKUHMQZ/


[ovirt-devel] [ATT] OST 'master' is now broken because of a change in CentOS - DEVELOPMENT BLOCKED

2018-07-28 Thread Barak Korren
The openstack-java-glance-* packages in CentOS have been updated in a way
that is now incompatible with how engine had been using Glance.

This in turn causes any OST run to break ATM, which means no patches are
currently making it past OST and CQ and into the 'tested' and nightly
snapshot repositories.

So far we've only seen this affect 'master', but since the change was made
in CentOS, there is no reason to believe it will not break other versions as
well.

A fix to engine to make it compatible with the new package has been posted
here:
https://gerrit.ovirt.org/c/93352/

Additionally, an issue in OST made this harder to diagnose than it
should have been; it was fixed here:
https://gerrit.ovirt.org/c/93350/

Actions required:

   1. Please avoid merging any unrelated patches until the issue is fixed.
   2. If you've merged any patches since Friday morning, please note that
   they were probably removed from the change queue as failed changes, and
   will need to be resubmitted by either merging a newer patch to the
   relevant project, commenting "ci re-merge please" on the latest merged
   patch in Gerrit, or re-sending the webhook event from GitHub.
   3. If you can, please help with reviewing, merging and back-porting the
   patches above to speed up the resolution of this issue.

Here is a list of projects for which we've seen patches get dropped over the
weekend:

   - ovirt-provider-ovn
   - ovirt-engine
   - ovirt-ansible-vm-infra
   - vdsm

Tracker ticket for this issue:
https://ovirt-jira.atlassian.net/browse/OVIRT-2375


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4C662H7SU2JHP4GPVLAGIIUOJUWTQ3EB/


[ovirt-devel] Re: OST: Cluster compatibility testing

2018-07-26 Thread Barak Korren
On 26 July 2018 at 14:58, Milan Zamazal  wrote:

> Hi, CI now runs OST basic-suite-master periodically with data center and
> cluster versions different from the default one on master.  That tests
> changes in master for breakages when run in older version compatibility
> modes.  Any failures are reported to infra.
>
> If you make an OST test known not to run on older cluster versions
> (e.g. < 4.1), you should mark the test with a decorator such as
>
>   @versioning.require_version(4, 1)
>
> You can also distinguish cluster version dependent code inside tests by
> calling
>
>   versioning.cluster_version_ok(4, 1)
>
> See examples in basic-suite-master.
>
> You can run basic-suite-master with a different data center and cluster
> version manually on your computer by setting OST_DC_VERSION environment
> variable, e.g.:
>
>   export OST_DC_VERSION=4.1
>   ./run_suite.sh basic-suite-master
>
> Barak, it's currently possible to request OST run on Gerrit patches.
> I was asked whether it is also possible to request an OST run with a
> non-default cluster version(s).  Is it or not?
>

If I understand correctly, we ended up making different suites for
different cluster versions, so it's just a matter of making them available
in the drop-down list in the manual job.
Here is a patch to do that:

 https://gerrit.ovirt.org/93342

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/F5LTNYLOYOVAI4IBQ3C4AJ5GAFOOIWX6/


[ovirt-devel] s390x (mainframe) CI node is currently not available

2018-07-25 Thread Barak Korren
Hi all,

Unfortunately, the s390x build host we've been using went offline a
couple of days ago, and the ETA for having it back up is unknown.

The implication of having the host down is that projects that build on
s390x have their builds stuck forever, waiting for the s390x node to
become available.

Since the s390x builds are missing, builds for other platforms, while
successful, do not get passed to the change queue and as a result are not
being released to the nightly repos.

As a workaround for the current situation, we need to disable the s390x
builds. Here is a patch that does this for vdsm:
https://gerrit.ovirt.org/c/93316/

For V2 projects the change should be done in the project's own repo; for
example, here is how we did it for the CI jobs of the 'jenkins' repo itself:
https://gerrit.ovirt.org/c/93288/


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DRIZMHAYBKIPEBHZTWEBOXZ2PMSP33N4/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 23-07-2018 ] [ 008_basic_ui_sanity.start_grid ]

2018-07-25 Thread Barak Korren
 lago-basic-suite-master-host-1 systemd: Starting MOM
>> instance configured for VDSM purposes...
>> ​
>>
>>
>>
>> ​The error in 008_basic_ui_sanity.py.junit.xml probably means that the
>> docker executable was not found on the machine running the test. Can it
>> be the cause of the failure?
>>
>> >message="[Errno 2] No such file or directory
>> >> begin captured stdout <<
>> -
>>executing shell: docker ps
>>- >> end captured stdout << ---
>>
>> File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>> testMethod()
>> File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
>> runTest self.test(*self.arg)
>> File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129,
>> in wrapped_test test()
>> File "/home/jenkins/workspace/ovirt-master_change-queue-
>> tester/ovirt-system-tests/basic-suite-master/test-
>> scenarios/008_basic_ui_sanity.py", line 169, in start_grid
>> _docker_cleanup()
>> File "/home/jenkins/workspace/ovirt-master_change-queue-
>> tester/ovirt-system-tests/basic-suite-master/test-
>> scenarios/008_basic_ui_sanity.py", line 136, in _docker_cleanup
>> _shell(["docker", "ps"])
>> File "/home/jenkins/workspace/ovirt-master_change-queue-
>> tester/ovirt-system-tests/basic-suite-master/test-
>> scenarios/008_basic_ui_sanity.py", line 119, in _shell
>> stderr=subprocess.PIPE)
>> File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
>> errread, errwrite)
>> File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
>> raise child_exception [Errno 2] No such file or directory ​
>>
>>
> Yep, looks like docker isn't installed. And yes that would fail it. Any
> recent changes? I know Gal is working on some containerization of this [1],
> but I don't know what's been merged.
>

It seems that there was a short period of time last week when docker was
not available in CentOS, and while our mirror server should protect
against this type of issue, we experienced some problems with it (basically
it ran out of disk space), so the jobs failed over to the upstream CentOS
repos, and the Docker installation in mock failed.
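
In general, this kind of breakage would be much easier to spot if the suite
failed fast on missing binaries. A hypothetical guard (not something we have
in place today) could look like:

    # Hypothetical pre-flight check for the UI sanity suite: fail early
    # with a clear message instead of a bare "[Errno 2]" from subprocess.
    command -v docker >/dev/null 2>&1 || {
        echo "ERROR: docker executable not found on this slave" >&2
        exit 1
    }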



> [1] Change I5af15dce: Adjust UI test to run inside STDCI container |
> https://gerrit.ovirt.org/#/c/93074/
>
>
>>
>> ​Andrej​
>>
>>
>>>>
>>>>
>>>> On Mon, Jul 23, 2018 at 10:31 AM, oVirt Jenkins 
>>>> wrote:
>>>>
>>>>> Change 92882,9 (ovirt-engine) is probably the reason behind recent
>>>>> system test
>>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>>
>>>>> This change had been removed from the testing queue. Artifacts build
>>>>> from this
>>>>> change will not be released until it is fixed.
>>>>>
>>>>> For further details about the change see:
>>>>> https://gerrit.ovirt.org/#/c/92882/9
>>>>>
>>>>> For failed test results see:
>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8764/
>>>>> ___
>>>>> Infra mailing list -- in...@ovirt.org
>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct: https://www.ovirt.org/
>>>>> community/about/community-guidelines/
>>>>> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
>>>>> message/6LYYXSGM4LQSRVSYY3IJEIE64LW27TJM/
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Martin Perina
>>> Associate Manager, Software Engineering
>>> Red Hat Czech s.r.o.
>>> ___
>>> Infra mailing list -- in...@ovirt.org
>>> To unsubscribe send an email to infra-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>>> guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
>>> message/KXBI2VR5TXH2FRBOS3ASV3YPOTJZ52RB/
>>>
>> ___
>> Infra mailing list -- in...@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
>> message/AD5NAECNGUW4LYJFC5C67TP4SMAY3ZW2/
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> <https://www.redhat.com/>
>
> gsher...@redhat.com    IRC: gshereme
> <https://red.ht/sig>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/W6BR572DZKYDD6F7E2OBX2725FLLEMXW/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JBYHKWJKHIVDQHJNKR4YMTGO2TW7OQQQ/


[ovirt-devel] Re: Build failed - TestPyWatch.test_kill_grandkids() did someone encounter this failure?

2018-07-15 Thread Barak Korren
On 15 July 2018 at 20:11, Nir Soffer  wrote:

> On Sun, Jul 15, 2018 at 6:51 PM Dan Kenigsberg  wrote:
>
>> May I repeat Nir's question: does it fail consistently?
>> And are you rebased on master?
>>
>> Undefined command: "py-bt"
>>
>>
>> Is a known xfail for Fedora
>>
>>
>> On Sun, Jul 15, 2018, 17:49 Eyal Shenitzky  wrote:
>>
>>> failed when running the CI for the patch -
>>> https://gerrit.ovirt.org/#/c/93028/
>>>
>>> link -
>>> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/24344/
>>>
>>>
> According to the build log:
>
> *11:58:34* make[1]: Leaving directory 
> `/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm'*11:58:34* + 
> debuginfo-install -y python*11:58:39* Could not find debuginfo for main pkg: 
> python-2.7.5-69.el7_5.x86_64*11:58:41* Could not find debuginfo pkg for 
> dependency package python-libs-2.7.5-69.el7_5.x86_64*11:58:47* *11:58:47* 
> *11:58:47*
>   Package   Arch   VersionRepository
> Size*11:58:47* 
> *11:58:47*
>  Installing:*11:58:47*  glibc-debuginfo   x86_64 2.17-222.el7 
>   centos-debuginfo 9.5 M*11:58:47*  yum-plugin-auto-update-debug-info noarch 
> 1.1.31-45.el7  centos-base-el7   27 k*11:58:47* Installing for 
> dependencies:*11:58:47*  glibc-debuginfo-commonx86_64 
> 2.17-222.el7   centos-debuginfo 9.6 M*11:58:47* *11:58:47* Transaction 
> Summary*11:58:47* 
> *11:58:47*
>  Install  2 Packages (+1 Dependent package)
>
>
> We could not find the python debuginfo package, which explains why py-bt
> was missing.
>
> The build should fail in this case, but "yum install -y" does not fail
> when a package is missing.
>
> Barak, can you suggest a way to fail the build if a package is missing?
>

I suppose adding something like the following to the script would do the
trick:

rpm -q $PACKAGE || exit 1
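
Spelled out a bit more, the check could look something like this (a minimal
sketch - the package list is hypothetical and would need to match whatever
the debuginfo step is expected to install):

    # Verify the packages actually landed; 'rpm -q' exits non-zero for
    # anything that is not installed.
    for pkg in python-debuginfo python-libs-debuginfo; do
        rpm -q "$pkg" || { echo "missing required package: $pkg" >&2; exit 1; }
    done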



>
>
>> On Sun, Jul 15, 2018 at 4:36 PM, Nir Soffer  wrote:
>>>
>>>>
>>>> On Sun, Jul 15, 2018 at 3:43 PM Eyal Shenitzky 
>>>> wrote:
>>>>
>>>>> Did someone encounter this failure?
>>>>>
>>>>> *12:05:19* 0.00s teardown 
>>>>> tests/pywatch_test.py::TestPyWatch::test_kill_grandkids*12:05:19* 
>>>>> === FAILURES 
>>>>> ===*12:05:19* __ 
>>>>> TestPyWatch.test_timeout_backtrace __*12:05:19* 
>>>>> *12:05:19* self = >>>> 0x7f0f730219d0>*12:05:19* *12:05:19* @pytest.mark.xfail(on_fedora(), 
>>>>> reason="py-bt is broken on Fedora 27")*12:05:19* def 
>>>>> test_timeout_backtrace(self):*12:05:19* script = '''*12:05:19*
>>>>>  import time*12:05:19* *12:05:19* def outer():*12:05:19* 
>>>>> inner()*12:05:19* *12:05:19* def inner():*12:05:19* 
>>>>> time.sleep(10)*12:05:19* *12:05:19* outer()*12:05:19* 
>>>>> '''*12:05:19* rc, out, err = exec_cmd(['./py-watch', '0.1', 
>>>>> 'python', '-c', script])*12:05:19* >   assert b'in inner ()' in 
>>>>> out*12:05:19* E   AssertionError: assert 'in inner ()' in 
>>>>> '=\n= 
>>>>> Watched process timed out   ... Terminating 
>>>>> watched process
>>>>> =\n=\n'*12:05:19*
>>>>>  *12:05:19* pywatch_test.py:68: AssertionError*12:05:19* 
>>>>> -- Captured log call 
>>>>> ---*12:05:19* cmdutils.py151 
>>>>> DEBUG./py-watch 0.1 python -c '*12:05:19* import time*12:05:19* 
>>>>> *12:05:19* def outer():*12:05:19* inner()*12:05:19* *12:05:19* def 
>>>>> inner():*12:05:19* time.sleep(10)*12:05:19* *12:05:19* 
>>>>> outer()*12:05:19* ' (cwd None)*12:05:19* cmdutils.py159 
>>>>> DEBUGFAILED:  = 'Missing separate debuginfo for 
>>>>> /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/.tox/lib-py27/b

[ovirt-devel] Re: ovirt-system-tests_he-node-ng-suite-master is failing on not enough memory to run VMs

2018-07-08 Thread Barak Korren
On 6 July 2018 at 11:57, Sandro Bonazzola  wrote:

>
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-node-ng-suite-master/165/testReport/(root)/004_basic_sanity/vm_run/
>
> Cannot run VM. There is no host that satisfies current scheduling
> constraints. See below for details:, The host 
> lago-he-node-ng-suite-master-host-0
> did not satisfy internal filter Memory because its available memory is too
> low (656 MB) to run the VM.
>
>

This sounds like something that needs to be fixed in the suite's
LagoInitFile.



> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/TDKLML6YDBATFHS232GFJF7QVRTWUH74/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/S6F3JGB3C5OSKNWMM3THXMIR2XYYUOGO/


[ovirt-devel] Re: [CQ Failure Report] [oVirt Master ovirt-engine-wildfly] [3/7/18] [ovirt-system-tests_basic-suite-master_deploy-scripts_setup_engine.sh]

2018-07-03 Thread Barak Korren
On 4 July 2018 at 08:41, Martin Perina  wrote:

> Hi,
>
> I've just checked [1] and [2] repos and ovirt-engine-wildfly*-13 packages
> are not yet available on those repos. Any luck with passing those packages
> through CQ?
>

Unfortunately, an unrelated regression caused the dependent changes to end
up in different batches during the bisection search, and they were dropped
again.

Finally, here is a test running with the two of them together and nothing
else:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8563/



>
> Thanks
>
> Martin
>
> [1] https://plain.resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/
> [2] https://plain.resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/
>
>
> On Tue, Jul 3, 2018 at 12:30 PM, Barak Korren  wrote:
>
>>
>>
>> On 3 July 2018 at 12:26, Martin Perina  wrote:
>>
>>>
>>>
>>> On Tue, Jul 3, 2018 at 11:03 AM, Ehud Yonasi  wrote:
>>>
>>>> Suspected patch:
>>>> https://gerrit.ovirt.org/#/c/91555/
>>>>
>>>> Link to job:
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8542/
>>>>
>>>> Link to all logs:
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>>>> r/8542/artifact/exported-artifacts/
>>>>
>>>> Relevant error snippet:
>>>>
>>>> Error: ovirt-engine-wildfly-overlay conflicts with 
>>>> ovirt-engine-wildfly-13.0.0-1.el7.x86_64
>>>>
>>>>
>>> OK, I thought that I could do a stepped approach:
>>>
>>> 1. Push ovirt-engine-wildfly-13 RPM into the repos
>>> 2. Push ovirt-engine-wildfly-overlay-13 RPM, which requires
>>> ovirt-engine-wildfly-13, into repos
>>>
>>> But it seems that due to the serial nature of our CQ it's not possible, so
>>> can we pass both patches [1] and [2] through CQ at once?
>>>
>>>
>> Ehud will handle pushing them through the CQ together (it's a matter of
>> timing things so they both get added to the CQ while it is busy with other
>> things).
>>
>> I wonder - could we merge those projects together to avoid this situation
>> in the future? What is the relationship between 'ovirt-engine-wildfly'
>> and the Wildfly upstream?
>>
>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
>
>
> --
> Martin Perina
> Associate Manager, Software Engineering
> Red Hat Czech s.r.o.
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/N2FLAHEF6RHSAHVNUZTSIY5WDCZOIRVO/


[ovirt-devel] Re: [CQ Failure Report] [oVirt Master ovirt-engine-wildfly] [3/7/18] [ovirt-system-tests_basic-suite-master_deploy-scripts_setup_engine.sh]

2018-07-03 Thread Barak Korren
Since these issues seem rare from my POV, we have not prioritized the effort
to create a syntax for specifying inter-change dependencies, even though the
CQ core algorithm was written with that in mind.

So, for the time being, you need to ask us to resolve this when it arises.

On 3 July 2018 at 13:34, Greg Sheremeta  wrote:

> I've also had this same need with projects that can't be combined, so I
> think we need an official solution. Is it just to ask you? Is there no way
> for us to do it?
>
> Greg
>
> On Tue, Jul 3, 2018, 6:32 AM Barak Korren  wrote:
>
>>
>>
>> On 3 July 2018 at 12:26, Martin Perina  wrote:
>>
>>>
>>>
>>> On Tue, Jul 3, 2018 at 11:03 AM, Ehud Yonasi  wrote:
>>>
>>>> Suspected patch:
>>>> https://gerrit.ovirt.org/#/c/91555/
>>>>
>>>> Link to job:
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8542/
>>>>
>>>> Link to all logs:
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
>>>> tester/8542/artifact/exported-artifacts/
>>>>
>>>> Relevant error snippet:
>>>>
>>>> Error: ovirt-engine-wildfly-overlay conflicts with 
>>>> ovirt-engine-wildfly-13.0.0-1.el7.x86_64
>>>>
>>>>
>>> OK, I thought that I could do a stepped approach:
>>>
>>> 1. Push ovirt-engine-wildfly-13 RPM into the repos
>>> 2. Push ovirt-engine-wildfly-overlay-13 RPM, which requires
>>> ovirt-engine-wildfly-13, into repos
>>>
>>> But it seems that due to the serial nature of our CQ it's not possible, so
>>> can we pass both patches [1] and [2] through CQ at once?
>>>
>>>
>> Ehud will handle pushing them through the CQ together (it's a matter of
>> timing things so they both get added to the CQ while it is busy with other
>> things).
>>
>> I wonder - could we merge those projects together to avoid this situation
>> in the future? What is the relationship between 'ovirt-engine-wildfly'
>> and the Wildfly upstream?
>>
>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>> ___
>> Infra mailing list -- in...@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
>> message/MJIFEI4U3M4SJHEP7GO4CJLHHYO2SOEI/
>>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AXYGHELGSGGMIK3IQBLBB7HGXMTJAY4A/


[ovirt-devel] Re: [CQ Failure Report] [oVirt Master ovirt-engine-wildfly] [3/7/18] [ovirt-system-tests_basic-suite-master_deploy-scripts_setup_engine.sh]

2018-07-03 Thread Barak Korren
On 3 July 2018 at 12:26, Martin Perina  wrote:

>
>
> On Tue, Jul 3, 2018 at 11:03 AM, Ehud Yonasi  wrote:
>
>> Suspected patch:
>> https://gerrit.ovirt.org/#/c/91555/
>>
>> Link to job:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8542/
>>
>> Link to all logs:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>> r/8542/artifact/exported-artifacts/
>>
>> Relevant error snippet:
>>
>> Error: ovirt-engine-wildfly-overlay conflicts with 
>> ovirt-engine-wildfly-13.0.0-1.el7.x86_64
>>
>>
> OK, I thought that I could do a stepped approach:
>
> 1. Push ovirt-engine-wildfly-13 RPM into the repos
> 2. Push ovirt-engine-wildfly-overlay-13 RPM, which requires
> ovirt-engine-wildfly-13, into repos
>
> But it seems that due to the serial nature of our CQ it's not possible, so
> can we pass both patches [1] and [2] through CQ at once?
>
>
Ehud will handle pushing them through the CQ together (it's a matter of
timing things so they both get added to the CQ while it is busy with other
things).

I wonder - could we merge those projects together to avoid this situation
in the future? What is the relationship between 'ovirt-engine-wildfly' and
the Wildfly upstream?




-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MJIFEI4U3M4SJHEP7GO4CJLHHYO2SOEI/


[ovirt-devel] Re: Gerrit trying to set 3rd party cookies

2018-07-01 Thread Barak Korren
On 1 July 2018 at 15:41, Nir Soffer  wrote:

> After watching Sarah Bird's great talk about the terrifying web[1], I
> found that for
> some reason 3rd party cookies were enabled in my browser.
>
> After disabling them, I found that gerrit is using 3rd party cookies from
> gravatar.com.
> (see attached screenshot).
>
> Why do we allow 3rd parties like gravatar to set cookies?
>


We don't "allow" 3rd parties. For a 3rd party to be able to set cookies on
your site you need have some elements on your page that make the browser
pull content from them. In the case of Gravatar what we have are  tags
with "src" attributes that contain URLs that point to Gravatar and contain
one-way hashes of user email addresses. Those URLs resolve to the users
avatars if they registered their emails with Gravatar.
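
For illustration, such a URL is derived from the address roughly like this
(a sketch with a made-up address - Gravatar hashes the trimmed, lowercased
email with MD5):

    # Illustration only - the address below is made up.
    email='user@example.com'
    hash=$(printf '%s' "$email" | tr '[:upper:]' '[:lower:]' | md5sum | awk '{print $1}')
    echo "https://www.gravatar.com/avatar/${hash}"

Only the hash ever appears in the page source, never the address itself.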

This is just how Gravatar works - it's very simple and reliable; having it
work differently would require complex and fragile server-side code on our
side and would probably be prone to more security issues than the current
system.

The only 3rd party we currently engage is Gravatar, and I've no reason to
believe they engage in any sort of tracking. The maintainers of Gravatar are
also the maintainers of WordPress, one of the bigger open-source
poster-child projects, which is all about people hosting their own stuff
rather than catering to the requirements of proprietary gate-keepers like
Facebook and GitHub (now Microsoft...).

Bottom line, I have strong reason to believe this is a false alarm.


>
> Can we use gravatar without setting cookies?
>

This looks like a simple session cookie; try logging out of your account on
Gravatar and see if it vanishes...


> [image: Screenshot from 2018-07-01 15-31-37.png]
> [1] https://il.pycon.org/2018/schedule/presentation/18/
>
> Nir
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/H5RSJINV7WKJMWGF7NJ5SJZJJDP7MJZS/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QRS4XQ4KTGEBCFFTD62ZPMIIGSYQUDLV/


[ovirt-devel] Re: Deprecating direct RPM downloads from Jenkins

2018-06-19 Thread Barak Korren
On 19 June 2018 at 17:37, Michal Skrivanek 
wrote:

>
>
> On 19 Jun 2018, at 07:16, Barak Korren  wrote:
>
> Hi there,
>
>
> TL;DR: Is your build/CI/other process consuming RPMs directly from
> Jenkins? Could it be changed to consume from other available places?
>
>
>
> In STDCI V1 we supported obtaining the latest CI build for a particular
> project in a particular branch on a particular platform directly from
> Jenkins, by using the "latestSuccessfulBuild" dynamic link that Jenkins
> generates.
>
> This was possible in STDCI V1 because we had a one-to-one correlation
> between Jenkins jobs and project/branch/platform combinations. That is no
> longer the case in STDCI V2, where each project gets just two fixed jobs
> that adjust themselves automatically to run the needed functionality.
>
> We could implement some equivalent functionality in STDCI V2 by, for
> example, uploading builds to predictable locations on an artifact server,
> but that would take a non-trivial amount of work, which leads us to the
> question of whether this functionality is really needed.
>
>
> does this affect the repo/rpms created as part of a “ci please build” run
> in any way?
>

The URL for the RPM will be different, but as long as you use the full
job/build URL it will work. In short, no.

This only affects you if you need some kind of a 'meta' URL to find the
build from the latest _merged_ commit.


>
>
> There are a couple of alternative locations to get recently built
> packages from:
> - The 'tested' repo which contains all the packages that passed CQ/OST
> - The 'snapshot' repo which contains a nightly snapshot of 'tested'.
>
> So given the options above, if you have a build/CI/other process that
> currently consumes builds from Jenkins, could it be changed to consume from
> the other available locations?
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/A5LFDPPFWXTEEZLZFSBGZVJNEIP3H7HC/
>
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/GX6GRYIVJ2WE4WGBWFMTGPBT3E4SMTYB/


[ovirt-devel] Deprecating direct RPM downloads from Jenkins

2018-06-18 Thread Barak Korren
Hi there,


TL;DR: Is your build/CI/other process consuming RPMs directly from Jenkins?
Could it be changed to consume from other available places?


In STDCI V1 we supported obtaining the latest CI build for a particular
project in a particular branch on a particular platform directly from
Jenkins, by using the "latestSuccessfulBuild" dynamic link that Jenkins
generates.

This was possible in STDCI V1 because we had a one-to-one correlation between
Jenkins jobs and project/branch/platform combinations. That is no longer
the case in STDCI V2, where each project gets just two fixed jobs that
adjust themselves automatically to run the needed functionality.

We could implement some equivalent functionality in STDCI V2 by, for example,
uploading builds to predictable locations on an artifact server, but
that would take a non-trivial amount of work, which leads us to the
question of whether this functionality is really needed.

There are a couple of alternative locations to get recently built packages
from:
- The 'tested' repo which contains all the packages that passed CQ/OST
- The 'snapshot' repo which contains a nightly snapshot of 'tested'.

So given the options above, if you have a build/CI/other process that
currently consumes builds from Jenkins, could it be changed to consume from
the other available locations?
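
For example, switching a consumer from Jenkins URLs to the 'tested' repo
could be as simple as the following sketch (the el7/master repo path is the
one cited elsewhere on this list - adjust distro and branch to your case):

    # Add the 'tested' repo and install from it instead of fetching RPMs
    # from a Jenkins job URL.
    yum-config-manager --add-repo \
        https://plain.resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/
    yum install -y ovirt-engine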

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/A5LFDPPFWXTEEZLZFSBGZVJNEIP3H7HC/


[ovirt-devel] Re: CI failing: /usr/sbin/groupadd -g 1000 mock

2018-06-18 Thread Barak Korren
On 17 June 2018 at 23:03, Yuval Turgeman  wrote:

> IIRC we hit this issue before when trying to install a mock rpm of a
> specific version inside an env that was created with a different version of
> mock (group names have changed from mock to mockbuild or vice versa)
>
> On Jun 17, 2018 22:53, "Nir Soffer"  wrote:
>
> 2 Fedora builds failed recently with this error:
>
> 19:40:34 ERROR: Command failed: 19:40:34 # /usr/sbin/groupadd -g 1000 mock
>
> See latest builds of this patch:
> https://gerrit.ovirt.org/#/c/91834/
>
> I merged the patch regardless, but please take a look.
>
>

Looks like an updated mock was pushed to FC28 and introduced some
breakage.
We should have had package gating set up for FC28 before introducing the
FC28 slaves.

Here is a tracker ticket:
https://ovirt-jira.atlassian.net/browse/OVIRT-2208

We will probably downgrade mock back to 1.4.10 at this point and do an
orderly upgrade later.
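
The downgrade itself should be something like (assuming the 1.4.10 build is
still available in the configured repos):

    # Roll the slaves back to the last known-good mock version.
    dnf downgrade -y mock-1.4.10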

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VC3C5OCTAAWMUDLBYQEMSN5JJINLVKT6/


[ovirt-devel] Re: ovirt-ansible-disaster-recovery build jobs not triggered

2018-06-10 Thread Barak Korren
On 8 June 2018 at 12:47, Sandro Bonazzola  wrote:

> https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-disaster-recovery_standard-on-ghpush/196/artifact/
>
> shows no build job triggered. Can you please fix it?
>

As Microsoft engineers say - this is not a bug, it's a feature.

The event triggering this was a tag push, which is not actually supported
by STDCI ATM.

To rebuild this commit we need to find the merge event for it on GitHub and
re-fire it.
I've tried to do this but have had no success so far, since the branches on
that repo look a little strange to me.



> Thanks,
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
>
> ___
> Infra mailing list -- in...@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
> message/WNY5ZCMV35GFASFAFSZ4LW23AYV6GERO/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KJD4RT2MM5L74ALVUWNDLWYSJFIPB6NQ/


[ovirt-devel] Re: ovirt-ansible-manageiq build jobs not triggered

2018-06-10 Thread Barak Korren
On 8 June 2018 at 12:53, Sandro Bonazzola  wrote:

>
> https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-manageiq_standard-on-ghpush/lastSuccessfulBuild/artifact/
>
> shows no build job triggered. Can you please fix it?
> Thanks,
>


If I'm not mistaken, the event that triggered the job above was the pushing
of an annotated tag.

We do not currently support that kind of event, and the fact that it
triggers anything ATM is essentially a bug.

We do have a build of the same commit from the time it was pushed:
https://jenkins.ovirt.org/blue/organizations/jenkins/oVirt_ovirt-ansible-manageiq_standard-on-ghpush/detail/oVirt_ovirt-ansible-manageiq_standard-on-ghpush/63/pipeline/57

If you want to rebuild the same commit because it is now tagged, you can go
into GitHub and re-send the hook event that was sent when it was pushed.


>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
>
> ___
> Infra mailing list -- in...@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/in...@ovirt.org/
> message/H4CTGT5UOGTKBE4EQH4HVDHMP5QISYFZ/
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4TP4JUGGKVTFGCZKS5RLLOJ447HWRX74/


[ovirt-devel] Re: post-merge CI triggering from GitHub

2018-05-31 Thread Barak Korren
On 31 May 2018 at 15:21, Barak Korren  wrote:

> Its broken right now, we're investigating the issue.
>
>
We now have a fix for this in place:
https://gerrit.ovirt.org/#/c/91842/



>
> Tracker ticket:
> https://ovirt-jira.atlassian.net/browse/OVIRT-2071
>
> Thanks Tomas Golembiovsky for reporting!
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K3YNXHU2QB6UUC2YZK2VUYCI77FM362Q/


[ovirt-devel] post-merge CI triggering from GitHub

2018-05-31 Thread Barak Korren
Its broken right now, we're investigating the issue.

Tracker ticket:
https://ovirt-jira.atlassian.net/browse/OVIRT-2071

Thanks Tomas Golembiovsky for reporting!

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2RISWEQ3P7HHTKHCTRUL7HGMCEHVLJBQ/


[ovirt-devel] Re: [VDSM] travis tests fail consistently since Apr 14

2018-05-31 Thread Barak Korren
On 31 May 2018 at 13:00, Milan Zamazal  wrote:

> Nir Soffer  writes:
>
> > I know that the new oVirt CI solved some of the issues, but nobody has
> > sent patches to convert vdsm to the new standard yet.
>
> I asked Barak about possible Vdsm conversion at his deep dive and he
> responded that the new CI may need more real-use testing before projects
> such as Vdsm switch.  It's been a couple of weeks since then, and if there
> are no problems with projects that already use it (such as oVirt system
> tests), maybe we should start working on a conversion patch?
>

Let me clarify what I said back then a bit - since engine and VDSM are the
two big flagship projects, I want them to be the last projects to be
converted. So it's not a matter of time, it's a matter of converting all the
other projects first.

Now the thing is, we will not do this on our own - maintainers need to be
in the loop as we move projects. So while we do want to be proactive about
this, given the other task load we have, things work best when the
maintainers actively approach us, as the ovirt-provider-ovn maintainers did.

So please, if you're a small-ish project maintainer, shoot an email to
infra-supp...@ovirt.org asking for your project to be covered, and then
monitor the Jira ticket. The actual setup takes just a few minutes, and we
will use the Jira ticket to update you on progress and relay any
project-specific questions you may have.



> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/BI2TZYNRFSYFZEKQJHZMDV5AKY2DF5QZ/
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KDEHVDUSN6RSRM23IS4A2A4H5AOJGDMF/


[ovirt-devel] Re: CI not responding to GitHub web hooks

2018-05-31 Thread Barak Korren
On 30 May 2018 at 19:54, Tomáš Golembiovský  wrote:

> Hi,
>
> it seems CI is not responding to GitHub web hooks... is it congested or
> broken?
>
>
There is no known general issue.

Please be more specific: which job? Which project? Which PR?


> Tomas
>
> --
> Tomáš Golembiovský 
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/
> message/SGV223RQJLXJQUFUKANGQMTVO4ITZZOR/
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/C424DF7BMFXIBVM6KWJBPHMQ7RTSRAER/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-30 Thread Barak Korren
On 30 May 2018 at 10:36, Martin Perina  wrote:

>
>
> On Wed, May 30, 2018 at 9:31 AM, Barak Korren  wrote:
>
>>
>>
>> On 30 May 2018 at 10:24, Martin Perina  wrote:
>>
>>>
>>>
>>> On Wed, May 30, 2018 at 8:13 AM, Barak Korren 
>>> wrote:
>>>
>>>>
>>>>
>>>> On 29 May 2018 at 22:29, Martin Perina  wrote:
>>>>
>>>>> Master revert patches [1], [2] merged, 4.2 revert patches [3], [4]
>>>>> waiting to be merged.
>>>>>
>>>>> We will repost patches to master tomorrow and will continue to
>>>>> investigate the mysterious host-deploy issue.
>>>>>
>>>>> Btw, upgrade-from-prev-release on master [5] currently fails with:
>>>>>
>>>>> 18:59:31 + cp 'ovirt-system-tests/upgrade-fr
>>>>> om-prevrelease-suite-master/*.repo' exported-artifacts
>>>>> 18:59:31 cp: cannot stat 'ovirt-system-tests/upgrade-fr
>>>>> om-prevrelease-suite-master/*.repo': No such file or directory
>>>>> 18:59:31 POST BUILD TASK : FAILURE
>>>>>
>>>>> So how can we test upgrade from 4.2 to master?
>>>>>
>>>>
>>>> This is not the real issue, the real issue is
>>>>
>>>> *00:00:19.190* /tmp/jenkins6944523151752956846.sh: line 4: 
>>>> ovirt-system-tests/upgrade-from-prevrelease-suite-master/extra_sources: No 
>>>> such file or directory
>>>>
>>>>
>>>>
>>>> This is happening because there is no 
>>>> 'upgrade-from-prevrelease-suite-master',
>>>> the suite to be used is 'upgrade-from-release-suite-master'.
>>>>
>>>
>>> Yes, but looking at [6] we are testing upgrade from 4.1 to master, is
>>> that true? If so, how can this work? We support upgrades only between
>>> directly consecutive versions, so it should not be possible to upgrade from
>>> 4.1 to master directly ...
>>>
>>
>>
>> Well, I wonder where the patch to change that is; it should have been
>> created when 4.2 went GA...
>>
>>
>>>
>>> So is this table in [7] valid?
>>>
>>> *Target oVirt version which will be tested.*
>>>
>>> ENGINE_VERSION   prev release   release
>>> master           4.2            master
>>> ---              4.1            4.2
>>> 4.1              ---            4.1
>>>
>>
>>
>> It looks messed up, I guess we'll need to 'git blame'...
>>
>
> Right, IMO the table should look like:
>
> *Target oVirt version which will be tested.*
>
> ENGINE_VERSION   prev release   release
> master           4.2            master
> 4.2              4.1            4.2
> 4.1              ---            4.1
>
>
Yeah, but we need OST to reflect that first.

In any case, the 4.2 'from prev release' suite seems to be doing the right
thing - so we still need to figure out how and why the issue discussed in
this thread is affecting it.




> And maybe even completely remove the last line enabling 4.1 upgrade from
> 4.1, as we are not going to release any 4.1 version ...
>
>
Yeah, all the 4.1 suites were dropped from OST already.


>
>
>>
>>>
>>>
>>>>
>>>>>
>>>>> Martin
>>>>>
>>>>>
>>>>> [1] https://gerrit.ovirt.org/91741
>>>>> [2] https://gerrit.ovirt.org/91742
>>>>> [3] https://gerrit.ovirt.org/91744
>>>>> [4] https://gerrit.ovirt.org/91745
>>>>> [5] https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ov
>>>>> irt-system-tests_manual/2758/console
>>>>>
>>>>
>>> ​[6] https://github.com/oVirt/ovirt-system-tests/blob/master/upgr
>>> ade-from-release-suite-master/pre-reposync-config.repo
>>> [7] https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ov
>>> irt-system-tests_manual/build?delay=0sec
>>>
>>> ​
>>>
>>>>
>>>>>
>>>>> On Tue, May 29, 2018 at 3:42 PM, Barak Korren 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 29 May 2018 at 16:30, Martin Perina  wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, May 29, 2018 at 3:12 PM, Dafna Ron  wrote:
>>>>>>>
>>>>>>>> Martin, do you have any updates? Please note that ovirt-engine has
>>>>>>>> been broken for a few days, so perhaps we should stop merging or 

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-30 Thread Barak Korren
On 30 May 2018 at 10:24, Martin Perina  wrote:

>
>
> On Wed, May 30, 2018 at 8:13 AM, Barak Korren  wrote:
>
>>
>>
>> On 29 May 2018 at 22:29, Martin Perina  wrote:
>>
>>> Master revert patches [1], [2] merged, 4.2 revert patches [3], [4]
>>> waiting to be merged.
>>>
>>> We will repost patches to master tomorrow and will continue to
>>> investigate mysterious host-deploy issue.
>>>
>>> Btw, upgrade-from-prev-release on master [5] currently fails with:
>>>
>>> 18:59:31 + cp 'ovirt-system-tests/upgrade-fr
>>> om-prevrelease-suite-master/*.repo' exported-artifacts
>>> 18:59:31 cp: cannot stat 'ovirt-system-tests/upgrade-fr
>>> om-prevrelease-suite-master/*.repo': No such file or directory
>>> 18:59:31 POST BUILD TASK : FAILURE
>>>
>>> So how can we test upgrade from 4.2 to master?
>>>
>>
>> This is not the real issue, the real issue is
>>
>> *00:00:19.190* /tmp/jenkins6944523151752956846.sh: line 4: 
>> ovirt-system-tests/upgrade-from-prevrelease-suite-master/extra_sources: No 
>> such file or directory
>>
>>
>>
>> This is happening because there is no 
>> 'upgrade-from-prevrelease-suite-master',
>> the suite to be used is 'upgrade-from-release-suite-master'.
>>
>
> Yes, but looking at [6] we are testing upgrade from 4.1 to master, is
> that true? If so, how can this work? We support upgrades only between
> directly consecutive versions, so it should not be possible to upgrade from
> 4.1 to master directly ...
>


Well, I wonder where the patch to change that is; it should have been
created when 4.2 went GA...


>
> So is this table in [7] valid?
> *Target oVirt version which will be tested.*
>
> ENGINE_VERSION   prev release   release
> master           4.2            master
> ---              4.1            4.2
> 4.1              ---            4.1
>


It looks messed up, I guess we'll need to 'git blame'...




>
>
>>
>>>
>>> Martin
>>>
>>>
>>> [1] https://gerrit.ovirt.org/91741
>>> [2] https://gerrit.ovirt.org/91742
>>> [3] https://gerrit.ovirt.org/91744
>>> [4] https://gerrit.ovirt.org/91745
>>> [5] https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ov
>>> irt-system-tests_manual/2758/console
>>>
>>
> ​[6] https://github.com/oVirt/ovirt-system-tests/blob/
> master/upgrade-from-release-suite-master/pre-reposync-config.repo
> [7] https://jenkins.ovirt.org/view/oVirt%20system%20tests/
> job/ovirt-system-tests_manual/build?delay=0sec
>
> ​
>
>>
>>>
>>> On Tue, May 29, 2018 at 3:42 PM, Barak Korren 
>>> wrote:
>>>
>>>>
>>>>
>>>> On 29 May 2018 at 16:30, Martin Perina  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, May 29, 2018 at 3:12 PM, Dafna Ron  wrote:
>>>>>
>>>>>> Martin, do you have any updates? please note that ovirt-engine has
>>>>>> been broken for a few days so perhaps we should stop merging or revert 
>>>>>> the
>>>>>> original change?
>>>>>>
>>>>>
>>>>> ​Still looking at it, here are partial results:
>>>>>
>>>>> 1. New host installation: never reproduced, 4.2 host is always
>>>>> installed fine on 4.2 engine
>>>>> 2. Upgrade - never reproduced, upgrade of both 4.1 engine and host to
>>>>> 4.2 was always successfull
>>>>> 3. Reinstallation - once it happened to me that during reinstallation
>>>>> the host remain stucked during Reinstallation and the whole​ 
>>>>> reinstallation
>>>>> failed due to timeout
>>>>> - that may be the issue which can be seen in CI, but so far I
>>>>> don't have reliable reproducer to be able to debug why host-deploy process
>>>>> on the host is stucked
>>>>>
>>>>
>>>> Did you try using OST locally? It reproduces consistently with the OST
>>>> upgrade suite. You can also use the manual job and pass a URL to any engine
>>>> build beyond the marked patch. But there you'll have the same issue as with
>>>> the CQ job, where you won't have logs...
>>>>
>>>> Note, the process that happens there is AFAIK:
>>>> 1. The oVirt 4.1 release is installed.
>>>> 2. engine-setup runs
>>>> 3. repos are changed to the master repo
>>>> 4. engine is upgraded
>>>> 5. bootstrap (including AddHost that fails

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-30 Thread Barak Korren
On 29 May 2018 at 22:29, Martin Perina  wrote:

> Master revert patches [1], [2] merged, 4.2 revert patches [3], [4] waiting
> to be merged.
>
> We will repost patches to master tomorrow and will continue to investigate
> the mysterious host-deploy issue.
>
> Btw, upgrade-from-prev-release on master [5] currently fails with:
>
> 18:59:31 + cp 
> 'ovirt-system-tests/upgrade-from-prevrelease-suite-master/*.repo'
> exported-artifacts
> 18:59:31 cp: cannot stat 'ovirt-system-tests/upgrade-
> from-prevrelease-suite-master/*.repo': No such file or directory
> 18:59:31 POST BUILD TASK : FAILURE
>
> So how can we test upgrade from 4.2 to master?
>

This is not the real issue, the real issue is

*00:00:19.190* /tmp/jenkins6944523151752956846.sh: line 4:
ovirt-system-tests/upgrade-from-prevrelease-suite-master/extra_sources:
No such file or directory



This is happening because there is no
'upgrade-from-prevrelease-suite-master', the suite to be used is
'upgrade-from-release-suite-master'.


>
> Martin
>
>
> [1] https://gerrit.ovirt.org/91741
> [2] https://gerrit.ovirt.org/91742
> [3] https://gerrit.ovirt.org/91744
> [4] https://gerrit.ovirt.org/91745
> [5] https://jenkins.ovirt.org/view/oVirt%20system%20tests/
> job/ovirt-system-tests_manual/2758/console
>
>
> On Tue, May 29, 2018 at 3:42 PM, Barak Korren  wrote:
>
>>
>>
>> On 29 May 2018 at 16:30, Martin Perina  wrote:
>>
>>>
>>>
>>> On Tue, May 29, 2018 at 3:12 PM, Dafna Ron  wrote:
>>>
>>>> Martin, do you have any updates? please note that ovirt-engine has been
>>>> broken for a few days so perhaps we should stop merging or revert the
>>>> original change?
>>>>
>>>
>>> ​Still looking at it, here are partial results:
>>>
>>> 1. New host installation: never reproduced, 4.2 host is always installed
>>> fine on 4.2 engine
>>> 2. Upgrade - never reproduced, upgrade of both 4.1 engine and host to
>>> 4.2 was always successful
>>> 3. Reinstallation - once it happened to me that during reinstallation
>>> the host remained stuck and the whole reinstallation
>>> failed due to a timeout
>>> - that may be the issue which can be seen in CI, but so far I
>>> don't have a reliable reproducer to be able to debug why the host-deploy
>>> process on the host is stuck
>>>
>>
>> Did you try using OST locally? It reproduces consistently with the OST
>> upgrade suite. You can also use the manual job and pass a URL to any engine
>> build beyond the marked patch. But there you'll have the same issue as with
>> the CQ job, where you won't have logs...
>>
>> Note, the process that happens there is AFAIK:
>> 1. The oVirt 4.1 release is installed.
>> 2. engine-setup runs
>> 3. repos are changed to the master repo
>> 4. engine is upgraded
>> 5. bootstrap (including the AddHost that fails) is carried out
>>
>>
>>>
>>>
>>>>
>>>> On Tue, May 29, 2018 at 1:26 PM, Piotr Kliczewski 
>>>> wrote:
>>>>
>>>>> +Martin
>>>>>
>>>>> He is working on it.
>>>>>
>>>>> Thanks,
>>>>> Piotr
>>>>>
>>>>> On Tue, May 29, 2018 at 2:22 PM, Dafna Ron  wrote:
>>>>>
>>>>>> Hi Piotr,
>>>>>>
>>>>>> Any update on this?
>>>>>>
>>>>>> Thanks.
>>>>>> Dafna
>>>>>>
>>>>>>
>>>>>> On Mon, May 28, 2018 at 10:59 AM, Piotr Kliczewski <
>>>>>> piotr.kliczew...@gmail.com> wrote:
>>>>>>
>>>>>>> On Mon, May 28, 2018 at 11:41 AM, Barak Korren 
>>>>>>> wrote:
>>>>>>> >
>>>>>>> >
>>>>>>> > On 28 May 2018 at 12:38, Piotr Kliczewski <
>>>>>>> piotr.kliczew...@gmail.com>
>>>>>>> > wrote:
>>>>>>> >>
>>>>>>> >> On Mon, May 28, 2018 at 10:57 AM, Barak Korren <
>>>>>>> bkor...@redhat.com> wrote:
>>>>>>> >> > Note: we're now seeing a very similar issue in the 4.2 branch
>>>>>>> as well
>>>>>>> >> > that
>>>>>>> >> > seems to have been introduced by the following patch:
>>>>>>> >>
>>>>>>> >> Can you point to specific job

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-29 Thread Barak Korren
On 29 May 2018 at 16:30, Martin Perina  wrote:

>
>
> On Tue, May 29, 2018 at 3:12 PM, Dafna Ron  wrote:
>
>> Martin, do you have any updates? Please note that ovirt-engine has been
>> broken for a few days, so perhaps we should stop merging or revert the
>> original change?
>>
>
> ​Still looking at it, here are partial results:
>
> 1. New host installation: never reproduced, 4.2 host is always installed
> fine on 4.2 engine
> 2. Upgrade - never reproduced, upgrade of both 4.1 engine and host to 4.2
> was always successful
> 3. Reinstallation - once it happened to me that during reinstallation the
> host remained stuck and the whole reinstallation
> failed due to a timeout
> - that may be the issue which can be seen in CI, but so far I don't
> have a reliable reproducer to be able to debug why the host-deploy process
> on the host is stuck
>

Did you try using OST locally? It reproduces consistently with the OST
upgrade suite. You can also use the manual job and pass a URL to any engine
build beyond the marked patch. But there you'll have the same issue as with
the CQ job, where you won't have logs...

Note, the process that happens there is AFAIK:
1. The oVirt 4.1 release is installed.
2. engine-setup runs
3. repos are changed to the master repo
4. engine is upgraded
5. bootstrap (including the AddHost that fails) is carried out


>
>
>>
>> On Tue, May 29, 2018 at 1:26 PM, Piotr Kliczewski 
>> wrote:
>>
>>> +Martin
>>>
>>> He is working on it.
>>>
>>> Thanks,
>>> Piotr
>>>
>>> On Tue, May 29, 2018 at 2:22 PM, Dafna Ron  wrote:
>>>
>>>> Hi Piotr,
>>>>
>>>> Any update on this?
>>>>
>>>> Thanks.
>>>> Dafna
>>>>
>>>>
>>>> On Mon, May 28, 2018 at 10:59 AM, Piotr Kliczewski <
>>>> piotr.kliczew...@gmail.com> wrote:
>>>>
>>>>> On Mon, May 28, 2018 at 11:41 AM, Barak Korren 
>>>>> wrote:
>>>>> >
>>>>> >
>>>>> > On 28 May 2018 at 12:38, Piotr Kliczewski <
>>>>> piotr.kliczew...@gmail.com>
>>>>> > wrote:
>>>>> >>
>>>>> >> On Mon, May 28, 2018 at 10:57 AM, Barak Korren 
>>>>> wrote:
>>>>> >> > Note: we're now seeing a very similar issue in the 4.2 branch as
>>>>> well
>>>>> >> > that
>>>>> >> > seems to have been introduced by the following patch:
>>>>> >>
>>>>> >> Can you point to specific job so we could take a look at the logs?
>>>>> >
>>>>> >
>>>>> > Whoops, sorry, here:
>>>>> > http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2034/
>>>>> >
>>>>>
>>>>> Looks like the same issue:
>>>>>
>>>>> 2018-05-28 03:41:03,606-04 ERROR
>>>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>>>> (EE-ManagedThreadFactory-engine-Thread-1) [1244c90f] SSH error running
>>>>> command root@lago-upgrade-from-prevrelease-suite-4-2-host-0:'umask
>>>>> 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t
>>>>> ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null
>>>>> 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
>>>>> --warning=no-timestamp -C "${MYTMP}" -x &&
>>>>> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
>>>>> DIALOG/customization=bool:True': TimeLimitExceededException: SSH
>>>>> session timeout host
>>>>> 'root@lago-upgrade-from-prevrelease-suite-4-2-host-0'
>>>>> 2018-05-28 03:41:03,606-04 ERROR
>>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy)
>>>>> [1244c90f] Error during deploy dialog
>>>>> 2018-05-28 03:41:03,611-04 ERROR
>>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>>>> (EE-ManagedThreadFactory-engine-Thread-1) [1244c90f] Timeout during
>>>>> host lago-upgrade-from-prevrelease-suite-4-2-host-0 install: SSH
>>>>> session timeout host
>>>>> 'root@lago-upgrade-from-prevrelease-suite-4-2-host-0'
>>>>>
>>>>> >>
>>>>> >>
>>>>> >> >
>>>>> >> > https://gerrit.ovirt.org/c/

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-28 Thread Barak Korren
On 28 May 2018 at 12:38, Piotr Kliczewski <piotr.kliczew...@gmail.com>
wrote:

> On Mon, May 28, 2018 at 10:57 AM, Barak Korren <bkor...@redhat.com> wrote:
> > Note: we're now seeing a very similar issue in the 4.2 branch as well
> that
> > seems to have been introduced by the following patch:
>
> Can you point to a specific job so we could take a look at the logs?
>

Whoops, sorry, here:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2034/


>
> >
> > https://gerrit.ovirt.org/c/91638/2 - core: Enable only strong ciphers
> for
> > 4.2 hosts
> >
> > On 28 May 2018 at 10:26, Barak Korren <bkor...@redhat.com> wrote:
> >>
> >>
> >>
> >> On 28 May 2018 at 10:19, Martin Perina <mper...@redhat.com> wrote:
> >>>
> >>>
> >>>
> >>> On Mon, May 28, 2018 at 9:00 AM, Piotr Kliczewski <pklic...@redhat.com
> >
> >>> wrote:
> >>>>
> >>>> Simone,
> >>>>
> >>>> What do you think about this failure?
> >>>>
> >>>> Thanks,
> >>>> Piotr
> >>>>
> >>>> On Mon, May 28, 2018 at 7:12 AM, Barak Korren <bkor...@redhat.com>
> >>>> wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 27 May 2018 at 14:59, Piotr Kliczewski <pklic...@redhat.com>
> wrote:
> >>>>>>
> >>>>>> Martin,
> >>>>>>
> >>>>>> I only can see:
> >>>>>>
> >>>>>> 2018-05-25 13:57:44,255-04 ERROR
> >>>>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> >>>>>> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] SSH error
> running
> >>>>>> command root@lago-upgrade-from-release-suite-master-host-0:'umask
> 0077;
> >>>>>> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)";
> trap
> >>>>>> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
> >>>>>> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
> >>>>>> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
> >>>>>> DIALOG/customization=bool:True': TimeLimitExceededException: SSH
> session
> >>>>>> timeout host 'root@lago-upgrade-from-release-suite-master-host-0'
> >>>>>> 2018-05-25 13:57:44,259-04 ERROR
> >>>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
> >>>>>> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] Timeout
> during host
> >>>>>> lago-upgrade-from-release-suite-master-host-0 install: SSH session
> timeout
> >>>>>> host 'root@lago-upgrade-from-release-suite-master-host-0'
> >>>>>>
> >>>>>> There are no additional logs. SSH to host timeout. Are we sure that
> it
> >>>>>> is an issue caused by Ravi's change?
> >>>>>
> >>>>>
> >>>>> We have some quite strong circumstantial evidence:
> >>>>> - Issue had affected all engine patches since that patch in a similar
> >>>>> fashion.
> >>>>> - Prior engine patch [1] passed successfully [2]
> >>>>> - Other subsequent OST runs without engine patches passed
> successfully
> >>>>> as well [3].
> >>>>>
> >>>>> [1]: https://gerrit.ovirt.org/c/91595/2
> >>>>> [2]:
> >>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester//
> >>>>> [3]:
> >>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7778/
> >>>>>
> >>>>>
> >>>>> Please note - the issue is affecting a test that is run by an upgrade
> >>>>> suit on the post-upgrade system. It has no affect on the basic suit.
> So it
> >>>>> probably has to do with some behaviour that is specific to upgraded
> systems.
> >>>
> >>>
> >>> I will try to reproduce later today in dev env, but I agree with
> Piotr's
> >>> investigation, engine was not able to connect to the host using SSH and
> >>> that's why no host-deploy logs were fetched.
> >>
> >>
> >> Lago fetches the logs from the host too (And it 

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-28 Thread Barak Korren
Note: we're now seeing a very similar issue in the 4.2 branch as well that
seems to have been introduced by the following patch:

https://gerrit.ovirt.org/c/91638/2 - core: Enable only strong ciphers for
4.2 hosts

On 28 May 2018 at 10:26, Barak Korren <bkor...@redhat.com> wrote:

>
>
> On 28 May 2018 at 10:19, Martin Perina <mper...@redhat.com> wrote:
>
>>
>>
>> On Mon, May 28, 2018 at 9:00 AM, Piotr Kliczewski <pklic...@redhat.com>
>> wrote:
>>
>>> Simone,
>>>
>>> What do you think about this failure?
>>>
>>> Thanks,
>>> Piotr
>>>
>>> On Mon, May 28, 2018 at 7:12 AM, Barak Korren <bkor...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 27 May 2018 at 14:59, Piotr Kliczewski <pklic...@redhat.com> wrote:
>>>>
>>>>> Martin,
>>>>>
>>>>> I only can see:
>>>>>
>>>>> 2018-05-25 13:57:44,255-04 ERROR 
>>>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog] 
>>>>> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] SSH error running 
>>>>> command root@lago-upgrade-from-release-suite-master-host-0:'umask 0077; 
>>>>> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap 
>>>>> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > 
>>>>> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&  
>>>>> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine 
>>>>> DIALOG/customization=bool:True': TimeLimitExceededException: SSH session 
>>>>> timeout host 'root@lago-upgrade-from-release-suite-master-host-0'
>>>>> 2018-05-25 13:57:44,259-04 ERROR 
>>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] 
>>>>> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] Timeout during host 
>>>>> lago-upgrade-from-release-suite-master-host-0 install: SSH session 
>>>>> timeout host 'root@lago-upgrade-from-release-suite-master-host-0'
>>>>>
>>>>> There are no additional logs. SSH to host timeout. Are we sure that it
>>>>> is an issue caused by Ravi's change?
>>>>>
>>>>
>>>> We have some quite strong circumstantial evidence:
>>>> - Issue had affected all engine patches since that patch in a similar
>>>> fashion.
>>>> - Prior engine patch [1] passed successfully [2]
>>>> - Other subsequent OST runs without engine patches passed successfully
>>>> as well [3].
>>>>
>>>> [1]: https://gerrit.ovirt.org/c/91595/2
>>>> [2]: http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>>>> r//
>>>> [3]: http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>>>> r/7778/
>>>>
>>>>
>>>> Please note - the issue is affecting a test that is run by an upgrade
>>>> suit on the post-upgrade system. It has no affect on the basic suit. So it
>>>> probably has to do with some behaviour that is specific to upgraded
>>>> systems.
>>>>
>>>
> I will try to reproduce later today in a dev env, but I agree with Piotr's
> investigation: the engine was not able to connect to the host using SSH and
> that's why no host-deploy logs were fetched.
>>
>
> Lago fetches the logs from the host too (and it can take them from the VM
> image directly if the host is not responsive over SSH), can we get at the
> host-deploy logs that way?
>
>
>
>>
>>>>
>>>>
>>>>>
>>>>> Thanks,
>>>>> Piotr
>>>>>
>>>>> On Sun, May 27, 2018 at 11:21 AM, Martin Perina <mper...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> Adding also Piotr to the thread
>>>>>>
>>>>>>
>>>>>> On Sun, 27 May 2018, 08:46 Barak Korren, <bkor...@redhat.com> wrote:
>>>>>>
>>>>>>> Test failed: [ AddHost (in upgrade-from-release-suite) ]
>>>>>>>
>>>>>>> Link to suspected patches:
>>>>>>> https://gerrit.ovirt.org/#/c/91445/5 - Disable TLS versions < 1.2
>>>>>>> for hosts with cluster level>=4.1
>>>>>>>
>>>>>>> Link to Job:
>>>>>>> http://

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-28 Thread Barak Korren
On 28 May 2018 at 10:19, Martin Perina <mper...@redhat.com> wrote:

>
>
> On Mon, May 28, 2018 at 9:00 AM, Piotr Kliczewski <pklic...@redhat.com>
> wrote:
>
>> Simone,
>>
>> What do you think about this failure?
>>
>> Thanks,
>> Piotr
>>
>> On Mon, May 28, 2018 at 7:12 AM, Barak Korren <bkor...@redhat.com> wrote:
>>
>>>
>>>
>>> On 27 May 2018 at 14:59, Piotr Kliczewski <pklic...@redhat.com> wrote:
>>>
>>>> Martin,
>>>>
>>>> I only can see:
>>>>
>>>> 2018-05-25 13:57:44,255-04 ERROR 
>>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog] 
>>>> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] SSH error running 
>>>> command root@lago-upgrade-from-release-suite-master-host-0:'umask 0077; 
>>>> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap 
>>>> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > 
>>>> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&  
>>>> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine 
>>>> DIALOG/customization=bool:True': TimeLimitExceededException: SSH session 
>>>> timeout host 'root@lago-upgrade-from-release-suite-master-host-0'
>>>> 2018-05-25 13:57:44,259-04 ERROR 
>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] 
>>>> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] Timeout during host 
>>>> lago-upgrade-from-release-suite-master-host-0 install: SSH session timeout 
>>>> host 'root@lago-upgrade-from-release-suite-master-host-0'
>>>>
>>>> There are no additional logs. SSH to host timeout. Are we sure that it
>>>> is an issue caused by Ravi's change?
>>>>
>>>
>>> We have some quite strong circumstantial evidence:
>>> - Issue had affected all engine patches since that patch in a similar
>>> fashion.
>>> - Prior engine patch [1] passed successfully [2]
>>> - Other subsequent OST runs without engine patches passed successfully
>>> as well [3].
>>>
>>> [1]: https://gerrit.ovirt.org/c/91595/2
>>> [2]: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester//
>>> [3]: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7778/
>>>
>>>
>>> Please note - the issue is affecting a test that is run by an upgrade
>>> suit on the post-upgrade system. It has no affect on the basic suit. So it
>>> probably has to do with some behaviour that is specific to upgraded
>>> systems.
>>>
>>
> I will try to reproduce later today in a dev env, but I agree with Piotr's
> investigation: the engine was not able to connect to the host using SSH and
> that's why no host-deploy logs were fetched.
>

Lago fetches the logs from the host too (and it can take them from the VM
image directly if the host is not responsive over SSH), can we get at the
host-deploy logs that way?
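
For the record, one way to pull files straight out of a VM disk image by hand
is the libguestfs tools. A sketch - the qcow path is hypothetical (lago keeps
the disks under its workdir), and otopi-based deploys log under /tmp on the
host if memory serves:

# copy the host's /tmp out of the (offline) disk image
mkdir host-0-files
virt-copy-out -a lago-workdir/images/host-0_root.qcow2 /tmp host-0-files/
# then look for ovirt-host-deploy-*.log under host-0-files/tmp/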



>
>>>
>>>
>>>>
>>>> Thanks,
>>>> Piotr
>>>>
>>>> On Sun, May 27, 2018 at 11:21 AM, Martin Perina <mper...@redhat.com>
>>>> wrote:
>>>>
>>>>> Adding also Piotr to the thread
>>>>>
>>>>>
>>>>> On Sun, 27 May 2018, 08:46 Barak Korren, <bkor...@redhat.com> wrote:
>>>>>
>>>>>> Test failed: [ AddHost (in upgrade-from-release-suite) ]
>>>>>>
>>>>>> Link to suspected patches:
>>>>>> https://gerrit.ovirt.org/#/c/91445/5 - Disable TLS versions < 1.2
>>>>>> for hosts with cluster level>=4.1
>>>>>>
>>>>>> Link to Job:
>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7776/
>>>>>>
>>>>>> Link to all logs:
>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>>>>>> r/7776/artifact/exported-artifacts/upgrade-from-release-suit
>>>>>> -master-el7/test_logs/upgrade-from-release-suite-master/post
>>>>>> -002_bootstrap.py/
>>>>>>
>>>>>> Error snippet from log:
>>>>>>
>>>>>> From nosetst log:
>>>>>> 
>>>>>>
>>>>>> AssertionError: False != True after 1200 seconds
>>>>>>
>>>>>> 
>>>>>>
>>>>>> Not finding a host deploy log in /var/log/ovirt-engine for some
>>>>>> reason.
>>>>>> This seems to have cause consistent failure in all other engine
>>>>>> patches that followed it.
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Barak Korren
>>>>>> RHV DevOps team , RHCE, RHCi
>>>>>> Red Hat EMEA
>>>>>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Barak Korren
>>> RHV DevOps team , RHCE, RHCi
>>> Red Hat EMEA
>>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>>
>>
>>
>
>
> --
> Martin Perina
> Associate Manager, Software Engineering
> Red Hat Czech s.r.o.
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-27 Thread Barak Korren
On 27 May 2018 at 14:59, Piotr Kliczewski <pklic...@redhat.com> wrote:

> Martin,
>
> I can only see:
>
> 2018-05-25 13:57:44,255-04 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog] 
> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] SSH error running 
> command root@lago-upgrade-from-release-suite-master-host-0:'umask 0077; 
> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap 
> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > 
> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&  
> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine 
> DIALOG/customization=bool:True': TimeLimitExceededException: SSH session 
> timeout host 'root@lago-upgrade-from-release-suite-master-host-0'
> 2018-05-25 13:57:44,259-04 ERROR 
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] 
> (EE-ManagedThreadFactory-engine-Thread-1) [55a7b15b] Timeout during host 
> lago-upgrade-from-release-suite-master-host-0 install: SSH session timeout 
> host 'root@lago-upgrade-from-release-suite-master-host-0'
>
> There are no additional logs. SSH to the host timed out. Are we sure that it
> is an issue caused by Ravi's change?
>

We have some quite strong circumstantial evidence:
- The issue has affected all engine patches since that patch in a similar
fashion.
- The prior engine patch [1] passed successfully [2].
- Other subsequent OST runs without engine patches passed successfully as
well [3].

[1]: https://gerrit.ovirt.org/c/91595/2
[2]: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester//
[3]: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7778/


Please note - the issue is affecting a test that is run by the upgrade suite
on the post-upgrade system. It has no effect on the basic suite. So it
probably has to do with some behaviour that is specific to upgraded
systems.



>
> Thanks,
> Piotr
>
> On Sun, May 27, 2018 at 11:21 AM, Martin Perina <mper...@redhat.com>
> wrote:
>
>> Adding also Piotr to the thread
>>
>>
>> On Sun, 27 May 2018, 08:46 Barak Korren, <bkor...@redhat.com> wrote:
>>
>>> Test failed: [ AddHost (in upgrade-from-release-suite) ]
>>>
>>> Link to suspected patches:
>>> https://gerrit.ovirt.org/#/c/91445/5 - Disable TLS versions < 1.2 for
>>> hosts with cluster level>=4.1
>>>
>>> Link to Job:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7776/
>>>
>>> Link to all logs:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>>> r/7776/artifact/exported-artifacts/upgrade-from-release-
>>> suit-master-el7/test_logs/upgrade-from-release-suite-
>>> master/post-002_bootstrap.py/
>>>
>>> Error snippet from log:
>>>
>>> From nosetst log:
>>> 
>>>
>>> AssertionError: False != True after 1200 seconds
>>>
>>> 
>>>
>>> Not finding a host deploy log in /var/log/ovirt-engine for some reason.
>>> This seems to have cause consistent failure in all other engine patches
>>> that followed it.
>>>
>>>
>>> --
>>> Barak Korren
>>> RHV DevOps team , RHCE, RHCi
>>> Red Hat EMEA
>>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>>
>>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


[ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2018-05-28 ] [004_basic_sanity.disk_operations]

2018-05-27 Thread Barak Korren
Test failed: [ 004_basic_sanity.disk_operations
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7811/testReport/junit/%28root%29/004_basic_sanity/disk_operations/>
]

Link to suspected patches:
https://gerrit.ovirt.org/#/c/91068/4

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7811/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7811/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-004_basic_sanity.py/

Error snippet from log:



Note: There is another ongoing regression in engine master ATM, but this
issue is causing a different suite to fail, so it was luckily not masked by
it.

False != True after 600 seconds
 >> begin captured logging <<
lago.utils: ERROR: Error while running thread
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in _ret_via_queue
    queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 78, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py", line 528, in cold_storage_migration
    lambda: api.follow_link(
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 271, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 258, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 237, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
AssertionError: False != True after 600 seconds
- >> end captured logging << -
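
For anyone decoding the "False != True after N seconds" pattern: the testlib
assert_equals_within helpers simply poll a condition until it matches or the
timeout expires. A rough shell equivalent of those semantics, where
'migration_done' is a made-up stand-in for whatever the test's lambda checks
via the API:

timeout=600; start=$(date +%s)
until migration_done; do            # poll the made-up condition
    if (( $(date +%s) - start >= timeout )); then
        echo "False != True after ${timeout} seconds"; exit 1
    fi
    sleep 3                         # the real helpers also pause between polls
done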






-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


[ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2018-05-25 ] [AddHost]

2018-05-27 Thread Barak Korren
Test failed: [ AddHost (in upgrade-from-release-suite) ]

Link to suspected patches:
https://gerrit.ovirt.org/#/c/91445/5 - Disable TLS versions < 1.2 for hosts
with cluster level>=4.1

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7776/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7776/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-002_bootstrap.py/

Error snippet from log:

From the nosetests log:


AssertionError: False != True after 1200 seconds



We are not finding a host-deploy log in /var/log/ovirt-engine for some reason.
This seems to have caused consistent failures in all the engine patches
that followed it.


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


[ovirt-devel] [ACTION REQUIRED] Sun-setting oVirt 4.1 in CI

2018-05-21 Thread Barak Korren
Hi all,

Given that oVirt 4.1 has now gone EOL, we're going to start dropping all
related assets in the CI system.

This includes:
- All 4.1 jobs
- All 4.1 change queues
- The 4.1 'tested' repositories
- The 4.1 'nightly snapshot' repositories.

Maintainers, if your projects have any dependencies on the resources above,
please be sure to update them to more up-to-date or stable resources. In
particular, to depend on any 4.1 packages, please use the 4.1 released repo
at:

http://resources.ovirt.org/pub/ovirt-4.1/

Thanks,

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


[ovirt-devel] Re: ovirt-engine travis CI

2018-05-16 Thread Barak Korren
On 16 May 2018 at 11:03, Sandro Bonazzola <sbona...@redhat.com> wrote:

> Hi,
> today I pushed a fix for Travis CI on oVirt Engine. While discussing the
> fix it was asked why we need Travis CI in the first place - isn't our
> Jenkins testing enough?
>
>
I'm wondering that as well...



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


[ovirt-devel] Re: Propose Dominik Holler as a Network-Backend Maintainer

2018-05-13 Thread Barak Korren
On 13 May 2018 at 09:28, Alona Kaplan <alkap...@redhat.com> wrote:

>
>
> On Sun, May 13, 2018 at 9:25 AM, Barak Korren <bkor...@redhat.com> wrote:
>
>> Eyal will not be available this week, please forward such requests to
>> infra-support next time.
>>
>> Just to be sure - which project are we talking about here? Is it vdsm?
>>
>
> No. ovirt-engine.
>

All right, added 'dhol...@redhat.com'  to the ovirt-engine-maintainers
group.

Good luck Dominik, and please be nice to the CI team and don't merge huge
patch streams on Thursdays and Fridays like some people seem to like to
do... ;)


>
>> On 13 May 2018 at 09:03, Alona Kaplan <alkap...@redhat.com> wrote:
>>
>>> Hi Eyal,
>>>
>>> Please grant +2 powers to Dominik.
>>>
>>> Thanks,
>>> Alona.
>>>
>>> On Thu, May 3, 2018 at 2:48 PM, Sandro Bonazzola <sbona...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> 2018-05-01 9:54 GMT+02:00 Alona Kaplan <alkap...@redhat.com>:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Dominik Holler has been working on the oVirt project for more than 1.5
>>>>> years.
>>>>>
>>>>> To share some of Dominik's great stats -
>>>>> ~ 120 patches related to the network backend/ui
>>>>> ~ 95 patches for ovirt-provider-ovn
>>>>> ~ 44 vdsm patches
>>>>> ~ 80 bug fixes
>>>>>
>>>>> He was the feature owner of 'auto sync network provider',
>>>>> 'lldp-reporting' and 'network-filter-parameters'.
>>>>>
>>>>> For the last few months Dominik is helping review network-backend
>>>>> related patches and is doing a great and thorough work.
>>>>> Dominik showed a deep understanding of all the parts of code that he
>>>>> touched or reviewed.
>>>>> He learns fast, thorough and uncompromising.
>>>>>
>>>>> I've reviewed most of Dominik's engine related work (code and reviews).
>>>>> I trust his opinion and think he will be a good addition to the
>>>>> maintainers team.
>>>>>
>>>>> I would like to propose Dominik as a Network backend maintainer.
>>>>>
>>>>
>>>> I think you already got enough +1 but if needed, +1 from me as well.
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Alona.
>>>>>
>>>>> _______
>>>>> Devel mailing list
>>>>> Devel@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> SANDRO BONAZZOLA
>>>>
>>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>>
>>>> Red Hat EMEA <https://www.redhat.com/>
>>>>
>>>> sbona...@redhat.com
>>>> <https://red.ht/sig>
>>>> <https://redhat.com/summit>
>>>>
>>>
>>>
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>
>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

[ovirt-devel] Re: Mailing-Lists upgrade

2018-05-08 Thread Barak Korren
Kudos! This has been a long time in the making.

Now I wonder if we can brand the HyperKitty UI for oVirt.

On 8 May 2018 at 06:47, Marc Dequènes (Duck) <d...@redhat.com> wrote:

> I forgot to cross-post to other lists. Please read the announcement below.
>
> On 05/08/2018 12:45 PM, Marc Dequènes (Duck) wrote:
> > Quack,
> >
> > On 04/28/2018 09:34 AM, Marc Dequènes (Duck) wrote:
> >
> >> A few months ago we had to rollback the migration because of a nasty
> >> bug. This is fixed in recent versions of Mailman 3 so we're rescheduling
> >> it on Tuesday 8th during the slot 11:00-12:00 JST.
> >
> > Just a word to say the migration happened and no problem was detected.
> >
> > The archive posts reindexation is not finished yet, and greylisting is
> > slowing down things a bit, but it should soon be over. Some information
> > like post count and little graphs in the web UI are handled via regular
> > cron jobs and should also soon be accurate.
> >
> > Old links to the archives, for pre-migration posts, are still working.
> > Posts URLS were not stable in Mailman 2 but been synced in
> > https://lists.ovirt.org/pipermail/
> >
> > You can contact me directly by mail or IRC, or open a JIRA ticket if you
> > hit any problem.
> >
> > \_o<
> >
>
>
> _______
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Re: [ovirt-devel] ovirt-host-deploy and python3

2018-05-07 Thread Barak Korren
On 7 May 2018 at 10:15, Sandro Bonazzola <sbona...@redhat.com> wrote:

>
>
> 2018-05-06 7:53 GMT+02:00 Barak Korren <bkor...@redhat.com>:
>
>>
>>
>> On 4 May 2018 at 16:01, Greg Sheremeta <gsher...@redhat.com> wrote:
>>
>>> ci re-merge please
>>>
>>>
>> Please note that you should never run this on pre-merged patches as it
>> runs all the post-merge code including submission in change-queue.
>>
>
> I hope the code handling "ci re-merge please" is smart enough to check
> that the patch is merged before trying to re-merge it.
> If not, please fix it.
>

As I wrote before, the V2 code is smart enough to check, the V1 code isn't -
which is one more reason to switch...



>
>
>
>>
>>
>>
>>> On Fri, May 4, 2018 at 8:19 AM, Dan Kenigsberg <dan...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Fri, May 4, 2018 at 2:09 PM, Sandro Bonazzola <sbona...@redhat.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> 2018-05-04 11:07 GMT+02:00 Tomáš Golembiovský <tgole...@redhat.com>:
>>>>>
>>>>>> On Fri, 4 May 2018 09:33:11 +0200
>>>>>> Sandro Bonazzola <sbona...@redhat.com> wrote:
>>>>>>
>>>>>> > 2018-05-03 21:58 GMT+02:00 Tomáš Golembiovský <tgole...@redhat.com
>>>>>> >:
>>>>>> >
>>>>>> > > Hi,
>>>>>> > >
>>>>>> > > I'm trying to reinstall a CentOS host (using master-snapshot) and
>>>>>> I
>>>>>> > > noticed otopi is trying to use python3 while the
>>>>>> ovirt-host-deploy is
>>>>>> > > not yet fully python3 compatible:
>>>>>> > >
>>>>>> >
>>>>>> > How did you got python 3 on CentOS?
>>>>>> > It's not in CentOS distribution.
>>>>>>
>>>>>> From EPEL. We have 'python34*' listed in our ovirt-*-epel repos.
>>>>>>
>>>>>
>>>>>
>>>>> Dan, you asked for python34 packages from epel in
>>>>> https://gerrit.ovirt.org/#/c/55415/
>>>>> Are they still needed? I don't see them required anywhere.
>>>>> Can we drop them?
>>>>>
>>>>
>>>> You are perfectly right, Sandro. My attempt to support Python 3 on el7
>>>> failed.
>>>> https://gerrit.ovirt.org/90912 should clean its remainders.
>>>>
>>>> Can anybody remind me how I trigger check-merged job on it, for
>>>> verification?
>>>>
>>>>
>>>> ___
>>>> Devel mailing list
>>>> Devel@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> GREG SHEREMETA
>>>
>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>
>>> Red Hat NA
>>>
>>> <https://www.redhat.com/>
>>>
>>> gsher...@redhat.comIRC: gshereme
>>> <https://red.ht/sig>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
> <https://redhat.com/summit>
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Re: [ovirt-devel] ovirt-host-deploy and python3

2018-05-06 Thread Barak Korren
On 6 May 2018 at 13:16, Dan Kenigsberg <dan...@redhat.com> wrote:

> Two questions:
> 1. what *should* I do to trigger check-merged?
>

merge the patch

In other words - anything you want to be able to run pre-merge needs to go
into check-patch. With V2 and sub-stages we have a variety of
options there.


> 2. shouldn't you fail 'ci re-merge' for unmerged changes on the CI side?
>

We have that fixed for V2 but not V1 AFAIR...



> On Sun, May 6, 2018, 01:54 Barak Korren <bkor...@redhat.com> wrote:
>
>>
>>
>> On 4 May 2018 at 16:01, Greg Sheremeta <gsher...@redhat.com> wrote:
>>
>>> ci re-merge please
>>>
>>>
>> Please note that you should never run this on pre-merged patches as it
>> runs all the post-merge code including submission in change-queue.
>>
>>
>>
>>> On Fri, May 4, 2018 at 8:19 AM, Dan Kenigsberg <dan...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Fri, May 4, 2018 at 2:09 PM, Sandro Bonazzola <sbona...@redhat.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> 2018-05-04 11:07 GMT+02:00 Tomáš Golembiovský <tgole...@redhat.com>:
>>>>>
>>>>>> On Fri, 4 May 2018 09:33:11 +0200
>>>>>> Sandro Bonazzola <sbona...@redhat.com> wrote:
>>>>>>
>>>>>> > 2018-05-03 21:58 GMT+02:00 Tomáš Golembiovský <tgole...@redhat.com
>>>>>> >:
>>>>>> >
>>>>>> > > Hi,
>>>>>> > >
>>>>>> > > I'm trying to reinstall a CentOS host (using master-snapshot) and
>>>>>> I
>>>>>> > > noticed otopi is trying to use python3 while the
>>>>>> ovirt-host-deploy is
>>>>>> > > not yet fully python3 compatible:
>>>>>> > >
>>>>>> >
>>>>>> > How did you got python 3 on CentOS?
>>>>>> > It's not in CentOS distribution.
>>>>>>
>>>>>> From EPEL. We have 'python34*' listed in our ovirt-*-epel repos.
>>>>>>
>>>>>
>>>>>
>>>>> Dan, you asked for python34 packages from epel in
>>>>> https://gerrit.ovirt.org/#/c/55415/
>>>>> Are they still needed? I don't see them required anywhere.
>>>>> Can we drop them?
>>>>>
>>>>
>>>> You are perfectly right, Sandro. My attempt to support Python 3 on el7
>>>> failed.
>>>> https://gerrit.ovirt.org/90912 should clean its remainders.
>>>>
>>>> Can anybody remind me how I trigger check-merged job on it, for
>>>> verification?
>>>>
>>>>
>>>> ___
>>>> Devel mailing list
>>>> Devel@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> GREG SHEREMETA
>>>
>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>
>>> Red Hat NA
>>>
>>> <https://www.redhat.com/>
>>>
>>> gsher...@redhat.comIRC: gshereme
>>> <https://red.ht/sig>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Re: [ovirt-devel] OST: oVirt version used in basic-suite-4.2

2018-05-06 Thread Barak Korren
On 3 May 2018 at 17:31, Milan Zamazal <mzama...@redhat.com> wrote:

> Hi, I wonder why is ovirt-4.2 repo, rather than ovirt-4.2-snapshot or
> so, used in reposync-config for basic-suite-4.2?  Packages in ovirt-4.2
> are relatively old and the suite may fail on bugs that are already
> fixed.  Shouldn't a more up-to-date repo be used in basic-suite-4.2?
>
> Thanks,
> Milan
>


The idea was to have OST run on the released versions by default so it can
be used by 3rd-party (non ovirt-core) contributors.

We decided to change this a short while ago, and the patch to do that
is here:
https://gerrit.ovirt.org/c/89587/

But I've had no time to actually finish it.


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Re: [ovirt-devel] ovirt-host-deploy and python3

2018-05-05 Thread Barak Korren
On 4 May 2018 at 16:01, Greg Sheremeta <gsher...@redhat.com> wrote:

> ci re-merge please
>
>
Please note that you should never run this on not-yet-merged patches, as it
runs all the post-merge code, including submission to the change queue.



> On Fri, May 4, 2018 at 8:19 AM, Dan Kenigsberg <dan...@redhat.com> wrote:
>
>>
>>
>> On Fri, May 4, 2018 at 2:09 PM, Sandro Bonazzola <sbona...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> 2018-05-04 11:07 GMT+02:00 Tomáš Golembiovský <tgole...@redhat.com>:
>>>
>>>> On Fri, 4 May 2018 09:33:11 +0200
>>>> Sandro Bonazzola <sbona...@redhat.com> wrote:
>>>>
>>>> > 2018-05-03 21:58 GMT+02:00 Tomáš Golembiovský <tgole...@redhat.com>:
>>>> >
>>>> > > Hi,
>>>> > >
>>>> > > I'm trying to reinstall a CentOS host (using master-snapshot) and I
>>>> > > noticed otopi is trying to use python3 while the ovirt-host-deploy
>>>> is
>>>> > > not yet fully python3 compatible:
>>>> > >
>>>> >
>>>> > How did you got python 3 on CentOS?
>>>> > It's not in CentOS distribution.
>>>>
>>>> From EPEL. We have 'python34*' listed in our ovirt-*-epel repos.
>>>>
>>>
>>>
>>> Dan, you asked for python34 packages from epel in
>>> https://gerrit.ovirt.org/#/c/55415/
>>> Are they still needed? I don't see them required anywhere.
>>> Can we drop them?
>>>
>>
>> You are perfectly right, Sandro. My attempt to support Python 3 on el7
>> failed.
>> https://gerrit.ovirt.org/90912 should clean its remainders.
>>
>> Can anybody remind me how I trigger check-merged job on it, for
>> verification?
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> <https://www.redhat.com/>
>
> gsher...@redhat.comIRC: gshereme
> <https://red.ht/sig>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

[ovirt-devel] oVirt CI now supports Fedora 28

2018-05-02 Thread Barak Korren
Since Fedora 28 was released yesterday, we've added support for it in
the oVirt CI system.

To use Fedora 28, just use 'fc28' as you would for other Fedora
versions in the CI YAML files and/or file extensions in the
'automation/' directory; an illustrative layout follows.
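
A hypothetical project layout (the file names are illustrative of the usual
STDCI conventions - adjust them to your project):

automation/check-patch.sh              # the stage script, shared across distros
automation/check-patch.packages.fc28   # packages to install for fc28 runs
automation/check-patch.repos.fc28      # extra repos to enable for fc28 runs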

The synchronisation of our local mirror is ongoing at this time, so
until it is done, the upstream Fedora mirrors will be used; this can
cause some breakage.
-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


Re: [ovirt-devel] default maintainers plugin for other projects

2018-05-02 Thread Barak Korren
(adding devel because this might be interesting for others).

On 1 May 2018 at 21:16, Greg Sheremeta <gsher...@redhat.com> wrote:

> How would I get this plugin added to ovirt-engine-dashboard and
> ovirt-engine-node* ?
>

It's already available for all projects on gerrit.ovirt.org.


> And then how would I configure it? (Add myself as a default, at least)
>

Two ways:

With the GUI:
---

In Gerrit UI go to:

Admin -> Projects -> (Old UI) -> (Your project) -> Reviewers (from top bar)

Fill in the filter (a file glob pattern - can be '*') and the reviewer (an email
address or a gerrit group) and click "Add".

Don't forget to switch back to the new UI from the bottom-right link.


From the command line
---
In a Git clone of the relevant project do:

# git fetch origin refs/meta/config:meta-config
# git checkout meta-config

Then you can edit the 'reviewers.config' file (Make it if it isn't there)
and put in something like the following:

[filter "*"]
reviewer = jenkins-maintainers
reviewer = d...@redhat.com

Once you're done with it, commit the change and push it with:

# git push -u origin meta-config:refs/meta/config

Since we have '-u' there, the next time you change the 'meta-config'
branch you can just do:

# git push -u origin

And that's about it.

One thing to note - every project has a '<project>-maintainers' group,
so it's useful to add it as a default reviewer.


> -- Forwarded message --
> From: Roy Golan <rgo...@redhat.com>
> Date: Mon, Jun 19, 2017 at 7:17 AM
> Subject: Re: [ovirt-devel] [ENGINE][ACTION_NEEDED] - default maintainers,
> per path
> To: devel <devel@ovirt.org>
>
>
> The patch is now merged and the it should start working immediately. For
> anyone who is listed as a default reviewer I suggest to test it with a test
> patch by some other submitter. (except for those who used groups of course)
>
> Thanks everyone for the cooperation on this.
>
>
> On Mon, May 29, 2017 at 2:24 PM Roy Golan <rgo...@redhat.com> wrote:
>
>> Hi all,
>>
>> Some time ago the infra team enabled *gerrit default reviewer plugin*
>> which you probably noticed if you logged in to gerrit.
>>
>> What we can do is for example set the stable branch maintainers per
>> branch automatically:
>>
>> [filter "branch:ovirt-engine-4.1"]
>> reviewer = stableBranchMaintainersGroup
>>
>> Put people based on path:
>>
>>[filter "branch:master file:^frontend/.*"]
>>reviewer = j.r@shire.me
>>
>>
>>
>> *Action Needed:*
>> Nominate yourself, or others, by amending this patch online [1]. Once
>> this patch is fully acked and agreed we will merge it. If something can't
>> get consensus we will defer it from the patch till maintainers agree.
>>
>> [1] https://gerrit.ovirt.org/#/c/77488/
>>
>>
>>
>>
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> <https://www.redhat.com/>
>
> gsher...@redhat.comIRC: gshereme
> <https://red.ht/sig>
>
> ___
> Infra mailing list
> in...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

[ovirt-devel] [PLEASE NOTE]: STDCI V2 presentation moved to next week

2018-04-26 Thread Barak Korren
It will be on May 3rd at 11:00 IST/10:00 CEST/9:00 UTC

An updated calendar invite was already sent on Tuesday, and will be
sent again shortly.

Apologies to anyone who thought this was still happening today.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


Re: [ovirt-devel] Invitation: oVirt STDCI v2 deep dive @ Thu Apr 26, 2018 11:00 - 12:00 (IDT) (devel@ovirt.org)

2018-04-24 Thread Barak Korren
On 23 April 2018 at 14:43, Greg Sheremeta <gsher...@redhat.com> wrote:

> Is this the only meeting planned? 4am US time, I'll have to get up a few
> mins early :)
>
>
Terribly sorry, but it's really hard to find a time that suits a group as
diverse as the oVirt developers.

The talk will be recorded, and I will consider re-doing it at another time
if there is demand.


>
> On Mon, Apr 23, 2018 at 7:36 AM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
>
>> From past experience, sending calendar invitation to ovirt mailing lists
>> doesn't work well. see https://lists.ovirt.org/pi
>> permail/users/2018-March/087616.html for reference.
>>
>> I would recommend to track this on https://ovirt.org/events/ by adding
>> the event following https://github.com/OSAS/rh-events/wiki/Adding-and-
>> modifying-events
>>
>> I would also recommend to send personal invitation to oVirt team leads to
>> be sure they see it.
>>
>> If you need to track who's going to join, I would recommend a ticketing
>> system like eventbrite.
>>
>> 2018-04-23 10:36 GMT+02:00 <bkor...@redhat.com>:
>>
>>> oVirt STDCI v2 deep dive
>>>
>>> When: Thu Apr 26, 2018 11:00 – 12:00 Jerusalem
>>> Where: raanana-04-asia-8-p-vc; https://bluejeans.com/8705030462
>>> Calendar: devel@ovirt.org
>>> Who: bkor...@redhat.com (organizer), devel@ovirt.org
>>>
>>> Introduction to the 2nd version of oVirt's CI standard - What is it,
>>> what can it do, how to use it and how does it work.
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA <https://www.redhat.com/>
>>
>> sbona...@redhat.com
>> <https://red.ht/sig>
>> <https://redhat.com/summit>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Re: [ovirt-devel] Invitation: oVirt STDCI v2 deep dive @ Thu Apr 26, 2018 11:00 - 12:00 (IDT) (devel@ovirt.org)

2018-04-24 Thread Barak Korren
On 24 April 2018 at 11:22, Martin Sivak <msi...@redhat.com> wrote:

> Hi,
>
> I did see it already and added it to the team calendar. I wonder if it
> forwards the confirmations back to you.
>

It forwards some of them back as if they were made by devel@ovirt.org - so
not very useful for actual RSVP tracking...

Please note that I've just moved it.



>
> Martin
>
> On Tue, Apr 24, 2018 at 9:27 AM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
>
>>
>>
>> 2018-04-24 6:11 GMT+02:00 Barak Korren <bkor...@redhat.com>:
>>
>>>
>>>
>>> On 23 April 2018 at 14:36, Sandro Bonazzola <sbona...@redhat.com> wrote:
>>>
>>>> From past experience, sending calendar invitation to ovirt mailing
>>>> lists doesn't work well. see https://lists.ovirt.org/pi
>>>> permail/users/2018-March/087616.html for reference.
>>>>
>>>
>>> I agree the experience is less then optional - I do seem to get
>>> confirmation emails, so some people manage to use it anyway.
>>>
>>>
>>>> I would recommend to track this on https://ovirt.org/events/ by adding
>>>> the event following https://github.com/OSAS/rh-events/wiki/Adding-and-
>>>> modifying-events
>>>>
>>>>
>>> Are you sure that is the right place for this? that repo seems to list
>>> events at the scale of conferences...
>>>
>>
>> Well, this is a kind of conference with just 1 talk :-)
>>
>>
>>>
>>>
>>>> I would also recommend to send personal invitation to oVirt team leads
>>>> to be sure they see it.
>>>>
>>>
>>> Sure of you can give me a list...
>>>
>>
>> Sandro Bonazzola <sbona...@redhat.com> - Integration / Release
>> Engineering
>> Ryan Barry <rba...@redhat.com> - Node
>> Michal Skrivanek <mskri...@redhat.com> - Virtualization
>> Shirly Radco <sra...@redhat.com> - Metrics / Data Warehouse
>> Tal Nisan <tni...@redhat.com> - Storage
>> Martin Sivak <msi...@redhat.com> - SLA
>> Eyal Edri <ee...@redhat.com> - Project infrastructure
>> Martin Perina <mper...@redhat.com> - Infra
>> Dan Kenigsberg <dan...@redhat.com> - Network
>> Sahina Bose <sab...@redhat.com> - Gluster
>> Tomas Jelinek <tjeli...@redhat.com> - UX
>>
>> I'm missing a reference person for other teams listed in
>> https://ovirt.org/develop/#ovirt-teams:
>> Docs
>> I18N
>> Marketing
>> Spice
>>
>>
>>
>>
>>>
>>>
>>>>
>>>> If you need to track who's going to join, I would recommend a ticketing
>>>> system like eventbrite.
>>>>
>>>
>>> Not sure I want to force people to use yet another 3rd party platform
>>> for the benefit of me having some tracking information. Do people prefer
>>> this?
>>>
>>>
>>>>
>>>> 2018-04-23 10:36 GMT+02:00 <bkor...@redhat.com>:
>>>>
>>>>> oVirt STDCI v2 deep dive
>>>>>
>>>>> When: Thu Apr 26, 2018 11:00 – 12:00 Jerusalem
>>>>> Where: raanana-04-asia-8-p-vc; https://bluejeans.com/8705030462
>>>>> Calendar: devel@ovirt.org
>>>>> Who: bkor...@redhat.com (organizer), devel@ovirt.org
>>>>>
>>>>> Introduction to the 2nd version of oVirt's CI standard - What is it,
>>>>> what can it do, how to use it and how does it work.

Re: [ovirt-devel] Invitation: oVirt STDCI v2 deep dive @ Thu Apr 26, 2018 11:00 - 12:00 (IDT) (devel@ovirt.org)

2018-04-23 Thread Barak Korren
On 23 April 2018 at 14:36, Sandro Bonazzola <sbona...@redhat.com> wrote:

> From past experience, sending calendar invitations to ovirt mailing lists
> doesn't work well. See
> https://lists.ovirt.org/pipermail/users/2018-March/087616.html for reference.
>

I agree the experience is less than optimal - I do seem to get
confirmation emails, so some people manage to use it anyway.


> I would recommend to track this on https://ovirt.org/events/ by adding
> the event following https://github.com/OSAS/rh-events/wiki/Adding-and-
> modifying-events
>
>
Are you sure that is the right place for this? That repo seems to list
events at the scale of conferences...


> I would also recommend to send personal invitation to oVirt team leads to
> be sure they see it.
>

Sure, if you can give me a list...


>
> If you need to track who's going to join, I would recommend a ticketing
> system like eventbrite.
>

Not sure I want to force people to use yet another 3rd party platform for
the benefit of me having some tracking information. Do people prefer this?


>
> 2018-04-23 10:36 GMT+02:00 <bkor...@redhat.com>:
>
>> oVirt STDCI v2 deep dive
>>
>> When: Thu Apr 26, 2018 11:00 – 12:00 Jerusalem
>> Where: raanana-04-asia-8-p-vc; https://bluejeans.com/8705030462
>> Calendar: devel@ovirt.org
>> Who: bkor...@redhat.com (organizer), devel@ovirt.org
>>
>> Introduction to the 2nd version of oVirt's CI standard - What is it, what
>> can it do, how to use it and how does it work.
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
> <https://redhat.com/summit>
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

[ovirt-devel] [ANNOUNCE] Introducing STDCI V2

2018-04-16 Thread Barak Korren
’ or ‘OperatingSystems’ as all these forms  (and others) will
work and mean the same thing.

To create complex test/build matrices, ‘stage’, ‘distribution’, ‘architecture’
and ‘sub-stage’ definitions can be nested within one another. We find this to be
more intuitive than having to maintain tedious ‘exclude’ lists as was needed in
V1.

Here is an example of an STDCI V2 YAML file that is compatible with the current
master branch V1 configuration of many oVirt projects:

---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Release Branches:
  master: ovirt-master

Note: since the file is committed into the project’s own repo, having different
configuration for different branches can be done by simply having different
files in the different branches, so there is no need for a big convoluted file
to configure all branches.

Since the above file does not mention stages, any STDCI scripts that exist in
the project repo and belong to a particular stage will be run on all specified
distribution and architecture combinations. Since it is sometimes desired to run
'check-patch.sh' on fewer platforms than build-artifacts, for example, a slightly
different file would be needed:

---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Stages:
  - check-patch:
      Architecture: x86_64
      Distribution: el7
  - build-artifacts
Release Branches:
  master: ovirt-master

The above file makes ‘check-patch’ run only on el7/x86_64, while build-artifacts
runs on all platforms specified and check-merged would not run at all because it
is not listed in the file.
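
To also illustrate the sub-stage nesting mentioned earlier, here is a
hypothetical variant - treat it as a sketch of the general shape rather than a
verified configuration, and check the documentation linked below for the
authoritative option names:

---
Stages:
  - check-patch:
      Distribution: el7
      Sub-Stages:
        - default
        - code-quality:
            Distribution: fc27
  - build-artifacts
Release Branches:
  master: ovirt-master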

Great efforts have been made to make the file format very flexible but intuitive
to use. Additionally there are many defaults in place to allow specifying
complex behaviours with very brief YAML code. For further details about the file
format, please see the documentation linked below.

About the relation between STDCI V2 and the change-queue

In STDCI V1 the change queue that would run the OST tests and release a given
patch was determined by looking at the “version” part of the name of the
project’s build-artifacts jobs that got invoked for the patch.

This was confusing, as most people understood "version" to mean the
internal version of their own project rather than the oVirt version.
In V2 we decided to be more explicit and simply include a map from branches to
change queues in the YAML configuration under the “release-branches” option, as
can be seen in the examples above.

We also chose to no longer allow specifying the oVirt version as a shorthand
for the equivalent queue name (e.g. specifying ‘4.2’ instead of ‘ovirt-4.2’);
this should reduce the chance of confusion between project versions and queue
names, and also allows us to create and use change queues for projects that are
not part of oVirt.
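
So, hypothetically, a project that maintains a stable branch alongside master
could map each branch to its queue explicitly (the ‘myproject-4.2’ branch name
below is made up for illustration):

---
Release Branches:
  master: ovirt-master
  myproject-4.2: ovirt-4.2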

A project can choose not to include a “release-branches” option, in which case
its patches will not get submitted to any queues.

Further information
---
The documentation for STDCI can be found at [1].

The documentation updates for V2 are still in progress and are expected to be
merged soon. In the meantime, the GitHub-specific documentation [2] already
provides a great deal of information that is relevant for V2.

[1]: 
http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards
[2]: http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub

---
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2018-04-08 ] [098_ovirt_provider_ovn.use_ovn_provider]

2018-04-08 Thread Barak Korren
Test failed: 098_ovirt_provider_ovn.use_ovn_provider

Link to suspected patches:
https://gerrit.ovirt.org/#/c/89581/3

Link to Job:
https://gerrit.ovirt.org/#/c/89581/3

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6714/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/

Error snippet from log:



'name'
-------------------- >> begin captured logging << --------------------
lago.providers.libvirt.cpu: DEBUG: numa: cpus_per_cell: 1, total_cells: 2
lago.providers.libvirt.cpu: DEBUG: numa: [numa topology XML stripped by the archive]
lago.providers.libvirt.cpu: DEBUG: numa: cpus_per_cell: 1, total_cells: 2
lago.providers.libvirt.cpu: DEBUG: numa: [numa topology XML stripped by the archive]
lago.providers.libvirt.cpu: DEBUG: numa: cpus_per_cell: 1, total_cells: 2
lago.providers.libvirt.cpu: DEBUG: numa: [numa topology XML stripped by the archive]
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
py.warnings: WARNING: * Unverified HTTPS request is being made.
Adding certificate verification is strongly advised. See:
https://urllib3.readthedocs.org/en/latest/security.html
requests.packages.urllib3.connectionpool: DEBUG: "POST /v2.0/tokens/
HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/networks/
HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/ports/
HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/subnets/
HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "POST /v2.0/networks/
HTTP/1.1" 201 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "POST /v2.0/subnets/
HTTP/1.1" 201 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "POST /v2.0/ports/
HTTP/1.1" 201 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/networks/
HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/ports/
HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: * Starting new
HTTPS connection (1): 192.168.201.4
requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/subnets/
HTTP/1.1" 200 None
-------------------- >> end captured logging << --------------------



Note: we're seeing similar issues on the same patches in both the
'master' and the 4.2 change queues.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-04 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-08 Thread Barak Korren
On 7 April 2018 at 00:30, Dan Kenigsberg <dan...@redhat.com> wrote:
> No, I am afraid that we have not managed to understand why setting an
> ipv6 address took the host off the grid. We shall continue researching
> this next week.
>
> Edy, https://gerrit.ovirt.org/#/c/88637/ is already 4 weeks old, but
> could it possibly be related (I really doubt that)?
>

at this point I think we should seriously consider disabling the
relevant test, as it's impacting a large number of changes.

> On Fri, Apr 6, 2018 at 2:20 PM, Dafna Ron <d...@redhat.com> wrote:
>> Dan, was there a fix for the issues?
>> can I have a link to the fix if there was?
>>
>> Thanks,
>> Dafna
>>
>>
>> On Wed, Apr 4, 2018 at 5:01 PM, Gal Ben Haim <gbenh...@redhat.com> wrote:
>>>
>>> From lago's log, I see that lago collected the logs from the VMs using ssh
>>> (after the test failed), which means
>>> that the VM didn't crash.
>>>
>>> On Wed, Apr 4, 2018 at 5:27 PM, Dan Kenigsberg <dan...@redhat.com> wrote:
>>>>
>>>> On Wed, Apr 4, 2018 at 4:59 PM, Barak Korren <bkor...@redhat.com> wrote:
>>>> > Test failed: [ 006_migrations.prepare_migration_attachments_ipv6 ]
>>>> >
>>>> > Link to suspected patches:
>>>> > (Probably unrelated)
>>>> > https://gerrit.ovirt.org/#/c/89812/1 (ovirt-engine-sdk) - examples:
>>>> > export template to an export domain
>>>> >
>>>> > This seems to happen multiple times sporadically, I thought this would
>>>> > be solved by
>>>> > https://gerrit.ovirt.org/#/c/89781/ but it isn't.
>>>>
>>>> right, it is a completely unrelated issue there (with external networks).
>>>> here, however, the host dies while setupNetworks is setting an ipv6
>>>> address. Setup network waits for Engine's confirmation at 08:33:00,711
>>>>
>>>> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/lago-basic-suite-4-2-host-0/_var_log/vdsm/supervdsm.log
>>>> but kernel messages stop at 08:33:23
>>>>
>>>> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/lago-basic-suite-4-2-host-0/_var_log/messages/*view*/
>>>>
>>>> Does the lago VM of this host crash? pause?
>>>>
>>>>
>>>> >
>>>> > Link to Job:
>>>> > http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/
>>>> >
>>>> > Link to all logs:
>>>> >
>>>> > http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/
>>>> >
>>>> > Error snippet from log:
>>>> >
>>>> > 
>>>> >
>>>> > Traceback (most recent call last):
>>>> >   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>>>> > testMethod()
>>>> >   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
>>>> > runTest
>>>> > self.test(*self.arg)
>>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>>>> > 129, in wrapped_test
>>>> > test()
>>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>>>> > 59, in wrapper
>>>> > return func(get_test_prefix(), *args, **kwargs)
>>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>>>> > 78, in wrapper
>>>> > prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
>>>> >   File
>>>> > "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/006_migrations.py",
>>>> > line 139, in prepare_migration_attachments_ipv6
>>>> > engine, host_service, MIGRATION_NETWORK, ip_configuration)
>>>> >   File
>>>> > "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test_utils/network_utils_v4.py",
>>>> > line 71, in modify_ip_config
>>>> > check_connectivity=True)
>>&

[ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-04 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-04 Thread Barak Korren
Test failed: [ 006_migrations.prepare_migration_attachments_ipv6 ]

Link to suspected patches:
(Probably unrelated)
https://gerrit.ovirt.org/#/c/89812/1 (ovirt-engine-sdk) - examples:
export template to an export domain

This seems to happen multiple times sporadically, I thought this would
be solved by
https://gerrit.ovirt.org/#/c/89781/ but it isn't.

Link to Job:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1537/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/

Error snippet from log:



Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
129, in wrapped_test
test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
59, in wrapper
return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
78, in wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File 
"/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/006_migrations.py",
line 139, in prepare_migration_attachments_ipv6
engine, host_service, MIGRATION_NETWORK, ip_configuration)
  File 
"/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test_utils/network_utils_v4.py",
line 71, in modify_ip_config
check_connectivity=True)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py",
line 36729, in setup_networks
return self._internal_action(action, 'setupnetworks', None,
headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
299, in _internal_action
return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
55, in wait
return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
296, in callback
self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
132, in _check_fault
self._raise_error(response, body)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
118, in _raise_error
raise error
Error: Fault reason is "Operation Failed". Fault detail is "[Network
error during communication with the Host.]". HTTP response code is
400.







-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-03 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-04 Thread Barak Korren
On 3 April 2018 at 17:43, Dan Kenigsberg <dan...@redhat.com> wrote:
> On Tue, Apr 3, 2018 at 3:57 PM, Piotr Kliczewski <pklic...@redhat.com> wrote:
>> Dan,
>>
>> It looks like it was one of the calls triggered when vdsm was down:
>>
>> 2018-04-03 05:30:16,065-0400 INFO  (mailbox-hsm)
>> [storage.MailBox.HsmMailMonitor] HSM_MailMonitor sending mail to SPM -
>> ['/usr/bin/dd',
>> 'of=/rhev/data-center/ddb765d2-2137-437d-95f8-c46dbdbc7711/mastersd/dom_md/inbox',
>> 'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=4096', 'count=1',
>> 'seek=1'] (mailbox:387)
>> 2018-04-03 05:31:22,441-0400 INFO  (MainThread) [vds] (PID: 20548) I am the
>> actual vdsm 4.20.23-28.gitd11ed44.el7.centos lago-basic-suite-4-2-host-0
>> (3.10.0-693.21.1.el7.x86_64) (vdsmd:149)
>>
>>
>> which failed and caused a timeout.
>>
>> Thanks,
>> Piotr
>>
>> On Tue, Apr 3, 2018 at 1:57 PM, Dan Kenigsberg <dan...@redhat.com> wrote:
>>>
>>> On Tue, Apr 3, 2018 at 2:07 PM, Barak Korren <bkor...@redhat.com> wrote:
>>> > Test failed: [ 006_migrations.prepare_migration_attachments_ipv6 ]
>>> >
>>> > Link to suspected patches:
>>> >
>>> > (Patch seems unrelated - do we have sporadic communication issues
>>> > arising in PST?)
>>> > https://gerrit.ovirt.org/c/89737/1 - vdsm - automation: check-patch:
>>> > attempt to install vdsm-gluster
>>> >
>>> > Link to Job:
>>> > http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1521/
>>> >
>>> > Link to all logs:
>>> >
>>> > http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1521/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/
>>> >
>>> > Error snippet from log:
>>> >
>>> > 
>>> >
>>> > Traceback (most recent call last):
>>> >   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>>> > testMethod()
>>> >   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
>>> > runTest
>>> > self.test(*self.arg)
>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>>> > 129, in wrapped_test
>>> > test()
>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>>> > 59, in wrapper
>>> > return func(get_test_prefix(), *args, **kwargs)
>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>>> > 78, in wrapper
>>> > prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
>>> >   File
>>> > "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/006_migrations.py",
>>> > line 139, in prepare_migration_attachments_ipv6
>>> > engine, host_service, MIGRATION_NETWORK, ip_configuration)
>>> >   File
>>> > "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test_utils/network_utils_v4.py",
>>> > line 71, in modify_ip_config
>>> > check_connectivity=True)
>>> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py",
>>> > line 36729, in setup_networks
>>> > return self._internal_action(action, 'setupnetworks', None,
>>> > headers, query, wait)
>>> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
>>> > 299, in _internal_action
>>> > return future.wait() if wait else future
>>> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
>>> > 55, in wait
>>> > return self._code(response)
>>> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
>>> > 296, in callback
>>> > self._check_fault(response)
>>> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
>>> > 132, in _check_fault
>>> > self._raise_error(response, body)
>>> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
>>> > 118, in _raise_error
>>> > raise error
>>> > Error: Fault reason is "Operation Failed". Fault detail is "[Network
>>> > error during communication with the Host.]". HTTP response code is
&

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-03 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-03 Thread Barak Korren
On 3 April 2018 at 15:01, Greg Sheremeta <gsher...@redhat.com> wrote:

> Barak, I was getting these 400s and 409s sporadically all last week while
> iterating on my docker stuff. I thought maybe it was my messing with the
> http_proxy stuff or doing docker rms. Is it possible I'm breaking things?
> I'm still working on it. Been working on it straight for a while now:
> https://gerrit.ovirt.org/#/c/67166/
>
>
Greg, your work shouldn't be affecting other things unless you merge it...


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-03 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-03 Thread Barak Korren
On 3 April 2018 at 14:07, Barak Korren <bkor...@redhat.com> wrote:
> Test failed: [ 006_migrations.prepare_migration_attachments_ipv6 ]
>
> Link to suspected patches:
>
> (Patch seems unrelated - do we have sporadic communication issues
> arising in PST?)
> https://gerrit.ovirt.org/c/89737/1 - vdsm - automation: check-patch:
> attempt to install vdsm-gluster
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1521/
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1521/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-006_migrations.py/
>
> Error snippet from log:
>
> 
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 129, in wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 59, in wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 78, in wrapper
> prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
>   File 
> "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/006_migrations.py",
> line 139, in prepare_migration_attachments_ipv6
> engine, host_service, MIGRATION_NETWORK, ip_configuration)
>   File 
> "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test_utils/network_utils_v4.py",
> line 71, in modify_ip_config
> check_connectivity=True)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py",
> line 36729, in setup_networks
> return self._internal_action(action, 'setupnetworks', None,
> headers, query, wait)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 299, in _internal_action
> return future.wait() if wait else future
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 55, in wait
> return self._code(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 296, in callback
> self._check_fault(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 132, in _check_fault
> self._raise_error(response, body)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 118, in _raise_error
> raise error
> Error: Fault reason is "Operation Failed". Fault detail is "[Network
> error during communication with the Host.]". HTTP response code is
> 400.
>
>
>
> 
>


Same failure seems to have happened again - on a different patch -
this time for ovirt-engine:
https://gerrit.ovirt.org/#/c/89748/1

Failed test run:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1523/


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

