[ovirt-devel] Re: Patch Gating summary + FAQ

2019-09-19 Thread Marcin Sobczyk


On 9/19/19 4:08 PM, Ehud Yonasi wrote:


Hey everyone,

Following the presentation we did last week [1], I wanted to summarize 
the new patch gating workflow that will be pushed to oVirt soon and will 
impact all developers.



Summary:


The purpose of the new workflow is to verify patches earlier ("shift 
left"), before they are merged, and to provide much faster feedback to 
developers if their patch fails OST.



1.

Feedback from OST will now be posted directly to Gerrit instead of
requiring human intervention from the infra team to notify developers


We expect developers to check why their patch is not passing the gate 
(OST), debug it, find the root cause and fix it before merging the patch.



2.

Any concerns regarding the stability of OST should be communicated
and addressed ASAP.

The status today is that if OST fails post-merge gating, packages are 
not pushed to 'tested' and QE doesn't get to test them. The change with 
pre-merge gating is that patches won't be merged if OST fails, so if 
there are any fragile or flaky tests, they should be examined and fixed, 
skipped or removed by their maintainers.



3.

FYI, we are not removing the Merge button at this point, so
maintainers will still be able to merge patches that they are 100%
sure are not breaking the build or failing OST tests.


Please note that merging patches that break OST will cause it to start 
failing for all other patches, so we urge you not to bypass the gate if 
at all possible.



In the following section, I will explain more about Patch Gating, how to 
onboard your project, etc.




FAQ on oVirt’s Gating System and how to onboard your project on it:


Q. What is Patch Gating?

A. It is triggered pre-merge on patches and runs OST as the gating 
system tests, unlike today, where we have post-merge OST that runs 
after the patches are merged. This means developers get early feedback 
on whether their patches pass OST.



Q. What causes the gating process to start?

A. Once a patch is verified, has passed CI and has a Code-Review +2 
label, the gating process will be started. You will receive a message 
in the patch.



Q. How does it report results to my patches?

A. A comment will be posted on your patch with the URL of the failed job.



Q. How will my patch get merged?

A. If the patch has passed the gate (OST), Zuul (the new CI system 
for patch gating) will merge the patch automatically.




Q. How do I onboard my project?

A.

1.

Open a JIRA ticket or mail to infra-supp...@ovirt.org


2.

Create a file named 'zuul.yaml' under your project root (or
`zuul.d/zuul.yaml`) and fill it with the following content:


- project:
    templates:
      - ost-gated-project



Q. My project runs on STDCI V1, is that OK?

A. No, the patch gating logic runs on STDCI V2 only, meaning that you 
will have to move your project to V2.


If you need help regarding the transition to V2, you can open a JIRA [2] 
ticket or mail infra-supp...@ovirt.org, and visit the docs [3].



Q. What if I want to merge the patch regardless of OST results?

A. If you are a maintainer of the project, you can still merge it; we 
are not removing the Merge button option.


But merging when OST is failing can break your project, so merging on 
failure is not advised.



Q. What if my patch is failing because of a dependency on a patch in a 
different project?


A. Patch Gating (Zuul) has a mechanism for cross-project dependencies! 
All you need to do is add to the


commit message the URL of the patch you depend on:


Depends-On: https://gerrit.ovirt.org/patch_number


And they will be tested together.


Note: you can have multiple dependencies.


Q. How do I debug OST?

A. There are various ways of looking through the logs and output for 
errors:


1.

Blue Ocean view, where you can see the jobs that were run inside the
gate and find the suites which failed.

2.

ci_build_summary view, an internal tool to view the threads and jump
to the specific logs/artifacts.

3.

Test results analyzer, available if the tests were run. You can view
the failed tests and their output; OST maintainers and your team
leads should be able to assist.

4.

For further information on how to debug OST, please visit the OST FAQ.


Q. Will the current infrastructure be able to support all the patches?

A. The CI team has done tremendous work optimizing the infrastructure.

OST gating will run inside OpenShift pods, unlike before on bare-metal 
hosts, and right now we gain from that approximately 50 pods that can 
run in parallel; we will review adding more if the need arises.

[ovirt-devel] FYI: RAM needed to compile the engine

2019-09-19 Thread Fedor Gavrilov
Hi,

This is all out of curiosity and just FYI.

I noticed that the most recent engine master on CentOS 7.7 now requires about 
8 GB of memory to compile; otherwise it will fail during the GWT part. 6 GB is 
not good enough.
The strange thing is that about a month ago it worked perfectly fine for me on 
4 GB. Also, I am running it on a VM, so the old rule applies: that is 8 GB of 
starting memory, not max memory.
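
For reference, one way to give the build JVM a larger heap (a sketch only; 
whether the GWT compiler inherits this depends on how the engine build forks 
it, so treat it as an assumption to verify):

# Give Maven's JVM an 8 GB heap before building the engine.
# If the GWT compiler forks its own JVM, its heap is controlled by a
# build property instead (a gwt.jvmArgs-style setting; the exact name
# is an assumption here).
export MAVEN_OPTS="-Xmx8g"
make clean install-dev PREFIX="$HOME/ovirt-engine"
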
I wonder if anyone has had a similar experience.

Fedor
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/P5B7MAKXNKGELKLR4WYL3NMMDUBFHP2W/


[ovirt-devel] Re: [ovirt-users] CentOS 7.7 has been released: note it's not supported for oVirt 4.2

2019-09-19 Thread Logan Kuhn
Are there any specific incompatibilities between oVirt 4.2 and CentOS 7.7 or is 
it simply untested and won't be tested? 

Regards, 
Logan 

- On Sep 19, 2019, at 2:45 AM, Sandro Bonazzola  
wrote: 

| CentOS 7.7 has been released this week.
| Please note that CentOS 7.7 is not supported for oVirt 4.2 and older releases.
| If you want to upgrade to CentOS 7.7 please upgrade to oVirt 4.3 too.

| If you don't plan to upgrade to oVirt 4.3 please be sure your yum repo files
| point to http://mirror.centos.org/centos/7.6.1810/ instead of
| http://mirror.centos.org/centos/7/.
| The http://mirror.centos.org/centos/7/ URL is now pointing to
| http://mirror.centos.org/centos/7.7.1908/ so yum upgrade will bring you 7.7
| content which may break your 4.2 installation.

| Thanks,

| --

| Sandro Bonazzola

| MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

| Red Hat EMEA (https://www.redhat.com/)
| sbona...@redhat.com
| Red Hat respects your work life balance. Therefore
| there is no need to answer this email out of your office hours.

| ___
| Users mailing list -- us...@ovirt.org
| To unsubscribe send an email to users-le...@ovirt.org
| Privacy Statement: https://www.ovirt.org/site/privacy-policy/
| oVirt Code of Conduct:
| https://www.ovirt.org/community/about/community-guidelines/
| List Archives:
| 
https://lists.ovirt.org/archives/list/us...@ovirt.org/message/R52BOXOVIPJL6CON7X6S5WO4UVBKW52A/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RMAPRLMOX4CJLXDM6SJ4OARVJKC75HXT/


[ovirt-devel] Patch Gating summary + FAQ

2019-09-19 Thread Ehud Yonasi
Hey everyone,

Following the presentation we did last week [1], I wanted to summarize the
new patch gating workflow that will be pushed to oVirt soon and will impact
all developers.

Summary:

The purpose of the new workflow is to verify patches earlier ("shift left"),
before they are merged, and to provide much faster feedback to developers if
their patch fails OST.


   1.

   Feedback from OST will now be posted directly to Gerrit instead of
   requiring human intervention from the infra team to notify developers


We expect developers to check why their patch is not passing the gate
(OST), debug it, find the root cause and fix it before merging the patch.


   2.

   Any concerns regarding the stability of OST should be communicated and
   addressed ASAP.

The status today is that if OST fails post-merge gating, packages are not
pushed to 'tested' and QE doesn't get to test them. The change with pre-merge
gating is that patches won't be merged if OST fails, so if there are any
fragile or flaky tests, they should be examined and fixed, skipped or removed
by their maintainers.


   3.

   FYI, we are not removing the Merge button at this point, so maintainers
   will still be able to merge patches that they are 100% sure are not
   breaking the build or failing OST tests.


Please note that merging patches that break OST will cause it to start
failing for all other patches, so we urge you not to bypass the gate if at
all possible.

In the following section, I will explain more about Patch Gating, how to
onboard your project, etc.


FAQ on oVirt’s Gating System and how to onboard your project on it:

Q. What is Patch Gating?

A. It is triggered pre-merge on patches and runs OST as the gating system
tests, unlike today,

where we have post-merge OST that runs after the patches are merged. This
means developers get early feedback on whether their patches pass OST.

Q. What causes the gating process to start?
A. Once a patch is verified, has passed CI and has a Code-Review +2 label,
the gating process will be started. You will receive a message in the patch.
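
For context, a Zuul gate pipeline is typically wired to Gerrit approvals
along these lines (an illustrative sketch, not necessarily oVirt's exact
configuration):

- pipeline:
    name: gate
    manager: dependent
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - Code-Review: 2
    success:
      gerrit:
        Verified: 2
        submit: true
    failure:
      gerrit:
        Verified: -2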

Q. How does it report results to my patches?

A. A comment will be posted on your patch with the URL of the failed job.


Q. How will my patch get merged?

A. If the patch has passed the gate (OST), Zuul (the new CI system for
patch gating) will merge the patch automatically.


Q. How do I onboard my project?

A.

   1.

   Open a JIRA ticket or mail to infra-supp...@ovirt.org
   2.

   Create a file named 'zuul.yaml' under your project root (or
   `zuul.d/zuul.yaml`) and fill it with the following content:


- project:
    templates:
      - ost-gated-project


Q. My project runs on STDCI V1, is that OK?

A. No, the patch gating logic runs on STDCI V2 only, meaning that you will
have to move your project to V2.

If you need help regarding the transition to V2, you can open a JIRA [2]
ticket or mail infra-supp...@ovirt.org, and visit the docs [3].

Q. What if I want to merge the patch regardless of OST results?

A. If you are a maintainer of the project, you can still merge it; we are
not removing the Merge button option.

But merging when OST is failing can break your project, so merging on
failure is not advised.

Q. What if my patch is failing because of a dependency on a patch in a
different project?

A. Patch Gating (Zuul) has a mechanism for cross-project dependencies! All
you need to do is add to the

commit message the URL of the patch you depend on:

Depends-On: https://gerrit.ovirt.org/patch_number

And they will be tested together.

Note: you can have multiple dependencies.
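
For example, a hypothetical commit message with two dependencies (the
subject line and change numbers are placeholders):

engine: support the new widget flow

Depends-On: https://gerrit.ovirt.org/101234
Depends-On: https://gerrit.ovirt.org/105678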

Q. How do I debug OST?

A. There are various ways of looking through the logs and output for errors:

   1.

   Blue Ocean view, where you can see the jobs that were run inside the
   gate and find the suites which failed.
   2.

   ci_build_summary view, an internal tool to view the threads and jump
   to the specific logs/artifacts.
   3.

   Test results analyzer, available if the tests were run. You can view the
   failed tests and their output; OST maintainers and your team leads
   should be able to assist.
   4.

   For further information on how to debug OST, please visit the OST FAQ.


Q. Will the current infrastructure be able to support all the patches?

A. The CI team has done tremendous work optimizing the infrastructure.

OST gating will run inside OpenShift pods, unlike before on bare-metal
hosts, and right now we gain from that approximately 50 pods that can run
in parallel; we will review adding more if the need arises.

Q. When I have multiple patches, in which order will they be tested by the
gating system?

A. The patches will be tested as the flow 

[ovirt-devel] Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-19 Thread Barak Korren
On Thu, 19 Sep 2019 at 16:21, Yedidyah Bar David  wrote:

> On Thu, Sep 19, 2019 at 3:47 PM Barak Korren  wrote:
> >
> > I haven't seen any comments on this thread, so we are going to move
> forward with the change.
>
> I started writing some reply, then realized that the only effect on
> developers is when pushing patches to OST, not to their own project.
> Right? CQ will continue as normal, nightly runs, etc.? So I didn't
> reply...
>

Yeah, this only has to do with the big suits that are listed in $subject,
none of those are used by the CQ ATM.


>
> If so, that's fine for me.
>
> Please document that somewhere. Specifically, how to do the last two
> points in [1]:
>
> >
> > On Mon, 2 Sep 2019 at 09:03, Barak Korren  wrote:
> >>
> >> Adding Evgeny and Shirly who are AFAIK the owners of the metrics suit.
> >>
> >> On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:
> >>>
> >>> If you have been using or monitoring any OST suits recently, you may
> have noticed we've been suffering from long delays in allocating CI
> hardware resources for running OST suits. I'd like to briefly discuss the
> reasons behind this, what we are planning to do to resolve this and the
> implication of those actions for big suit owners.
> >>>
> >>> As you might know, we have moved a while ago from running OST suits
> each on its own dedicated server to running them inside containers managed
> by OpenShift. That had allowed us to run multiple OST suits on the same
> bare-metal host which in turn increased our overall capacity by 50% while
> still allowing us to free up hardware for accommodating the kubevirt
> project on our CI hardware.
> >>>
> >>> Our infrastructure is currently built in a way where we use the exact
> same POD specification (and therefore resource settings) for all suits.
> Making it more flexible at this point would require significant code
> changes we are not likely to make. What this means is that we need to make
> sure our PODs have enough resources to run the most demanding suits. It
> also means we waste some resources when running less demanding ones.
> >>>
> >>> Given the set of OST suits we have ATM, we sized our PODs to allocate
> 32GiB of RAM. Given the servers we have, this means we can run 15 suits at
> a time in parallel. This was sufficient for a while, but given increasing
> demand, and the expectation for it to increase further once we introduce
> the patch gating features we've been working on, we must find a way to
> significantly increase our suit running capacity.
> >>>
> >>> We have measured the amount of RAM required by each suit and came to
> the conclusion that for the vast majority of suits, we could settle for
> PODs that allocate only 14GiB of RAM. If we make that change, we would be
> able to run a total of 40 suits at a time, almost tripling our current
> capacity.
> >>>
> >>> The downside of making this change is that our STDCI V2 infrastructure
> will no longer be able to run suits that require more than 14GiB of RAM.
> This effectively means it would no longer be possible to run these suits
> from OST's check-patch job or from the OST manual job.
> >>>
> >>> The list of relevant suits that would be affected follows, the suit
> owners, as documented in the CI configuration, have been added as "to"
> recipients of the message:
> >>>
> >>> hc-basic-suite-4.3
> >>> hc-basic-suite-master
> >>> metrics-suite-4.3
> >>>
> >>> Since we're aware people would still like to be able to work with the
> bigger suits, we will leverage the nightly suit invocation jobs to enable
> them to be run in the CI infra. We will support the following use cases:
> >>>
> >>> Periodically running the suit on the latest oVirt packages - this will
> be done by the nightly job like it is done today
> >>> Running the suit to test changes to the suit's code - while currently
> this is done automatically by check-patch, this would have to be done
> manually in the future by triggering the nightly job and setting
> the REFSPEC parameter to point to the examined patch
> >>> Triggering the suit manually - This would be done by triggering the
> suit-specific nightly job (as opposed to the general OST manual job)
>
> [1] ^^
>
> >>>
> >>>  The patches listed below implement the changes outlined above:
> >>>
> >>> 102757 nightly-system-tests: big suits -> big containers
> >>> 102771: stdci: Drop `big` suits from check-patch
> >>>
> >>> We know that making the changes we presented will make things a little
> less convenient for users and maintainers of the big suits, but we believe
> the benefits of having vastly increased execution capacity for all other
> suits outweigh those shortcomings.
> >>>
> >>> We would like to hear all relevant comments and questions from the
> suit owners and other interested parties, especially if you think we
> should not carry out the changes we propose.
> >>> Please take the time to respond on this thread, or on the linked
> patches.
> >>>
> >>> Thanks,
> >>>
> >>> --
> >>> Barak 

[ovirt-devel] Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-19 Thread Yedidyah Bar David
On Thu, Sep 19, 2019 at 3:47 PM Barak Korren  wrote:
>
> I haven't seen any comments on this thread, so we are going to move forward 
> with the change.

I started writing some reply, then realized that the only effect on
developers is when pushing patches to OST, not to their own project.
Right? CQ will continue as normal, nightly runs, etc.? So I didn't
reply...

If so, that's fine for me.

Please document that somewhere. Specifically, how to do the last two
points in [1]:

>
> On Mon, 2 Sep 2019 at 09:03, Barak Korren  wrote:
>>
>> Adding Evgeny and Shirly who are AFAIK the owners of the metrics suit.
>>
>> On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:
>>>
>>> If you have been using or monitoring any OST suits recently, you may have 
>>> noticed we've been suffering from long delays in allocating CI hardware 
>>> resources for running OST suits. I'd like to briefly discuss the reasons 
>>> behind this, what we are planning to do to resolve this and the implication of 
>>> those actions for big suit owners.
>>>
>>> As you might know, we have moved a while ago from running OST suits each on 
>>> its own dedicated server to running them inside containers managed by 
>>> OpenShift. That had allowed us to run multiple OST suits on the same 
>>> bare-metal host which in turn increased our overall capacity by 50% while 
>>> still allowing us to free up hardware for accommodating the kubevirt 
>>> project on our CI hardware.
>>>
>>> Our infrastructure is currently built in a way where we use the exact same 
>>> POD specification (and therefore resource settings) for all suits. Making 
>>> it more flexible at this point would require significant code changes we 
>>> are not likely to make. What this means is that we need to make sure our 
>>> PODs have enough resources to run the most demanding suits. It also means 
>>> we waste some resources when running less demanding ones.
>>>
>>> Given the set of OST suits we have ATM, we sized our PODs to allocate 
>>> 32GiB of RAM. Given the servers we have, this means we can run 15 suits at 
>>> a time in parallel. This was sufficient for a while, but given increasing 
>>> demand, and the expectation for it to increase further once we introduce 
>>> the patch gating features we've been working on, we must find a way to 
>>> significantly increase our suit running capacity.
>>>
>>> We have measured the amount of RAM required by each suit and came to the 
>>> conclusion that for the vast majority of suits, we could settle for PODs 
>>> that allocate only 14GiB of RAM. If we make that change, we would be able 
>>> to run a total of 40 suits at a time, almost tripling our current capacity.
>>>
>>> The downside of making this change is that our STDCI V2 infrastructure will 
>>> no longer be able to run suits that require more than 14GiB of RAM. This 
>>> effectively means it would no longer be possible to run these suits from 
>>> OST's check-patch job or from the OST manual job.
>>>
>>> The list of relevant suits that would be affected follows, the suit owners, 
>>> as documented in the CI configuration, have been added as "to" recipients of 
>>> the message:
>>>
>>> hc-basic-suite-4.3
>>> hc-basic-suite-master
>>> metrics-suite-4.3
>>>
>>> Since we're aware people would still like to be able to work with the 
>>> bigger suits, we will leverage the nightly suit invocation jobs to enable 
>>> them to be run in the CI infra. We will support the following use cases:
>>>
>>> Periodically running the suit on the latest oVirt packages - this will be 
>>> done by the nightly job like it is done today
>>> Running the suit to test changes to the suit's code - while currently this 
>>> is done automatically by check-patch, this would have to be done manually 
>>> in the future by triggering the nightly job and setting the 
>>> REFSPEC parameter to point to the examined patch
>>> Triggering the suit manually - This would be done by triggering the 
>>> suit-specific nightly job (as opposed to the general OST manual job)

[1] ^^

>>>
>>>  The patches listed below implement the changes outlined above:
>>>
>>> 102757 nightly-system-tests: big suits -> big containers
>>> 102771: stdci: Drop `big` suits from check-patch
>>>
>>> We know that making the changes we presented will make things a little less 
>>> convenient for users and maintainers of the big suits, but we believe the 
>>> benefits of having vastly increased execution capacity for all other suits 
>>> outweigh those shortcomings.
>>>
>>> We would like to hear all relevant comments and questions from the suit 
>>> owners and other interested parties, especially if you think we should not 
>>> carry out the changes we propose.
>>> Please take the time to respond on this thread, or on the linked patches.
>>>
>>> Thanks,
>>>
>>> --
>>> Barak Korren
>>> RHV DevOps team , RHCE, RHCi
>>> Red Hat EMEA
>>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> 

[ovirt-devel] Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-19 Thread Barak Korren
I haven't seen any comments on this thread, so we are going to move forward
with the change.

On Mon, 2 Sep 2019 at 09:03, Barak Korren  wrote:

> Adding Evgeny and Shirly who are AFAIK the owners of the metrics suit.
>
> On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:
>
>> If you have been using or monitoring any OST suits recently, you may have
>> noticed we've been suffering from long delays in allocating CI hardware
>> resources for running OST suits. I'd like to briefly discuss the reasons
>> behind this, what we are planning to do to resolve this and the implication of
>> those actions for big suit owners.
>>
>> As you might know, we have moved a while ago from running OST suits each
>> on its own dedicated server to running them inside containers managed by
>> OpenShift. That had allowed us to run multiple OST suits on the same
>> bare-metal host which in turn increased our overall capacity by 50% while
>> still allowing us to free up hardware for accommodating the kubevirt
>> project on our CI hardware.
>>
>> Our infrastructure is currently built in a way where we use the exact
>> same POD specification (and therefore resource settings) for all suits.
>> Making it more flexible at this point would require significant code
>> changes we are not likely to make. What this means is that we need to make
>> sure our PODs have enough resources to run the most demanding suits. It
>> also means we waste some resources when running less demanding ones.
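
As an aside, an illustrative sketch of what such a fixed per-suit resource
request can look like in an OpenShift pod spec (the names and values below
are assumptions, not the actual CI configuration):

apiVersion: v1
kind: Pod
metadata:
  name: ost-suit-runner         # illustrative name
spec:
  containers:
    - name: suit
      image: ost-runner:latest  # illustrative image
      resources:
        requests:
          memory: 14Gi          # one size shared by all suits
        limits:
          memory: 14Gi
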
>>
>> Given the set of OST suits we have ATM, we sized our PODs to allocate
>> 32GiB of RAM. Given the servers we have, this means we can run 15 suits at
>> a time in parallel. This was sufficient for a while, but given increasing
>> demand, and the expectation for it to increase further once we introduce
>> the patch gating features we've been working on, we must find a way to
>> significantly increase our suit running capacity.
>>
>> We have measured the amount of RAM required by each suit and came to the
>> conclusion that for the vast majority of suits, we could settle for PODs
>> that allocate only 14GiB of RAM. If we make that change, we would be able
>> to run a total of 40 suits at a time, almost tripling our current capacity.
>>
>> The downside of making this change is that our STDCI V2 infrastructure
>> will no longer be able to run suits that require more than 14GiB of RAM.
>> This effectively means it would no longer be possible to run these suits
>> from OST's check-patch job or from the OST manual job.
>>
>> The list of relevant suits that would be affected follows, the suit
>> owners, as documented in the CI configuration, have been added as "to"
>> recipients of the message:
>>
>>- hc-basic-suite-4.3
>>- hc-basic-suite-master
>>- metrics-suite-4.3
>>
>> Since we're aware people would still like to be able to work with the
>> bigger suits, we will leverage the nightly suit invocation jobs to enable
>> them to be run in the CI infra. We will support the following use cases:
>>
>>- *Periodically running the suit on the latest oVirt packages* - this
>>will be done by the nightly job like it is done today
>>- *Running the suit to test changes to the suit's code* - while
>>currently this is done automatically by check-patch, this would have to be
>>done manually in the future by triggering the nightly job and setting the
>>REFSPEC parameter to point to the examined patch (see the refspec example
>>after this list)
>>- *Triggering the suit manually* - This would be done by triggering
>>the suit-specific nightly job (as opposed to the general OST manual job)
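
For illustration, the REFSPEC parameter takes a Gerrit refspec of the form
refs/changes/<NN>/<change>/<patchset>, where <NN> is the last two digits of
the change number; a hypothetical value for change 102757, patchset 1:

REFSPEC=refs/changes/57/102757/1
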
>>
>>  The patches listed below implement the changes outlined above:
>>
>>- 102757  nightly-system-tests: big
>>suits -> big containers
>>- 102771 : stdci: Drop `big` suits
>>from check-patch
>>
>> We know that making the changes we presented will make things a little
>> less convenient for users and maintainers of the big suits, but we believe
>> the benefits of having vastly increased execution capacity for all other
>> suits outweigh those shortcomings.
>>
>> We would like to hear all relevant comments and questions from the suit
>> owners and other interested parties, especially if you think we should not
>> carry out the changes we propose.
>> Please take the time to respond on this thread, or on the linked patches.
>>
>> Thanks,
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: 

[ovirt-devel] Re: OST fails

2019-09-19 Thread Vojtech Juranek
> I've stumbled upon a similar problem on my local OST run,

It should already be fixed, thanks to Galit.

> although I have even more unsatisfied rpm dependencies [1]
> 
> Andrej
> 
> [1] http://pastebin.test.redhat.com/798792
> 
> On Tue, Sep 17, 2019 at 4:46 PM Vojtech Juranek  wrote:
> > Hi,
> > OST has started to fail today, failing with unsatisfied rpm
> > dependencies,
> > e.g.
> > 
> > + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp net-snmp
> > ovirt-engine
> > ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins
> > cronie
> > Error: Package: rsyslog-mmjsonparse-8.24.0-38.el7.x86_64 (alocalsync)
> > 
> > Requires: rsyslog = 8.24.0-38.el7
> > Installed: rsyslog-8.24.0-34.el7.x86_64 (installed)
> > 
> > rsyslog = 8.24.0-34.el7
> > 
> > and many more, see [1] for more details.
> > 
> > Any idea what's wrong?
> > 
> > Thanks
> > Vojta
> > 
> > 
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5601/console
> > ___
> > Devel mailing list -- devel@ovirt.org
> > To unsubscribe send an email to devel-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AY6D4RVQWSAQ
> > ORJHWGW2ZY3OPW2ZDY4H/



signature.asc
Description: This is a digitally signed message part.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/O7FT7IUHPDCWULFQIP5UAYYFWXW6RDFP/


[ovirt-devel] Re: Certificate checking on TLS migrations to an IP address

2019-09-19 Thread Milan Zamazal
Daniel P. Berrangé  writes:

> On Wed, Sep 18, 2019 at 12:18:32PM +0200, Milan Zamazal wrote:
>> Daniel P. Berrangé  writes:
>> 
>
>> > On Wed, Sep 04, 2019 at 03:38:25PM +0200, Milan Zamazal wrote:
>> >> Hi, I'm trying to add TLS migrations to oVirt, but I've hit a problem
>> >> with certificate checking.
>> >
>> >> 
>> >> oVirt uses the destination host IP address, rather than the host name,
>> >> in the migration URI passed to virDomainMigrateToURI3.  One reason for
>> >> doing that is that a separate migration network may be used for
>> >> migrations, while the host name resolves to the management network
>> >> interface.
>> >> 
>> >> But it causes a problem with certificate checking.  The destination IP
>> >> address is checked against the name, which is a host name, given in the
>> >> destination certificate.  That means there is mismatch and the migration
>> >> fails.  I don't think it'd be a very good idea to avoid the problem by
>> >> putting IP addresses into server certificates.
>> >
>> > In fact that is *exactly* what you should be doing.
>> >
>> > Traditionally certificates were created with the 'common name' field
>> > holding the fully qualified DNS based hostname for the server.
>> >
>> > This was long known to be a problem because it is very common for
>> > servers to have multiple DNS names, or for clients to use the
>> > unqualified hostname, or use the IP address(es).
>> 
>> The problem with putting IP addresses into certificates is that the
>> certificate must be updated each time an IP address changes, is added or
>> is removed.  Doing this in oVirt would be complicated and error-prone.
>> While host names are stable, host networks and the related IP addresses
>> may change.
>> 
>> > Thus, the "Subject alt name" extension was created. This allows
>> > certificates to be created containing multiple hostnames and
>> > multiple IP addresses. The certificate will be validated correctly
>> > if any one of those data items matches. When 'subject alt name' is
>> > present in a certificate, the 'common name' field should be completely
>> > ignored by compliant TLS clients, so you are free to put whatever
>> > you want in the common name - hostname or IP address or blah...
>> 
>> We can switch to using Subject Alt Name and we have a patch for that now
>> based on your advice, but it doesn't solve the problem with tracking IP
>> address changes and updating the corresponding certificates whenever a
>> change occurs.
>> 
>> > If you look at our docs, we updated them to illustrate how to
>> > issue certs containing hostnames + IP addresses:
>> >
>> > https://libvirt.org/remote.html#Remote_TLS_server_certificates
>> >
>> >> 
> >> Is there any way to make TLS migrations work under these
>> >> circumstances?  For instance, SPICE remote-viewer allows the client to
>> >> specify the certificate subject to expect on the host when connecting to
>> >> it using an IP address.  Can (or could) libvirt do something similar?
>> 
>> Would it be possible?  We have host names in the certificates under our
>> control and we know which host name to expect in the certificate
>> regardless the IP address used for the given connection.  Checking the
>> certificate against a given host name would solve the problem easily and
>> robustly for us.
>
> There's two options that could make it work
>
>  - Define a new migration parameter which lets apps pass in the hostname
>to use for TLS cert validation to libvirt, which would have to then
>pass it into QEMU

I think this is the best option.  We know the destination host name,
while we need to use an IP address to connect to it in order to use a
particular network.

>  - The source host libvirtd has a connection to the dest host libvirtd.
>It can thus ask dest host what its primary hostname is, and then
>automatically tell QEMu to use that for TLS cert validation. This
>could cause problems though for people already using TLS certs
>with IP addresses in.

This doesn't look very good from the security point of view, since then
the source doesn't check it really connects to the host it expects, just
that the destination host has a valid certificate signed by the right CA
(I suppose).  It may be good enough or even useful for some scenarios,
but not for others.

Thanks,
Milan
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/BN4HV3QGGKPF7IH62ZK5TE5RMDXZ5KOE/


[ovirt-devel] Re: OST fails

2019-09-19 Thread Andrej Cernek
Hi,
I've stumbled upon a similar problem on my local OST run,
although I have even more unsatisfied rpm dependencies [1].

Andrej

[1] http://pastebin.test.redhat.com/798792

On Tue, Sep 17, 2019 at 4:46 PM Vojtech Juranek  wrote:

> Hi,
> OST has started to fail today, failing with unsatisfied rpm
> dependencies,
> e.g.
>
> + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp net-snmp
> ovirt-engine
> ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins
> cronie
> Error: Package: rsyslog-mmjsonparse-8.24.0-38.el7.x86_64 (alocalsync)
> Requires: rsyslog = 8.24.0-38.el7
> Installed: rsyslog-8.24.0-34.el7.x86_64 (installed)
> rsyslog = 8.24.0-34.el7
>
> and many more, see [1] for more details.
>
> Any idea what's wrong?
>
> Thanks
> Vojta
>
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5601/console
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AY6D4RVQWSAQORJHWGW2ZY3OPW2ZDY4H/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SW34WMG5BCZP5HBLEVGAAWLNPHITJR36/


[ovirt-devel] Re: [User is not authorized to perform this action.]

2019-09-19 Thread Fedor Gavrilov
Hello,

Maybe you can try calling GET on
vms/74a1a99f-87d1--b394-f90e4f29f51f/diskattachments for testing purposes?
I suggest using a REST client, since it takes care of authentication when
needed, and to make sure the problem is not in the code. In a dev environment
oVirt uses basic authentication, and that is something you have to set up when
calling this API. For example, for the admin user the username would be
admin@internal and the password is whatever you specified during setup.
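
A minimal sketch of such a request with curl (ENGINE_FQDN, VM_ID and the
password are placeholders; -k skips certificate verification, acceptable
only in a dev setup):

curl -k -u 'admin@internal:PASSWORD' \
  -H 'Accept: application/xml' \
  'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/diskattachments'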

Fedor 

- Original Message -
From: smidhun...@gmail.com
To: devel@ovirt.org
Sent: Wednesday, September 18, 2019 3:39:28 PM
Subject: [ovirt-devel] [User is not authorized to perform this action.]

I am getting an error: "User is not authorized to perform this action." I am
using the oVirt API to manually attach a disk and snapshot to another virtual
machine. This is the code I used. The API I followed is:

'
   true
   false
virtio


My disk
 cow
mydisk
  
   



==
The php code i have tried is ...

public function attachSnapshotandActivateDisk( ) {


$data = array();

$xmlStr = '
   true
   false
virtio


My disk
 cow
mydisk
  
   

 ';

$curlParam = array(
"url" => "vms/74a1a99f-87d1--b394-f90e4f29f51f/diskattachments",
"method" => "POST",
"data" => $xmlStr
);

// $text = print_r($curlParam, true);
//echo "console.log( 'curl 
Parameter".json_encode($text)."');";

$data = $this->processCurlRequest($curlParam, "vm-create");

// if instance update failed
if ($data['status'] != 'success') {
Common::ovirtLog(array(
"requestParam" => $curlParam,
"responseParam" => $data,
));
}

// show the output of print_r function as a string
//  $text = print_r($data, true);
echo $data['message'];
die;
echo " console.log('checking the attach 
snapshot activate disk function diskid=" . json_encode($data) . "') ";


//Detach the disk
//  $this->detachSnapshotDisk() ; 

return $data;
}


Please help me.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/42Z54W3JDZSTAN6PIOGWGGMVNZ5IDA47/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LXAX63LLZCQKR4H2WCSCDCYGJRXGXYOQ/


[ovirt-devel] Re: vdsm has been tagged (v4.30.31)

2019-09-19 Thread Sandro Bonazzola
Il giorno gio 19 set 2019 alle ore 11:01 Evgheni Dereveanchin <
edere...@redhat.com> ha scritto:

> Since the patch should have no effect on CentOS, and the job failure
> indeed looks like an infra failure caused by mirror inconsistency or mock
> caches. I've re-triggered the job, let's see if this run succeeds.
> Otherwise will go ahead and delete mock caches on our ppc64le VMs.
>

rebuild worked, thanks


>
> On Thu, Sep 19, 2019 at 9:35 AM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno gio 19 set 2019 alle ore 09:31 Anton Marchukov <
>> amarc...@redhat.com> ha scritto:
>>
>>> Please note that “yum” errors might be related to recent CentOS 7.7
>>> release. We saw a couple of places where we had to purge the yum cache as
>>> yum is confused with metadata change after the release. From the error it
>>> looks to be the case here.
>>>
>>>
>> can you please purge the yum cache and re-trigger the build?
>>
>>
>>
>>> > On 19 Sep 2019, at 09:25, Sandro Bonazzola 
>>> wrote:
>>> >
>>> >
>>> >
>>> > Il giorno mer 18 set 2019 alle ore 17:45 Milan Zamazal <
>>> mzama...@redhat.com> ha scritto:
>>> >
>>> >
>>> > vdsm v4.30.31 didn't pass CI build
>>> https://jenkins.ovirt.org/job/vdsm_standard-on-merge/1551/
>>> > so it can't be released. Please check.
>>> >
>>> >
>>> > --
>>> > Sandro Bonazzola
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>> > Red Hat EMEA
>>> > sbona...@redhat.com
>>> >
>>> > Red Hat respects your work life balance. Therefore there is no need to
>>> answer this email out of your office hours.
>>>
>>> --
>>> Anton Marchukov
>>> Associate Manager - RHV DevOps - Red Hat
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> *Red Hat respects your work life balance.
>> Therefore there is no need to answer this email out of your office hours.
>> *
>>
>
>
> --
> Regards,
> Evgheni Dereveanchin
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JKE345VGPHW2NU6JNVCG34T4VKSWXIZK/


[ovirt-devel] how to login into ovirts particular virtualmachine from a remote client

2019-09-19 Thread smidhunraj
Is there any way I can log in to a particular virtual machine in oVirt from
my local machine? Please help me.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IJFYGANLRFU5GSBPVVTLW6F5NP6T5XMD/




[ovirt-devel] Re: vdsm has been tagged (v4.30.31)

2019-09-19 Thread Evgheni Dereveanchin
Since the patch should have no effect on CentOS, and the job failure
indeed looks like an infra failure caused by mirror inconsistency or mock
caches. I've re-triggered the job, let's see if this run succeeds.
Otherwise will go ahead and delete mock caches on our ppc64le VMs.

On Thu, Sep 19, 2019 at 9:35 AM Sandro Bonazzola 
wrote:

>
>
> Il giorno gio 19 set 2019 alle ore 09:31 Anton Marchukov <
> amarc...@redhat.com> ha scritto:
>
>> Please note that “yum” errors might be related to recent CentOS 7.7
>> release. We saw a couple of places where we had to purge the yum cache as
>> yum is confused with metadata change after the release. From the error it
>> looks to be the case here.
>>
>>
> can you please purge the yum cache and re-trigger the build?
>
>
>
>> > On 19 Sep 2019, at 09:25, Sandro Bonazzola  wrote:
>> >
>> >
>> >
>> > Il giorno mer 18 set 2019 alle ore 17:45 Milan Zamazal <
>> mzama...@redhat.com> ha scritto:
>> >
>> >
>> > vdsm v4.30.31 didn't pass CI build
>> https://jenkins.ovirt.org/job/vdsm_standard-on-merge/1551/
>> > so it can't be released. Please check.
>> >
>> >
>> > --
>> > Sandro Bonazzola
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>> > Red Hat EMEA
>> > sbona...@redhat.com
>> >
>> > Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>>
>> --
>> Anton Marchukov
>> Associate Manager - RHV DevOps - Red Hat
>>
>>
>>
>>
>>
>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> *
>


-- 
Regards,
Evgheni Dereveanchin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/F3APZC4PJQ2P34HHWCQPHWULLCZKVH7W/


[ovirt-devel] Re: Certificate checking on TLS migrations to an IP address

2019-09-19 Thread Daniel P . Berrangé
On Wed, Sep 18, 2019 at 12:18:32PM +0200, Milan Zamazal wrote:
> Daniel P. Berrangé  writes:
> 
> > On Wed, Sep 04, 2019 at 03:38:25PM +0200, Milan Zamazal wrote:
> >> Hi, I'm trying to add TLS migrations to oVirt, but I've hit a problem
> >> with certificate checking.
> >
> >> 
> >> oVirt uses the destination host IP address, rather than the host name,
> >> in the migration URI passed to virDomainMigrateToURI3.  One reason for
> >> doing that is that a separate migration network may be used for
> >> migrations, while the host name resolves to the management network
> >> interface.
> >> 
> >> But it causes a problem with certificate checking.  The destination IP
> >> address is checked against the name, which is a host name, given in the
> >> destination certificate.  That means there is mismatch and the migration
> >> fails.  I don't think it'd be a very good idea to avoid the problem by
> >> putting IP addresses into server certificates.
> >
> > In fact that is *exactly* what you should be doing.
> >
> > Traditionally certificates were created with the 'common name' field
> > holding the fully qualified DNS based hostname for the server.
> >
> > This was long known to be a problem because it is very common for
> > servers to have multiple DNS names, or for clients to use the
> > unqualified hostname, or use the IP address(es).
> 
> The problem with putting IP addresses into certificates is that the
> certificate must be updated each time an IP address changes, is added or
> is removed.  Doing this in oVirt would be complicated and error-prone.
> While host names are stable, host networks and the related IP addresses
> may change.
> 
> > Thus, the "Subject alt name" extension was created. This allows
> > certificates to be created containing multiple hostnames and
> > multiple IP addresses. The certificate will be validated correctly
> > if any one of those data items matches. When 'subject alt name' is
> > present in a certificate, the 'common name' field should be completely
> > ignored by compliant TLS clients, so you are free to put whatever
> > you want in the common name - hostname or IP address or blah...
> 
> We can switch to using Subject Alt Name and we have a patch for that now
> based on your advice, but it doesn't solve the problem with tracking IP
> address changes and updating the corresponding certificates whenever a
> change occurs.
> 
> > If you look at our docs, we updated them to illustrate how to
> > issue certs containing hostnames + IP addresses:
> >
> > https://libvirt.org/remote.html#Remote_TLS_server_certificates
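
For illustration, a certtool template in the spirit of that page, combining
hostnames and IP addresses (all values below are placeholders):

# server.info -- certtool template for a server certificate
organization = Example Org
cn = host1.example.org
dns_name = host1
dns_name = host1.example.org
ip_address = 10.0.0.74
ip_address = 192.168.1.24
tls_www_server
encryption_key
signing_key
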
> >
> >> 
> >> Is there any way to make TLS migrations work under these
> >> circumstances?  For instance, SPICE remote-viewer allows the client to
> >> specify the certificate subject to expect on the host when connecting to
> >> it using an IP address.  Can (or could) libvirt do something similar?
> 
> Would it be possible?  We have host names in the certificates under our
> control and we know which host name to expect in the certificate
> regardless the IP address used for the given connection.  Checking the
> certificate against a given host name would solve the problem easily and
> robustly for us.

There's two options that could make it work

 - Define a new migration parameter which lets apps pass in the hostname
   to use for TLS cert validation to libvirt, which would have to then
   pass it into QEMU

 - The source host libvirtd has a connection to the dest host libvirtd.
   It can thus ask dest host what its primary hostname is, and then
   automatically tell QEMu to use that for TLS cert validation. This
   could cause problems though for people already using TLS certs
   with IP addresses in.
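
To make the failure mode concrete, a sketch of the kind of invocation that
trips over this today (host names, addresses and the VM name are
placeholders):

# p2p TLS migration where the migration URI is an IP on the migration
# network; QEMU validates the destination's certificate against the IP
# from --migrateuri, while the certificate carries only the destination's
# host name, so the TLS handshake fails.
virsh migrate --live --p2p --tls \
  --migrateuri tcp://10.0.0.42 \
  myvm qemu+tls://dest.example.org/system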

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HL4YMURQMSU4YT327HZ2LOPM656P3AGQ/


[ovirt-devel] CentOS 7.7 has been released: note it's not supported for oVirt 4.2

2019-09-19 Thread Sandro Bonazzola
CentOS 7.7 has been released this week.
Please note that CentOS 7.7 is not supported for oVirt 4.2 and older
releases.
If you want to upgrade to CentOS 7.7 please upgrade to oVirt 4.3 too.

If you don't plan to upgrade to oVirt 4.3, please be sure your yum repo
files point to http://mirror.centos.org/centos/7.6.1810/ instead of
http://mirror.centos.org/centos/7/.
The http://mirror.centos.org/centos/7/ URL is now pointing to
http://mirror.centos.org/centos/7.7.1908/ so yum upgrade will bring you 7.7
content which may break your 4.2 installation.
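
For example, a minimal sketch of a pinned base repo entry (the repo id and
name are illustrative):

[base]
name=CentOS-7.6.1810 - Base
baseurl=http://mirror.centos.org/centos/7.6.1810/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7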

Thanks,

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/R52BOXOVIPJL6CON7X6S5WO4UVBKW52A/


[ovirt-devel] Re: vdsm has been tagged (v4.30.31)

2019-09-19 Thread Sandro Bonazzola
Il giorno gio 19 set 2019 alle ore 09:31 Anton Marchukov <
amarc...@redhat.com> ha scritto:

> Please note that “yum” errors might be related to recent CentOS 7.7
> release. We saw a couple of places where we had to purge the yum cache as
> yum is confused with metadata change after the release. From the error it
> looks to be the case here.
>
>
can you please purge the yum cache and re-trigger the build?



> > On 19 Sep 2019, at 09:25, Sandro Bonazzola  wrote:
> >
> >
> >
> > Il giorno mer 18 set 2019 alle ore 17:45 Milan Zamazal <
> mzama...@redhat.com> ha scritto:
> >
> >
> > vdsm v4.30.31 didn't pass CI build
> https://jenkins.ovirt.org/job/vdsm_standard-on-merge/1551/
> > so it can't be released. Please check.
> >
> >
> > --
> > Sandro Bonazzola
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> > Red Hat EMEA
> > sbona...@redhat.com
> >
> > Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
> --
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat
>
>
>
>
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/D6NVL2SSM53D2U44JRQ7RWOWKZF6MN57/


[ovirt-devel] Re: vdsm has been tagged (v4.30.31)

2019-09-19 Thread Anton Marchukov
Please note that “yum” errors might be related to the recent CentOS 7.7
release. We saw a couple of places where we had to purge the yum cache, as yum
gets confused by the metadata change after the release. From the error, it
looks to be the case here.
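
On an affected builder, purging the cache is typically along these lines (a
sketch, assuming root):

yum clean all          # drop cached metadata and packages
rm -rf /var/cache/yum  # remove any leftover cache directories
yum makecache          # optionally rebuild metadata from the mirrors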

> On 19 Sep 2019, at 09:25, Sandro Bonazzola  wrote:
> 
> 
> 
> Il giorno mer 18 set 2019 alle ore 17:45 Milan Zamazal  
> ha scritto:
> 
> 
> vdsm v4.30.31 didn't pass CI build 
> https://jenkins.ovirt.org/job/vdsm_standard-on-merge/1551/
> so it can't be released. Please check.
> 
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com   
> 
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.

-- 
Anton Marchukov
Associate Manager - RHV DevOps - Red Hat





___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PUHQUNG44HFKA7NQJNR77443S7USTMVJ/


[ovirt-devel] Re: vdsm has been tagged (v4.30.31)

2019-09-19 Thread Sandro Bonazzola
Il giorno mer 18 set 2019 alle ore 17:45 Milan Zamazal 
ha scritto:

>

vdsm v4.30.31 didn't pass CI build
https://jenkins.ovirt.org/job/vdsm_standard-on-merge/1551/
so it can't be released. Please check.


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RPYWWDHUSSE2NMNTKPV4LJEJJT32X75I/