[ovirt-devel] Re: oVirt 4.6 OS versions

2023-12-21 Thread Michal Skrivanek


> On 11. 12. 2023, at 2:36, Diggy Mc  wrote:
> 
> Has it yet been decided what OS and versions will be used for oVirt 4.6 node 
> and hosted engine?

no, not really. There's very little new development these days, mostly 
bugfixes, and there doesn't seem to be anyone stepping up to add anything 
significant. That said, the latest 4.5.z should work decently on CentOS Stream 9.

Thanks,
michal

> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZLRAYOTT5V32V3237GVPOHEUSAIH7EKC/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EZ4D6XI7NCLSGS3NNTBLQ2I5R4ZXEYAV/


[ovirt-devel] Re: ovirt-system-tests

2023-02-08 Thread Michal Skrivanek


> On 7. 2. 2023, at 18:55, stephan.du...@bareos.com wrote:
> 
> Hi,
> 
> about two years ago, I used OST to set up an oVirt test/dev environment.
> 
> Now I tried again, following  
> https://github.com/oVirt/ovirt-system-tests/blob/master/README.md
> but it fails because https://templates.ovirt.org/yum/ is unavailable.
> 
> Or isn't it meant to be available for public access any more?

Hi,
we had to reduce the hosting a bit, and since the repo is not used by any of the 
currently running CI systems, we decided to shut it down a month ago.
It's not a problem to keep building the images; we just don't have a good place 
to store them.

Any machine capable of running OST is also capable of building the images from 
https://github.com/oVirt/ost-images
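Roughly, building them locally might look like this (a sketch; the actual make targets are defined in the ost-images README, and the target name below is a guess, not taken from this thread):

```shell
# Build the OST images locally instead of downloading prebuilt ones.
# The target name below is hypothetical -- check the ost-images README.
git clone https://github.com/oVirt/ost-images.git
cd ost-images
make el9stream   # hypothetical target for the el9stream qcow images
```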

Thanks,
michal

> 
> Regards,
> Stephan


[ovirt-devel] Re: oVirt engine builds on el9

2022-09-23 Thread Michal Skrivanek
With that you now have to either
define OFFLINE_BUILD=0 for make to still use Maven Central to download all jars,
or
install ovirt-engine-build-dependencies to get all the jars locally.
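A sketch of what the two options might look like on a developer box (the make target and prefix are illustrative assumptions, not a documented recipe from this thread):

```shell
# Option 1 (hypothetical invocation): keep downloading jars from Maven Central
make clean install-dev PREFIX="$HOME/ovirt-engine" OFFLINE_BUILD=0

# Option 2: install the bundled jars once, then build offline as before
sudo dnf install ovirt-engine-build-dependencies
make clean install-dev PREFIX="$HOME/ovirt-engine"
```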

> On 23. 9. 2022, at 9:13, Michal Skrivanek  wrote:
> 
> Hi all,
> we are going to bundle maven dependencies in a new package 
> ovirt-engine-build-dependencies. This allows us to use CBS to build 
> ovirt-engine like any other package, and enables el9 builds.
> This is just a build dependency; there's no change to the built product.
> We will now start publishing el9stream builds of ovirt-engine and start 
> switching testing to el9.
> 
> Thanks,
> michal


[ovirt-devel] oVirt engine builds on el9

2022-09-23 Thread Michal Skrivanek
Hi all,
we are going to bundle maven dependencies in a new package 
ovirt-engine-build-dependencies. This allows us to use CBS to build 
ovirt-engine like any other package, and enables el9 builds.
This is just a build dependency; there's no change to the built product.
We will now start publishing el9stream builds of ovirt-engine and start 
switching testing to el9.

Thanks,
michal


[ovirt-devel] Re: oVirt bug reports to move from bugzilla to github issues in future

2022-09-23 Thread Michal Skrivanek
Hi all,
I'm planning to close bugzilla for new submissions today. Please use GitHub 
issues instead (even if a README says otherwise, until we update all of them :).
Existing bugs are not affected.

Thanks,
michal

> On 6. 9. 2022, at 15:56, Michal Skrivanek  wrote:
> 
> Hi all,
> as a final stage of our gerrit-to-github transition that started ~9 months 
> ago, we are planning to eliminate the use of bugzilla.redhat.com for all oVirt 
> projects (bugs with Classification: "oVirt") and use the native issue 
> tracking in github as well. We used to have integrations with gerrit and 
> bugzilla that we moved to github actions instead, and the overhead (and 
> notorious slowness) of bugzilla.redhat.com has become the only "benefit" of 
> using it these days.
> There are about 50 bugs total left in the oVirt bugzilla, so it's not that 
> much to move; the biggest change is that new bugs are to be filed elsewhere.
> 
> This is just a heads-up for now; we haven't set a cut-off date just yet, but 
> you can expect this change in the coming weeks.
> 
> Thanks,
> michal


[ovirt-devel] oVirt bug reports to move from bugzilla to github issues in future

2022-09-06 Thread Michal Skrivanek
Hi all,
as a final stage of our gerrit-to-github transition that started ~9 months ago, 
we are planning to eliminate the use of bugzilla.redhat.com for all oVirt 
projects (bugs with Classification: "oVirt") and use the native issue tracking 
in github as well. We used to have integrations with gerrit and bugzilla that 
we moved to github actions instead, and the overhead (and notorious slowness) 
of bugzilla.redhat.com has become the only "benefit" of using it these days.
There are about 50 bugs total left in the oVirt bugzilla, so it's not that much 
to move; the biggest change is that new bugs are to be filed elsewhere.

This is just a heads-up for now; we haven't set a cut-off date just yet, but 
you can expect this change in the coming weeks.

Thanks,
michal


[ovirt-devel] vbmc - simple IPMI support for oVirt VMs

2022-09-05 Thread Michal Skrivanek
Hi all,
I wanted to share the news about a new oVirt (mini)project - vbmc [1]. It's 
fairly small and simple, but it may be handy if you'd like to treat oVirt VMs 
as bare-metal machines controllable via IPMI. It only implements power on, 
power off, reboot, and setting the boot order, but that should be enough for 
bootstrapping over PXE.

Thanks,
michal

[1] https://github.com/oVirt/vbmc
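For illustration, driving such a VM with a standard IPMI client could look like this (host, port, and credentials are made-up placeholders; only the operations listed above are expected to work):

```shell
# Placeholder endpoint/credentials -- point these at your vbmc instance.
ipmitool -I lanplus -H vbmc.example.com -p 623 -U admin -P secret power status
ipmitool -I lanplus -H vbmc.example.com -p 623 -U admin -P secret power on
# ask for network boot, then power-cycle to bootstrap over PXE
ipmitool -I lanplus -H vbmc.example.com -p 623 -U admin -P secret chassis bootdev pxe
ipmitool -I lanplus -H vbmc.example.com -p 623 -U admin -P secret power reset
```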


[ovirt-devel] Re: github testing: merge with branch, or use PR HEAD?

2022-08-16 Thread Michal Skrivanek


> On 11. 8. 2022, at 8:24, Yedidyah Bar David  wrote:
> 
> On Thu, Jul 21, 2022 at 5:04 PM Scott Dickerson  wrote:
>> 
>> 
>> 
>> On Thu, Jul 21, 2022 at 9:35 AM Michal Skrivanek  wrote:
>>> 
>>> 
>>> 
>>> On 21. 7. 2022, at 9:09, Yedidyah Bar David  wrote:
>>> 
>>> On Fri, Jul 8, 2022 at 11:30 AM Martin Perina  wrote:
>>>> 
>>>> 
>>>> 
>>>> On Fri, Jul 8, 2022 at 10:27 AM Michal Skrivanek  
>>>> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>>> On 7. 7. 2022, at 19:28, Nir Soffer  wrote:
>>>>>> 
>>>>>> On Wed, Jun 15, 2022 at 12:26 PM Yedidyah Bar David  
>>>>>> wrote:
>>>>>>> 
>>>>>>> Hi all,
>>>>>>> 
>>>>>>> I was annoyed for some time now by the fact that when I used some
>>>>>>> github-CI-generated RPMs, with a git hash in their names, I could
>>>>>>> never find this git hash anywhere - not in my local git repo, nor in
>>>>>>> github. Why is it so? Because, if I got it right, the default for
>>>>>>> 'actions/checkout@v2' is to merge the PR HEAD with the branch HEAD.
>>>>>>> See e.g. [1]:
>>>>>>> 
>>>>>>>   HEAD is now at 7bbb40c9a Merge
>>>>>>> 026bb9c672bf46786dd6d16f4cbe0ecfa84c531d into
>>>>>>> 35e217936b5571e9657946b47333a563373047bb
>>>>>>> 
>>>>>>> Meaning: my patch was 026bb9c, master was 35e2179, and the generated
>>>>>>> RPMs will have 7bbb40c9a, not to be found anywhere else. If you check
>>>>>>> the main PR page [3], you can find there '026bb9c', but not
>>>>>>> '7bbb40c9a'.
>>>>>>> 
>>>>>>> (Even 026bb9c might require some effort, e.g. "didib force-pushed the
>>>>>>> add-hook-log-console branch 2 times, most recently from c90e658 to
>>>>>>> 66ebc88 yesterday". I guess this is the result of github discouraging
>>>>>>> force-pushes, in direct opposite of gerrit, which had a notion of
>>>>>>> different patchsets for a single change. I already ranted about this
>>>>>>> in the past, but that's not the subject of the current message).
>>>>>>> 
>>>>>>> This is not just an annoyance, it's a real difference in semantics. In
>>>>>>> gerrit/jenkins days, IIRC most/all projects I worked on, ran CI
>>>>>>> testing/building on the pushed HEAD, and didn't touch it. Rebase, if
>>>>>>> at all, happened either explicitly, or at merge time.
>>>>>> 
>>>>>> I don't think that the action *rebases* the pr, it uses a merge commit
>>>>>> but this adds newer commits on master on top of the pr, which may
>>>>>> conflict or change the semantics of the pr.
>>>>>> 
>>>>>>> actions/checkout's default, to auto-merge, is probably meant to be
>>>>>>> more "careful" - to test what would happen if the code is merged. I
>>>>>>> agree this makes sense. But I personally think it's almost always ok
>>>>>>> to test on the pushed HEAD and not rebase/merge _implicitely_.
>>>>>>> 
>>>>>>> What do you think?
>>>>>> 
>>>>>> I agree, this is unexpected and unwanted behavior in particular for
>>>>>> projects that disable merge commits (e.g. vdsm).
>>>>> 
>>>>> merge commits are disabled for all oVirt projects as per 
>>>>> https://www.ovirt.org/develop/developer-guide/migrating_to_github.html
>>>>> 
>>>>>> 
>>>>>>> It should be easy to change, using [2]:
>>>>>>> 
>>>>>>> - uses: actions/checkout@v2
>>>>>>> with:
>>>>>>>   ref: ${{ github.event.pull_request.head.sha }}
>>>>> 
>>>>> we can really just create a trivial wrapper and replace globally with e.g.
>>>>> - uses: ovirt/checkout
>>>> 
>>>> 
>>>> +1
>>>> 
>>>> As this needs to be included in each project separately, then I'd say 
>>>> let's minimize available options to ensure maximum consistency across all 
>>>> oVirt projects

[ovirt-devel] Re: github testing: merge with branch, or use PR HEAD?

2022-07-21 Thread Michal Skrivanek


> On 21. 7. 2022, at 9:09, Yedidyah Bar David  wrote:
> 
> On Fri, Jul 8, 2022 at 11:30 AM Martin Perina  wrote:
> 
> 
> On Fri, Jul 8, 2022 at 10:27 AM Michal Skrivanek  wrote:
> 
> 
> > On 7. 7. 2022, at 19:28, Nir Soffer  wrote:
> > 
> > On Wed, Jun 15, 2022 at 12:26 PM Yedidyah Bar David  wrote:
> >> 
> >> Hi all,
> >> 
> >> I was annoyed for some time now by the fact that when I used some
> >> github-CI-generated RPMs, with a git hash in their names, I could
> >> never find this git hash anywhere - not in my local git repo, nor in
> >> github. Why is it so? Because, if I got it right, the default for
> >> 'actions/checkout@v2' is to merge the PR HEAD with the branch HEAD.
> >> See e.g. [1]:
> >> 
> >>HEAD is now at 7bbb40c9a Merge
> >> 026bb9c672bf46786dd6d16f4cbe0ecfa84c531d into
> >> 35e217936b5571e9657946b47333a563373047bb
> >> 
> >> Meaning: my patch was 026bb9c, master was 35e2179, and the generated
> >> RPMs will have 7bbb40c9a, not to be found anywhere else. If you check
> >> the main PR page [3], you can find there '026bb9c', but not
> >> '7bbb40c9a'.
> >> 
> >> (Even 026bb9c might require some effort, e.g. "didib force-pushed the
> >> add-hook-log-console branch 2 times, most recently from c90e658 to
> >> 66ebc88 yesterday". I guess this is the result of github discouraging
> >> force-pushes, in direct opposite of gerrit, which had a notion of
> >> different patchsets for a single change. I already ranted about this
> >> in the past, but that's not the subject of the current message).
> >> 
> >> This is not just an annoyance, it's a real difference in semantics. In
> >> gerrit/jenkins days, IIRC most/all projects I worked on, ran CI
> >> testing/building on the pushed HEAD, and didn't touch it. Rebase, if
> >> at all, happened either explicitly, or at merge time.
> > 
> > I don't think that the action *rebases* the pr, it uses a merge commit
> > but this adds newer commits on master on top of the pr, which may
> > conflict or change the semantics of the pr.
> > 
> >> actions/checkout's default, to auto-merge, is probably meant to be
> >> more "careful" - to test what would happen if the code is merged. I
> >> agree this makes sense. But I personally think it's almost always ok
> >> to test on the pushed HEAD and not rebase/merge _implicitely_.
> >> 
> >> What do you think?
> > 
> > I agree, this is unexpected and unwanted behavior in particular for
> > projects that disable merge commits (e.g. vdsm).
> 
> merge commits are disabled for all oVirt projects as per 
> https://www.ovirt.org/develop/developer-guide/migrating_to_github.html
> 
> > 
> >> It should be easy to change, using [2]:
> >> 
> >> - uses: actions/checkout@v2
> >>  with:
> >>ref: ${{ github.event.pull_request.head.sha }}
> 
> we can really just create a trivial wrapper and replace globally with e.g.
> - uses: ovirt/checkout
> 
> +1 
> 
> As this needs to be included in each project separately, then I'd say let's 
> minimize available options to ensure maximum consistency across all oVirt 
> projects
> 
> 1. I don't know how, and would have to learn quite a bit of github, to do 
> this. That's the main reason I neglected this in my TODO folder and didn't 
> reply yet. Perhaps someone already did something similar and would like to 
> take over?

Take a look at https://github.com/oVirt/upload-rpms-action
minus tests (I hope Janos is not looking)... that makes it a new repo, plus a 
license, a readme, and a yaml file with that snippet. That's it.

> 
> 2. I already pushed (2 weeks ago) and merged (yesterday) to otopi, [1], which 
> simply does the above.
> 
> 3. Scott now pushed [2], to the engine, doing the same, and I agree with him. 
> So am going to merge it soon, unless there are objections. If eventually 
> someone creates an oVirt action for this, we can always update to use it.
> 
> Best regards,
> 
> [1] https://github.com/oVirt/otopi/pull/25
> 
> [2] https://github.com/oVirt/ovirt-engine/pull/543
>  
> 
> > 
> > +1
> > 
> > Nir
> > 

[ovirt-devel] Re: github testing: merge with branch, or use PR HEAD?

2022-07-08 Thread Michal Skrivanek


> On 7. 7. 2022, at 19:28, Nir Soffer  wrote:
> 
> On Wed, Jun 15, 2022 at 12:26 PM Yedidyah Bar David  wrote:
>> 
>> Hi all,
>> 
>> I was annoyed for some time now by the fact that when I used some
>> github-CI-generated RPMs, with a git hash in their names, I could
>> never find this git hash anywhere - not in my local git repo, nor in
>> github. Why is it so? Because, if I got it right, the default for
>> 'actions/checkout@v2' is to merge the PR HEAD with the branch HEAD.
>> See e.g. [1]:
>> 
>>HEAD is now at 7bbb40c9a Merge
>> 026bb9c672bf46786dd6d16f4cbe0ecfa84c531d into
>> 35e217936b5571e9657946b47333a563373047bb
>> 
>> Meaning: my patch was 026bb9c, master was 35e2179, and the generated
>> RPMs will have 7bbb40c9a, not to be found anywhere else. If you check
>> the main PR page [3], you can find there '026bb9c', but not
>> '7bbb40c9a'.
>> 
>> (Even 026bb9c might require some effort, e.g. "didib force-pushed the
>> add-hook-log-console branch 2 times, most recently from c90e658 to
>> 66ebc88 yesterday". I guess this is the result of github discouraging
>> force-pushes, in direct opposite of gerrit, which had a notion of
>> different patchsets for a single change. I already ranted about this
>> in the past, but that's not the subject of the current message).
>> 
>> This is not just an annoyance, it's a real difference in semantics. In
>> gerrit/jenkins days, IIRC most/all projects I worked on, ran CI
>> testing/building on the pushed HEAD, and didn't touch it. Rebase, if
>> at all, happened either explicitly, or at merge time.
> 
> I don't think that the action *rebases* the pr, it uses a merge commit
> but this adds newer commits on master on top of the pr, which may
> conflict or change the semantics of the pr.
> 
>> actions/checkout's default, to auto-merge, is probably meant to be
>> more "careful" - to test what would happen if the code is merged. I
>> agree this makes sense. But I personally think it's almost always ok
>> to test on the pushed HEAD and not rebase/merge _implicitely_.
>> 
>> What do you think?
> 
> I agree, this is unexpected and unwanted behavior in particular for
> projects that disable merge commits (e.g. vdsm).

merge commits are disabled for all oVirt projects as per 
https://www.ovirt.org/develop/developer-guide/migrating_to_github.html

> 
>> It should be easy to change, using [2]:
>> 
>> - uses: actions/checkout@v2
>>  with:
>>ref: ${{ github.event.pull_request.head.sha }}

we can really just create a trivial wrapper and replace globally with e.g.
- uses: ovirt/checkout
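For what it's worth, such a wrapper could be a tiny composite action along these lines (the repository name and file contents are hypothetical, not an existing oVirt repo; it only pins the PR-HEAD behavior in one place):

```yaml
# action.yml of a hypothetical oVirt/checkout wrapper
name: checkout
description: actions/checkout, pinned to the PR HEAD instead of the merge commit
runs:
  using: composite
  steps:
    - uses: actions/checkout@v2
      with:
        # empty on non-PR events, so checkout falls back to its default ref
        ref: ${{ github.event.pull_request.head.sha }}
```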

> 
> +1
> 
> Nir


[ovirt-devel] Re: github testing: merge with branch, or use PR HEAD?

2022-06-15 Thread Michal Skrivanek


> On 15. 6. 2022, at 11:25, Yedidyah Bar David  wrote:
> 
> Hi all,
> 
> I was annoyed for some time now by the fact that when I used some
> github-CI-generated RPMs, with a git hash in their names, I could
> never find this git hash anywhere - not in my local git repo, nor in
> github. Why is it so?

huh, I wondered about that same thing today
Thank you for explaining why I couldn't find that hash anywhere

> Because, if I got it right, the default for
> 'actions/checkout@v2' is to merge the PR HEAD with the branch HEAD.
> See e.g. [1]:
> 
>HEAD is now at 7bbb40c9a Merge
> 026bb9c672bf46786dd6d16f4cbe0ecfa84c531d into
> 35e217936b5571e9657946b47333a563373047bb
> 
> Meaning: my patch was 026bb9c, master was 35e2179, and the generated
> RPMs will have 7bbb40c9a, not to be found anywhere else. If you check
> the main PR page [3], you can find there '026bb9c', but not
> '7bbb40c9a'.
> 
> (Even 026bb9c might require some effort, e.g. "didib force-pushed the
> add-hook-log-console branch 2 times, most recently from c90e658 to
> 66ebc88 yesterday". I guess this is the result of github discouraging
> force-pushes, in direct opposite of gerrit, which had a notion of
> different patchsets for a single change. I already ranted about this
> in the past, but that's not the subject of the current message).

We should create ovirt-github-ra...@ovirt.org, I'd certainly contribute :-) It's 
amazing how horrible _and_ popular github is.

> 
> This is not just an annoyance, it's a real difference in semantics. In
> gerrit/jenkins days, IIRC most/all projects I worked on, ran CI
> testing/building on the pushed HEAD, and didn't touch it. Rebase, if
> at all, happened either explicitly, or at merge time.
> 
> actions/checkout's default, to auto-merge, is probably meant to be
> more "careful" - to test what would happen if the code is merged. I
> agree this makes sense. But I personally think it's almost always ok
> to test on the pushed HEAD and not rebase/merge _implicitely_.
> 
> What do you think?
> 
> It should be easy to change, using [2]:
> 
> - uses: actions/checkout@v2
>  with:
>ref: ${{ github.event.pull_request.head.sha }}
> 
> No need to reach a complete consensus - can be decided upon
> per-project/repo.

github is always quite horrible for maintaining some consistency across 
projects... yeah, I'd really like to have the same approach for every single 
project, it simplifies the maintenance... we do have a lot of projects and many 
are not very active and they easily fall behind. After all we have 160 projects 
in the oVirt org but only ~30 are active... or rather, 30 are in use for the 
oVirt compose and ~10 are active.

+1 on using it everywhere
we have our own action for rpms and a buildcontainer for a unified build 
environment (with a shameful exception of vdsm!)... it's probably overkill for 
checkout to use oVirt's action.

> But if you disagree, I'd like to understand why.
> Thanks.
> 
> Best regards,
> 
> [1] https://github.com/oVirt/vdsm/runs/6881311961?check_suite_focus=true
> 
> [2] 
> https://github.com/marketplace/actions/checkout?version=v2.4.2#checkout-pull-request-head-commit-instead-of-merge-commit
> 
> [3] https://github.com/oVirt/vdsm/pull/249
> -- 
> Didi


[ovirt-devel] Projects missing safe dir config in COPR

2022-06-13 Thread Michal Skrivanek
Hi,
I scanned the current projects and AFAICT these are the active projects that 
don't have builds configured properly. Please add "git config --global --add 
safe.directory ..." to the COPR makefile.
Otherwise COPR builds may not work at all or (worse) may build something wrong.
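The fix itself is a single line ahead of any git operation in the build recipe; a minimal sketch (where exactly it goes in each project's COPR makefile will differ):

```shell
# Mark the checked-out source tree as safe, so git commands run during the
# COPR build don't refuse to operate on a directory owned by another user.
git config --global --add safe.directory "$PWD"

# sanity check: print the recorded safe.directory entries
git config --global --get-all safe.directory
```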

Thanks,
michal


imgbased
ioprocess
mom
ovirt-ansible-collection
ovirt-cockpit-sso
ovirt-engine-api-metamodel
ovirt-engine-api-model
ovirt-engine-wildfly
ovirt-hosted-engine-ha
ovirt-hosted-engine-setup
ovirt-lldp-labeler
ovirt-log-collector
ovirt-node-ng
ovirt-openvswitch
ovirt-setup-lib
ovirt-vmconsole
python-ovirt-engine-sdk4


[ovirt-devel] OST on el9stream

2022-05-27 Thread Michal Skrivanek
Hi all,
we finally have a functional host on el9stream, up and running all basic suite 
tests.
It is a great achievement that we run on the latest major RHEL-derivative 
operating system. Engine should follow eventually.
We released el9stream support as a tech preview in 4.5 knowing that we barely 
had repoclosure and that the host couldn't really be installed. Kudos to the 
OST team (and others) for getting it working!

Thanks,
michal


[ovirt-devel] finish github migration

2022-02-08 Thread Michal Skrivanek
Hi,
we're about to finish migrating all the projects to GitHub. If you're still 
missing anything, let us know... otherwise we plan to completely switch off the 
gerrit replication this week.

Thanks,
michal


[ovirt-devel] Re: gerrit.ovirt.org upgrade

2022-01-21 Thread Michal Skrivanek


> On 21. 1. 2022, at 17:05, Denis Volkov  wrote:
> 
> Hello
> 
> I'm starting upgrade, gerrit will be unavailable

ETA? Any complications?

> 
> -- 
> Denis Volkov
> 


[ovirt-devel] Re: Important: two-factor authentication on GitHub

2022-01-17 Thread Michal Skrivanek


> On 13. 1. 2022, at 18:24, Milan Zamazal  wrote:
> 
> Janos Bonic  writes:
> 
>> Dear oVirt contributors,
>> 
>> As we are moving to GitHub for our main development platform and CI we are
>> also securing the oVirt organization.
>> 
>> If you are a member of the oVirt organization on GitHub please make sure
>> that you *enable two factor authentication on your GitHub account by the
>> 20th of January*.
> 
> Hi,
> 
> it would be useful to provide information about good security practices
> together with similar requests.  For example, the question how and where
> to store the recovery codes is non-trivial, if the 2FA should be
> meaningful and not creating a false sense of better security.

From the oVirt org side of things we don't have any special requirements. You 
can follow generally recommended practices, I guess.

> 
> Regards,
> Milan
> 
>> If you don't enable 2FA the security enforcement will drop you from the
>> organization automatically. Should this happen to you please let me or
>> Sandro know via a private mail so we can re-add you.
>> 
>> Janos


[ovirt-devel] Re: git backup

2022-01-04 Thread Michal Skrivanek


> On 4. 1. 2022, at 13:55, Yedidyah Bar David  wrote:
> 
> On Tue, Jan 4, 2022 at 2:20 PM Michal Skrivanek
>  wrote:
>> 
>> It would be great if we can mirror it vice versa, to gerrit. It would help 
>> with some parts of our automation.
>> But I'm not sure it's that easy to do...
> 
> You mean gerrit.ovirt.org?

yes

> Do we even intend to keep maintaining it
> going forward? I had a feeling we won't, or at most a read-only static
> copy, generated once when all projects finish migrating.

read-only would be ok, but the history there is important. 
With that I'm hoping it's possible to just sync merged commits from github

> 
> Anyway, this will fall into my (2.) below, which is probably the most
> expensive work-wise

well, the gerrit->github replication is almost for free, it's a "native" gerrit 
feature. The other way around... dunno, but if custom means setting up a cron 
job adding new commits... it would be worth it.
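A minimal sketch of such a job (my guess at the "custom" option 2 from the thread, not an existing oVirt script; the URL and path are placeholders), suitable for a cron entry:

```shell
#!/bin/sh
# Keep a bare mirror of a repo and refresh it periodically. A mirror clone
# copies all refs (branches and tags); "remote update --prune" fetches new
# commits and drops refs deleted upstream.
backup_repo() {
    url="$1"; dir="$2"
    if [ ! -d "$dir" ]; then
        git clone --quiet --mirror "$url" "$dir"   # first run: full mirror
    else
        git -C "$dir" remote update --prune        # later runs: incremental
    fi
}

# example cron usage (placeholder paths):
# backup_repo https://github.com/oVirt/ovirt-engine.git /srv/backup/ovirt-engine.git
```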

> , but is obviously any developer's instinctive first,
> or even only, reply...
> 
> I think keeping gerrit.ovirt.org?'s content is an important item on its
> own.
> 
>> 
>>> On 29. 12. 2021, at 11:36, Yedidyah Bar David  wrote:
>>> 
>>> Hi all,
>>> 
>>> With the decision and on-going process to migrate from gerrit to
>>> github, we do not anymore have a backup - github used to be a backup
>>> for gerrit, automatically synced.
>>> 
>>> Do we want a backup for github? Some options:
>>> 
>>> 1. Do nothing. github as-is might be good enough, and it also has an
>>> archive program [1]. AFAICT, right now none of the partners in this
>>> program allow 'git clone'. There are plans to allow that in the
>>> future.
>>> 
>>> 2. Do something custom like we did so far with gerrit->github.
>>> 
>>> 3. Find some service. Searching for 'github backup' finds lots of
>>> options. I didn't check any.
>>> 
>>> Thoughts?
>>> 
>>> [1] https://archiveprogram.github.com/
>>> 
>>> Best regards,
>>> --
>>> Didi
>> 
> 
> 
> -- 
> Didi
> 


[ovirt-devel] Re: git backup

2022-01-04 Thread Michal Skrivanek
It would be great if we could mirror it the other way as well, back to gerrit. It 
would help with some parts of our automation.
But I'm not sure it's that easy to do...

> On 29. 12. 2021, at 11:36, Yedidyah Bar David  wrote:
> 
> Hi all,
> 
> With the decision and on-going process to migrate from gerrit to
> github, we do not anymore have a backup - github used to be a backup
> for gerrit, automatically synced.
> 
> Do we want a backup for github? Some options:
> 
> 1. Do nothing. github as-is might be good enough, and it also has an
> archive program [1]. AFAICT, right now none of the partners in this
> program allow 'git clone'. There are plans to allow that in the
> future.
> 
> 2. Do something custom like we did so far with gerrit->github.
> 
> 3. Find some service. Searching for 'github backup' finds lots of
> options. I didn't check any.
> 
> Thoughts?
> 
> [1] https://archiveprogram.github.com/
> 
> Best regards,
> -- 
> Didi
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SWOII7GXV6LTPHHHNO257DLZAEJNOMEQ/


[ovirt-devel] Re: Switching to GH actions build: some not-yet-clear parts and some tips

2021-12-06 Thread Michal Skrivanek


> On 6. 12. 2021, at 12:16, Martin Perina  wrote:
> 
> Hi,
> 
> I've tried to move ovirt-engine-extensions-api 
>  to github and setup GH 
> action to build to have some sort of a basic template for all our Java based 
> projects:
> 
> https://github.com/oVirt/ovirt-engine-extensions-api/blob/master/.github/workflows/build.yml
>  
> 
> 
> During the work I discovered some parts which should be further discussed:
> 
> 1. Most of projects requires packages from ovirt-master-release RPM 
> repositories to build itself
> - In past we have been using 
> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm 
>  to fetch 
> the latest version of this RPM
> - Do we plan to keep it and have this URL as the main source for oVirt 
> master repositories?
> - ovirt-engine-extensions-api projects requires only Virt SIG repo so 
> I've applied below hack to provide it:
> 
> https://github.com/oVirt/ovirt-engine-extensions-api/blob/master/.automation/prepare-env-cs8.sh#L10
>  
> 
> - I don't like the solution, it would be much nicer to have a constant 
> URL for the latest ovirt-release-master RPM

I'd say the canonical way to use it should be
dnf copr enable ovirt/ovirt-master-snapshot
dnf install ovirt-release-master

> 
> 2. Providing RPM repositories for different OS inside single build artifact
> - I wanted to build the project for both CentOS Stream 8 and 9, but I 
> figured out that upload-artifact action merges results for all different OS 
> builds into a single directory, so created repositories are mixed together.
> - I've use GH strategy option and used centos-stream-8 for CS8 and 
> centos-stream-9 for CS9 builds:
> 
> https://github.com/oVirt/ovirt-engine-extensions-api/blob/master/.github/workflows/build.yml#L14
>  
> 
> - Do we already have that documented so an OST plugin to use those 
> artifacts per OS can be created? If not, where to put it?

not yet, but we wanted to "develop" a unified artifact structure with repodata 
so that it can then be consumed as a repo elsewhere
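One possible shape for that unified structure, as a sketch (the exported-artifacts/<distro> layout is an assumption, not an agreed-on convention; createrepo_c does the indexing):

```shell
# Sketch: collect the built rpms into a per-distro directory and index it with
# createrepo_c, so the uploaded artifact zip is directly usable as a dnf repo.
build_artifact_repo() {
    local distro=$1 outdir=$2; shift 2
    mkdir -p "$outdir/$distro"
    cp -v "$@" "$outdir/$distro/"
    # writes repodata/ next to the rpms (skipped where createrepo_c is absent)
    if command -v createrepo_c >/dev/null 2>&1; then
        createrepo_c "$outdir/$distro"
    fi
}

# e.g.: build_artifact_repo centos-stream-9 exported-artifacts ~/rpmbuild/RPMS/*/*.rpm
```

With that in place, a consumer only needs to unzip the artifact and point dnf at the per-distro directory.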

> 
> 3. Using maven cache for Java project builds
> - When build maven projects in jenkins.ovirt.org 
>  we were using artifactory.ovirt.org 
>  to cache maven artifacts (and also save 
> download bandwidth to PHX datacenter)
> - It doesn't make to use artifactory.ovirt.org 
>  inside GH actions, so I've tried to define a 
> GH action specific cache:
> 
> https://github.com/oVirt/ovirt-engine-extensions-api/blob/master/.github/workflows/build.yml#L37
>  
> 
> 
> 4. Caching required dependencies
> - Above maven dependencies helps for maven invocations before RPM build 
> but it doesn't help to fetch all required RPM dependencies for pro RPM maven 
> based build
> - Right now it's almost 500 MB of packages to download and install on top 
> of the default container, and for ovirt-engine-extensions-api it takes 50 % 
> of the whole build time just to download and install those packages (I know 
> it's just 1 minute, but anyway :-)
> - I've noticed that VDSM is using its own customized container 
> 
> - Wouldn't it be worthwhile to also create customized containers for Java 
> based projects?

I think our own container with these dependencies would be great
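Something along these lines, as a sketch (the base image and the package list are guesses, not the agreed-on dependency set):

```shell
# Sketch: pre-bake the Java build dependencies into a container image so each
# CI job only pulls the image instead of dnf-installing the packages every run.
# Base image, package list and image tag below are assumptions.
cat > Containerfile.java-build <<'EOF'
FROM quay.io/centos/centos:stream9
RUN dnf install -y --setopt=install_weak_deps=0 \
        maven java-11-openjdk-devel rpm-build git dnf-plugins-core \
    && dnf clean all
EOF

# then build and publish it once, e.g.:
# podman build -t quay.io/ovirt/java-build:cs9 -f Containerfile.java-build .
# podman push quay.io/ovirt/java-build:cs9
```

The workflow's `container:` entry would then point at the prebaked image instead of the stock CentOS Stream one.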

> 
> 5. Differences between CS8 and CS9
> - There are again some disturbing changes between CS8 and CS9:
> - PowerTools repo from CS8 is now called CRB in CS9
> - pki-deps javapackages-tools modules from CS8 don't exist in CS9
> - Core DNF plugins are not installed by default in CS9 container
> - To bypass those differences I've create 2 different shell scripts and 
> plug them into a single workflow job:
> 
> https://github.com/oVirt/ovirt-engine-extensions-api/blob/master/.github/workflows/build.yml#L33
>  
> 
> 
> https://github.com/oVirt/ovirt-engine-extensions-api/blob/master/.automation/prepare-env-cs8.sh
>  
> 
> 
> https://github.com/oVirt/ovirt-engine-extensions-api/blob/master/.automation/prepare-env-cs9.sh
>  
> 

[ovirt-devel] Re: Updates on oVirt Node and oVirt appliance building outside Jenkins

2021-12-04 Thread Michal Skrivanek


> On 2. 12. 2021, at 14:07, Michal Skrivanek  wrote:
> 
> 
> 
>> On 2. 12. 2021, at 13:19, Sandro Bonazzola > <mailto:sbona...@redhat.com>> wrote:
>> 
>> Hi, just a quick update on current issues in trying to build oVirt Node and 
>> the engine appliance outside Jenkins.
>> 
>> 1) Using GitHub Actions: an attempt to build it is in progress here: 
>> https://github.com/sandrobonazzola/ovirt-appliance/pull/1 
>> <https://github.com/sandrobonazzola/ovirt-appliance/pull/1>
>> it's currently failing due to lorax not being able to perform the build. It 
>> kind of makes sense, as we are trying to do a virt-install within a container 
>> without the needed virtualization hardware exposed.
>> I'm currently investigating how to make use of software virtualization in 
>> order to drop the requirement on missing hardware / nested virtualization.
>> Also investigating on how to use self hosted runners for providing a build 
>> system with usable virtualization hardware.
> 
> it can usually be worked around by bypassing libvirt and/or using full 
> emulation.
> can we somehow get to the virt-install log?

I played with it a bit and couldn't make it work. It does start, but the full 
emulation seems to make it so slow that it is entirely unusable. Nesting won't 
work since those are Azure VMs.
But it's ok - the fact that it starts all right means there won't be an issue 
doing this on our own runner.

> 
>> 
>> 2) Using COPR: we have basically the same issue: lorax fails not having 
>> access to the virtualization hardware
>> 
>> 3) Using CentOS Community Build System
>> This is a fully fledged Koji instance and it allows to build using the 
>> image-build variant. It has a completely different configuration system and 
>> it is more similar to what we are doing within the downstream build of oVirt 
>> Node. An attempt of providing the configuration started here: 
>> https://gerrit.ovirt.org/c/ovirt-appliance/+/117801 
>> <https://gerrit.ovirt.org/c/ovirt-appliance/+/117801>
>> The issue there is that all the packages needed to be included within Node 
>> and Appliance must be built within CentOS Community Build System build root.
>> The system has no external access to the internet, so everything we need 
>> has to come from CentOS infra.
>> 
>> I haven't started digging into oVirt Node but the build flow is very similar 
>> to the appliance one, so once one is solved, the other should be simple.
>> 
>> -- 
>> Sandro Bonazzola
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>> Red Hat EMEA <https://www.redhat.com/>
>> sbona...@redhat.com <mailto:sbona...@redhat.com>   
>>  <https://www.redhat.com/>   
>> Red Hat respects your work life balance. Therefore there is no need to 
>> answer this email out of your office hours.
>> 
>> 
> 

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3Z3T4CA3LN5XTDBO7MW4CFVQML3KG3NH/


[ovirt-devel] Re: Integrating OST with artifacts built in github

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 19:09, Nir Soffer  wrote:
> 
> Looking this very helpful document
> https://ovirt.org/develop/developer-guide/migrating_to_github.html
> 
> The suggested solution is to create artifacts.zip with all the rpms
> for a project.
> 
> But to use the rpms in OST, we need to create a yum repository before 
> uploading
> the artifacts, so we can pass a URL of a zip file with a yum repository.
> 
> Here is what we have now in ovirt-imageio:
> 
> 1. We build for multiple distros:
> https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/.github/workflows/ci.yml#L32
> 
> 2. Every build creates a repo in exported-artifacts
> https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/ci/rpm.sh#L7
> 
> 3. Every build uploads the exported artifacts to rpm-{distro}.zip
> https://github.com/nirs/ovirt-imageio/blob/790e6b79e756de24ef5134aa583bea46e7fbbfb4/.github/workflows/ci.yml#L53

Yes, something like this looks ideal. The only thing I'd like to get to is a 
common organization-wide template or action so that we do not have to 
reimplement this in every single oVirt project.

> 
> An example build:
> https://github.com/nirs/ovirt-imageio/actions/runs/1531658722
> 
> To start OST manually, developer can copy a link the right zip file
> (e.g for centos stream 8):
> https://github.com/nirs/ovirt-imageio/suites/4535392764/artifacts/121520882
> 
> And pass the link to OST build with parameters job.
> 
> In this solution, OST side gets a repo that can be included in the
> build without any
> additional code or logic - just unzip and use the repo from the directory.

We plan to add this to OST directly; we currently have helper code that 
handles stdci's jenkins repos, and we can implement similar functionality for GH's 
zip files.
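The OST-side helper could be as small as this sketch (the repo id, file locations, and the assumption that the zip already contains a repodata/ directory are all mine, not a fixed interface):

```shell
# Sketch: turn a downloaded rpm-<distro>.zip CI artifact into a local dnf repo.
# Assumes the zip already contains the rpms plus a repodata/ directory,
# as produced by the build job. All names below are placeholders.
artifact_zip_to_repo() {
    local zipfile=$1 destdir=$2 repoid=$3 repofile=$4
    mkdir -p "$destdir"
    unzip -q -o "$zipfile" -d "$destdir"
    cat > "$repofile" <<EOF
[$repoid]
name=CI artifact repo ($repoid)
baseurl=file://$destdir
enabled=1
gpgcheck=0
EOF
}

# e.g.: artifact_zip_to_repo rpm-el8.zip /tmp/ci-repo ci-artifacts \
#           /etc/yum.repos.d/ci-artifacts.repo
```

After that, dnf on the OST host sees the built packages like any other repository.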

> 
> I think this is the minimal solution to allow running OST with built
> artifacts from github.
> 
> For triggering jobs automatically, we will need a way to find the
> right artifacts for a build,
> or use some convention for naming the artifacts in all projects.

yeah, so probably a common oVirt action that does the repo creation and is 
used by all projects would do the job...

> 
> I started with the simple convention of jobname-containername since it
> is easy to integrate
> with the infrastructure we already have in the project.
> 
> Nir
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K46FB3JIV6HALXJKC3MARNHDWHAXPG5K/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VWVEKLOM5OVM7E632BATLL3YOD762MVE/


[ovirt-devel] Re: Updates on oVirt Node and oVirt appliance building outside Jenkins

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 13:19, Sandro Bonazzola  wrote:
> 
> Hi, just a quick update on current issues in trying to build oVirt Node and 
> the engine appliance outside Jenkins.
> 
> 1) Using GitHub Actions: an attempt to build it is in progress here: 
> https://github.com/sandrobonazzola/ovirt-appliance/pull/1 
> 
> it's currently failing due to lorax not being able to perform the build. It 
> kind of makes sense, as we are trying to do a virt-install within a container 
> without the needed virtualization hardware exposed.
> I'm currently investigating how to make use of software virtualization in 
> order to drop the requirement on missing hardware / nested virtualization.
> Also investigating on how to use self hosted runners for providing a build 
> system with usable virtualization hardware.

it can usually be worked around by bypassing libvirt and/or using full 
emulation.
can we somehow get to the virt-install log?
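For reference, bypassing KVM usually means forcing plain emulation (TCG); a hedged sketch of what that invocation could look like — the VM sizing, kickstart and install-tree URL are placeholders, and expect it to be far slower than KVM:

```shell
# Sketch: run virt-install with software emulation (--virt-type qemu, i.e. TCG)
# instead of KVM, for runners without /dev/kvm. All names and URLs below are
# placeholders, not the actual appliance build configuration.
build_with_tcg() {
    virt-install \
        --connect qemu:///session \
        --virt-type qemu \
        --name appliance-build \
        --memory 4096 --vcpus 2 \
        --disk size=20 \
        --location "$1" \
        --initrd-inject ks.cfg \
        --extra-args 'inst.ks=file:/ks.cfg console=ttyS0' \
        --nographics --noreboot
}

# e.g.: build_with_tcg https://mirror.example.org/centos-stream/9/BaseOS/x86_64/os/
```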

> 
> 2) Using COPR: we have basically the same issue: lorax fails not having 
> access to the virtualization hardware
> 
> 3) Using CentOS Community Build System
> This is a fully fledged Koji instance and it allows to build using the 
> image-build variant. It has a completely different configuration system and 
> it is more similar to what we are doing within the downstream build of oVirt 
> Node. An attempt of providing the configuration started here: 
> https://gerrit.ovirt.org/c/ovirt-appliance/+/117801 
> 
> The issue there is that all the packages needed to be included within Node 
> and Appliance must be built within CentOS Community Build System build root.
> The system has no external access to the internet, so everything we need 
> has to come from CentOS infra.
> 
> I haven't started digging into oVirt Node but the build flow is very similar 
> to the appliance one, so once one is solved, the other should be simple.
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com    
>  
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
> 
> 

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/P7SNY6SSQYNNLFVV5NQIRPURDOEIIQAD/


[ovirt-devel] Re: oVirt Community - open discussion around repositories and workflows

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 12:51, Milan Zamazal  wrote:
> 
> Michal Skrivanek  writes:
> 
>>> On 1. 12. 2021, at 16:57, Nir Soffer  wrote:
>>> 
>>> On Wed, Dec 1, 2021 at 11:38 AM Milan Zamazal  wrote:
>>>> 
>>>> Michal Skrivanek  writes:
>>>> 
>>>>> Hi all,
>>>>> so far we haven't encountered any blocking issues with this effort, I
>>>>> wanted to propose to decide on oVirt development moving to GitHub,
>>>>> COPR and CBS. Recent issue with decommissioning of our CI datacenter
>>>>> is a good reminder why we are doing that...
>>>>> What do we want to do?
>>>>> 1) move "ovirt-master-snapshot" compose to COPR
>>>>> it is feasible for all projects except ovirt-node and
>>>>> appliance due to COPR limitations, for these two we plan to use a
>>>>> self-hosted runner in github env.
>>>>> it replaces the "build-artifacts" stdci stage
>>>>> 2) move release to CentOS Community Build System to simplify our oVirt 
>>>>> releases
>>>>> replaces our custom releng-tools process and aligns us better
>>>>> with CentOS that is our main (and only) platform we support.
>>>>> 3) move development from Gerrit to GitHub
>>>>> this is a very visible change and affects every oVirt
>>>>> developer. We need a way how to test posted patches and the current
>>>>> stdci "check-patch" stage is overly complex and slow to run, we lack
>>>>> people for stdci maintenance in general (bluntly, it's a dead
>>>>> project). Out of the various options that exist we ended up converging
>>>>> to Github. Why? Just because it's the most simple thing to do for us,
>>>>> with least amount of effort, least amount of additional people and hw
>>>>> resources, with a manageable learning curve. It comes at a price - it
>>>>> only works if we switch our primary development from Gerrit to Github
>>>>> for all the remaining projects. It is a big change to our processes,
>>>>> but I believe we have to go through that transition in order to solve
>>>>> our CI troubles for good.  We started preparing guide and templates to
>>>>> use so that we keep a uniform "look and feel" for all sub-projects; it
>>>>> shall be ready soon.
>>>>> 
>>>>> I'd like us to move from "POC" stage to "production", and actively
>>>>> start working on the above, start moving project after project.
>>>>> Let me ask for a final round of thoughts, comments, objections, we are 
>>>>> ready to go ahead.
>>>> 
>>>> Hi,
>>>> 
>>>> the Vdsm maintainers have discussed the possibility of moving Vdsm
>>>> development to GitHub and we consider it a reasonable and feasible
>>>> option.  Although GitHub is not on par with Gerrit as for code reviews,
>>>> having a more reliable common development platform outweighs the
>>>> disadvantages.  There is already an ongoing work on having a fully
>>>> usable Vdsm CI on GitHub.
>>>> 
>>>> One thing related to the move is that we would like to retain the
>>>> history of code reviews from Gerrit.  The comments there contain
>>>> valuable information that we wouldn't like to lose.  Is there a way to
>>>> export the public Gerrit contents, once we make a switch to GitHub for
>>>> each particular project, to something that could be reasonably used for
>>>> patch archaeology when needed?
>>> 
>>> I think keeping a readonly instance would be best, one all projects
>>> migrated to github.
>> 
>> 
>>> 
>>> I hope there is a way to export the data to static html so it will be
>>> available forever without running an actual gerrit instance.
>> 
>> no idea...it can definitely be scraped patch after patch..but it's
>> going to be really huge and again, it will keep running, there's no
>> plan to shut it down or anything.
>> gerrit.ovirt.org will stay up 
> 
> And working / properly maintained?  Can you guarantee it will remain
> usable for the relevant purposes?  If yes then it would be indeed the
> best option.

why not? It has been in the care of the infra@ovirt team for more than a decade 
now, why would it change?
The reason for our move to github is primarily due to stdci issues. It's 
hurting our effectiveness

[ovirt-devel] Re: oVirt Community - open discussion around repositories and workflows

2021-12-02 Thread Michal Skrivanek


> On 1. 12. 2021, at 16:57, Nir Soffer  wrote:
> 
> On Wed, Dec 1, 2021 at 11:38 AM Milan Zamazal  wrote:
>> 
>> Michal Skrivanek  writes:
>> 
>>> Hi all,
>>> so far we haven't encountered any blocking issues with this effort, I
>>> wanted to propose to decide on oVirt development moving to GitHub,
>>> COPR and CBS. Recent issue with decommissioning of our CI datacenter
>>> is a good reminder why we are doing that...
>>> What do we want to do?
>>> 1) move "ovirt-master-snapshot" compose to COPR
>>>  it is feasible for all projects except ovirt-node and
>>> appliance due to COPR limitations, for these two we plan to use a
>>> self-hosted runner in github env.
>>>  it replaces the "build-artifacts" stdci stage
>>> 2) move release to CentOS Community Build System to simplify our oVirt 
>>> releases
>>>  replaces our custom releng-tools process and aligns us better
>>> with CentOS that is our main (and only) platform we support.
>>> 3) move development from Gerrit to GitHub
>>>  this is a very visible change and affects every oVirt
>>> developer. We need a way how to test posted patches and the current
>>> stdci "check-patch" stage is overly complex and slow to run, we lack
>>> people for stdci maintenance in general (bluntly, it's a dead
>>> project). Out of the various options that exist we ended up converging
>>> to Github. Why? Just because it's the most simple thing to do for us,
>>> with least amount of effort, least amount of additional people and hw
>>> resources, with a manageable learning curve. It comes at a price - it
>>> only works if we switch our primary development from Gerrit to Github
>>> for all the remaining projects. It is a big change to our processes,
>>> but I believe we have to go through that transition in order to solve
>>> our CI troubles for good.  We started preparing guide and templates to
>>> use so that we keep a uniform "look and feel" for all sub-projects; it
>>> shall be ready soon.
>>> 
>>> I'd like us to move from "POC" stage to "production", and actively
>>> start working on the above, start moving project after project.
>>> Let me ask for a final round of thoughts, comments, objections, we are 
>>> ready to go ahead.
>> 
>> Hi,
>> 
>> the Vdsm maintainers have discussed the possibility of moving Vdsm
>> development to GitHub and we consider it a reasonable and feasible
>> option.  Although GitHub is not on par with Gerrit as for code reviews,
>> having a more reliable common development platform outweighs the
>> disadvantages.  There is already an ongoing work on having a fully
>> usable Vdsm CI on GitHub.
>> 
>> One thing related to the move is that we would like to retain the
>> history of code reviews from Gerrit.  The comments there contain
>> valuable information that we wouldn't like to lose.  Is there a way to
>> export the public Gerrit contents, once we make a switch to GitHub for
>> each particular project, to something that could be reasonably used for
>> patch archaeology when needed?
> 
> I think keeping a readonly instance would be best, one all projects
> migrated to github.


> 
> I hope there is a way to export the data to static html so it will be
> available forever without running an actual gerrit instance.

no idea... it can definitely be scraped patch after patch... but it's going to be 
really huge, and again, it will keep running; there's no plan to shut it down or 
anything.
gerrit.ovirt.org will stay up for as long as it's needed and relevant. If it 
ever comes to shutting it down I don't think there's going to be anyone caring 
about the comments

> 
> Nir
> 
>> 
>>> It's not going to be easy, but I firmly believe it will greatly
>>> improve maintainability of oVirt and reduce overhead that we all
>>> struggle with for years.
>>> 
>>> Thanks,
>>> michal
>>> 
>>>> On 10. 11. 2021, at 9:17, Sandro Bonazzola  wrote:
>>>> 
>>>> Hi, here's an update on what has been done so far and how it is going.
>>>> 
>>>> COPR
>>>> All the oVirt active subprojects are now built on COPR except oVirt
>>>> Engine Appliance and oVirt Node: I'm still looking into how to build
>>>> them on COPR.
>>>> 
>>>> Of those subprojects only the following are not yet built
>>>> automatically on patch merge event as they have pending patches for
>>

[ovirt-devel] Re: COPR's ovirt-master-snapshot replacing resources.ovirt.org's "tested" repo

2021-12-02 Thread Michal Skrivanek


> On 2. 12. 2021, at 12:07, Michal Skrivanek  
> wrote:
> 
> Hi all,
> COPR[1] is effectively replacing the "tested" repo - the one that gets all 
> built artifacts after a patch is merged (and is actually not tested:)
> 
> Sandro started to move projects to COPR for merged patches some time ago and 
> we believe it's complete except appliance and node.
> So we can all start switching to it as it is currently more up to date anyway
> Please start modifying your CI setup, other automation, your own updates, 
> wherever really, and just switch from [2] to [3] (adjust the platform/arch 
> accordingly as well, so e.g. [4])
> 
> Thanks,
> michal
> 
> [1] https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/
> [2] https://resources.ovirt.org/repos/ovirt/tested/master/rpm/
> [3] 
> https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/
> [4] 
> https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-9-x86_64

also, for most cases I guess it makes most sense to use the DNF vars  
https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-$releasever-$basearch/
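Spelled out as a repo file, one definition then covers both CS8 and CS9 (a sketch; gpgcheck is disabled here only to keep the example short, a real deployment should import the COPR GPG key):

```shell
# Sketch: a single .repo definition that works on CS8 and CS9 via the
# $releasever / $basearch dnf variables. gpgcheck=0 is a simplification.
cat > ovirt-master-snapshot.repo <<'EOF'
[ovirt-master-snapshot]
name=oVirt master nightly snapshot (COPR)
baseurl=https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-$releasever-$basearch/
enabled=1
gpgcheck=0
EOF

# drop the file into /etc/yum.repos.d/ on the target host
```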
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VPFGRVRMXP4I6OCRYIJ7W6LH2YIL5CB5/


[ovirt-devel] COPR's ovirt-master-snapshot replacing resources.ovirt.org's "tested" repo

2021-12-02 Thread Michal Skrivanek
Hi all,
COPR[1] is effectively replacing the "tested" repo - the one that gets all 
built artifacts after a patch is merged (and is actually not tested:)

Sandro started to move projects to COPR for merged patches some time ago, and we 
believe it's complete except for appliance and node.
So we can all start switching to it, as it is currently more up to date anyway.
Please start modifying your CI setup, other automation, your own updates, 
wherever really, and just switch from [2] to [3] (adjust the platform/arch 
accordingly as well, so e.g. [4])

Thanks,
michal

[1] https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/
[2] https://resources.ovirt.org/repos/ovirt/tested/master/rpm/
[3] 
https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/
[4] 
https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-9-x86_64
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IRX43SNXNNG3TUZMV6BH427PKVTNIJRI/


[ovirt-devel] Re: oVirt Community - open discussion around repositories and workflows

2021-12-01 Thread Michal Skrivanek
Hi all,
so far we haven't encountered any blocking issues with this effort, so I wanted to 
propose that we decide on oVirt development moving to GitHub, COPR and CBS. The 
recent issue with the decommissioning of our CI datacenter is a good reminder of 
why we are doing this...
What do we want to do?
1) move "ovirt-master-snapshot" compose to COPR
it is feasible for all projects except ovirt-node and appliance due to 
COPR limitations, for these two we plan to use a self-hosted runner in github 
env.
it replaces the "build-artifacts" stdci stage
2) move release to CentOS Community Build System to simplify our oVirt releases
replaces our custom releng-tools process and aligns us better with 
CentOS that is our main (and only) platform we support.
3) move development from Gerrit to GitHub
this is a very visible change and affects every oVirt developer. We 
need a way how to test posted patches and the current stdci "check-patch" stage 
is overly complex and slow to run, we lack people for stdci maintenance in 
general (bluntly, it's a dead project). Out of the various options that exist 
we ended up converging to Github. Why? Just because it's the most simple thing 
to do for us, with least amount of effort, least amount of additional people 
and hw resources, with a manageable learning curve. It comes at a price - it 
only works if we switch our primary development from Gerrit to Github for all 
the remaining projects. It is a big change to our processes, but I believe we 
have to go through that transition in order to solve our CI troubles for good.  
We started preparing a guide and templates to use so that we keep a uniform "look 
and feel" for all sub-projects; it shall be ready soon.

I'd like us to move from "POC" stage to "production", and actively start 
working on the above, start moving project after project.
Let me ask for a final round of thoughts, comments, and objections; we are ready to 
go ahead.

It's not going to be easy, but I firmly believe it will greatly improve the 
maintainability of oVirt and reduce the overhead that we have all struggled with 
for years.

Thanks,
michal

> On 10. 11. 2021, at 9:17, Sandro Bonazzola  wrote:
> 
> Hi, here's an update on what has been done so far and how it is going.
> 
> COPR
> All the oVirt active subprojects are now built on COPR except oVirt Engine 
> Appliance and oVirt Node: I'm still looking into how to build them on COPR.
> 
> Of those subprojects only the following are not yet built automatically on 
> patch merge event as they have pending patches for enabling the automation:
> - ovirt-engine-nodejs-modules: 
> https://gerrit.ovirt.org/c/ovirt-engine-nodejs-modules/+/117506
> - ovirt-engine-ui-extensions: 
> https://gerrit.ovirt.org/c/ovirt-engine-ui-extensions/+/117512
> - ovirt-web-ui: https://github.com/oVirt/ovirt-web-ui/pull/1532 
> 
> 
> You can see the build status for the whole project here: 
> https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/monitor/ 
> 
> If you are maintaining an oVirt project and you want to enable builds for 
> CentOS Stream 9 or other architectures supported by copr please let me know.
> 
> So far, the COPR infrastructure seems reliable and working well.
> 
> GitHub
> The following projects are developed on GitHub only:
> - ovirt-ansible-collection
> - ovirt-cockpit-sso
> - ovirt-web-ui
> - python-ovirt-engine-sdk4
> - ovirt-engine-sdk-go
> 
> Within this list:
> - ovirt-engine-sdk-go is not being built in COPR as the rpm is not needed for 
> developing with go and the automation is already handled on GitHub actions 
> only.
> - ovirt-cockpit-sso is still triggering jenkins jobs but it's ready to drop 
> them as PR are now tested with github actions too and builds are handled in 
> COPR.
> 
> So far, moving the development to GitHub only seems to be working well and I 
> would suggest the maintainers of the oVirt subprojects to consider moving to 
> GitHub only as well.
> +Sanja Bonic  can help you enabling GitHub actions 
> for your oVirt projects so please ping her if you need help.
> 
> CentOS Community Build
> 
> I'm going to try building the same projects currently being built in COPR 
> also within the CentOS Community Build system in the coming weeks.
> If you are already a CentOS Virtualization SIG member and you want to help 
> with this effort please let me know what you are going to build there so we 
> won't duplicate the work.
> If you are an oVirt project maintainer I would recommend you to join CentOS 
> Virtualization SIG so you'll be independent releasing your package builds.
> 
> 
> 
> 
> On Tue, 2 Nov 2021 at 12:06, Sandro Bonazzola wrote:
> Hi, 

[ovirt-devel] Re: Another CI failure

2021-11-30 Thread Michal Skrivanek


> On 30. 11. 2021, at 14:37, Milan Zamazal  wrote:
> 
> Hi,
> 
> as demonstrated in
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/31133/, OST
> builds can at least start now but still fail, apparently due to the
> following:
> 
>  + sudo -n usermod -a -G jenkins qemu
>  usermod: user 'qemu' does not exist
>  + log ERROR 'Failed to add user qemu to group jenkins'
>  + local level=ERROR
>  + shift
>  + local 'message=Failed to add user qemu to group jenkins'
>  + local prefix
>  + [[ 4 -gt 1 ]]
>  + prefix='global_setup[lago_setup]'
>  + echo 'global_setup[lago_setup] ERROR: Failed to add user qemu to group 
> jenkins'
>  global_setup[lago_setup] ERROR: Failed to add user qemu to group jenkins
>  + return 1
>  + failed=true
> 
> What can be done about it?

It passed the next time; perhaps a faulty Jenkins node that can't build?
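The actual setup step is shell, but the defensive pattern it arguably needs, checking that the user exists before calling usermod, can be sketched in Python (function name and logging are illustrative, not the lago setup code):

```python
import pwd
import subprocess

def add_user_to_group(user, group):
    """Add `user` to `group`, skipping quietly if the user is absent."""
    # The qemu user only exists once qemu is installed; don't fail the
    # whole global setup on a node where that hasn't happened yet.
    try:
        pwd.getpwnam(user)
    except KeyError:
        print(f"user {user!r} does not exist, skipping")
        return
    subprocess.run(["sudo", "-n", "usermod", "-a", "-G", group, user],
                   check=True)
```

Whether silently skipping is the right policy for the CI setup script is debatable; the point is to distinguish "user missing" from a real usermod failure.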

> 
> Thanks,
> Milan
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KLQU6KMMORGVDSENBY4COFQZFHAPBS7R/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AAUUN64LO5JJRXDI2CL4DBGYX3AZTJN2/


[ovirt-devel] Re: CI failing all the time

2021-11-25 Thread Michal Skrivanek


> On 25. 11. 2021, at 16:45, Evgheni Dereveanchin  wrote:
> 
> Hi Milan,
> 
> Indeed, this is related to the Jenkins move: all pre-existing nodes are no 
> longer usable so new ones have been created.
> We're looking into the reasons new nodes fail to initialize and bringing up 
> new ones in parallel to help with processing the build queue.
> 
> Sorry for the inconvenience. Please report any other issues if you see them 
> as there may be quite some instability due to the rebuild.

Hi Evgheni,
the queue is now 53 builds long, and it seems nothing is building, perhaps 
there's no worker or the labelling is wrong?

Thanks,
michal

> 
> Regards.
> Evgheni
> 
> On Thu, Nov 25, 2021 at 4:24 PM Nir Soffer  > wrote:
> On Thu, Nov 25, 2021 at 3:48 PM Milan Zamazal  > wrote:
> >
> > Hi,
> >
> > all patches uploaded today I've seen (for Engine and Vdsm) fail due to
> > problems with availability of packages when preparing the el7
> > environment.  For example:
> > https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/15291/ 
> > 
> >
> > Additionally, Vdsm tests are not run on PSI.
> >
> > Does anybody know what's wrong and how to fix it?  Can it be related to
> > the Jenkins move?
> 
> Looking at
> https://jenkins.ovirt.org/ 
> 
> There are 33 jobs in the queue, and only 3 jobs running.
> 
> Maybe we did not restore all the nodes?
> 
> 
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZLH3ILKHC3YPZWQFA2SG4GS3QSIALUXZ/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RZ7H4L26DU6LAKFQXJE4W3D4C5HP76CX/


[ovirt-devel] Re: OST gating: Merge aborted

2021-11-22 Thread Michal Skrivanek


> On 19. 11. 2021, at 13:50, Milan Zamazal  wrote:
> 
> Milan Zamazal  writes:
> 
>> Hi,
>> 
>> https://gerrit.ovirt.org/c/vdsm/+/117543 reached "OST merge attempted"

that's why it says "attempted"; it cannot know whether it will succeed

>> but the merge hasn't happened.  The trigger log says:
>> 
>>  + echo 'OST_MESSAGE=Patch is good to merge'
>>  + touch verified
>>  + exit 0
>>  [EnvInject] - Injecting environment variables from a build step.
>>  [EnvInject] - Injecting as environment variables the properties file path 
>> 'ost'
>>  [EnvInject] - Variables injected successfully.
>>  [ds-ost-gating-trigger3] $ /bin/sh -xe /tmp/jenkins3506416413422390774.sh
>>  + [[ -f approved ]]
>>  + [[ -f verified ]]
>>  + echo 'will submit'
>>  will submit
>>  + exit 0
>>  Build step 'Groovy Postbuild' changed build result to ABORTED
>>  Started calculate disk usage of build
>>  Finished Calculation of disk usage of build in 0 seconds
>>  Started calculate disk usage of workspace
>>  Finished Calculation of disk usage of workspace in 0 seconds
>>  Finished: ABORTED
>> 
>> Any idea what's wrong?
> 
> Apparently missing CI+1 (due to the infamous DNS error that prevented
> running tests after rebase).  It would be nice if OST gating either
> didn't attempt a merge at all in such a case, or failed with a
> corresponding error message.

it's convoluted enough already, but feel free to implement that
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DCNKMDLWPIGYCZ7TEILZNRCSK6ILO2VJ/


[ovirt-devel] Re: OST gating oddities

2021-11-18 Thread Michal Skrivanek
> On 16 Nov 2021, at 20:42, Milan Zamazal  wrote:
>
> Hi,
>
> putting recently experienced problems with OST runs failures aside, I
> wonder about what currently happens in gerrit on gating.  For instance in
> https://gerrit.ovirt.org/c/vdsm/+/117523/6:
>
> - OST OST -1
>  Patch Set 6: OST-1
>  https://redir.apps.ovirt.org/dj/job/ds-ost-gating-trigger1/4579 : Looking 
> for build artifacts for OST.
>  ci build
>
>  * Why setting OST-1 when looking for a build?
>
> - Jenkins CI
>  Patch Set 6:
>
>  No Builds Executed
>
>  https://jenkins.ovirt.org/job/vdsm_standard-check-patch/30672/ : To avoid 
> overloading the infrastructure, a whitelist for
>  running gerrit triggered jobs has been set in place, if
>  you feel like you should be in it, please contact infra at
>  ovirt dot org.
>
>  * OST not allowed to trigger builds?
>
> - OST OST +1
>  Patch Set 6: OST+1
>
>  https://redir.apps.ovirt.org/dj/job/ds-ost-gating-trigger1/4583 : start OST 
> for https://jenkins.ovirt.org/job/vdsm_standard-check-patch/30681/
>
>  * Shouldn't OST set OST+1 after it runs rather than when it starts?

Yes. I looked at it today and hopefully fixed it. There is no history for
server configuration in Jenkins, so either someone changed it or it
happened on some upgrade. No idea; it would be best to report it when you
notice it, not a week later.

>
> Is there any explanation of these phenomenons?
>
> Thanks,
> Milan
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IMRYKBROJXKU3EQAQYNQBJ67CJSXKWWD/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RWCOYZI2XD7BF6FGNR24WDNLGJ3AGTPG/


[ovirt-devel] Re: suspend resume test broken

2021-11-16 Thread Michal Skrivanek


> On 16. 11. 2021, at 9:15, Michal Skrivanek  wrote:
> 
> Hi all,
> suspend/resume is currently broken on master/el8stream. Can anyone please 
> take a look, find the cause, and fix it?

Seems it's due to an old node image being used, which exposes bug 
https://bugzilla.redhat.com/show_bug.cgi?id=1999141
It should go away after an update...

> 
> Thanks,
> michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ORI2UXMQPW5BTG2FEILMYY35Q4PUF4LR/


[ovirt-devel] suspend resume test broken

2021-11-16 Thread Michal Skrivanek
Hi all,
suspend/resume is currently broken on master/el8stream. Can anyone please take 
a look, find the cause, and fix it?

Thanks,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YO23HLQGOMAM3MLFMCEQJB73SL6VABEV/


[ovirt-devel] Re: OST: Vdsm: Occasional failures when stopping vdsmd

2021-11-12 Thread Michal Skrivanek
This needs to be fixed ASAP; it's been failing very frequently for 2+ days.
Note that the underlying OS is also changing, of course, so it doesn't
necessarily have to be a Vdsm change.
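For anyone digging into this, one way to see whether Vdsm would eventually finish its shutdown (rather than being SIGKILLed at the stop timeout) is to raise the timeout with a systemd drop-in. This is a diagnostic sketch only, not a fix; the path and value are illustrative:

```ini
# /etc/systemd/system/vdsmd.service.d/99-stop-timeout.conf
# Illustrative drop-in: give vdsmd more time to stop so the blocked
# shutdown path can be observed before systemd sends SIGKILL.
[Service]
TimeoutStopSec=60s
```

After creating the file, run `systemctl daemon-reload` and retry `systemctl stop vdsmd`.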


> On 12. 11. 2021, at 20:15, Milan Zamazal  wrote:
> 
> Hi,
> 
> Michal has observed occasional OST failures in test_vdsm_recovery last
> days, which hadn't been seen before.  When `systemctl stop vdsmd' is
> called (via Ansible) there, vdsmd (almost?) never finishes its shutdown
> within the 10 seconds timeout and gets then killed with SIGKILL.  If
> this action is accompanied by "Job for vdsmd.service canceled." message
> then the test fails; otherwise OST continues normally.
> 
> The situation is reproducible by running OST basic-suite-master and
> making it artificially failing after test_vdsm_recovery.  Then running
> `systemctl stop vdsmd' manually on the given OST host (can be done
> repeatedly, so it provides a good opportunity to examine the problem).
> 
> There are two problems there:
> 
> - "Job for vdsmd.service canceled." message that sometimes occurs after
>  `systemctl stop vdsmd' and then the test fails.  I don't know what it
>  means and I can't identify any difference in journal between when the
>  message occurs and when it doesn't.
> 
> - The fact that Vdsm doesn't stop within the timeout and must be killed.
>  This doesn't happen in my normal oVirt installation.  It apparently
>  blocks in self.irs.prepareForShutdown() call from clientIF.py.
>  Journal says:
> 
>systemd[1]: Stopping Virtual Desktop Server Manager...
>systemd[1]: vdsmd.service: State 'stop-sigterm' timed out. Killing.
>systemd[1]: vdsmd.service: Killing process 132608 (vdsmd) with signal 
> SIGKILL.
>systemd[1]: vdsmd.service: Killing process 133445 (ioprocess) with signal 
> SIGKILL.
>systemd[1]: vdsmd.service: Killing process 133446 (ioprocess) with signal 
> SIGKILL.
>systemd[1]: vdsmd.service: Killing process 133447 (ioprocess) with signal 
> SIGKILL.
>systemd[1]: vdsmd.service: Main process exited, code=killed, status=9/KILL
>systemd[1]: vdsmd.service: Failed with result 'timeout'.
>systemd[1]: Stopped Virtual Desktop Server Manager.
> 
>  And vdsm.log (from a different run, sorry):
> 
>2021-11-12 07:09:30,274+ INFO  (MainThread) [vdsm.api] START 
> prepareForShutdown() from=internal, 
> task_id=21b12bbd-1d61-4217-b92d-641a53d5f7bb (api:48)
>2021-11-12 07:09:30,317+ DEBUG (vmchannels) [vds] VM channels listener 
> thread has ended. (vmchannels:214)
>2021-11-12 07:09:30,317+ DEBUG (vmchannels) [root] FINISH thread 
>  (concurrent:261)
>2021-11-12 07:09:30,516+ DEBUG (mailbox-hsm/4) [root] FINISH thread 
>  (concurrent:261)
>2021-11-12 07:09:30,517+ INFO  (ioprocess/143197) [IOProcess] 
> (3a729aa1-8e14-4ea0-8794-e3d67fbde542) Starting ioprocess (__init__:465)
>2021-11-12 07:09:30,521+ INFO  (ioprocess/143199) [IOProcess] 
> (ost-he-basic-suite-master-storage:_exports_nfs_share1) Starting ioprocess 
> (__init__:465)
>2021-11-12 07:09:30,535+ INFO  (ioprocess/143193) [IOProcess] 
> (ost-he-basic-suite-master-storage:_exports_nfs_share2) Starting ioprocess 
> (__init__:465)
>2021-11-12 07:09:30,679+ INFO  (ioprocess/143195) [IOProcess] 
> (0187cf2f-2344-48de-a2a0-dd007315399f) Starting ioprocess (__init__:465)
>2021-11-12 07:09:30,719+ INFO  (ioprocess/143192) [IOProcess] 
> (15fa3d6c-671b-46ef-af9a-00337011fa26) Starting ioprocess (__init__:465)
>2021-11-12 07:09:30,756+ INFO  (ioprocess/143194) [IOProcess] 
> (ost-he-basic-suite-master-storage:_exports_nfs_exported) Starting ioprocess 
> (__init__:465)
>2021-11-12 07:09:30,768+ INFO  (ioprocess/143198) [IOProcess] 
> (ost-he-basic-suite-master-storage:_exports_nfs__he) Starting ioprocess 
> (__init__:465)
>2021-11-12 07:09:30,774+ INFO  (ioprocess/143196) [IOProcess] 
> (a8bab4ef-2952-4c42-ba44-dbb3e1b8c87c) Starting ioprocess (__init__:465)
>2021-11-12 07:09:30,957+ DEBUG (mailbox-hsm/2) [root] FINISH thread 
>  (concurrent:261)
>2021-11-12 07:09:31,629+ INFO  (mailbox-hsm) 
> [storage.mailbox.hsmmailmonitor] HSM_MailboxMonitor - Incoming mail 
> monitoring thread stopped, clearing outgoing mail (mailbox:500)
>2021-11-12 07:09:31,629+ INFO  (mailbox-hsm) 
> [storage.mailbox.hsmmailmonitor] HSM_MailMonitor sending mail to SPM - 
> ['/usr/bin/dd', 
> 'of=/rhev/data-center/f54c6052-437f-11ec-9094-54527d140533/mastersd/dom_md/inbox',
>  'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=4096', 'count=1', 
> 'seek=2'] (mailbox:382)
>2021-11-12 07:09:32,610+ DEBUG (mailbox-hsm/1) [root] FINISH thread 
>  (concurrent:261)
>2021-11-12 07:09:32,792+ DEBUG (mailbox-hsm/3) [root] FINISH thread 
>  (concurrent:261)
>2021-11-12 07:09:32,818+ DEBUG (mailbox-hsm/0) [root] FINISH thread 
>  (concurrent:261)
>2021-11-12 07:09:32,820+ INFO  (MainThread) [storage.monitor] Shutting 
> down domain monitors (monitor:243)
>

[ovirt-devel] Re: [ovirt-users] [ANN] oVirt 4.4.8 Async update #1

2021-09-22 Thread Michal Skrivanek
Hi all,
please be aware of bug https://bugzilla.redhat.com/show_bug.cgi?id=2005221, which 
unfortunately removes the timezone info (Hardware Clock Time Offset) from VM 
properties. It matters mostly to Windows VMs, since they use “localtime”, so 
after a reboot the guest time will probably be wrong. It also breaks the Cluster 
Level update with HE, as described in the bug.
Unfortunately there’s no simple way to restore it, since the information is lost 
on the 4.4.8 upgrade; if the time matters to you, you have to set it again for 
each VM.
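If many VMs are affected, the offset can be pushed back per VM through the REST API (PUT /ovirt-engine/api/vms/{vm_id}) or the SDK. Below is a minimal sketch that only builds the request body; sending it with proper authentication is left out, and the timezone name shown is just an example; you have to know what each VM was set to before the upgrade:

```python
import xml.etree.ElementTree as ET

def timezone_update_body(tz_name):
    """Build the XML body for PUT /ovirt-engine/api/vms/{vm_id}."""
    vm = ET.Element('vm')
    tz = ET.SubElement(vm, 'time_zone')
    ET.SubElement(tz, 'name').text = tz_name
    return ET.tostring(vm, encoding='unicode')

# e.g. timezone_update_body('GMT Standard Time')
# -> '<vm><time_zone><name>GMT Standard Time</name></time_zone></vm>'
```

This is a sketch rather than a supported recovery procedure; the web UI's Edit VM dialog does the same thing one VM at a time.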

Please refrain from upgrading engine to 4.4.8.5 and wait for 4.4.8.6
Nodes/hosts are not affected in any way.

Thanks,
michal


> On 27. 8. 2021, at 8:25, Sandro Bonazzola  wrote:
> 
> oVirt 4.4.8 Async update #1
> On August 26th 2021 the oVirt project released an async update to the 
> following packages:
> ovirt-ansible-collection 1.6.2
> ovirt-engine 4.4.8.5
> ovirt-release44 4.4.8.1
> oVirt Node 4.4.8.1
> oVirt Appliance 4.4-20210826
> 
> Fixing the following bugs:
> Bug 1947709  - [IPv6] 
> HostedEngineLocal is an isolated libvirt network, breaking upgrades from 4.3
> Bug 1966873  - [RFE] 
> Create Ansible role for remove stale LUNs example remove_mpath_device.yml
> Bug 1997663  - Keep 
> cinbderlib dependencies optional for 4.4.8
> Bug 1996816  - Cluster 
> upgrade fails with: 'OAuthException invalid_grant: The provided authorization 
> grant for the auth code has expired.
> 
> oVirt Node Changes:
> - Consume above oVirt updates
> - GlusterFS 8.6: https://docs.gluster.org/en/latest/release-notes/8.6/ 
>  
> - Fixes for:
> CVE-2021-22923  curl: 
> Metalink download sends credentials
> CVE-2021-22922  curl: 
> Content not matching hash in Metalink is not being discarded
> 
> 
> Full diff list:
> --- ovirt-node-ng-image-4.4.8.manifest-rpm2021-08-19 07:57:44.081590739 
> +0200
> +++ ovirt-node-ng-image-4.4.8.1.manifest-rpm  2021-08-27 08:11:54.863736688 
> +0200
> @@ -2,7 +2,7 @@
> -ModemManager-glib-1.10.8-3.el8.x86_64
> -NetworkManager-1.32.6-1.el8.x86_64
> -NetworkManager-config-server-1.32.6-1.el8.noarch
> -NetworkManager-libnm-1.32.6-1.el8.x86_64
> -NetworkManager-ovs-1.32.6-1.el8.x86_64
> -NetworkManager-team-1.32.6-1.el8.x86_64
> -NetworkManager-tui-1.32.6-1.el8.x86_64
> +ModemManager-glib-1.10.8-4.el8.x86_64
> +NetworkManager-1.32.8-1.el8.x86_64
> +NetworkManager-config-server-1.32.8-1.el8.noarch
> +NetworkManager-libnm-1.32.8-1.el8.x86_64
> +NetworkManager-ovs-1.32.8-1.el8.x86_64
> +NetworkManager-team-1.32.8-1.el8.x86_64
> +NetworkManager-tui-1.32.8-1.el8.x86_64
> @@ -94 +94 @@
> -curl-7.61.1-18.el8.x86_64
> +curl-7.61.1-18.el8_4.1.x86_64
> @@ -106,4 +106,4 @@
> -device-mapper-1.02.177-5.el8.x86_64
> -device-mapper-event-1.02.177-5.el8.x86_64
> -device-mapper-event-libs-1.02.177-5.el8.x86_64
> -device-mapper-libs-1.02.177-5.el8.x86_64
> +device-mapper-1.02.177-6.el8.x86_64
> +device-mapper-event-1.02.177-6.el8.x86_64
> +device-mapper-event-libs-1.02.177-6.el8.x86_64
> +device-mapper-libs-1.02.177-6.el8.x86_64
> @@ -140,36 +140,36 @@
> -fence-agents-all-4.2.1-74.el8.x86_64
> -fence-agents-amt-ws-4.2.1-74.el8.noarch
> -fence-agents-apc-4.2.1-74.el8.noarch
> -fence-agents-apc-snmp-4.2.1-74.el8.noarch
> -fence-agents-bladecenter-4.2.1-74.el8.noarch
> -fence-agents-brocade-4.2.1-74.el8.noarch
> -fence-agents-cisco-mds-4.2.1-74.el8.noarch
> -fence-agents-cisco-ucs-4.2.1-74.el8.noarch
> -fence-agents-common-4.2.1-74.el8.noarch
> -fence-agents-compute-4.2.1-74.el8.noarch
> -fence-agents-drac5-4.2.1-74.el8.noarch
> -fence-agents-eaton-snmp-4.2.1-74.el8.noarch
> -fence-agents-emerson-4.2.1-74.el8.noarch
> -fence-agents-eps-4.2.1-74.el8.noarch
> -fence-agents-heuristics-ping-4.2.1-74.el8.noarch
> -fence-agents-hpblade-4.2.1-74.el8.noarch
> -fence-agents-ibmblade-4.2.1-74.el8.noarch
> -fence-agents-ifmib-4.2.1-74.el8.noarch
> -fence-agents-ilo-moonshot-4.2.1-74.el8.noarch
> -fence-agents-ilo-mp-4.2.1-74.el8.noarch
> -fence-agents-ilo-ssh-4.2.1-74.el8.noarch
> -fence-agents-ilo2-4.2.1-74.el8.noarch
> -fence-agents-intelmodular-4.2.1-74.el8.noarch
> -fence-agents-ipdu-4.2.1-74.el8.noarch
> -fence-agents-ipmilan-4.2.1-74.el8.noarch
> -fence-agents-kdump-4.2.1-74.el8.x86_64
> -fence-agents-mpath-4.2.1-74.el8.noarch
> -fence-agents-redfish-4.2.1-74.el8.x86_64
> -fence-agents-rhevm-4.2.1-74.el8.noarch
> -fence-agents-rsa-4.2.1-74.el8.noarch
> -fence-agents-rsb-4.2.1-74.el8.noarch
> -fence-agents-sbd-4.2.1-74.el8.noarch
> -fence-agents-scsi-4.2.1-74.el8.noarch
> -fence-agents-vmware-rest-4.2.1-74.el8.noarch
> -fence-agents-vmware-soap-4.2.1-74.el8.noarch
> -fence-agents-wti-4.2.1-74.el8.noarch
> 

[ovirt-devel] Re: Cannot install latest oVirt engine

2021-09-21 Thread Michal Skrivanek


> On 21. 9. 2021, at 11:56, Vojtech Juranek  wrote:
> 
> On Tuesday, 21 September 2021 10:40:56 CEST Ales Musil wrote:
>> On Tue, Sep 21, 2021 at 10:06 AM Vojtech Juranek 
>> 
>> wrote:
>>> Hi,
>>> 
>>>> I'm trying to install engine build [1] to verify a patch [2], but I'm
>>>> getting the following error:
>>> 
>>> I was able to install the engine, but adding new hosts fails with
>>> 
>>> 2021-09-21 03:52:15 EDT - TASK [ovirt-provider-ovn-driver : Configure OVN
>>> for oVirt] *
>>> 2021-09-21 03:52:15 EDT -
>>> 2021-09-21 03:52:15 EDT - fatal: [192.168.122.201]: FAILED! => {"changed":
>>> true, "cmd": ["vdsm-tool", "ovn-config", "192.168.122.192",
>>> "192.168.122.201", "192.168.122.201"], "delta": "0:00:00.355651", "end":
>>> "2021-09-21 03:52:13.433094", "msg": "non-zero return code", "rc": 1, "start": "2021-09-21 03:52:13.077443", "stderr":
>>> "usage: \n /usr/bin/vdsm-tool [options] ovn-config IP-central
>>> [tunneling-IP|tunneling-network]\nConfigures the ovn-controller on the
>>> host.\n\nParameters:\nIP-central -
>>> 
>>> the IP of the engine (the host where OVN central is located)\n
>>> 
>>> tunneling-IP - the local IP which is to be used for OVN tunneling\n
>>> tunneling-network - the vdsm network meant to be used for OVN tunneling\n
>>> 
>>>  ", "stderr_lines": ["usage: ", " /usr/
>>> 
>>> bin/vdsm-tool [options] ovn-config IP-central
>>> [tunneling-IP|tunneling-network]", "Configures the ovn-controller on
>>> the host.", "", "Parameters:", "IP-central - the IP of the engine
>>> (the host where OVN central is located)", "tunneling-IP
>>> - the local IP which is to be used for OVN tunneling", "
>>> tunneling-network - the vdsm network meant to be used for OVN tunneling",
>>> ""], "stdout": "", "stdout_lines": []}
>>> 
>>> 
>>> There's additional 192.168.122.201 and the command should be
>>> 
>>> vdsm-tool ovn-config 192.168.122.192 192.168.122.201
>>> 
>>> I can run it manually, but the network is not configured properly as the
>>> host setup fails always at this point.
>>> Again some issue on my side or is this a known issue? Is there any
>>> workaround for it?
>>> 
>>> Thanks for hits.
>>> Vojta
>> 
>> This looks like you have an older ovirt-provider-ovn-driver, that does not
>> support the third argument of ovn-config.
>> Try to upgrade the driver package.
> 
> thanks, but ovirt-provider-ovn-driver seems to be up-to-date:
> 
> [root@localhost ~]# rpm -qa|grep ovirt-provider-ovn-driver
> ovirt-provider-ovn-driver-1.2.35-0.20210902122827.git6140625.el8.noarch

You’re supposed to use 
https://resources.ovirt.org/repos/ovirt/tested/master/rpm/el8/noarch/
and update from there.
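For reference, a repo file pointing at the tested repo might look like this (a sketch: the repo id is made up and the baseurl may need adjusting to the directory that actually contains repodata/):

```ini
# /etc/yum.repos.d/ovirt-master-tested.repo (illustrative)
[ovirt-master-tested]
name=oVirt master tested
baseurl=https://resources.ovirt.org/repos/ovirt/tested/master/rpm/el8/
enabled=1
gpgcheck=0
```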

> 
> 
> it seems to be an issue in vdsm-tool, running it manually fails:
> 
> [root@localhost ~]# vdsm-tool ovn-config 192.168.122.192 192.168.122.201 
> 192.168.122.201
> usage: 
> /usr/bin/vdsm-tool [options] ovn-config IP-central 
> [tunneling-IP|tunneling-network]
>Configures the ovn-controller on the host.
> 
>Parameters:
>IP-central - the IP of the engine (the host where OVN central is located)
>tunneling-IP - the local IP which is to be used for OVN tunneling
>tunneling-network - the vdsm network meant to be used for OVN tunneling
> 
> [root@localhost ~]# echo $?
> 1
> 
> 
> ___
> Devel mailing list -- devel@ovirt.org 
> To unsubscribe send an email to devel-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UYQZRKPZ7Y452USEU5WNOXZSAZXG6VSL/
>  
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/27N6IVNSYZOLY2N6SKW2VKO2EUN56Z3O/


[ovirt-devel] Re: noVNC not working when FIPS is enabled

2021-09-20 Thread Michal Skrivanek


> On 14. 9. 2021, at 13:45, Michal Skrivanek  
> wrote:
> 
> 
> 
>> On 10. 9. 2021, at 20:06, Milan Zamazal  wrote:
>> 
>> Michal Skrivanek  writes:
>> 
>>>> On 8. 9. 2021, at 20:48, Milan Zamazal  wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> we had to disable VNC OST test some time ago because it started failing.
>>>> I looked at why it fails and the reason provided by
>>>> ovirt-websocket-proxy is
>>>> 
>>>> do_vencrypt_handshake:187 Server supports the following subtypes: 263
>>> 
>>> 263 is VNC_AUTH_VENCRYPT_X509SASL
>>> because with fips we change libvirt configuration to SASL? 
>> 
>> libvirt configuration is the same whether we boot with fips=0 or fips=1
>> (and disable/enable FIPS for the cluster accordingly).  And the proxy
>> works with fips=0 even when auth_unix_rw="sasl" is set in the libvirt
>> configuration.
> 
> it could be qemu’s decision to enforce only this one when FIPS enabled
> 
>> 
>> So should we add VENCRYPT_X509SASL support to the proxy?
> 
> yes, I do not see any other way when this is the only supported connection 
> type

And I think you have bigger issues: on el8stream we now pick up websockify 0.9 
with [1], which changed the API we override, so the connection doesn’t work at 
all.

now all you get is
ovirt-websocket-proxy[68086] INFO msg:630 handler exception: get_target() 
missing 1 required positional argument: 'path'

So first you need to update the proxy to handle 0.9, but also 0.8, which we 
still use on RHEL.
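One way to handle both call shapes in the override is to accept variable arguments and fall back to the handler's own path. This is an illustrative sketch only: the base class here is a stand-in for websockify's ProxyRequestHandler, and the exact 0.8/0.9 signatures are inferred from the error above rather than checked against the sources:

```python
from urllib.parse import parse_qs, urlparse

class _BaseHandler:
    """Stand-in for websockify's ProxyRequestHandler, which carries the
    request path on `self.path` (real code subclasses websockify)."""
    path = "/?host=vmhost.example.com&port=5900"

class CompatHandler(_BaseHandler):
    def get_target(self, *args):
        # Older websockify passed the path to get_target() explicitly;
        # 0.9 changed the call shape, so fall back to self.path when no
        # path argument arrives.
        if len(args) >= 2 and isinstance(args[-1], str):
            path = args[-1]
        else:
            path = self.path
        query = parse_qs(urlparse(path).query)
        return query["host"][0], int(query["port"][0])
```

The real proxy additionally validates the ticket in the query string; the point here is only the arity-tolerant signature.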

Thanks,
michal

[1] 
https://github.com/novnc/websockify/commit/af85184e28d8e4333472940bfe1d2eb6436b6733
> 
>> 
>>>> Server does not support X509VNC. OvirtProxy only supports X509VNC
>>>> 
>>>> This happens only when FIPS is enabled and is reproducible outside OST.
>>>> The only thing that seems to have influence on whether it works or not
>>>> is the value of `fips' kernel command line parameter -- when it's
>>>> changed to fips=0 then noVNC console works without any other changes.
>>>> 
>>>> So it looks like some change in QEMU.  I'm not an expert in this area
>>>> and don't know what those protocols are about, why the proxy supports
>>>> only X509VNC and why the mismatch in expectations on both the ends
>>>> happens when FIPS is enabled.  Can anybody help clarify it and provide
>>>> an idea how to resolve the problem?
>>>> 
>>>> Thanks,
>>>> Milan
>>>> ___
>>>> Devel mailing list -- devel@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>> oVirt Code of Conduct: 
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/S6MCLJV2QMQ3YLJDUUBT3AZVEADXJ6GK/
>> 
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TIATUMJCBHOA3BNR3UHUZZ2EPQP3242U/


[ovirt-devel] Re: noVNC not working when FIPS is enabled

2021-09-14 Thread Michal Skrivanek


> On 10. 9. 2021, at 20:06, Milan Zamazal  wrote:
> 
> Michal Skrivanek  writes:
> 
>>> On 8. 9. 2021, at 20:48, Milan Zamazal  wrote:
>>> 
>>> Hi,
>>> 
>>> we had to disable VNC OST test some time ago because it started failing.
>>> I looked at why it fails and the reason provided by
>>> ovirt-websocket-proxy is
>>> 
>>> do_vencrypt_handshake:187 Server supports the following subtypes: 263
>> 
>> 263 is VNC_AUTH_VENCRYPT_X509SASL
>> because with fips we change libvirt configuration to SASL? 
> 
> libvirt configuration is the same whether we boot with fips=0 or fips=1
> (and disable/enable FIPS for the cluster accordingly).  And the proxy
> works with fips=0 even when auth_unix_rw="sasl" is set in the libvirt
> configuration.

It could be QEMU’s decision to enforce only this one when FIPS is enabled.

> 
> So should we add VENCRYPT_X509SASL support to the proxy?

yes, I do not see any other way when this is the only supported connection type

> 
>>> Server does not support X509VNC. OvirtProxy only supports X509VNC
>>> 
>>> This happens only when FIPS is enabled and is reproducible outside OST.
>>> The only thing that seems to have influence on whether it works or not
>>> is the value of `fips' kernel command line parameter -- when it's
>>> changed to fips=0 then noVNC console works without any other changes.
>>> 
>>> So it looks like some change in QEMU.  I'm not an expert in this area
>>> and don't know what those protocols are about, why the proxy supports
>>> only X509VNC and why the mismatch in expectations on both the ends
>>> happens when FIPS is enabled.  Can anybody help clarify it and provide
>>> an idea how to resolve the problem?
>>> 
>>> Thanks,
>>> Milan
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/S6MCLJV2QMQ3YLJDUUBT3AZVEADXJ6GK/
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YZYO6H275K4TYAICQETSOCCSERV34O3N/


[ovirt-devel] Re: noVNC not working when FIPS is enabled

2021-09-09 Thread Michal Skrivanek


> On 8. 9. 2021, at 20:48, Milan Zamazal  wrote:
> 
> Hi,
> 
> we had to disable VNC OST test some time ago because it started failing.
> I looked at why it fails and the reason provided by
> ovirt-websocket-proxy is
> 
>  do_vencrypt_handshake:187 Server supports the following subtypes: 263

263 is VNC_AUTH_VENCRYPT_X509SASL.
Because with FIPS we change the libvirt configuration to SASL?
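For reference when reading such logs, these are the VeNCrypt sub-auth codes as QEMU enumerates them (transcribed for convenience; verify against the ui/vnc.h of your QEMU version):

```python
# VeNCrypt sub-auth codes as enumerated in QEMU's ui/vnc.h; note the
# slightly surprising ordering of X509SASL before TLSSASL.
VENCRYPT_SUBTYPES = {
    256: "PLAIN",
    257: "TLSNONE",
    258: "TLSVNC",
    259: "TLSPLAIN",
    260: "X509NONE",
    261: "X509VNC",
    262: "X509PLAIN",
    263: "X509SASL",
    264: "TLSSASL",
}
```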

>  Server does not support X509VNC. OvirtProxy only supports X509VNC
> 
> This happens only when FIPS is enabled and is reproducible outside OST.
> The only thing that seems to have influence on whether it works or not
> is the value of `fips' kernel command line parameter -- when it's
> changed to fips=0 then noVNC console works without any other changes.
> 
> So it looks like some change in QEMU.  I'm not an expert in this area
> and don't know what those protocols are about, why the proxy supports
> only X509VNC and why the mismatch in expectations on both the ends
> happens when FIPS is enabled.  Can anybody help clarify it and provide
> an idea how to resolve the problem?
> 
> Thanks,
> Milan
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/S6MCLJV2QMQ3YLJDUUBT3AZVEADXJ6GK/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/D4VH66AQU4EIJBTLBULDCW6DGDUKEWJK/


[ovirt-devel] Re: ovirt-imageio fails tests on CentOS Stream 9 / qemu-6.1.0

2021-09-09 Thread Michal Skrivanek


> On 9. 9. 2021, at 11:47, Vojtech Juranek  wrote:
> 
> On Wednesday, 8 September 2021 16:19:39 CEST Sandro Bonazzola wrote:
>> Hi,
>> running ovirt-imageio check-patch on CentOS Stream 9 fails.
> 
> do we have some CentOS Stream 9 images somewhere which I can use?

Not yet.
I guess we should be adding it to OST at some point, sooner rather than later.
Volunteers welcome ;)

> 
>> Is anyone around from the storage team who can have a look?
>> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/3974//artif
>> act/check-patch.el9stream.x86_64/mock_logs/script/stdout_stderr.log
>> 
>> Thanks,
> 
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FU5UZT3KA2TESXI3RT2RPBSKG5RJGDWL/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZQKYWQRBNECSSBU3ENVRDVBQAVJPBV4S/


[ovirt-devel] Re: OST: How to run ost.sh with a local custom repo?

2021-08-05 Thread Michal Skrivanek


> On 5. 8. 2021, at 11:18, Milan Zamazal  wrote:
> 
> Marcin Sobczyk  writes:
> 
>> On 8/4/21 1:30 PM, Milan Zamazal wrote:
>>> Hi,
>>> 
>>> when I try to run ost.sh on a local repo with
>>> --custom-repo=file:///home/pdm/rpmbuild/repodata/repomd.xml, I get the
>>> following error:
>>> 
>>>   requests.exceptions.InvalidSchema: No connection adapters were
>>> found for 'file:///home/pdm/rpmbuild/repodata/repomd.xml
>>> 
>>> I tried using file:/home/... and /home/... but neither works.
>>> https://... works fine.
>>> 
>>> How can I run the script with a custom repo in a local directory?
>> Hi,
>> 
>> this is most probably caused by:
>> 
>> https://github.com/oVirt/ovirt-system-tests/blob/0ad56d467ac0e608c568f597188db08117b7565d/ost_utils/ost_utils/pytest/fixtures/deployment.py#L85
>> 
>> you can comment out this line and most probably your problems will go
>> away.
> 
> It stops reporting the original error but it still doesn't work:
> 
>  Curl error (37): Couldn't read a file:// file for 
> file:///home/ost/rpmbuild/repodata/repomd.xml [Couldn't open file 
> /home/ost/rpmbuild/repodata/repomd.xml]
> 
> The file exists and is world readable so the problem apparently is the
> local directory repo is not propagated to the right location.

there’s no propagation. unless that path exists inside the VM, it won’t work

> 
>> If it works, please let us know here, and I'll adapt the code to make
>> it work
>> with local repos like these.
> 
> Michal Skrivanek  writes:
> 
>>> On 4. 8. 2021, at 13:47, Marcin Sobczyk  wrote:
>>> 
>>> 
>>> 
>>> On 8/4/21 1:30 PM, Milan Zamazal wrote:
>>>> Hi,
>>>> 
>>>> when I try to run ost.sh on a local repo with
>>>> --custom-repo=file:///home/pdm/rpmbuild/repodata/repomd.xml, I get the
>>>> following error:
>>>> 
>>>>  requests.exceptions.InvalidSchema: No connection adapters were
>>>> found for 'file:///home/pdm/rpmbuild/repodata/repomd.xml
>>>> 
>>>> I tried using file:/home/... and /home/... but neither works.
>>>> https://... works fine.
>>>> 
>>>> How can I run the script with a custom repo in a local directory?
>> 
>> You have those VMs running, you can just copy whatever files or vdsm rpm or 
>> whatnot, and run tests…
> 
> Do you mean as an end user?  The VMs are not running initially

feel free to use the lagofy bash functions for more granular control

> and
> copying files manually is cumbersome.

a predefined set of files to a predefined location? not really any more than 
rebuilding a repo :)

> 
> If it cannot be automated in OST then the easiest solution may be to run
> a web server on the host serving the repo(s).  

yes, we had that in CI. i’m happy it’s gone, we won’t do that again
you can still do that yourself of course

> The host IP is reachable
> from the VMs so it should work some way.  Did I say that running OST
> from Podman has its advantages? ;-)
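As suggested above, the simplest workaround for local repos is to serve the directory over HTTP from the host, since the VMs can reach the host IP. A minimal standard-library sketch (the path and port below are examples, not anything OST mandates):

```python
import functools
import http.server
import threading

def serve_repo(directory, port=8000):
    """Serve `directory` over HTTP in a background thread and return the server."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    server = http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Hypothetical usage; the resulting repo could then be passed to OST as e.g.
#   --custom-repo=http://<host-ip>:8000/repodata/repomd.xml
# server = serve_repo("/home/ost/rpmbuild", port=8000)
```

Call `server.shutdown()` when the run is done to stop the background thread cleanly.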
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EXJ4CFNLFFVQDQI2FQXMPCCXIZXU6DJE/


[ovirt-devel] Re: Failing to build ovn2.11 / openvswitch2.11 for el9

2021-08-05 Thread Michal Skrivanek
no need to bother with 2.11, we’re upgrading to 2.13(at least) anyway

> On 5. 8. 2021, at 18:17, Sandro Bonazzola  wrote:
> 
> Hi, I'm trying to rebuild the latest ovn rpm from the CentOS build system for el8s to 
> copr for el9s.
> It fails for me here:
> sed -f ./build-aux/extract-odp-netlink-h < 
> datapath/linux/compat/include/linux/openvswitch.h > include/odp-netlink.h
> sh -f ./build-aux/extract-odp-netlink-macros-h include/odp-netlink.h > 
> include/odp-netlink-macros.h
> PYTHONPATH=./python":"$PYTHONPATH PYTHONDONTWRITEBYTECODE=yes 
> /usr/bin/python3 ./ovsdb/ovsdb-idlc.in  annotate 
> ./vswitchd/vswitch.ovsschema ./lib/vswitch-idl.ann > 
> lib/vswitch-idl.ovsidl.tmp && mv lib/vswitch-idl.ovsidl.tmp 
> lib/vswitch-idl.ovsidl
> PYTHONPATH=./python":"$PYTHONPATH PYTHONDONTWRITEBYTECODE=yes 
> /usr/bin/python3 ./ovsdb/ovsdb-idlc.in  c-idl-source 
> lib/vswitch-idl.ovsidl > lib/vswitch-idl.c.tmp && mv lib/vswitch-idl.c.tmp 
> lib/vswitch-idl.c
> Traceback (most recent call last):
>   File "/builddir/build/BUILD/ovn2.11-2.11.1/./ovsdb/ovsdb-idlc.in", line 1581, in <module>
>     func(*args[1:])
>   File "/builddir/build/BUILD/ovn2.11-2.11.1/./ovsdb/ovsdb-idlc.in", line 442, in printCIDLSource
>     replace_cplusplus_keyword(schema)
>   File "/builddir/build/BUILD/ovn2.11-2.11.1/./ovsdb/ovsdb-idlc.in", line 179, in replace_cplusplus_keyword
>     for columnName in table.columns:
> RuntimeError: dictionary keys changed during iteration
> make: *** [Makefile:8534: lib/vswitch-idl.c] Error 1
> 
> similar for openvswitch2.11 (this still with parallel make with -j2):
> Traceback (most recent call last):
>   File "/builddir/build/BUILD/ovs-2.11.3/build-shared/../ovsdb/ovsdb-idlc.in", line 1597, in <module>
>     func(*args[1:])
>   File "/builddir/build/BUILD/ovs-2.11.3/build-shared/../ovsdb/ovsdb-idlc.in", line 458, in printCIDLSource
>     replace_cplusplus_keyword(schema)
>   File "/builddir/build/BUILD/ovs-2.11.3/build-shared/../ovsdb/ovsdb-idlc.in", line 179, in replace_cplusplus_keyword
>     for columnName in table.columns:
> RuntimeError: dictionary keys changed during iteration
> make: *** [Makefile:8383: lib/vswitch-idl.c] Error 1
> make: *** Waiting for unfinished jobs
> Traceback (most recent call last):
>   File "/builddir/build/BUILD/ovs-2.11.3/build-shared/../ovsdb/ovsdb-idlc.in", line 1597, in <module>
>     func(*args[1:])
>   File "/builddir/build/BUILD/ovs-2.11.3/build-shared/../ovsdb/ovsdb-idlc.in", line 185, in printCIDLHeader
>     replace_cplusplus_keyword(schema)
>   File "/builddir/build/BUILD/ovs-2.11.3/build-shared/../ovsdb/ovsdb-idlc.in", line 179, in replace_cplusplus_keyword
>     for columnName in table.columns:
> RuntimeError: dictionary keys changed during iteration
> make: *** [Makefile:8385: lib/vswitch-idl.h] Error 1
> 
> I thought it could have been due to parallel make so I forced make to run 
> with -j1 but without changing the result.
> Any clue on how to get the build working?
> 
> Thanks,
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
> 
> 
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XJXMIT5FNEZRYTSB5L7PAXLLGCR4DREQ/
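For the record, the RuntimeError in both tracebacks is a Python 3.8+ behavior change: renaming or removing dict keys while iterating over the dict now raises instead of silently misbehaving. A minimal sketch of the failure pattern and the usual fix, iterating over a snapshot of the keys (the column names and the keyword check below are made up for illustration, not the actual ovsdb-idlc code):

```python
# Reproduce the failure mode: mutating dict keys during iteration.
columns = {"new": 1, "delete": 2, "mtu": 3}

def rename_keyword_columns(cols):
    # Iterating over a snapshot (list of keys) makes renames safe;
    # iterating over `cols` directly would raise RuntimeError on
    # Python 3.8+ when keys are added/removed mid-loop.
    for name in list(cols):
        if name in ("new", "delete"):  # hypothetical C++ keyword clash
            cols[name + "_"] = cols.pop(name)
    return cols

print(sorted(rename_keyword_columns(columns)))  # → ['delete_', 'mtu', 'new_']
```

The same one-line `for name in list(...)` change is the kind of fix the failing `replace_cplusplus_keyword` loop would need on a newer Python.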

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5G2OZGBB5Q7M6NYXGGHYQN5NRZ5JB7JV/


[ovirt-devel] Re: OST: How to run ost.sh with a local custom repo?

2021-08-05 Thread Michal Skrivanek


> On 4. 8. 2021, at 13:47, Marcin Sobczyk  wrote:
> 
> 
> 
> On 8/4/21 1:30 PM, Milan Zamazal wrote:
>> Hi,
>> 
>> when I try to run ost.sh on a local repo with
>> --custom-repo=file:///home/pdm/rpmbuild/repodata/repomd.xml, I get the
>> following error:
>> 
>>   requests.exceptions.InvalidSchema: No connection adapters were found for 
>> 'file:///home/pdm/rpmbuild/repodata/repomd.xml
>> 
>> I tried using file:/home/... and /home/... but neither works.
>> https://... works fine.
>> 
>> How can I run the script with a custom repo in a local directory?

You have those VMs running, you can just copy whatever files or vdsm rpm or 
whatnot, and run tests…

> Hi,
> 
> this is most probably caused by:
> 
> https://github.com/oVirt/ovirt-system-tests/blob/0ad56d467ac0e608c568f597188db08117b7565d/ost_utils/ost_utils/pytest/fixtures/deployment.py#L85
> 
> you can comment out this line and most probably your problems will go away.
> If it works, please let us know here, and I'll adapt the code to make it work
> with local repos like these.
> 
> Regards, Marcin
> 
>> 
>> Thanks,
>> Milan
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2GPXTSXP5PSWOB6VQEVAHKJZUZSRLO6C/
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IMFJZKYD7V6RJ7YIRKLYALSKSCV3CLHC/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HG2O5P434S5UXTFPPWP5GH7YM3RHOHGH/


[ovirt-devel] Re: OST HE: Engine VM went down due to cpu load (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 2126 - Failure!)

2021-08-04 Thread Michal Skrivanek


> On 4. 8. 2021, at 7:38, Yedidyah Bar David  wrote:
> 
> On Tue, Aug 3, 2021 at 10:27 PM Michal Skrivanek  <mailto:michal.skriva...@redhat.com>> wrote:
> 
> 
>> On 3. 8. 2021, at 11:43, Yedidyah Bar David > <mailto:d...@redhat.com>> wrote:
>> 
>> On Tue, Aug 3, 2021 at 10:05 AM Yedidyah Bar David > <mailto:d...@redhat.com>> wrote:
>>> 
>>> On Tue, Aug 3, 2021 at 7:50 AM >> <mailto:jenk...@jenkins.phx.ovirt.org>> wrote:
>>>> 
>>>> Project: 
>>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/ 
>>>> <https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/>
>>>> Build: 
>>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2126/
>>>>  
>>>> <https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2126/>
>>>> Build Number: 2126
>>>> Build Status:  Failure
>>>> Triggered By: Started by timer
>>>> 
>>>> -
>>>> Changes Since Last Success:
>>>> -
>>>> Changes for Build #2126
>>>> [Michal Skrivanek] basic: skipping just the VNC console part of 
>>>> test_virtual_machines
>>>> 
>>>> 
>>>> 
>>>> 
>>>> -
>>>> Failed Tests:
>>>> -
>>>> 2 tests failed.
>>>> FAILED:  
>>>> he-basic-suite-master.test-scenarios.test_012_local_maintenance_sdk.test_local_maintenance
>>>> 
>>>> Error Message:
>>>> ovirtsdk4.Error: Failed to read response: [(>>> 0xfaf11228>, 7, 'Failed to connect to 192.168.200.99 port 443: 
>>>> Connection refused')]
>>> 
>>> This looks very similar to the issue we have with dns/dig failures
>>> that cause the engine VM to go down, and it's similar, but different.
>>> 
>>> dig didn't fail (it now uses TCP), but something else caused the agent
>>> to stop the engine VM - a combination of high cpu load and low free
>>> memory, after restarting the engine VM as part of test_008.
>>> 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2126/artifact/exported-artifacts/test_logs/ost-he-basic-suite-master-host-0/var/log/ovirt-hosted-engine-ha/agent.log
>>>  
>>> <https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2126/artifact/exported-artifacts/test_logs/ost-he-basic-suite-master-host-0/var/log/ovirt-hosted-engine-ha/agent.log>
>>> :
>>> 
>>> =
>>> MainThread::INFO::2021-08-03
>>> 06:46:55,068::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>>> Current state ReinitializeFSM (score: 0)
>>> MainThread::INFO::2021-08-03
>>> 06:47:04,089::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>>> Global maintenance detected
>>> MainThread::INFO::2021-08-03
>>> 06:47:04,169::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
>>> Success, was notification of state_transition
>>> (ReinitializeFSM-GlobalMaintenance) sent? ignored
>>> MainThread::INFO::2021-08-03
>>> 06:47:05,249::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>>> Current state GlobalMaintenance (score: 3400)
>>> MainThread::INFO::2021-08-03
>>> 06:47:14,439::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>>> Global maintenance detected
>>> MainThread::INFO::2021-08-03
>>> 06:47:25,526::states::176::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
>>> Penalizing score by 814 due to cpu load
>>> MainThread::INFO::2021-08-03
>>> 06:47:25,527::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>>> Current state GlobalMaintenance (score: 2586)
>>> MainThread::INFO::2021-08-03
>>> 06:47:25,537::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>>> Global maintenance detected
>>> MainThread::INFO::2021-08-03
>>> 06:47:26,029::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>>> Current state GlobalMaintenance (score: 2586)
>>>

[ovirt-devel] Re: OST HE: Engine VM went down due to cpu load (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 2126 - Failure!)

2021-08-03 Thread Michal Skrivanek


> On 3. 8. 2021, at 11:43, Yedidyah Bar David  wrote:
> 
> On Tue, Aug 3, 2021 at 10:05 AM Yedidyah Bar David  <mailto:d...@redhat.com>> wrote:
>> 
>> On Tue, Aug 3, 2021 at 7:50 AM  wrote:
>>> 
>>> Project: 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
>>> Build: 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2126/
>>> Build Number: 2126
>>> Build Status:  Failure
>>> Triggered By: Started by timer
>>> 
>>> -----
>>> Changes Since Last Success:
>>> -
>>> Changes for Build #2126
>>> [Michal Skrivanek] basic: skipping just the VNC console part of 
>>> test_virtual_machines
>>> 
>>> 
>>> 
>>> 
>>> -
>>> Failed Tests:
>>> -
>>> 2 tests failed.
>>> FAILED:  
>>> he-basic-suite-master.test-scenarios.test_012_local_maintenance_sdk.test_local_maintenance
>>> 
>>> Error Message:
>>> ovirtsdk4.Error: Failed to read response: [(>> 0xfaf11228>, 7, 'Failed to connect to 192.168.200.99 port 443: 
>>> Connection refused')]
>> 
>> This looks very similar to the issue we have with dns/dig failures
>> that cause the engine VM to go down, and it's similar, but different.
>> 
>> dig didn't fail (it now uses TCP), but something else caused the agent
>> to stop the engine VM - a combination of high cpu load and low free
>> memory, after restarting the engine VM as part of test_008.
>> 
>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2126/artifact/exported-artifacts/test_logs/ost-he-basic-suite-master-host-0/var/log/ovirt-hosted-engine-ha/agent.log
>> :
>> 
>> =
>> MainThread::INFO::2021-08-03
>> 06:46:55,068::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>> Current state ReinitializeFSM (score: 0)
>> MainThread::INFO::2021-08-03
>> 06:47:04,089::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>> Global maintenance detected
>> MainThread::INFO::2021-08-03
>> 06:47:04,169::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
>> Success, was notification of state_transition
>> (ReinitializeFSM-GlobalMaintenance) sent? ignored
>> MainThread::INFO::2021-08-03
>> 06:47:05,249::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>> Current state GlobalMaintenance (score: 3400)
>> MainThread::INFO::2021-08-03
>> 06:47:14,439::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>> Global maintenance detected
>> MainThread::INFO::2021-08-03
>> 06:47:25,526::states::176::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
>> Penalizing score by 814 due to cpu load
>> MainThread::INFO::2021-08-03
>> 06:47:25,527::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>> Current state GlobalMaintenance (score: 2586)
>> MainThread::INFO::2021-08-03
>> 06:47:25,537::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>> Global maintenance detected
>> MainThread::INFO::2021-08-03
>> 06:47:26,029::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>> Current state GlobalMaintenance (score: 2586)
>> MainThread::INFO::2021-08-03
>> 06:47:35,050::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>> Global maintenance detected
>> MainThread::INFO::2021-08-03
>> 06:47:35,576::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>> Current state GlobalMaintenance (score: 2586)
>> MainThread::INFO::2021-08-03
>> 06:47:45,597::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>> Global maintenance detected
>> MainThread::INFO::2021-08-03
>> 06:47:46,521::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
>> Current state GlobalMaintenance (score: 2586)
>> MainThread::INFO::2021-08-03
>> 06:47:55,577::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>> Global maintenance detected
>> MainThread::INFO::2021-08-03
>> 06:47:56,559::hosted

[ovirt-devel] Re: lago is dead, long live ost!

2021-07-27 Thread Michal Skrivanek


> On 27. 7. 2021, at 14:53, Arik Hadas  wrote:
> 
> 
> 
> On Tue, Jul 27, 2021 at 2:57 PM Michal Skrivanek  <mailto:michal.skriva...@redhat.com>> wrote:
> Hi,
> we worked for a while on removing our dependency on lago that has been more 
> or less abandoned for a long time now. We got to the minimal feature set that 
> is simple to replace with bunch of virsh commands, and most of the advanced 
> logic is implemented in pytest. We’ve just merged the last patches that freed 
> us up from lago for local and beaker CI runs.
> 
> There’s a new ost.sh wrapper for the simple operations of running the suite, 
> inspecting the environment and shutting it down.
> Hopefully it’s self explanatory….
> 
> ./ost.sh command [arguments]
> 
> run <suite> <distro> [...]
> initializes the workspace with preinstalled distro ost-images, launches 
> VMs and runs the whole suite
> add extra repos with --custom-repo=url
> skip check that extra repo is actually used with --skip-custom-repos-check
> status
> show environment status, VM details
> shell <vm name> [command ...]
> opens ssh connection
> console <vm name>
> opens virsh console
> destroy
> stop and remove the running environment
> 
> Right now lago and run_suite.sh still work and are still used by the 
> mock-based jenkins.ovirt.org runs. They will go 
> away in the future.
> 
> nice
> so if run_suite.sh is deprecated, how about replacing it in the documentation 
> with that new thingy?

yeah yeah, sure. it's not gone yet, it's now just two (almost) identical configs and 
run methods

>  
> 
> Thanks,
> michal
> ___
> Devel mailing list -- devel@ovirt.org <mailto:devel@ovirt.org>
> To unsubscribe send an email to devel-le...@ovirt.org 
> <mailto:devel-le...@ovirt.org>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> <https://www.ovirt.org/privacy-policy.html>
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> <https://www.ovirt.org/community/about/community-guidelines/>
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WJMNDQR5QNRQIJ64KUGTS3NXZJTA2MGN/
>  
> <https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WJMNDQR5QNRQIJ64KUGTS3NXZJTA2MGN/>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JSNKLB3GPPCK2PMOWDTPF7Z2V2MHEXAB/


[ovirt-devel] lago is dead, long live ost!

2021-07-27 Thread Michal Skrivanek
Hi,
we worked for a while on removing our dependency on lago that has been more or 
less abandoned for a long time now. We got to the minimal feature set that is 
simple to replace with bunch of virsh commands, and most of the advanced logic 
is implemented in pytest. We’ve just merged the last patches that freed us up 
from lago for local and beaker CI runs.

There’s a new ost.sh wrapper for the simple operations of running the suite, 
inspecting the environment and shutting it down.
Hopefully it’s self explanatory….

./ost.sh command [arguments]

run <suite> <distro> [...]
initializes the workspace with preinstalled distro ost-images, launches VMs 
and runs the whole suite
add extra repos with --custom-repo=url
skip check that extra repo is actually used with --skip-custom-repos-check
status
show environment status, VM details
shell <vm name> [command ...]
opens ssh connection
console <vm name>
opens virsh console
destroy
stop and remove the running environment
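The command summary above is easy to drive from a script as well; a minimal sketch of a Python wrapper around ost.sh (the suite name and repo URL are examples, and this assumes an ovirt-system-tests checkout as the working directory):

```python
import subprocess

def ost(*args, dry_run=False):
    """Run ./ost.sh with the given arguments; with dry_run just return the command."""
    cmd = ["./ost.sh", *args]
    if dry_run:
        return cmd
    # check=True raises CalledProcessError if the suite run fails.
    subprocess.run(cmd, check=True)
    return cmd

# Example session (hypothetical arguments):
# ost("run", "basic-suite-master", "el8stream",
#     "--custom-repo=https://example.org/repo")
# ost("status")
# ost("destroy")
```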

Right now lago and run_suite.sh still work and are still used by the 
mock-based jenkins.ovirt.org runs. They will go away in the future.

Thanks,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WJMNDQR5QNRQIJ64KUGTS3NXZJTA2MGN/


[ovirt-devel] Re: OST failure, bad comment

2021-07-12 Thread Michal Skrivanek


> On 10. 7. 2021, at 0:14, Nir Soffer  wrote:
> 
> On Sat, Jul 10, 2021 at 12:59 AM Nir Soffer  wrote:
>> 
>> OST failed, and posted unprocessed comment:
>> 
>>OST Verified -1
>>Patch Set 1: Verified-1
>>https://redir.apps.ovirt.org/dj/job/ds-ost-gating-trigger2/313 :
>> OST https://redir.apps.ovirt.org/dj/job/ds-ost-baremetal_manual/$OST_NUMBER
>>failed with: $OST_MESSAGE
>> 
>> https://gerrit.ovirt.org/c/vdsm/+/115642#message-b2993761_adbc1b6a
>> 
>> Looking at:
>> https://redir.apps.ovirt.org/dj/job/ds-ost-gating-trigger2/313
>> 
>> It looks like environment issue:
>> 
>> ERROR: Build step failed with exception
>> 
>> java.lang.NullPointerException: no workspace from node
>> hudson.slaves.DumbSlave[hpe-bl280g6-12...] which is computer
>> hudson.slaves.SlaveComputer@2e2cb321 and has channel null
>> 
>> So we have 2 issues:
>> 1. If build fails, error is not handled properly producing bad comment
>> 2. the actual build failure
> 
> I started another build manually (8543) and it started normally, so this was
> probably a temporary issue.

Seems someone rebooted the jenkins host in the middle of the run

> 
>> 
>> The first issue should be handled by posting a better comment:
>> 
>>OST build error:
>> https://redir.apps.ovirt.org/dj/job/ds-ost-gating-trigger2/313
>> 
>> And OST should not mark the patch as verified -1, since it did not run any 
>> test.

It’s a shell script that runs after the job, and it fails on the missing workspace. 
It would be possible to set an unstable status on a failure of that script, but it 
doesn’t seem possible to handle an internal Jenkins exception

>> 
>> Marking a patch as -1 means some tests have failed,  and the author
>> needs to check
>> why. A build error means OST or CI maintainers need to check why the
>> error happened.
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/H4Q2MT7R7DXKWHRZK7RGPSOLRYZMVZOP/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4BHK5YVUAGXYTJBZNVEZGG6KZ3OBLMQA/


[ovirt-devel] Re: selenium.common.exceptions.TimeoutException: Message: Clusters menu is not displayed

2021-06-23 Thread Michal Skrivanek


> On 23. 6. 2021, at 13:29, Yedidyah Bar David  wrote:
> 
> On Wed, Jun 23, 2021 at 1:03 PM Code Review  wrote:
>> 
>> From Jenkins CI :
>> 
>> Jenkins CI has posted comments on this change. ( 
>> https://gerrit.ovirt.org/c/ovirt-system-tests/+/115191 )
>> 
>> Change subject: Make the ansible_inventory fixture backend-independent
>> ..
>> 
>> 
>> Patch Set 16: Continuous-Integration-1
>> 
>> Build Failed
>> 
>> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17451/ 
>> : FAILURE
> 
> Hi all,
> 
> Can someone please have a look and decide if this failure is reasonable?
> 
> The screenshot of the failed "Open cluster list view" does not show
> anything suspicious at all [1], to me, but not sure how it should
> otherwise look - e.g. should it include a pointer, which might imply
> where we tried to move, etc. It timed out 3 minutes after a successful
> login [2].

it should have opened the Clusters screen, but it looks like it’s stuck at the 
dashboard after login
we likely have some (perhaps timing-related) issues with navigation

> 
> If it's not supposed to include a pointer, perhaps we should try to
> make selenium do include a pointer in the screenshots - I have a
> feeling this can help debug other UI issues. But I don't know this
> code at all...
> 
> Thanks and best regards,
> 
> [1] 
> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17451/artifact/check-patch.basic_suite_master.el8.x86_64/ui_tests_artifacts/20210623_100213_302883_firefox_test_clusters_failed.png
> 
> [2] 
> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17451/artifact/check-patch.basic_suite_master.el8.x86_64/ui_tests_artifacts/20210623_095906_004756_firefox_test_login_success.png
> 
>> 
>> 
>> --
>> To view, visit https://gerrit.ovirt.org/c/ovirt-system-tests/+/115191
>> To unsubscribe, or for help writing mail filters, visit 
>> https://gerrit.ovirt.org/settings
>> 
>> Gerrit-Project: ovirt-system-tests
>> Gerrit-Branch: master
>> Gerrit-Change-Id: Id07012f8fc3a972a3b62d32f2271d5747514e13a
>> Gerrit-Change-Number: 115191
>> Gerrit-PatchSet: 16
>> Gerrit-Owner: Yedidyah Bar David 
>> Gerrit-Reviewer: Andrej Cernek 
>> Gerrit-Reviewer: Anton Marchukov 
>> Gerrit-Reviewer: Dafna Ron 
>> Gerrit-Reviewer: Dusan Fodor 
>> Gerrit-Reviewer: Gal Ben Haim 
>> Gerrit-Reviewer: Galit Rosenthal 
>> Gerrit-Reviewer: Jenkins CI 
>> Gerrit-Reviewer: Marcin Sobczyk 
>> Gerrit-Reviewer: Name of user not set #1001916
>> Gerrit-Reviewer: Yedidyah Bar David 
>> Gerrit-Comment-Date: Wed, 23 Jun 2021 10:03:15 +
>> Gerrit-HasComments: No
>> Gerrit-Has-Labels: Yes
>> Gerrit-MessageType: comment
>> 
> 
> 
> -- 
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EFZVD6IQLBXGOLUCSQ7N7NCZ6EKVVA4P/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/P5QD3P2EISIUEKIGVN25H4QSJQGO7LXX/


[ovirt-devel] Re: OST verifed -1 broken, fails for infra issue in OST

2021-06-21 Thread Michal Skrivanek


> On 14. 6. 2021, at 13:14, Nir Soffer  wrote:
> 
> I got this wrong review from OST, which looks like an infra issue in OST:
> 
> Patch:
> https://gerrit.ovirt.org/c/vdsm/+/115232
> 
> Error:
> https://gerrit.ovirt.org/c/vdsm/+/115232#message-46ad5e75_ed543485
> 
> Failing code:
> 
>         Package(*line.split())
>         for res in results.values()
>         for line in _filter_results(res['stdout'].splitlines())
>     ]
> E   TypeError: __new__() missing 2 required positional arguments: 'version' and 'repo'
> ost_utils/ost_utils/deployment_utils/package_mgmt.py:177: TypeError
> 
> I hope someone working on OST can take a look soon.

it’s from a week ago; is that still relevant, or did you paste a wrong patch?

specifically this issue has been fixed by https://gerrit.ovirt.org/115254 on 
June 15th

Thanks,
michal
> 
> Nir
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EDLFMHDYR37FFNJBN7FLTBALURZYEC7V/
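For context, the parsing failure quoted above is easy to reproduce: expanding a split line into a namedtuple raises exactly that TypeError whenever the line has fewer whitespace-separated fields than the tuple expects. A minimal sketch of the pattern and a defensive variant (the `Package` fields here are assumptions modeled on the error message, not OST's actual definition):

```python
from collections import namedtuple

# Hypothetical three-field package record, similar in shape to the one
# referenced in the traceback above.
Package = namedtuple("Package", ["name", "version", "repo"])

def parse(lines):
    packages = []
    for line in lines:
        fields = line.split()
        # Guard against short/garbled lines instead of letting
        # Package(*fields) raise "missing N required positional arguments".
        if len(fields) != len(Package._fields):
            continue
        packages.append(Package(*fields))
    return packages

good = "vdsm 4.40.70 ovirt-master-snapshot"
bad = "vdsm"  # Package(*bad.split()) would raise the quoted TypeError
print(parse([good, bad]))  # one Package for the good line; the short line is skipped
```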
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6KPLROLHFYCUKPXZETWP6A2TIQLUZJ4J/


[ovirt-devel] Re: OST gating failed on - test_import_vm1

2021-06-21 Thread Michal Skrivanek


> On 16. 6. 2021, at 10:03, Eyal Shenitzky  wrote:
> 
> Thanks for looking into it Michal.
> 
> Actually, my patch related to incremental backup so there nothing changed 
> around the snapshot area and I believe the failure isn't related to it,
> 
> I re-run OST for this change - 
> https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/6795/
>  
> <https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/6795/>.
> 
> Let's see if it works fine.

it still needs to be investigated and fixed

> 
> On Tue, 15 Jun 2021 at 14:00, Michal Skrivanek  <mailto:mskri...@redhat.com>> wrote:
> 
> 
>> On 15. 6. 2021, at 12:00, Eyal Shenitzky > <mailto:eshen...@redhat.com>> wrote:
>> 
>> Hi All,
>> 
>> As part of OST gating verification, the verification failed with the 
>> following message - 
>> 
>> gating2 (43) : OST build 6687 failed with: test_import_vm1 failed:
>> 
>> engine = 
>> event_id = [1165], timeout = 600
>> 
>> @contextlib.contextmanager
>> def wait_for_event(engine, event_id, timeout=assertions.LONG_TIMEOUT):
>> '''
>> event_id could either be an int - a single
>> event ID or a list - multiple event IDs
>> that all will be checked
>> '''
>> events = engine.events_service()
>> last_event = int(events.list(max=2)[0].id)
>> try:
>> >   yield
>> 
>> ost_utils/ost_utils/engine_utils.py:36: 
>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
>> _ _ 
>> 
>> engine = 
>> correlation_id = 'test_validate_ova_import_vm', vm_name = 'imported_vm'
>> imported_url = 'ova:///var/tmp/ova_vm.ova', storage_domain = 'iscsi'
>> cluster_name = 'test-cluster'
>> 
>> def _import_ova(engine, correlation_id, vm_name, imported_url, 
>> storage_domain, cluster_name):
>> sd = 
>> engine.storage_domains_service().list(search='name={}'.format(storage_domain))[0]
>> cluster = 
>> engine.clusters_service().list(search='name={}'.format(cluster_name))[0]
>> imports_service = engine.external_vm_imports_service()
>> host = test_utils.get_first_active_host_by_name(engine)
>> 
>> with engine_utils.wait_for_event(engine, 1165): # 
>> IMPORTEXPORT_STARTING_IMPORT_VM
>> imports_service.add(
>> types.ExternalVmImport(
>> name=vm_name,
>> provider=types.ExternalVmProviderType.KVM,
>> url=imported_url,
>> cluster=types.Cluster(
>> id=cluster.id
>> ),
>> storage_domain=types.StorageDomain(
>> id=sd.id
>> ),
>> host=types.Host(
>> id=host.id
>> ),
>> sparse=True
>> >   ), async_=True, query={'correlation_id': correlation_id}
>> )
>> 
>> basic-suite-master/test-scenarios/test_004_basic_sanity.py:935: 
>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
>> _ _ 
>> 
>> self = 
>> import_ = 
>> headers = None, query = {'correlation_id': 'test_validate_ova_import_vm'}
>> wait = True, kwargs = {'async_': True}
>> 
>> def add(
>> self,
>> import_,
>> headers=None,
>> query=None,
>> wait=True,
>> **kwargs
>> ):
>> """
>> This operation is used to import a virtual machine from external hypervisor, 
>> such as KVM, XEN or VMware.
>> For example import of a virtual machine from VMware can be facilitated using 
>> the following request:
>> [source]
>> 
>> POST /externalvmimports
>> 
>> With request body of type <>, for 
>> example:
>> [source,xml]
>> 
>> 
>> 
>> my_vm
>> 
>> 
>> 
>> vm_name_as_is_in_vmware
>> true
>> vmware_user
>> 123456
>> VMWARE
>> vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1
>>  <>
>> 
>> 
>> 
>> 
>> 
>> """
>> # Check the types of the parameters:
>> Service._check_types([
>> ('import_', import_, types.ExternalVmImport),
>> ])
>> 
>> # Build the URL:
>> 
>> Patch set 4:Verified -1
>> 
>> 
>> 
>> The OST run as part of verification for patch - 
>> https://gerrit.ovirt.org/#/c/ovirt-engine/+/115192/ 
>> <https://gerrit.ovirt.org/#/c/ovirt-engine/+/115192/>
>> 
>> Can someone from Virt/OST team have a look?
> 
> you should be able to review logs in generic way
> 
> you can ee
>

[ovirt-devel] Re: OST gating failed on - test_import_vm1

2021-06-21 Thread Michal Skrivanek


> On 15. 6. 2021, at 12:00, Eyal Shenitzky  wrote:
> 
> Hi All,
> 
> As part of OST gating verification, the verification failed with the 
> following message - 
> 
> gating2 (43) : OST build 6687 failed with: test_import_vm1 failed:
> 
> engine = 
> event_id = [1165], timeout = 600
> 
> @contextlib.contextmanager
> def wait_for_event(engine, event_id, timeout=assertions.LONG_TIMEOUT):
> '''
> event_id could either be an int - a single
> event ID or a list - multiple event IDs
> that all will be checked
> '''
> events = engine.events_service()
> last_event = int(events.list(max=2)[0].id)
> try:
> >   yield
> 
> ost_utils/ost_utils/engine_utils.py:36: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> 
> engine = 
> correlation_id = 'test_validate_ova_import_vm', vm_name = 'imported_vm'
> imported_url = 'ova:///var/tmp/ova_vm.ova', storage_domain = 'iscsi'
> cluster_name = 'test-cluster'
> 
> def _import_ova(engine, correlation_id, vm_name, imported_url, 
> storage_domain, cluster_name):
> sd = 
> engine.storage_domains_service().list(search='name={}'.format(storage_domain))[0]
> cluster = 
> engine.clusters_service().list(search='name={}'.format(cluster_name))[0]
> imports_service = engine.external_vm_imports_service()
> host = test_utils.get_first_active_host_by_name(engine)
> 
> with engine_utils.wait_for_event(engine, 1165): # 
> IMPORTEXPORT_STARTING_IMPORT_VM
> imports_service.add(
> types.ExternalVmImport(
> name=vm_name,
> provider=types.ExternalVmProviderType.KVM,
> url=imported_url,
> cluster=types.Cluster(
> id=cluster.id 
> ),
> storage_domain=types.StorageDomain(
> id=sd.id 
> ),
> host=types.Host(
> id=host.id 
> ),
> sparse=True
> >   ), async_=True, query={'correlation_id': correlation_id}
> )
> 
> basic-suite-master/test-scenarios/test_004_basic_sanity.py:935: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> 
> self = 
> import_ = 
> headers = None, query = {'correlation_id': 'test_validate_ova_import_vm'}
> wait = True, kwargs = {'async_': True}
> 
> def add(
> self,
> import_,
> headers=None,
> query=None,
> wait=True,
> **kwargs
> ):
> """
> This operation is used to import a virtual machine from external hypervisor, 
> such as KVM, XEN or VMware.
> For example import of a virtual machine from VMware can be facilitated using 
> the following request:
> [source]
> 
> POST /externalvmimports
> 
> With request body of type ExternalVmImport, for 
> example:
> [source,xml]
> 
> <external_vm_import>
>   <vm>
> <name>my_vm</name>
>   </vm>
>   <cluster id="..."/>
>   <storage_domain id="..."/>
>   <name>vm_name_as_is_in_vmware</name>
>   <sparse>true</sparse>
>   <username>vmware_user</username>
>   <password>123456</password>
>   <provider>VMWARE</provider>
>   <url>vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1</url>
> </external_vm_import>
> 
> """
> # Check the types of the parameters:
> Service._check_types([
> ('import_', import_, types.ExternalVmImport),
> ])
> 
> # Build the URL:
> 
> Patch set 4:Verified -1
> 
> 
> 
> The OST run as part of verification for patch - 
> https://gerrit.ovirt.org/#/c/ovirt-engine/+/115192/ 
> 
> 
> Can someone from Virt/OST team have a look?

you should be able to review the logs in a generic way

you can see:
2021-06-15 11:08:37,515+02 ERROR 
[org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand] 
(default task-2) [test_validate_ova_import_vm] Exception: 
java.lang.NullPointerException
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand$ExternalVmImporter.performImport(ImportVmFromExternalUrlCommand.java:116)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalUrlCommand.executeCommand(ImportVmFromExternalUrlCommand.java:65)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1174)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1332)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2008)

likely caused by
2021-06-15 11:08:37,513+02 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] 
(default task-2) [test_validate_ova_import_vm] Exception: 
org.ovirt.engine.core.common.utils.ansible.AnsibleRunnerCallException: Task Run 
query script failed to execute. Please check logs for more details: 
/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20210615110831-lago-basic-suite-master-host-0-test_validate_ova_import_vm.log

and then the following error appears in the ansible log:
2021-06-15 11:08:37 CEST - fatal: [lago-basic-suite-master-host-0]: FAILED! => 
{"changed": true, "msg": "non-zero return code", "rc": 1, "stderr": "Shared 
connection to lago-basic-suite-master-host-0 closed.\r\n", "stderr_lines": 
["Shared connection to lago-basic-suite-master-host-0 closed."], "stdout": 
"Traceback (most recent call last):\r\n  File 

[ovirt-devel] OST using FIPS-enabled images

2021-06-09 Thread Michal Skrivanek
Hi all,
just a heads up in case something comes up. 
The OST runs that we’re performing in CI will now use images that are installed 
with “fips=1” kernel parameter that sets them up to comply with FIPS 140-2. We 
recently fixed all the known issues in oVirt around this, so hopefully it 
should all just work.
We will not be testing non-fips setups separately, those should be “easier” and 
less likely to break, so with testing only fips we should cover breakage in 
both.
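For anyone verifying their own setup: "fips=1" is just a kernel cmdline token. A tiny illustrative check (the cmdline value below is hypothetical; on a live host you would read /proc/cmdline or /proc/sys/crypto/fips_enabled instead):

```shell
# Hypothetical cmdline; real hosts expose theirs in /proc/cmdline.
cmdline="BOOT_IMAGE=/vmlinuz root=/dev/vda1 fips=1 quiet"
case " ${cmdline} " in
  *" fips=1 "*) fips=enabled ;;
  *)            fips=disabled ;;
esac
echo "FIPS: ${fips}"
```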

Thanks,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K3ZM3X25PN2MA7BRGEOI47N7VDJTPCFO/


[ovirt-devel] Re: Merge rights changes in the oVirt Engine project

2021-04-09 Thread Michal Skrivanek


> On 8. 4. 2021, at 7:18, Yedidyah Bar David  wrote:
> 
>> On Wed, Apr 7, 2021 at 6:45 PM Michal Skrivanek wrote:
>> 
>> 
>> 
>>> On 7. 4. 2021, at 12:04, Yedidyah Bar David  wrote:
>>> 
>>> On Tue, Aug 11, 2020 at 11:05 AM Tal Nisan  wrote:
>>>> 
>>>> Hi everyone,
>>>> As you probably know we are now in a mode in which we develop our next 
>>>> zstream version on the master branch as opposed to how we worked before 
>>>> where the master version was dedicated for the next major version. This 
>>>> makes the rapid changes in master to be delivered to customers in a much 
>>>> higher cadence thus affecting stability.
>>>> Due to that we think it's best that from now on merges in the master 
>>>> branch will be done only by stable branch maintainers after inspecting 
>>>> those closely.
>>>> 
>>>> What you need to do in order to get your patch merged:
>>>> - Have it pass Jenkins
>>>> - Have it get code review +2
>>>> - Have it mark verified +1
>>>> - It's always encouraged to have it tested by OST, for bigger changes it's 
>>>> a must
>>>> 
>>>> Once you have all those covered, please add me as a reviewer and I'll 
>>>> examine it and merge if everything seems right, if I haven't done it in a 
>>>> timely manner feel free to ping me.
>>> 
>>> Is the above still the current policy?
>> 
>> Hi Didi,
>> well, yes it is. what’s the concern?
> 
> No "concern", other than I noticed a few times that people other than
> Tal merged patches, and yesterday I did the same [1] after getting +1

that’s master branch, not stable branch. 

> from Sandro and consulting him, seeing that I have permissions. IIRC I
> didn't have permissions until recently, so I wondered if anything
> changed.
> 
> If the re-granting of permissions was a mistake, let's revert. If it's
> on purpose, perhaps clarify the situation.

we did a major cleanup of stale people and of the convoluted system of groups 
that accumulated in the past.
we now just use a simple “regular” and stable maintainers list. +2 is for 
respective areas; we do not have per-area granularity in the ovirt-engine 
project, it’s really just an agreement that you shouldn’t merge patches in 
areas that are not yours.

oh, now I get it, you're really talking about submitting, not +2 :) ok, yes, that 
has changed. we’ve concluded that we can go back to the previous mode of not 
distinguishing between +2 and merge rights.
everything else above about the patch quality criteria still applies, it’s just 
that indeed anyone on this list [1] can click the submit button.


> 
> [1] https://gerrit.ovirt.org/c/ovirt-engine/+/114130 
> 
>> I’d love to get to a point when we can automatically gate patches based on 
>> OST, but it’s going slow…so for now it’s still manual.
> 
> Not sure that's enough, but it would be a step in the right direction.
> Sometimes patches won't break OST but are still harmful.

sure, it’s never 100%, but still better than today. Especially since we started 
to pay more attention to OST results: it’s not only that a bad patch brings in a 
regression, it’s also all the time wasted by people trying to fix OST after such 
a patch.

> 
> Did we ever consider going fully to Test-Driven-Development? Not
> certain if there are studies/methods to calculate/approximate the
> expected extra work this will require. Assuming you want eventually
> all developers to also know well-enough OST (in addition to their
> specific expertise) and be comfortable patching it, it might make
> sense.

I think anything more involved would be complicated by the slowness. It’s 
better than before, but still ~35 minutes at best for basic suite. And it’s 
fragile so not really that simple to add a test without either breaking it or 
making it too isolated - too slow for practical use. There are always unit 
tests of course, that’s a separate matter...

Thanks,
michal


> 
> Best regards,
> — 
> Didi


[1] https://gerrit.ovirt.org/#/admin/groups/63,members


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MG4BC3ABRKSXBLPRNQO6LEJEQGS2EFQY/


[ovirt-devel] Re: Merge rights changes in the oVirt Engine project

2021-04-07 Thread Michal Skrivanek


> On 7. 4. 2021, at 12:04, Yedidyah Bar David  wrote:
> 
> On Tue, Aug 11, 2020 at 11:05 AM Tal Nisan  wrote:
>> 
>> Hi everyone,
>> As you probably know we are now in a mode in which we develop our next 
>> zstream version on the master branch as opposed to how we worked before 
>> where the master version was dedicated for the next major version. This 
>> makes the rapid changes in master to be delivered to customers in a much 
>> higher cadence thus affecting stability.
>> Due to that we think it's best that from now on merges in the master branch 
>> will be done only by stable branch maintainers after inspecting those 
>> closely.
>> 
>> What you need to do in order to get your patch merged:
>> - Have it pass Jenkins
>> - Have it get code review +2
>> - Have it mark verified +1
>> - It's always encouraged to have it tested by OST, for bigger changes it's a 
>> must
>> 
>> Once you have all those covered, please add me as a reviewer and I'll 
>> examine it and merge if everything seems right, if I haven't done it in a 
>> timely manner feel free to ping me.
> 
> Is the above still the current policy?

Hi Didi,
well, yes it is. what’s the concern?
I’d love to get to a point when we can automatically gate patches based on OST, 
but it’s going slow…so for now it’s still manual.

Thanks,
michal

> 
> Thanks,
> -- 
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5LIRDIAACHT52K7DUFP3WWRHXEUYY6LR/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/A7D4MIQVHGV2ASDQIJ3XQ3HU3C6SU47H/


[ovirt-devel] Re: basic suite fails on test_metrics_and_log_collector

2021-03-17 Thread Michal Skrivanek


> On 17. 3. 2021, at 13:53, Dana Elfassy  wrote:
> 
> Adding +Marcin Sobczyk  
> 
> On Mon, Mar 15, 2021 at 9:59 AM Yedidyah Bar David wrote:
> On Mon, Mar 15, 2021 at 7:55 AM Yedidyah Bar David wrote:
> >
> > Hi all,
> >
> > This started a few days ago [1] and randomly happens since then:
> >
> > E   DEBUG: Configuration:
> > E   DEBUG: command: collect
> > E   DEBUG: Traceback (most recent call last):
> > E   DEBUG:   File
> > "/usr/lib/python3.6/site-packages/ovirt_log_collector/__main__.py",
> > line 2067, in 
> > E   DEBUG: '%s directory is not empty.' % 
> > (conf["local_tmp_dir"])
> > E   DEBUG: Exception: /dev/shm/log directory is not
> > empty.ERROR: /dev/shm/log directory is not empty.non-zero return code
> >
> > Michal tried to fix this by using a random directory but it still fails [2]:
> >
> > DEBUG: command: collect
> > DEBUG: Traceback (most recent call last):
> > DEBUG:   File 
> > "/usr/lib/python3.6/site-packages/ovirt_log_collector/__main__.py",
> > line 2067, in 
> > DEBUG: '%s directory is not empty.' % (conf["local_tmp_dir"])
> > DEBUG: Exception: /dev/shm/kaN7uY directory is not empty.ERROR:
> > /dev/shm/kaN7uY directory is not empty.non-zero return code
> >
> > Since I suppose that the randomness of mktemp is good enough, it must
> > be something else. Also, the last successful run before [1] used the
> > same OST git commit (same code), so I do not think it's something in
> > OST's code.
> >
> > Any idea?
> >
> > I think I'll push a patch to create and use the directory right before
> > calling ovirt-log-collector, which is probably better in other ways.
> 
> My patch [1] still fails, with a somewhat different error message, but
> this made me check further, and while I still do not understand, I have
> this to add:
> 
> In the failing runs, ovirt-log-collector is called *twice* in parallel. E.g.
> in [2] (the check-patch of [1]):
> 
> Mar 15 07:38:59 lago-basic-suite-master-engine platform-python[59099]:
> ansible-command Invoked with _raw_params=lctmp=$(mktemp -d -p
> /dev/shm); ovirt-log-collector --verbose --batch --no-hypervisors
> --local-tmp="${lctmp}" --conf-file=/root/ovirt-log-collector.conf
> _uses_shell=True warn=True stdin_add_newline=True
> strip_empty_ends=True argv=None chdir=None executable=None
> creates=None removes=None stdin=None
> Mar 15 07:38:59 lago-basic-suite-master-engine platform-python[59124]:
> ansible-command Invoked with _raw_params=lctmp=$(mktemp -d -p
> /dev/shm); ovirt-log-collector --verbose --batch --no-hypervisors
> --local-tmp="${lctmp}" --conf-file=/root/ovirt-log-collector.conf
> _uses_shell=True warn=True stdin_add_newline=True
> strip_empty_ends=True argv=None chdir=None executable=None
> creates=None removes=None stdin=None
> 
> It also generates two logs, which you can check/compare.
> 
> It's the same for previous ones, e.g. latest nightly [3][4]:
> 
> Mar 15 06:23:30 lago-basic-suite-master-engine platform-python[59343]:
> ansible-command Invoked with _raw_params=ovirt-log-collector --verbose
> --batch --no-hypervisors --conf-file=/root/ovirt-log-collector.conf
> _uses_shell=True warn=True stdin_add_newline=True
> strip_empty_ends=True argv=None chdir=None executable=None
> creates=None removes=None stdin=None
> Mar 15 06:23:30 lago-basic-suite-master-engine setroubleshoot[58889]:
> SELinux is preventing /usr/lib/systemd/systemd from unlink access on
> the sock_file ansible-ssh-lago-basic-suite-master-host-1-22-root. For
> complete SELinux messages run: sealert -l
> d03a8655-9430-4fcf-9892-3b4df1939899
> Mar 15 06:23:30 lago-basic-suite-master-engine setroubleshoot[58889]:
> SELinux is preventing /usr/lib/systemd/systemd from unlink access on
> the sock_file ansible-ssh-lago-basic-suite-master-host-1-22-root.#012#012*
>  Plugin catchall (100. confidence) suggests
> **#012#012If you believe that systemd should
> be allowed unlink access on the
> ansible-ssh-lago-basic-suite-master-host-1-22-root sock_file by
> default.#012Then you should report this as a bug.#012You can generate
> a local policy module to allow this access.#012Do#012allow this access
> for now by executing:#012# ausearch -c 'systemd' --raw | audit2allow
> -M my-systemd#012# semodule -X 300 -i my-systemd.pp#012
> Mar 15 06:23:30 lago-basic-suite-master-engine platform-python[59361]:
> ansible-command Invoked with _raw_params=ovirt-log-collector --verbose
> --batch --no-hypervisors --conf-file=/root/ovirt-log-collector.conf
> _uses_shell=True warn=True stdin_add_newline=True
> strip_empty_ends=True argv=None chdir=None executable=None
> creates=None removes=None stdin=None
> 
> Any idea what might have caused this to start happening? Perhaps
> a bug in ansible, or ansible-runner? It reminds me of [5].
> Adding Dana and Martin.
> 
> I think [5] is quite a serious bug, btw, should be a 4.4.5 blocker.

it’s 
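The randomized --local-tmp discussed in this thread sidesteps the fixed /dev/shm/log collision by construction, since every mktemp -d call creates a distinct empty directory. A quick shell illustration (not the actual OST patch):

```shell
# Each mktemp -d call creates a fresh, empty, uniquely named directory,
# so two parallel log-collector runs can never trip over each other's
# leftovers the way a fixed /dev/shm/log path can.
a=$(mktemp -d)
b=$(mktemp -d)
[ "$a" != "$b" ] || exit 1        # never the same directory
[ -z "$(ls -A "$a")" ] || exit 1  # and each one starts out empty
rm -rf "$a" "$b"
echo "no collision"
```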

[ovirt-devel] Re: [oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly - Build # 962 - Still Failing!

2021-03-16 Thread Michal Skrivanek


> On 16. 3. 2021, at 15:53, Yedidyah Bar David  wrote:
> 
>> On Tue, Mar 16, 2021 at 10:09 AM Yedidyah Bar David wrote:
>> 
>> On Tue, Mar 16, 2021 at 7:06 AM  wrote:
>>> 
>>> Project: 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/
>>> Build: 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/962/
>>> Build Number: 962
>>> Build Status:  Still Failing
>>> Triggered By: Started by timer
>>> 
>>> -----
>>> Changes Since Last Success:
>>> -
>>> Changes for Build #953
>>> [Michal Skrivanek] randomize /dev/shm logcollector tmp directory
>>> 
>>> 
>>> Changes for Build #954
>>> [Michal Skrivanek] randomize /dev/shm logcollector tmp directory
>>> 
>>> 
>>> Changes for Build #955
>>> [Michal Skrivanek] randomize /dev/shm logcollector tmp directory
>>> 
>>> 
>>> Changes for Build #956
>>> [Michal Skrivanek] randomize /dev/shm logcollector tmp directory
>>> 
>>> 
>>> Changes for Build #957
>>> [Michal Skrivanek] randomize /dev/shm logcollector tmp directory
>>> 
>>> 
>>> Changes for Build #958
>>> [Michal Skrivanek] randomize /dev/shm logcollector tmp directory
>>> 
>>> 
>>> Changes for Build #959
>>> [Michal Skrivanek] randomize /dev/shm logcollector tmp directory
>>> 
>>> 
>>> Changes for Build #960
>>> [Andrej Cernek] pylint: Upgrade to 2.7
>>> 
>>> 
>>> Changes for Build #961
>>> [Andrej Cernek] pylint: Upgrade to 2.7
>>> 
>>> 
>>> Changes for Build #962
>>> [Andrej Cernek] pylint: Upgrade to 2.7
>>> 
>>> 
>>> 
>>> 
>>> -
>>> Failed Tests:
>>> -
>>> 1 tests failed.
>>> FAILED:  
>>> basic-suite-master.test-scenarios.test_001_initialize_engine.test_set_hostnames
>>> 
>>> Error Message:
>>> failed on setup with "TypeError: __new__() missing 2 required positional 
>>> arguments: 'version' and 'repo'"
>>> 
>>> Stack Trace:
>>> ansible_by_hostname = 
>>> 
>>>@pytest.fixture(scope="session", autouse=True)
>>>def check_installed_packages(ansible_by_hostname):
>>>vms_pckgs_dict_list = []
>>>for hostname in backend.default_backend().hostnames():
>>>vm_pckgs_dict = _get_custom_repos_packages(
>>>>  ansible_by_hostname(hostname))
>>> 
>>> ost_utils/ost_utils/pytest/fixtures/check_repos.py:39:
>>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
>>> _ _
>>> ost_utils/ost_utils/pytest/fixtures/check_repos.py:55: in 
>>> _get_custom_repos_packages
>>>repo_name)
>>> ost_utils/ost_utils/pytest/fixtures/check_repos.py:69: in 
>>> _get_installed_packages
>>>Package(*line) for line in result
>>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
>>> _ _
>>> 
>>> .0 = 
>>> 
>>>>  Package(*line) for line in result
>>>]
>>> E   TypeError: __new__() missing 2 required positional arguments: 'version' 
>>> and 'repo'
>> 
>> This failed, because 'dnf repo-pkgs' has split the output to two
>> lines, so the first
>> didn't include a version [1]:
>> 
>> lago-basic-suite-master-host-1 | CHANGED | rc=0 >>
>> Installed Packages
>> ovirt-ansible-collection.noarch 1.3.2-0.1.master.20210315141358.el8 
>> @extra-src-1
>> python3-ovirt-engine-sdk4.x86_64
>>4.4.10-1.20210315.gitf8b9f2a.el8
>> @extra-src-1
>> 
>> We should either give up on this, or rewrite the call 'dnf repo-pkgs'
>> in some other
>> language that does not require parsing of human-targeted output
>> (perhaps python or
>> ansible), or amend a bit the current code and hope it will survive longer...
>> 
>> Trying last one:
>> 
>> https://gerrit.ovirt.org/c/ovirt-system-tests/+/113895
> 
> Merged, but we still fail in nightly (which I ran manually):
> 
> https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/963/console
>  
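Since 'dnf repo-pkgs ... list installed' emits exactly three fields per package, one way to make the parsing immune to dnf's line wrapping is to tokenize the whole listing instead of going line by line. A hedged sketch (hypothetical helper, not the actual OST fix):

```python
from collections import namedtuple

Package = namedtuple("Package", ["name", "version", "repo"])

def parse_repo_pkgs(output):
    # dnf wraps long package names onto their own line, so pairing
    # fields line-by-line loses the version.  Tokenizing the whole
    # listing and regrouping into triples works wherever dnf decides
    # to break the line.
    tokens = []
    for line in output.splitlines():
        if not line.strip() or line.strip() == "Installed Packages":
            continue
        tokens.extend(line.split())
    return [Package(*tokens[i:i + 3]) for i in range(0, len(tokens), 3)]

# Sample reproducing the wrapped output from this thread:
sample = """Installed Packages
ovirt-ansible-collection.noarch 1.3.2-0.1.master.20210315141358.el8 @extra-src-1
python3-ovirt-engine-sdk4.x86_64
4.4.10-1.20210315.gitf8b9f2a.el8
@extra-src-1
"""
print(parse_repo_pkgs(sample))
```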

[ovirt-devel] Re: [ovirt-users] CentOS Stream support

2021-02-01 Thread Michal Skrivanek


> On 29. 1. 2021, at 10:35, Sandro Bonazzola  wrote:
> 
> 
> 
> On Wed, Jan 27, 2021 at 6:10 PM  wrote:
> I also have concerns about CentOS Stream and the future CentOS in general!
> 
> What about support of oVirt on Oracle Linux? As Oracle themselves have 
> changed gears and now offering it for FREE, and only charging if you want 
> support. This is what IBM/RedHat should have done with CentOS but nope.
> 
> I'd like to stick with oVirt either on Oracle or upgrade to RHEL under the 
> new developer rules. RHEL allowing up to 16-hosts for free.
> 
> But as you mentioned - there are issues with the oVirt dependencies on RHEL.
> 
> If there are issues with oVirt dependencies on RHEL please open a bug.

the issue with openvswitch should be resolved

> RHEL and any RHEL rebuild should be binary compatible with all the packages 
> we rely on, so if something is not working I'd like to dig into it.

the problem is AV, mostly. It is “ahead” in RHEL, and so even the latest AV 
builds are compiled on a CentOS that's ~1-2 months behind. Since it has 
dependencies on other packages outside of the AV module (e.g. kernel) it 
sometimes doesn’t work that great


> 
>  
> 
> Thoughts, anyone?
> ___
> Devel mailing list -- devel@ovirt.org 
> To unsubscribe send an email to devel-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XXER2JO7QP5QBFKEOMOO2AT5HDRSRO47/
>  
> 
> 
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com    
>  
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
>  
> 
> 
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FG647EFDU4JRHRB4ZNPNWXF7GXCTWWNN/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SFMHYD6LXOBUA7HUSHU5IKEIHC7Y2CQH/


[ovirt-devel] Re: Switch CentOS to Oracle Linux

2021-01-28 Thread Michal Skrivanek


> On 27. 1. 2021, at 18:07, ntoxica...@gmail.com wrote:
> 
> Hello,
> 
> Oracle Linux is offering CentOS users/dev/production servers to switch to 
> Oracle Linux free of cost. They've produced a script to do this.  Oracle also 
> has their own Repo's for oVirt and use it for their own Oracle Linux 
> Virtualization Manager [Re-branded oVirt].
> 
> I know Red Hat has announced moving away from RHV and focusing on OpenShift 
> virtualization. They'll happen in 2022-forward.  Are we already to assume 
> oVirt will follow suit and migrate to OpenShift??

oVirt is an open source project, RHV is Red Hat’s product, they don’t 
necessarily mean the same
Openshift Virtualization already exists, it has upstream too, though not very 
mature yet. 

We will look at collaboration there for sure, like we did before with Openstack 
integrations. But that doesn’t mean we migrate oVirt into Openshift tomorrow.
Kubevirt/Kubernetes is a very different thing, it may make sense for some 
people to eventually move there, it may make sense for others not to. Well, 
like with Openstack...

> 
> Questions:
> Since announcement of CentOS going EOL and move to CentOS Stream... [Not 
> doing it].  
> 
> - Will the oVirt team continue to develop past 4.4 release as well as support 
> Oracle Linux? 

There were emails on that before - we changed our development model to a rather 
continuous “4.4.z” with faster incremental updates instead of large drops every 
9+ months. So in that sense “beyond 4.4” is where we are already. As for OL support 
see the other thread, I do not see why it wouldn’t work provided that RHEL will 
work. Can’t say if we will have any resources to dedicate to OL specifically.

> - Does Oracle really develop to their own ovirt repo and virtualization 
> manager product

no idea if there’s a fork. As for a product, afaik yes, there’s OLVM.

> - What about supporting OpenSuSE or SLES?

not on the roadmap. we had discussions about different distros, also long time 
ago, but it’s not feasible, there are too many little differences. If anyone 
would want to contribute we don’t mind, but it’s by no means a small feat (we 
do have some remains here or there from our failed debian attempt long long 
time ago)

> - oVirt packages on RHEL?  [RH announced they will allow RHEL installed on up 
> to 16-hosts for FREE]

possibly. But dependencies are the key...

> I am considering to test moving CentOS 7 development  work-loads to Oracle 
> Linux and use their oVirt repo.
> 
> Really like to know if oVirt developers plan to continue to build on what is 
> there moving forward. Or will the project die?  I use oVirt on production 
> work-loads [same as others] with GREAT success.  This is an absolute VMWare 
> replacement.

Great to hear. We are still very much around.
But many of those questions/plans do depend on contributors. We’d be very happy 
to accept other contributors who would want to expand/improve oVirt...

Thanks,
michal

> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JTR5K4PFVNZAVWOBHGUOFGYE65L2XOF4/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RUQYZSGAMZAHL7FEZO2CG7NGRSJVSLLB/


[ovirt-devel] Re: [ovirt-users] CentOS Stream support

2021-01-28 Thread Michal Skrivanek


> On 27. 1. 2021, at 18:10, ntoxica...@gmail.com wrote:
> 
> I also have concerns about CentOS Stream and the future CentOS in general!
> 
> What about support of oVirt on Oracle Linux? As Oracle themselves have 
> changed gears and now offering it for FREE, and only charging if you want 
> support. This is what IBM/RedHat should have done with CentOS but nope.
> 
> I'd like to stick with oVirt either on Oracle or upgrade to RHEL under the 
> new developer rules. RHEL allowing up to 16-hosts for free.
> 
> But as you mentioned - there are issues with the oVirt dependencies on RHEL.

yes, dependencies are the main issue. I don’t think we particularly care that 
much about the base OS (as long as it is more or less a RHEL clone it doesn’t 
really matter)

we won’t be able to sanity test everything, but RHEL sounds like a feasible 
idea. We do need to move from CentOS to Stream first anyway, though, just for 
development’s sake. 

Thanks,
michal

> 
> Thoughts, anyone?
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XXER2JO7QP5QBFKEOMOO2AT5HDRSRO47/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/Q6K43ADVSO5XAU37MJMFJ6OGMS2SFSEF/


[ovirt-devel] Re: OST on Stream

2021-01-22 Thread Michal Skrivanek


> On 22. 1. 2021, at 13:01, Marcin Sobczyk  wrote:
> 
> 
> 
> On 1/22/21 12:00 PM, Marcin Sobczyk wrote:
>> Hi
>> 
>> On 1/22/21 11:45 AM, Michal Skrivanek wrote:
>>> There seems to be multiple issues really
>>> 
>>> 1) First it fails on “unknown state” of ovirt-engine-notifier service. 
>>> Seems the OST code is not handling service states well and just explodes
>>> 2) Either way, the problem is it's stopped. It should be running
>>> 3) if you start it manually it gets to the engine-config test and restarts 
>>> engine. It explodes here again, apparently same reason as #1
>>> 4) host installation fails - didi’s 
>>> https://gerrit.ovirt.org/#/c/ovirt-engine/+/113101/ is fixing it
>>> 5)  verify_engine_notifier fails when we try to stop it. it’s not running 
>>> due to #2 and it explodes anyway due to #1
>>> 
>>> rest seems to work fine except few more #1 issues, so that’s good…
>>> 
>>> Marcin, can you please prioritize #1?
>> Sure, looking...
> This is a known issue. Bugs that have been filed for this:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1908275
> https://bugzilla.redhat.com/show_bug.cgi?id=1901449
> 
> There's also a workaround made for ansible modules:
> 
> https://github.com/ansible/ansible/issues/71528
> 
> I actually cannot reproduce this issue on my CentOS 8.3 servers.
> I think the problem is that CI agents are based on RHEL 8.2, which has the 
> unpatched ansible version.

Looks like that. It seems to work after the upgrade, great.

We should have Stream OST working shortly!

> 
> Regards, Marcin
> 
>> 
>>> Martin, any idea about notifier?
>>> 
>>> Thanks,
>>> michal
>>> 
> 


[ovirt-devel] OST on Stream

2021-01-22 Thread Michal Skrivanek
There seem to be multiple issues, really:

1) First it fails on the “unknown state” of the ovirt-engine-notifier service. It seems 
the OST code is not handling service states well and just explodes
2) Either way, the problem is that it's stopped. It should be running
3) if you start it manually it gets to the engine-config test and restarts 
the engine. It explodes here again, apparently for the same reason as #1
4) host installation fails - didi’s 
https://gerrit.ovirt.org/#/c/ovirt-engine/+/113101/ is fixing it
5) verify_engine_notifier fails when we try to stop it. It’s not running due 
to #2, and it explodes anyway due to #1

The rest seems to work fine except a few more #1 issues, so that’s good…

Marcin, can you please prioritize #1? Martin, any idea about notifier?
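As a side note on #1: the "just explodes" failure mode comes from treating an unexpected service state as an error. A defensive state check could be sketched like this (a hypothetical illustration, not the actual OST code; the state list and function names are assumptions):

```python
# States `systemctl is-active` is known to report; anything else maps to
# "unknown" instead of raising, so callers can decide how to react.
KNOWN_STATES = {"active", "inactive", "failed",
                "activating", "deactivating", "reloading"}

def classify_state(raw: str) -> str:
    # Accept either plain "active" or "ActiveState=active" style output.
    value = raw.strip()
    if "=" in value:
        value = value.split("=", 1)[1]
    return value if value in KNOWN_STATES else "unknown"
```

With something like this, an unrecognized value would be reported as "unknown" rather than blowing up the whole run.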

Thanks,
michal


[ovirt-devel] Re: [ovirt-users] CentOS Stream support

2021-01-07 Thread Michal Skrivanek


> On 5. 1. 2021, at 22:14, Alex K  wrote:
> 
> 
> 
> On Fri, Jun 5, 2020, 11:36 Michal Skrivanek wrote:
> Hi all,
> we would like to ask about interest in community about oVirt moving to CentOS 
> Stream.
> There were some requests before but it’s hard to see how many people would 
> really like to see that.
> 
> With CentOS releases lagging behind RHEL for months it’s interesting to 
> consider moving to CentOS Stream as it is much more up to date and allows us 
> to fix bugs faster, with less workarounds and overhead for maintaining old 
> code. E.g. our current integration tests do not really pass on CentOS 8.1 and 
> we can’t really do much about that other than wait for more up to date 
> packages. It would also bring us closer to make oVirt run smoothly on RHEL as 
> that is also much closer to Stream than it is to outdated CentOS.
> 
> So..would you like us to support CentOS Stream?
> The answer is yes, though I do not see any other option, if I'm not mistaken. 

...this was sent under very different circumstances:) Now it seems to be an 
easier call...

We will probably be moving to Stream a bit more. We did it already, but since 
there was very little interest it’s not entirely working across automation, CI, and 
release processes.
It also depends on all the dependencies we consume, so don’t expect it to happen 
in a month; it’s probably going to take a while 

To that note, there is also a possibility of oVirt on RHEL, which some tried (or 
even use). It was not very helpful yet because, again, the dependencies are not 
really part of the OS and they are not being published in a way that we can 
consume them. But if this ever changes, it would be a good option for a stable 
underlying OS and up-to-date oVirt….

> 
> We don’t really have capacity to run 3 different platforms, would you still 
> want oVirt to support CentOS Stream if it means “less support” for regular 
> CentOS? 
> There are some concerns about Stream being a bit less stable, do you share 
> those concerns?
> 
> Thank you for your comments,
> michal



[ovirt-devel] Re: OST UI tests are skipped during check patch for OST repo

2020-12-21 Thread Michal Skrivanek
> On 21 Dec 2020, at 10:32, Marcin Sobczyk  wrote:
>
> 
>
>> On 12/21/20 9:08 AM, Lucia Jelinkova wrote:
>> Hi all,
>>
>> I've recently refactored some of the UI tests in the OST basic suite and 
>> added a new test for integration with Grafana [1]. I pushed my patches to 
>> Gerrit, however, I am not able to run them because during the check patch CI 
>> job (almost) all UI tests are skipped [2].
>>
>> How can I make Jenkins to run them?
> Hi Lucia,
>
> unfortunately we don't have that in u/s CI at the moment.
>
> We had to drop all el7 jobs to get rid of all the legacy stuff and complete 
> our move to el8 and py3.
> In el7 we had docker with its socket exposed to mock, so we could use 
> containers to run the selenium grid.
> In el8 there is no docker, we have podman. Podman on it's own doesn't work in 
> mock chroot.
> With CentOS 8.3 there was some hope, since the version of podman provided has 
> an experimental socket support,
> meaning we could use it the same way like we used docker.
>
> I tried it out, but even though the socket itself works, there are 
> limitations on the network implementation side.
> To have the complete setup working, we need the browsers running inside 
> containers to be able
> to access engine's http server. This is only possible when using slirp4netns 
> networking backend.
> Unfortunately with this version of podman there is no way to choose 
> networking backend for pods.
>
> For now, my advice would be to try running OST on your own machine.
> If it has 16GBs of RAM and ~12GBs of free

8 GB of RAM is enough for the basic suite

> space on /, then you should be good.
> There's a thread on devel mailing list called "How to set up a (rh)el8 
> machine for running OST"
> where you can find instructions on how to prepare your machine for running 
> OST.
>
> I'm keeping my eye on the podman situation and will let you know if we have 
> something working.

Keep nagging the CI team for a mock-less env.
Also, downstream manual runs are without mock

>
> Regards, Marcin
>
>>
>> Regards,
>>
>> Lucia
>>
>> 1: https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/112737/
>> 2: 
>> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/14501/testReport/basic-suite-master.test-scenarios/test_100_basic_ui_sanity/
>>
>>


[ovirt-devel] Re: Problems building CentOS 8.3-based engine image for OST

2020-12-07 Thread Michal Skrivanek


> On 7 Dec 2020, at 16:06, Marcin Sobczyk  wrote:
> 
> Hi All,
> 
> since CentOS 8.3 is out, I'm trying to build a new base image for OST, but 
> there are problems on the engine side.
> The provisioning script we use to build the engine VM is here [1].
> 
> The build ends with errors:
> 
> Error: Problems in request:
> missing groups or modules: javapackages-tools

Does it no longer exist, or is it just not built yet?

> Last metadata expiration check: 0:00:02 ago on Mon Dec  7 16:00:56 2020.
> Error:
>  Problem 1: cannot install the best candidate for the job
>   - nothing provides apache-commons-compress needed by 
> ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
>   - nothing provides apache-commons-jxpath needed by 
> ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
>  Problem 2: package 
> ovirt-engine-extension-aaa-ldap-setup-1.4.3-0.289.202010220206.el8.noarch 
> requires ovirt-engine-extension-aaa-ldap = 1.4.3-0.289.202010220206.el8, but 
> none of the providers can be installed
>   - package 
> ovirt-engine-extension-aaa-ldap-1.4.3-0.289.202010220206.el8.noarch requires 
> slf4j-jdk14, but none of the providers can be installed
>   - conflicting requests
>   - package slf4j-jdk14-1.7.25-4.module_el8.3.0+454+67dccca4.noarch is 
> filtered out by modular filtering

or perhaps it just got renamed (that’s what it should be, if that’s the case)
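For triaging failures like the one above, the unresolvable names can be pulled out of the DNF error text mechanically. A small sketch (a hypothetical helper, not part of any existing tooling):

```python
import re

def missing_deps(dnf_error: str) -> list[str]:
    # Collect package names from "nothing provides X needed by Y" lines
    # in a DNF transaction error, in the order they appear.
    return re.findall(r"nothing provides (\S+) needed by", dnf_error)
```

Running it over the error output quoted above would yield `apache-commons-compress` and `apache-commons-jxpath` as the packages to chase.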

> 
> Please advise.
> 
> Thanks, Marcin
> 
> [1] 
> https://gerrit.ovirt.org/gitweb?p=ost-images.git;a=blob_plain;f=el8-provision-engine.sh.in;hb=HEAD
> 


[ovirt-devel] Re: Health check endpoint in the engine hanging forever on CheckDBConnection

2020-11-27 Thread Michal Skrivanek
It would be good to notify the devel list when there are breaking changes across 
multiple components; development envs usually do not update everything.
If there is an actual incompatibility, like in this case, it would also be good 
to not only require the new version, but properly conflict with the old one. 
Currently official composes fail too, because old engines have 
vdsm-jsonrpc-java >= 1.5.4, which pulls in 1.6.0 just fine.
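On the "hanging forever" symptom from the thread subject: whatever the server-side cause turns out to be, a monitoring client should put a timeout on the health-servlet call so it fails fast instead of blocking. A minimal sketch (the URL is illustrative only):

```python
import urllib.request

def probe_health(url: str, timeout: float = 5.0) -> tuple[int, str]:
    # Raises a timeout/URLError after `timeout` seconds instead of
    # hanging forever when the engine never answers.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status, resp.read().decode()

# e.g. probe_health("https://engine.example.com/ovirt-engine/services/health")
```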

Thanks,
michal

> On 27 Nov 2020, at 12:04, Artur Socha  wrote:
> 
> In this run there is ovirt-engine-* used with git  c2c805a2662, however, 
> vdsm-jsonrpc-client is in version 1.6.0 which has a breaking change of 
> removing reactive streams support in favor to java.util.concurrent.FLOW [1] 
> which is handled by engine's commit ff3aa4da956 [2]
> 
> [1] https://gerrit.ovirt.org/#/c/vdsm-jsonrpc-java/+/109916/ 
> 
> [2] https://gerrit.ovirt.org/#/c/ovirt-engine/+/112347/ 
> 
> 
> Artur
> 
> 
> On Fri, Nov 27, 2020 at 11:49 AM Artur Socha wrote:
> I can see something I was recently touching. Removal of reactive stream from 
> vdsm-jsonrpc-java. Checking...
> Artur
> 
> On Fri, Nov 27, 2020 at 11:47 AM Marcin Sobczyk wrote:
> 
> 
> On 11/27/20 11:24 AM, Martin Perina wrote:
> > Hi,
> >
> > the health status is pretty stupid simple call to database:
> >
> > https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/services/src/main/java/org/ovirt/engine/core/services/HealthStatus.java
> > https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/CheckDBConnectionQuery.java#L21
> > https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/dal/src/main/java/org/ovirt/engine/core/dal/dbbroker/DbConnectionUtil.java#L33
> > https://github.com/oVirt/ovirt-engine/blob/master/packaging/dbscripts/common_sp.sql#L421
> >
> > So it should definitely not hang forever unless there is some serious 
> > issue in the engine start up or PostgreSQL database. Could you please 
> > share logs? Especially interesting would server.log and engine.log 
> > from /var/log/ovirt-engine
> Well, after some discussion with Artur and trying some workarounds, the 
> problem magically
> disappeared on my servers, but there's one OST gating run in CI that 
> suffered from the same problem:
> 
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_gate/detail/ovirt-system-tests_gate/937/pipeline#step-240-log-1226
> https://jenkins.ovirt.org/job/ovirt-system-tests_gate/937/artifact/basic-suit-master.el7.x86_64/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/server.log/*view*/

[ovirt-devel] Re: [OST] Network suites fail CI builds

2020-11-16 Thread Michal Skrivanek
> On 15 Nov 2020, at 17:40, Nir Soffer  wrote:
>
>> On Sun, Nov 15, 2020 at 4:13 PM Nir Soffer  wrote:
>>
>>> On Sun, Nov 15, 2020 at 12:28 PM Yedidyah Bar David  wrote:
>>>
 On Thu, Nov 12, 2020 at 9:24 PM Eitan Raviv  wrote:



> On Thu, Nov 12, 2020 at 5:46 PM Nir Soffer  wrote:
>
>> On Thu, Nov 12, 2020 at 4:01 PM Nir Soffer  wrote:
>>
>> I had many failures in recent OST patches, so I posted this change:
>> https://gerrit.ovirt.org/c/112174/
>>
>> This patch does not change anything, but it modifies the lago vm 
>> configuration
>> so it triggers 8 jobs. 2 network test suites failed:
>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/13540/pipeline
>>
>> Can someone look at the network suite failures?
>>


 Network suite has indeed been failing randomly recently. More often than 
 not it was due to timeouts while waiting for connections to the hosts, 
 timeouts while waiting for hosts to reach deserted statuses, and in the 
 above I also see what looks like a sock error on port 22. Not only are the 
 failing tests random but also usually the next nightly passes. This leads 
 me to believe that the cause of the failures is outside the scope of the 
 tests code.
>>>
>>> I noticed something similar as well - see thread:
>>>
>>>[oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly -
>>> Build # 561 - Failure!
>>
>> This is not only the networks suites, lot of other suites fail randomly.
>>
>> Regarding the networks suites - can it be related to old kernel when running
>> the tests in mock on el7 host? Do we need to require el8 host?
>>
>> Do we see the same failures when running the network suites locally?
>>
>> If these suites are not stable, we should not included them in the CI 
>> for OST
>> patches, or mark them as expected failures so they do not fail the build.
>>
>> I triggered another build since I see lot of random failures in other 
>> suites.
>
> On the next build - different errors:
>
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/13581/pipeline
>
> - basic_suite_4.3.el7.x86_64 - failed

This one should be removed, we do not maintain 4.3 anymore

> - basic_suite_master.el7.x86_64 - failed

This is obsoleted by ost-images and el8 runs, as you do locally. We
are waiting to get rid of el7 jenkins slaves and mock env there

> - network_suite_4.3.el7.x86_64 - failed

Should be removed

> - network_suite_master.el7.x86_64 - failed

Will be obsoleted once network suite completes move to ost-images/el8

>>
>> Third build failed:
>>
>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/13594/pipeline
>>
>> Failing suites:
>>
>> - basic_suite_master.el7.x86_64
>> - network_suite_master.el7.x86_64
>
> Forth build failed:
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/13654/pipeline/156
>
> - basic_suite_master.el7.x86_64
> - upgrade-from-release_suite_4.3.el7.x86_64

This should be removed

>
>> Looks like all failures happen with el7. Why are we running master
>> (el8 based) on el7 hosts?

Progress with our CI env is very slow indeed

>>
>> The basic master suites never failed when I run it locally, even with
>> nested environment.
>> But maybe I did not try enough, I did 10 runs.

Yes, ost-image based runs are way more reliable. Currently this
applies only to master basic suite.

>>
> With the current state OST CI is not useful to anyone. Builds take hours
> and fail randomly. This wastes our limited resources for other projects
> and makes contribution to this project very hard.


 +1


>>>
>>>
>>>
>>> --
>>> Didi
>>>
> 

[ovirt-devel] Re: Travis builds

2020-11-11 Thread Michal Skrivanek


> On 11 Nov 2020, at 11:17, Nir Soffer  wrote:
> 
> On Wed, Nov 11, 2020 at 10:33 AM Vojtech Juranek  wrote:
>> 
>> Hi,
>> recently, I noticed Travis builds waits in the queue for a very long time.
>> Looking around I found out Travis probably reduced resources for OSS 
>> projects.
>> 
>> Also I found out all projects should migrate from travis-ci.org to travis-
>> ci.com by end of December 2020 [1] (see Q. When will the migration from
>> travis-ci.org to travis-ci.com be completed?). Recently they announce new
>> pricing model [2] when OSS projects will get some initial credit and after
>> that either has to pay or ask the support for another credit (see section
>> "Building on a public repositories only" in [2]).
>> 
>> Maybe time to migrate away from travis-ci to something else, e.g. GH Actions?
> 
> I would avoid github only dependency. Libvrt and qemu moved to gitlab 
> recently,
> I think we should check this option instead.

or our jenkins?

we are getting rid of that awful mock environment, slowly, but it’s progressing…
is there still any reason to run a separate thing then?

> 
>> Thoughts?
>> 
>> Vojta
>> 
>> [1] https://docs.travis-ci.com/user/migrate/open-source-repository-migration
>> [2] 
>> https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing


[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-06 Thread Michal Skrivanek
> On 6 Nov 2020, at 11:29, Milan Zamazal  wrote:
>
> Marcin Sobczyk  writes:
>
>>> On 11/5/20 11:30 AM, Milan Zamazal wrote:
>>> Marcin Sobczyk  writes:
>>>
 On 11/4/20 11:29 AM, Yedidyah Bar David wrote:
> Perhaps what you want, some day, is for the individual tests to
> have make-style dependencies? So you'll issue just a single test,
> and OST will only
> run the bare minimum for running it.
 Yeah, I had the same idea. It's not easy to implement it though.
 'pytest' has a "tests are independent" design, so we would need to
 build something on top of that (or try to invent our own test
 framework, which is a very bad idea). But even with a
 dependency-resolving solution, there are tests that set something up
 just to bring it down in a moment (by design), so we'd probably need
 some kind of "provides" and "tears down" markers.  Then you have the
 fact that some things take a lot of time and we do other stuff in
 between, while waiting - dependency resolving could force things to
 happen linearly and the run times could skyrocket... It's a complex
 subject that requires a serious think-through.
>>> Actually I was once thinking about introducing test dependencies in
>>> order to run independent tests in parallel and to speed up OST runs this
>>> way.  The idea was that OST just waits on something at many places and
>>> it could run other tests in the meantime (we do some test interleaving
>>> in extreme cases but it's suboptimal and difficult to maintain).
>> Yeah, I think I remember you did that during one of OST's hackathons.
>>
>>>
>>> When arranging some things manually, I could achieve a significant
>>> speedup.  But the problem is, of course, how to make an automated
>>> dependency management and handle all the possible situations and corner
>>> cases.  It would be quite a lot of work, I think.
>>>
>> Exactly. I.e. I can see there's [1], but of course that will work only
>> on py3.
>
> py3 is the least problem.
>
>> The dependency management is something we'd have to implement and maintain
>> on our own probably.
>
> Yes, this is the hard part.
>
>> Then of course we'd be introducing the test repeatability
>> problem, since ordering of things for different runs might be different,
>> which in current state of OST is something I'd like to avoid.
>
> It should be easy to have a switch between deterministic and
> non-deterministic ordering.  Then one can use the fast, dynamic ordering
> for running tests more quickly and the suboptimal but deterministic
> ordering can be used for repeatability (on CI etc.).  So this is not a
> real problem.

Why do you think it’s going to be significantly faster? I do not see
much room for improvement, at least not with the current set of
tests.
The actual tests that can run in parallel take about 10 minutes. There’s the
initial install, backup/restore (when you can’t run anything), and storage
operations (if you try to parallelize those, they only run slower (and
fail)).
You’re not going to gain much...
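For what it’s worth, the make-style dependency resolution discussed above is cheap to prototype with the stdlib. A toy sketch (the test names and dependency graph are made up; this is not OST code):

```python
from graphlib import TopologicalSorter

# Hypothetical make-style test dependencies: to run one target test,
# compute the minimal chain of prerequisites, in deterministic order.
DEPS = {
    "test_add_host": {"test_engine_up"},
    "test_add_storage": {"test_add_host"},
    "test_run_vm": {"test_add_storage"},
}

def plan(target: str) -> list[str]:
    # Walk the dependency graph to find every test the target needs.
    needed, stack = set(), [target]
    while stack:
        test = stack.pop()
        if test not in needed:
            needed.add(test)
            stack.extend(DEPS.get(test, ()))
    # Topologically sort only the needed subgraph.
    ts = TopologicalSorter({t: DEPS.get(t, set()) & needed for t in needed})
    return list(ts.static_order())
```

The hard part, as noted above, is not the sort itself but modeling "provides"/"tears down" semantics and keeping runs repeatable.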

>
>> [1] https://pypi.org/project/pytest-asyncio/


[ovirt-devel] Re: "env issues" in CI (was: virt-sparsify failed (was: [oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly - Build # 479 - Failure!))

2020-10-15 Thread Michal Skrivanek


> On 15 Oct 2020, at 12:16, Yedidyah Bar David  wrote:
> 
> On Thu, Oct 15, 2020 at 12:44 PM Michal Skrivanek  wrote:
>> 
>> 
>> 
>>> On 14 Oct 2020, at 08:14, Yedidyah Bar David  wrote:
>>> 
>>> On Tue, Oct 13, 2020 at 6:46 PM Nir Soffer  wrote:
>>>> 
>>>> On Mon, Oct 12, 2020 at 9:05 AM Yedidyah Bar David  wrote:
>>>>> The next run of the job (480) did finish successfully. No idea if it
>>>>> was already fixed by a patch, or is simply a random/env issue.
>>>> 
>>>> I think this is env issue, we run on overloaded vms with small amount of 
>>>> memory.
>>>> I have seen such radnom failures before.
>>> 
>>> Generally speaking, I think we must aim for zero failures due to "env
>>> issues" - and not ignore them as such.
>> 
>> Exactly. We cannot ignore that any longer.
>> 
>>> It would obviously be nice if we had more hardware in CI, no doubt.
>> 
>> there’s never enough
>> 
>>> But I wonder if perhaps stressing the system like we do (due to resources
>>> scarcity) is actually a good thing - that it helps us find bugs that real
>>> users might also run into in actually legitimate scenarios
>> 
>> yes, it absolutely does.
>> 
>>> - meaning, using
>>> what we recommend in terms of hardware etc. but with a load that is higher
>>> than what we have in CI per-run - as, admittedly, we only have minimal
>>> _data_ there.
>>> 
>>> So: If we decide that some code "worked as designed" and failed due to
>>> "env issue", I still think we should fix this - either in our code, or
>>> in CI.
>> 
>> yes!
> 
> Or, as applicable e.g. to current case, if we can't reproduce, at least
> add more information so that a next (random) reproduction reveals more
> information...
> 
>> 
>>> 
>>> For latter, I do not think it makes sense to just say "the machines are
>>> overloaded and not have enough memory" - we must come up with concrete
>>> details - e.g. "We need at least X MiB RAM".
>> 
>> I’ve spent quite some time analyzing the flakes in basic suite this past 
>> half year…so allow me to say that that’s usually just an excuse for a lousy 
>> test (or functionality:)
>> 
>>> 
>>> For current issue, if we are certain that this is due to low mem, it's
>>> quite easy to e.g. revert this patch:
>>> 
>>> https://gerrit.ovirt.org/110530
>>> 
>>> Obviously it will mean either longer queues or over-committing (higher
>>> load). Not sure which.
>> 
>> it’s difficult to pinpoint the reason really. If it’s happening rarely (as 
>> this one is) you’d need a statistically relevant comparison. Which takes 
>> time…
>> 
>> About this specific sparsify test - it was me uncommenting it few months 
>> ago, after running around 100 tests over a weekend. It may have failed once 
>> (there were/are still some other flakes)…but to me considering the overall 
>> success rate being quite low at that time it sounded acceptable.
> 
> ... E.g. in current case (I wasn't aware of above), if it fails for
> you even once, and you can't find the root cause, perhaps better make
> sure to log more information, so that a next case will be more likely
> to help us find the root cause. Or open a bug for that, if you do not
> do this immediately.
> 
>> If this is now happening more often then it does sound like a regression 
>> somewhere. Could be all the OST changes or tests rearrangements, but it also 
>> could be a code regression.
> 
> I have no idea if it happens more often. I think I only noticed this once.
> 
>> 
>> Either way it’s supposed to be predictable.
> 
> Really? The failure of virt-sparsify? So perhaps reply on the other
> thread explaining how to reproduce, or even fix :-)

sadly, no:) The environment is. And that’s why we’re moving towards that: 
ost-images gives complete software isolation (all repos are disabled); the only 
thing it still does over the network is download the relatively small CirrOS 
image from glance.ovirt.org. And running OST on bare metal gives you isolation 
from possible concurrent runs in the CI env.

As for a follow-up: it’s a virt test, adding Arik

> 
>> And it is, just not in this environment we use for this particular job - 
>> it’s the old one without ost-images, inside the troublesome mock, so you 
>> don’t know what it picked up, what’s the really underlying system(outside of 
>> mock)
>> 
>> Thanks,
>> michal
>> 
>>> 
>>> But personally, I wouldn't do that without knowing more (e.g. following
>>> the other thread).
>>> 
>>> Best regards,
>>> --
>>> Didi
>>> 
>> 
> 
> 
> --
> Didi
> 


[ovirt-devel] Re: "env issues" in CI (was: virt-sparsify failed (was: [oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly - Build # 479 - Failure!))

2020-10-15 Thread Michal Skrivanek


> On 14 Oct 2020, at 08:14, Yedidyah Bar David  wrote:
> 
> On Tue, Oct 13, 2020 at 6:46 PM Nir Soffer  wrote:
>> 
>> On Mon, Oct 12, 2020 at 9:05 AM Yedidyah Bar David  wrote:
>>> The next run of the job (480) did finish successfully. No idea if it
>>> was already fixed by a patch, or is simply a random/env issue.
>> 
>> I think this is env issue, we run on overloaded vms with small amount of 
>> memory.
>> I have seen such radnom failures before.
> 
> Generally speaking, I think we must aim for zero failures due to "env
> issues" - and not ignore them as such.

Exactly. We cannot ignore that any longer. 

> It would obviously be nice if we had more hardware in CI, no doubt.

there’s never enough

> But I wonder if perhaps stressing the system like we do (due to resources
> scarcity) is actually a good thing - that it helps us find bugs that real
> users might also run into in actually legitimate scenarios

yes, it absolutely does.

> - meaning, using
> what we recommend in terms of hardware etc. but with a load that is higher
> than what we have in CI per-run - as, admittedly, we only have minimal
> _data_ there.
> 
> So: If we decide that some code "worked as designed" and failed due to
> "env issue", I still think we should fix this - either in our code, or
> in CI.

yes!

> 
> For latter, I do not think it makes sense to just say "the machines are
> overloaded and not have enough memory" - we must come up with concrete
> details - e.g. "We need at least X MiB RAM".

I’ve spent quite some time analyzing the flakes in basic suite this past half 
year…so allow me to say that that’s usually just an excuse for a lousy test (or 
functionality:)

> 
> For current issue, if we are certain that this is due to low mem, it's
> quite easy to e.g. revert this patch:
> 
> https://gerrit.ovirt.org/110530
> 
> Obviously it will mean either longer queues or over-committing (higher
> load). Not sure which.

it’s difficult to pinpoint the reason really. If it’s happening rarely (as this 
one is) you’d need a statistically relevant comparison. Which takes time…

About this specific sparsify test - it was me uncommenting it a few months ago, 
after running around 100 tests over a weekend. It may have failed once (there 
were/are still some other flakes)…but to me, considering the overall success 
rate being quite low at that time, it sounded acceptable.
If this is now happening more often then it does sound like a regression 
somewhere. Could be all the OST changes or tests rearrangements, but it also 
could be a code regression.

Either way it’s supposed to be predictable. And it is, just not in this 
environment we use for this particular job - it’s the old one without 
ost-images, inside the troublesome mock, so you don’t know what it picked up or 
what the underlying system really is (outside of mock)

Thanks,
michal

> 
> But personally, I wouldn't do that without knowing more (e.g. following
> the other thread).
> 
> Best regards,
> --
> Didi
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6I3RB4XJVYVN2WEZ76L2QZTHRN6CAST2/


[ovirt-devel] Re: [CQ]: 111471,2 (ovirt-engine) failed "ovirt-master" system tests

2020-10-15 Thread Michal Skrivanek


> On 15 Oct 2020, at 11:08, Martin Perina  wrote:
> 
> 
> 
> On Thu, Oct 15, 2020 at 8:43 AM Yedidyah Bar David  > wrote:
> On Wed, Oct 14, 2020 at 7:06 PM oVirt Jenkins  > wrote:
> >
> > Change 111471,2 (ovirt-engine) is probably the reason behind recent system 
> > test
> > failures in the "ovirt-master" change queue and needs to be fixed.
> >
> > This change had been removed from the testing queue. Artifacts build from 
> > this
> > change will not be released until it is fixed.
> >
> > For further details about the change see:
> > https://gerrit.ovirt.org/#/c/111471/2 
> > 
> 
> We didn't detect the build issue on Jenkins CI, because we don't build 
> frontend unless there is a change under frontend directory to save resources 
> on Jenkins. So for these rare cases where a backend only change breaks 
> frontend we don't have a warning and that's why it was revealed only after 
> merging ...

Yeah. It seems it brings more trouble than it helps….
looking at recent CI data it does add 15-20 minutes to the build….but since we 
recently added some actually useful UI tests it seems worth it to me
maybe with another “ci build quick” to skip GWT explicitly if someone really 
wants it fast…..though this “fast” still means ~45 minutes.
?

And we can save plenty of resources by disabling fc30:)

> 
> 
> 
> This fails compilation. Liran already commented there with details,
> posting this for visibility.
> 
> Is anyone handling this? Should we revert?
> 
> Thanks and best regards,
> -- 
> Didi
> ___
> Devel mailing list -- devel@ovirt.org 
> To unsubscribe send an email to devel-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/BT3EGYHZFLAMZ5T3ICEXCDZQDZ4ZLLJX/
>  
> 
> 
> 
> -- 
> Martin Perina
> Manager, Software Engineering
> Red Hat Czech s.r.o.
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UNA5CZOMOYOMWXXN7I3FMWXP7INW2N5Y/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MQM4UOJORP65P3QLUQ6ZAJXOEOG25ZWO/


[ovirt-devel] Re: Branching out 4.3 in ovirt-system-tests

2020-10-12 Thread Michal Skrivanek
> On 12 Oct 2020, at 14:49, Marcin Sobczyk  wrote:
>
> Hi all,
>
> after minimizing the usage of lago in basic suite,
> and some minor adjustments in the network suite, we are finally
> able to remove lago OST plugin as a dependency [1].
>
> This however comes with a price of keeping lots of ugly ifology, i.e. [2][3].
> There's big disparity between OST runs we have on el7 and el8.
> There's also tons of symlink-based code sharing between suites - be it 4.3
> suites and master suites or simply different types of suites.
> The basic suite has its own 'test_utils', which is copied/symlinked
> in multiple places. There's also 'ost_utils', which is really messy ATM.
> It's very hard to keep track and maintain all of this...
>
> At this moment, we are able to run basic suite and network suite
> on el8, with prebuilt ost-images and without lago plugin.
> HE suites should be the next step. We have patches that make them
> py3-compatible that probably still need some attention [4][5].
> We don't have any prebuilt HE ost-images, but this will be handled
> in the nearest future.
>
> I think it's good time to detach ourselves from the legacy stuff
> and start with a clean slate. My proposition would be to branch
> out 4.3 in ovirt-system-tests and not use py2/el7 in the master
> branch at all. This would allow us to focus on py3, el8 and ost-images
> efforts while keeping the legacy stuff intact.
>
> WDYT?

Great. We don’t really need 4.3 that much anymore.

>
> Regards, Marcin
>
> [1] https://gerrit.ovirt.org/#/c/111643/
> [2] https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/control.sh
> [3] 
> https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/test-scenarios/conftest.py
> [4] https://gerrit.ovirt.org/108809
> [5] https://gerrit.ovirt.org/110097
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/OED3ZKFZEITE46ID2BJ77CPFEUP2GTNL/


[ovirt-devel] Re: Need help in handling catch block in vdsm

2020-09-29 Thread Michal Skrivanek
[adding devel list]

> On 29 Sep 2020, at 09:51, Ritesh Chikatwar  wrote:
> 
> Hello,
> 
> I am new to the VDSM codebase.
> 
> There is one minor bug in gluster and they don't have any near-term plan to 
> fix this, so I need to fix it in vdsm.
> 
> The bug is: when I run the command
> [root@dhcp35-237 ~]# gluster v geo-replication status
> No active geo-replication sessions
> geo-replication command failed
> [root@dhcp35-237 ~]# 
> 
> So the engine is piling up with errors. I need to handle this in vdsm, in the 
> code at this place:
> 
> https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231 
> 
> 
> If I run the above command with --xml then I get:
> 
> [root@dhcp35-237 ~]# gluster v geo-replication status --xml
> geo-replication command failed
> [root@dhcp35-237 ~]# 
> 
> so I have to run one more command before executing this and check in the 
> exception that it contains the string "No active geo-replication sessions". 
> For that I did this:

maybe it goes to stderr?

other than that, no idea, sorry

Thanks,
michal

> 
> try:
>     xmltree = _execGluster(command)
> except ge.GlusterCmdFailedException as e:
>     if str(e).find("No active geo-replication sessions") != -1:
>         return []
> 
> Seems like this is not working; can you help me here?
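If the message indeed lands on stderr, something along these lines might work - a self-contained sketch, where GlusterCmdFailedException is only a stub standing in for vdsm's ge.GlusterCmdFailedException, and the `err` attribute holding stderr lines is an assumption to verify against the real class:

```python
class GlusterCmdFailedException(Exception):
    # Stub for vdsm's ge.GlusterCmdFailedException; the real class may keep
    # stderr under a different attribute than "err" - check before relying
    # on it.
    def __init__(self, rc=0, out=None, err=None):
        self.rc = rc
        self.out = out or []
        self.err = err or []
        super().__init__("rc=%s out=%r err=%r" % (rc, self.out, self.err))

def geo_rep_status(exec_gluster):
    try:
        return exec_gluster()
    except GlusterCmdFailedException as e:
        # Look in str(e) *and* in the captured stderr lines: the
        # interesting text may only ever show up on stderr.
        haystack = str(e) + " " + " ".join(str(line) for line in e.err)
        if "No active geo-replication sessions" in haystack:
            return []  # no sessions is not an error
        raise

def no_sessions():
    raise GlusterCmdFailedException(
        rc=2, err=["No active geo-replication sessions"])

print(geo_rep_status(no_sessions))  # -> []
```

Any other failure still propagates as before.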
> 
> 
> Any help will be appreciated 
> Ritesh
> 

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3QCWPQIVIQSSMLWYYRMI6WK6ASQJUQR7/


[ovirt-devel] Re: [ARM64] Possiblity to support oVirt on ARM64

2020-09-11 Thread Michal Skrivanek
Hi Zhenyu,
as far as I know there’s no one actively adding code to support ARM at the moment.
But it shouldn’t be awfully hard.
For the ppc64le and s390x platforms we just ported the hypervisor side; 
ovirt-engine itself is running on x86_64, which simplifies things. The only 
complexity is in the different device model for VMs and a few features here and 
there. Patches welcome!:)
All we would need from oVirt side is contributing some machine for CI, and of 
course committing to maintain a healthy status of builds and tests.

The s390x port might be a good starting point, it’s relatively recent (2-3 years 
ago), and was done almost exclusively by Viktor Mihajlovski, so this[1] actually 
gives you a pretty good picture of how many changes there are. Things have 
changed a bit since then, but I would still say it’s pretty simple to add 
another platform, given enough time and skill… (again, supposing you’re fine 
with an x86 manager)

HTH,
michal

[1] https://github.com/search?q=org%3AoVirt+mihajlov&type=commits


> On 11 Sep 2020, at 09:39, Zhenyu Zheng  wrote:
> 
> Hi Sandro,
> 
> Thanks for the reply and info.
> I'm actually representing the openEuler ovirt SIG here, they have been 
> working on porting and testing for a few months,
> and there is some progress. We want to check for possibilities for the ovirt 
> upstream version to support ARM as well.
> 
> BR,
> 
> On Wed, Sep 9, 2020 at 9:25 PM Sandro Bonazzola  > wrote:
> 
> 
> Il giorno dom 19 lug 2020 alle ore 16:04 Zhenyu Zheng 
> mailto:zhengzhenyul...@gmail.com>> ha scritto:
> Hi oVirt,
> 
> We are currently trying to make oVirt work on ARM64 platform, since I'm quite 
> new to oVirt community, I'm wondering what is the current status about ARM64 
> support in the oVirt upstream, as I saw the oVirt Wikipedia page mentioned 
> there is an ongoing efforts to support ARM platform. We have a small team 
> here and we are willing to also help to make this work.
> 
> 
> 
> Hi, nice to see this initiative. I'd like to loop you in with Joey Ma who 
> also is working on porting to ARM.
> There's an ongoing effort for Open Euler Virt SIG about this: 
> - https://www.youtube.com/watch?v=fdG8_uMt-IM&feature=youtu.be 
> 
> - https://gitee.com/openeuler/community/tree/master/sig/Virt 
> 
> 
> Vincent Van der Kussen  is also working on 
> porting to ARM (https://twitter.com/kbsingh/status/335023164289601536 
> )
> 
> Dennis Gilmore  
> reported success in running ovirt-engine on aarch64 too.
> 
> It would be nice to see an interest group working together on this.
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com    
>  
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
>  
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-devel] Re: implementing hotplugCd/hotunplugCd in vdsm

2020-08-25 Thread Michal Skrivanek


> On 25 Aug 2020, at 13:50, Nir Soffer  wrote:
> 
> On Tue, Aug 25, 2020 at 2:21 PM Michal Skrivanek
>  wrote:
>> 
>> 
>> 
>>> On 25 Aug 2020, at 13:02, Vojtech Juranek  wrote:
>>> 
>>> On čtvrtek 20. srpna 2020 14:42:15 CEST Michal Skrivanek wrote:
>>>>> On 20 Aug 2020, at 14:28, Nir Soffer  wrote:
>>>>> 
>>>>> On Thu, Aug 20, 2020 at 12:19 PM Vojtech Juranek 
>>>>> wrote:
>>>> 
>>>>>> 
>>>>>> Hi,
>>>>>> as Fedor is on PTO, I was asked to take over.
>>>>>> 
>>>>>> I don't think new functionality is need, more detail list of proposed
>>>>>> flows and changes is bellow,
>>> TL;DR: I'd suggest to
>>>>>> 
>>>>>> - change in engine: send PDIV instead of CD path from engine to vdsm
>>>> 
>>>> 
>>>> the biggest issue here is always with older versions not having this
>>>> functionality yet….
>>> 
>>> yes, I was wrong, this requires new functionality. Therefore Nir proposed to
>>> create a feature page for it. We put together flows which should cover all 
>>> the
>>> possible issues (including VM recovery). Please review and comment directly
>>> under
>>> 
>>> https://github.com/oVirt/ovirt-site/pull/2320
>>> 
>>> (unless you think creating feature page for it is wrong way to go:-)
>> 
>> IMHO you’re making it bigger than it needs to be
>> We already have ChangeCD API, I don’t see how adding two different APIs make 
>> it significantly better
> 
> The new APIs are internal, I don't think we need user visible API to
> insert and eject a CD.
> 
>> There is definitely a time when old unmaintainable code needs to be 
>> rewritten and improved….but it greatly increases the "time to fix”.
> 
> Actually this shorten the time to fix, from infinity (current mess) to
> next release :-)

4.4.3 sounds a bit too ambitious. Work on 4.4.3 already started and we don’t 
have a finished design for this. Looks more like 4.4.4 but that’s still better 
than infinity for sure;-)

> 
>> Either way, I still don’t get "add new drivespec to VM metadata…”. We’re not 
>> adding a new drive. If you mean to extend the existing disk metadata with 
>> new attribute please say “extend” or something else instead.
> 
> We already need to keep the drive details (pool id, domain id, image
> id, volume id)
> so we can deactivate the drive later. This is how we handle other
> disks, and how we
> keep the volume info when starting a VM with a CD on block storage:
> 
> Example from VM started with ISO on block storage:
> 
>
> <ovirt-vm:device devtype="disk">
>     <ovirt-vm:domainID>6e37519d-a58b-47ac-a339-479539c19fc7</ovirt-vm:domainID>
>     <ovirt-vm:imageID>2d8d2402-8ad1-416d-9761-559217d8b414</ovirt-vm:imageID>
>     <ovirt-vm:poolID>86b5a5ca-5376-4cef-a8f7-d1dc1ee144b4</ovirt-vm:poolID>
>     <ovirt-vm:volumeID>7bc3ff54-f493-4212-b9f3-bea491c4a502</ovirt-vm:volumeID>
>     <ovirt-vm:volumeChain>
>         <ovirt-vm:volumeChainNode>
>             <ovirt-vm:domainID>6e37519d-a58b-47ac-a339-479539c19fc7</ovirt-vm:domainID>
>             <ovirt-vm:imageID>2d8d2402-8ad1-416d-9761-559217d8b414</ovirt-vm:imageID>
>             <ovirt-vm:leaseOffset type="int">105906176</ovirt-vm:leaseOffset>
>             <ovirt-vm:leasePath>/dev/6e37519d-a58b-47ac-a339-479539c19fc7/leases</ovirt-vm:leasePath>
>             <ovirt-vm:path>/rhev/data-center/mnt/blockSD/6e37519d-a58b-47ac-a339-479539c19fc7/images/2d8d2402-8ad1-416d-9761-559217d8b414/7bc3ff54-f493-4212-b9f3-bea491c4a502</ovirt-vm:path>
>             <ovirt-vm:volumeID>7bc3ff54-f493-4212-b9f3-bea491c4a502</ovirt-vm:volumeID>
>         </ovirt-vm:volumeChainNode>
>     </ovirt-vm:volumeChain>
> </ovirt-vm:device>
>
> ...
>
> <disk type='block' device='cdrom'>
>     ...
>     <source dev='/rhev/data-center/mnt/blockSD/6e37519d-a58b-47ac-a339-479539c19fc7/images/2d8d2402-8ad1-416d-9761-559217d8b414/7bc3ff54-f493-4212-b9f3-bea491c4a502' index='3'>
>     ...
>     </source>
>     ...
> </disk>
>
> 
> When we start without a CD and insert a CD, the XML should be the
> same, so we need
> to add a drivespec to the metadata.
> 
> When we eject a CD we need to remove the CD metadat from the vm metadata.

yes. that’s what i had in mind. it’s not adding/removing the whole disk or its 
metadata, just the drivespec of it.

do you want to change that for ISO domain-based CDs as well? If so, please 
describe how that would work. It may be easier to keep it as is.

> 
> Vojta, I think we should add this into to the feature page to make it
> more clear.

yes please. I just wanted to clarify “bigger things” here first

Thanks,
michal

> 
>>>>>> - change in vdsm: implement counter of ISOs being used by VMs to know
>>>>>> when we can deactivate volume
>>> - change in vdsm: remove old drivespec
>>>>>> from VM XML when changing/removing CD (and eventually deactivate
>>>>>> volume)>>
>>>

[ovirt-devel] Re: implementing hotplugCd/hotunplugCd in vdsm

2020-08-25 Thread Michal Skrivanek


> On 25 Aug 2020, at 13:02, Vojtech Juranek  wrote:
> 
> On čtvrtek 20. srpna 2020 14:42:15 CEST Michal Skrivanek wrote:
>>> On 20 Aug 2020, at 14:28, Nir Soffer  wrote:
>>> 
>>> On Thu, Aug 20, 2020 at 12:19 PM Vojtech Juranek 
>>> wrote:
>> 
>>>> 
>>>> Hi,
>>>> as Fedor is on PTO, I was asked to take over.
>>>> 
>>>> I don't think new functionality is need, more detail list of proposed
>>>> flows and changes is bellow,
> TL;DR: I'd suggest to
>>>> 
>>>> - change in engine: send PDIV instead of CD path from engine to vdsm
>> 
>> 
>> the biggest issue here is always with older versions not having this
>> functionality yet….
> 
> yes, I was wrong, this requires new functionality. Therefore Nir proposed to 
> create a feature page for it. We put together flows which should cover all 
> the 
> possible issues (including VM recovery). Please review and comment directly 
> under 
> 
> https://github.com/oVirt/ovirt-site/pull/2320
> 
> (unless you think creating feature page for it is wrong way to go:-)

IMHO you’re making it bigger than it needs to be
We already have ChangeCD API, I don’t see how adding two different APIs make it 
significantly better
There is definitely a time when old unmaintainable code needs to be rewritten 
and improved….but it greatly increases the "time to fix”.

Either way, I still don’t get "add new drivespec to VM metadata…”. We’re not 
adding a new drive. If you mean to extend the existing disk metadata with new 
attribute please say “extend” or something else instead.

> 
>> 
>>>> - change in vdsm: implement counter of ISOs being used by VMs to know
>>>> when we can deactivate volume
> - change in vdsm: remove old drivespec
>>>> from VM XML when changing/removing CD (and eventually deactivate
>>>> volume)>> 
>>>> 
>>>> You comments are welcome.
>>>> Thanks
>>>> Vojta
>>>> 
>>>> Flows
>>>> ===
>>>> 
>>>> VM without a CD
>>>> -
>>>> 
>>>> - Should not be possible to insert any CD, this option should not be
>>>> available/active in the UI.
>> 
>>> 
>>> I don't think we have such configuration, all VMs have empty cdrom by
>>> default:
> 
>>>
>>>     <disk type='file' device='cdrom'>
>>>         ...
>>>     </disk>
>>>
>>> But of course if we have such configuration when adding CD is not
>> 
>> 
>> we don’t. All VMs have always at least that one implicit CD (empty or
>> loaded)
> We have a case where there is additional CD for
>> cloud-init/payload, that additional CD is not “changeable” nor exposed in
>> UI anywhere 
>> 
>>> possible, the menu/button
>>> should be disabled in the UI.
>>> 
>>> 
>>>> VM without CD, changeCD inserts new ISO
>> 
>> 
>> please update to make it really clear, there’s no “VM without CD”. VM always
>> has CD, just not loaded (empty tray)
> so this case doesn’t exist…
>> 
>> 
>>>> -
>>>> 
>>>> - add new drivespec to VM metadata
>>> 
>>> 
>>> How failure is handled?
>>> 
>>> 
>>>> - prepare new drivespec
>>> 
>>> 
>>> What if vdsm is restarted at this point?
>>> 
>>> How VM recovery should handle this drivespec referring to inactive
>>> volume?
>>> 
>>> 
>>>> - attach new device to VM
>>>> - if attaching to VM fails, tear down drivespec and remove drivespec
>>>> from VM metadata
>> 
>>> 
>>> Tearing down and removing drivespec cannot be done atomically. any
>>> operation may fail and
>>> vdsm may be killed at any point.
>>> 
>>> The design should suggest how vdsm recover from all errors.
>>> 
>>> 
>>>> VM with CD, changeCD removes current ISO
>>>> ---
>>>> 
>>>> - tear down previous drivespec
>>>> 
>>>>   - if volume with ISO is inactive (as a result e.g. of failure of vdsm
>>>>   after inserting drivespec into VM metadata, but before activating
>>>>   volume), continue without error
>> 
>>> 
>&

[ovirt-devel] Re: implementing hotplugCd/hotunplugCd in vdsm

2020-08-20 Thread Michal Skrivanek


> On 20 Aug 2020, at 14:28, Nir Soffer  wrote:
> 
> On Thu, Aug 20, 2020 at 12:19 PM Vojtech Juranek  wrote:
>> 
>> Hi,
>> as Fedor is on PTO, I was asked to take over.
>> 
>> I don't think new functionality is need, more detail list of proposed flows 
>> and changes is bellow,
>> TL;DR: I'd suggest to
>>  - change in engine: send PDIV instead of CD path from engine to vdsm

the biggest issue here is always with older versions not having this 
functionality yet….

>>  - change in vdsm: implement counter of ISOs being used by VMs to know when 
>> we can deactivate volume
>>  - change in vdsm: remove old drivespec from VM XML when changing/removing 
>> CD (and eventually deactivate volume)
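To illustrate the PDIV idea - engine would send the structured IDs and vdsm would build the drivespec (and resolve the path) itself. All the names below are made up for the sketch, not actual vdsm API:

```python
from collections import namedtuple

# PDIV = pool, domain, image, volume - enough for vdsm to prepare the
# volume on its own instead of trusting a path computed by engine.
PDIV = namedtuple("PDIV", ["pool_id", "domain_id", "image_id", "volume_id"])

def drivespec_from_pdiv(pdiv):
    """Build the dict-style drivespec that vdsm-side code could consume.

    The key names follow the PDIV fields seen in the VM metadata; whether
    vdsm actually wants exactly this shape is an assumption.
    """
    return {
        "poolID": pdiv.pool_id,
        "domainID": pdiv.domain_id,
        "imageID": pdiv.image_id,
        "volumeID": pdiv.volume_id,
        "device": "cdrom",
    }

cd = PDIV("86b5a5ca", "6e37519d", "2d8d2402", "7bc3ff54")
print(drivespec_from_pdiv(cd)["device"])  # -> cdrom
```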
>> 
>> You comments are welcome.
>> Thanks
>> Vojta
>> 
>> Flows
>> ===
>> 
>> VM without a CD
>> -
>>  - Should not be possible to insert any CD, this option should not be 
>> available/active in the UI.
> 
> I don't think we have such configuration, all VMs have empty cdrom by default:
> 
>
>     <disk type='file' device='cdrom'>
>         ...
>     </disk>
>
> 
> But of course if we have such configuration when adding CD is not

we don’t. All VMs have always at least that one implicit CD (empty or loaded)
We have a case where there is additional CD for cloud-init/payload, that 
additional CD is not “changeable” nor exposed in UI anywhere

> possible, the menu/button
> should be disabled in the UI.
> 
>> VM without CD, changeCD inserts new ISO

please update to make it really clear, there’s no “VM without CD”. VM always 
has CD, just not loaded (empty tray)
so this case doesn’t exist…

>> -
>>  - add new drivespec to VM metadata
> 
> How failure is handled?
> 
>>  - prepare new drivespec
> 
> What if vdsm is restarted at this point?
> 
> How VM recovery should handle this drivespec referring to inactive volume?
> 
>>  - attach new device to VM
>>  - if attaching to VM fails, tear down drivespec and remove drivespec from 
>> VM metadata
> 
> Tearing down and removing drivespec cannot be done atomically. any
> operation may fail and
> vdsm may be killed at any point.
> 
> The design should suggest how vdsm recover from all errors.
> 
>> VM with CD, changeCD removes current ISO
>> ---
>>  - tear down previous drivespec
>>- if volume with ISO is inactive (as a result e.g. of failure of vdsm 
>> after inserting drivespec into VM metadata, but before activating volume), 
>> continue without error
> 
> Sounds good, and may be already handled in lvm module. You can try to 
> deactivate
> LV twice to confirm this.
> 
>>- if drivespec is used by another VM, don’t deactivate volume
> 
> This is the tricky part, vdsm does not have a mechanism for tracking
> usage of block devices,
> and adding such mechanism is much bigger work than supporting CD on
> block devices, so
> it cannot be part of this work.

We do have the list of active VMs and their xml/devices, so we do know if there 
are other users; it’s not an expensive check. Locking it properly so there’s no 
in-flight request for changeCD could be a bit tricky, but it doesn’t sound that 
complex
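Roughly like this, as a sketch - the shape of the recovered VM list here (dicts with an id and drive specs) is an assumption, not vdsm's real in-memory model:

```python
def iso_in_use_elsewhere(vms, volume_id, current_vm_id):
    """True if a VM other than current_vm_id still references the ISO
    volume, in which case the LV must not be deactivated on changeCD."""
    for vm in vms:
        if vm["id"] == current_vm_id:
            continue
        for drive in vm["drives"]:
            if drive.get("volumeID") == volume_id:
                return True
    return False

vms = [
    {"id": "vm-1", "drives": [{"volumeID": "7bc3ff54"}]},
    {"id": "vm-2", "drives": [{"volumeID": "7bc3ff54"}]},
]
print(iso_in_use_elsewhere(vms, "7bc3ff54", "vm-1"))  # -> True
```

The locking concern stays: this walk has to happen under the same lock that serializes changeCD requests, otherwise two VMs can race on the last reference.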
> 
>>  -remove drivespec from VM metadata
> 
> What if this fails, or vdsm is restarted before we try to do this?
> 
> When vdsm starts it recovers running vms, we need to handle this case
> - drivespec
> referencing non-existing volume in this flow.

remove what exactly? The CD device stays there, the tray is “ejected”, and the 
device’s path is updated to “”.

> 
>> VM with CD, changeCD inserts new ISO and removes old
>> --
>>  - tear down previous drivespec
>>- if volume with ISO is inactive (as a result e.g. of failure of vdsm 
>> after inserting drivespec into VM metadata, but before activating volume), 
>> continue without error
>>- if drivespec is used by another VM, don’t deactivate volume
>>  - remove previous drivespec from VM metadata
>>  - add new drivespec to VM metadata
>>  - prepare new drivespec
>>  - attach new device to VM
>>  - if attaching new drivespac fails, tear down new drivespec and attach back 
>> previous drivespec
> 
> This is too complicated. When I discussed this Fedor, our conclusion
> was that we don't
> want to support this complexity, and it will be much easier to
> implement 2 APIs, one for
> removing a CD, and one for inserting a CD.
> 
> With separate APIs, error handling is much easier, and is similar to
> existing error handling
> in hotplugDisk and hotunplugDisk. You have good example how to handle
> all errors in these
> APIs.

there’s a big difference between hotplug and changeCD in terms of the 
devices. It’s supposedly an atomic change. Though having separate actions for 
load/eject would probably be ok. But not sure if it’s worth it

> 
> For example, if we have API for removing a CD:
> 
> 1. engine send VM.removeCD
> 

[ovirt-devel] Re: 192.168.200.90 unreachable (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 1696 - Still Failing!)

2020-08-04 Thread Michal Skrivanek


> On 4 Aug 2020, at 13:48, Michal Skrivanek  wrote:
> 
> 
> 
>> On 4 Aug 2020, at 10:52, Yedidyah Bar David  wrote:
>> 
>> On Tue, Aug 4, 2020 at 6:40 AM  wrote:
>>> 
>>> Project: 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
>>> Build: 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1696/
>> 
>> This is failing for a few days now, all with the same error:
>> 
>>> FAILED:  004_basic_sanity.hotplug_disk
>>> 
>>> Error Message:
>>> False != True after 180 seconds
>>>  >> begin captured logging << 
>>> lago.ssh: DEBUG: start task:cef1ecf3-76fd-4aad-a1fb-093eeda8193a:Get ssh 
>>> client for lago-he-basic-suite-master-host-1:
>>> lago.ssh: DEBUG: end task:cef1ecf3-76fd-4aad-a1fb-093eeda8193a:Get ssh 
>>> client for lago-he-basic-suite-master-host-1:
>>> lago.ssh: DEBUG: Running 833afb8a on lago-he-basic-suite-master-host-1: 
>>> ping -4 -c 1 192.168.200.90
>>> lago.ssh: DEBUG: Command 833afb8a on lago-he-basic-suite-master-host-1 
>>> returned with 1
>>> lago.ssh: DEBUG: Command 833afb8a on lago-he-basic-suite-master-host-1 
>>> output:
>>> PING 192.168.200.90 (192.168.200.90) 56(84) bytes of data.
>>> From 192.168.200.3 icmp_seq=1 Destination Host Unreachable
>> 
>> This is after Michal pushed a patch to restore the removed function
>> (and related) get_vm0_ip_address, which failed build 1691.
>> 
>> I guess this patch was not enough - either the VM now gets a different
>> IP address, not 192.168.200.90, or its network is down, or whatever.
> 
> on 4.3 it shouldn’t
> on master if the configuration is shared with basic-suite-master (which it is 
> in case of -he-basic-suite-master) it implicitly starts using CirrOS as well. 
> That means cloud init network conf doesn’t work and it doesn’t get IP .90 (it 
> quite reliably gets .111 now but I wouldn’t suggest that either:)
> 
>> Any idea?
> 
> yes, just replace it with hostname instead.

the whole suite needs py3 updates so maybe it’s worth copying bigger pieces of 
basic-suite-master…
or maybe rather revisit what that test does? It seems to replicate the basic 
suite to some degree, which is probably not really useful as that gets tested by 
the basic suite. The HE suite should only care about HE deployment, 
backup/restore probably, restarting HE, sure, but pretty much nothing from 
actual engine functionality.

> 
>> 
>> Thanks,
>> -- 
>> Didi

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5JPXLJIGQQRJXZIKOU4JDNQ3DHE4KZMI/


[ovirt-devel] Re: 192.168.200.90 unreachable (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 1696 - Still Failing!)

2020-08-04 Thread Michal Skrivanek


> On 4 Aug 2020, at 10:52, Yedidyah Bar David  wrote:
> 
> On Tue, Aug 4, 2020 at 6:40 AM  wrote:
>> 
>> Project: 
>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
>> Build: 
>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1696/
> 
> This is failing for a few days now, all with the same error:
> 
>> FAILED:  004_basic_sanity.hotplug_disk
>> 
>> Error Message:
>> False != True after 180 seconds
>>  >> begin captured logging << 
>> lago.ssh: DEBUG: start task:cef1ecf3-76fd-4aad-a1fb-093eeda8193a:Get ssh 
>> client for lago-he-basic-suite-master-host-1:
>> lago.ssh: DEBUG: end task:cef1ecf3-76fd-4aad-a1fb-093eeda8193a:Get ssh 
>> client for lago-he-basic-suite-master-host-1:
>> lago.ssh: DEBUG: Running 833afb8a on lago-he-basic-suite-master-host-1: ping 
>> -4 -c 1 192.168.200.90
>> lago.ssh: DEBUG: Command 833afb8a on lago-he-basic-suite-master-host-1 
>> returned with 1
>> lago.ssh: DEBUG: Command 833afb8a on lago-he-basic-suite-master-host-1 
>> output:
>> PING 192.168.200.90 (192.168.200.90) 56(84) bytes of data.
>> From 192.168.200.3 icmp_seq=1 Destination Host Unreachable
> 
> This is after Michal pushed a patch to restore the removed function
> (and related) get_vm0_ip_address, which failed build 1691.
> 
> I guess this patch was not enough - either the VM now gets a different
> IP address, not 192.168.200.90, or its network is down, or whatever.

on 4.3 it shouldn’t
on master if the configuration is shared with basic-suite-master (which it is 
in case of -he-basic-suite-master) it implicitly starts using CirrOS as well. 
That means cloud init network conf doesn’t work and it doesn’t get IP .90 (it 
quite reliably gets .111 now but I wouldn’t suggest that either:)

> Any idea?

yes, just replace it with hostname instead.
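In the test that could look like the sketch below - same 180-second retry, just with a name instead of the hardcoded 192.168.200.90. `run_on_host` stands in for the suite's lago ssh helper, so both the name and the helper are assumptions:

```python
import time

def wait_for_ping(run_on_host, target, timeout=180, interval=3):
    """Retry `ping -4 -c 1 <target>` on a host until it succeeds.

    run_on_host(cmd) is expected to return the command's exit code,
    like the lago ssh wrapper used in the suites.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if run_on_host("ping -4 -c 1 %s" % target) == 0:
            return True
        time.sleep(interval)
    return False

# Usage sketch (hostname is hypothetical):
#   assert wait_for_ping(host_1_ssh, "vm0.lago.local")
```

That keeps the test stable even if cloud-init hands the guest a different address.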

> 
> Thanks,
> -- 
> Didi
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/D3BTSKYZEGXGXWZCLSCL5X5FH5327MP7/


[ovirt-devel] Re: Error during deployment of self hosted engine oVirt 4.4

2020-08-04 Thread Michal Skrivanek


> On 4 Aug 2020, at 06:10, i iordanov  wrote:
> 
> It seems the issue stems from cpu type being empty.
> 
> 'cpu': {'architecture': 'undefined', 'type': ''}
> 
> 2020-08-03 23:31:39,888-0400 DEBUG 
> otopi.ovirt_hosted_engine_setup.ansible_utils 
> ansible_utils._process_output:103 cluster_facts: {'changed': False, 
> 'ansible_facts': {'ovirt_clusters': [{'href': 
> '/ovirt-engine/api/clusters/0eb77d38-d5fe-11ea-8808-00163e42a94a', 'comment': 
> '', 'description': 'The default server cluster', 'id': 
> '0eb77d38-d5fe-11ea-8808-00163e42a94a', 'name': 'Default', 'affinity_groups': 
> [], 'ballooning_enabled': True, 'bios_type': 'cluster_default', 'cpu': 
> {'architecture': 'undefined', 'type': ''}, 'cpu_profiles': [], 'data_center': 
> {'href': 
> '/ovirt-engine/api/datacenters/0ea3b60e-d5fe-11ea-a87c-00163e42a94a', 'id': 
> '0ea3b60e-d5fe-11ea-a87c-00163e42a94a'}, 'enabled_features': [], 
> 'error_handling': {'on_error': 'migrate'}, 'external_network_providers': [], 
> 'fencing_policy': {'enabled': True, 'skip_if_connectivity_broken': 
> {'enabled': False, 'threshold': 50}, 'skip_if_gluster_bricks_up': False, 
> 'skip_if_gluster_quorum_not_met': False, 'skip_if_sd_active': {'enabled': 
> False}}, 'firewall_type': 'firewalld', 'gluster_hooks': [], 
> 'gluster_service': False, 'gluster_volumes': [], 'ha_reservation': False, 
> 'ksm': {'enabled': True, 'merge_across_nodes': True}, 
> 'log_max_memory_used_threshold': 95, 'log_max_memory_used_threshold_type': 
> 'percentage', 'mac_pool': {'href': 
> '/ovirt-engine/api/macpools/58ca604b-017d-0374-0220-014e', 'id': 
> '58ca604b-017d-0374-0220-014e'}, 'memory_policy': {'over_commit': 
> {'percent': 100}, 'transparent_huge_pages': {'enabled': True}}, 'migration': 
> {'auto_converge': 'inherit', 'bandwidth': {'assignment_method': 'auto'}, 
> 'compressed': 'inherit', 'encrypted': 'inherit', 'policy': {'id': 
> '80554327-0569-496b-bdeb-fcbbf52b827b'}}, 'network_filters': [], 'networks': 
> [], 'permissions': [], 'required_rng_sources': ['urandom'], 
> 'scheduling_policy': {'href': 
> '/ovirt-engine/api/schedulingpolicies/b4ed2332-a7ac-4d5f-9596-99a439cb2812', 
> 'id': 'b4ed2332-a7ac-4d5f-9596-99a439cb2812'}, 'switch_type': 'legacy', 
> 'threads_as_cores': False, 'trusted_service': False, 'tunnel_migration': 
> False, 'version': {'major': 4, 'minor': 4}, 'virt_service': True, 
> 'vnc_encryption': False}]}, 'deprecations': [{'msg': "The 
> 'ovirt_cluster_facts' module has been renamed to 'ovirt_cluster_info', and 
> the renamed one no longer returns ansible_facts", 'version': '2.13'}], 
> 'failed': False}
> 
> Perhaps this Penryn series CPU is too old for this oVirt installation...

Yes, we dropped Penryn from the supported CPU list in 4.3. You could probably 
still make it run, but it would need messing with the engine’s DB: adding the 
Nehalem entry back to ServerCPUList (e.g. from the 4.2 cluster version line) and 
resuming the deployment somehow.
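For illustration only, the shape of that edit might look like the sketch below. The semicolon-separated `level:name:flags:qemu-model:arch` field layout is an assumption here; check the actual ServerCPUList value in your engine's vdc_options (e.g. via `engine-config -g ServerCPUList`) before touching anything.

```python
# Hypothetical sketch: re-inserting a Nehalem entry into a
# ServerCPUList-style string. The field layout used here is an
# assumption, not the guaranteed engine format.

def add_nehalem(server_cpu_list: str) -> str:
    nehalem = "1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64"
    # split on ';' and drop the empty trailing element
    entries = [e for e in server_cpu_list.split(";") if e]
    # only add the entry if it is not already present (idempotent)
    if not any("Nehalem" in e for e in entries):
        entries.insert(0, nehalem)
    return ";".join(entries) + ";"

current = ("2:Intel Westmere Family:vmx,nx,model_Westmere:Westmere:x86_64;"
           "3:Intel SandyBridge Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64;")
print(add_nehalem(current))
```

The actual change would then be written back with `engine-config -s` (or directly in the DB) for the cluster level in question.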

> 
> iordan
> 
> On Mon, Aug 3, 2020 at 11:54 PM i iordanov wrote:
> Hi guys,
> 
> I am trying to install oVirt 4.4 for testing of the aSPICE and Opaque Android 
> clients and tried to follow this slightly outdated doc:
> 
> https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/#Installing_the_self-hosted_engine_deployment_host_SHE_cli_deploy
>  
> 
> 
> to deploy an all-in-one self-hosted engine using the command-line.
> 
> I started with a clean CentOS 8 installation, set up an NFS server and tested 
> that mounts work from the local host and other hosts, opened all ports with 
> firewalld to my LAN and localhost (but left firewalld enabled).
> 
> During the run of 
> hosted-engine --deploy
> I got the following error:
> 
> 2020-08-03 23:31:51,426-0400 DEBUG 
> otopi.ovirt_hosted_engine_setup.ansible_utils 
> ansible_utils._process_output:103 TASK [ovirt.hosted_engine_setup : debug]
> 2020-08-03 23:31:51,827-0400 DEBUG 
> otopi.ovirt_hosted_engine_setup.ansible_utils 
> ansible_utils._process_output:103 server_cpu_dict: {'Intel Nehalem Family': 
> 'Nehalem', 'Secure Intel Nehalem Family': 'Nehalem,+sp
> ec-ctrl,+ssbd,+md-clear', 'Intel Westmere Family': 'Westmere', 'Secure Intel 
> Westmere Family': 'Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear', 'Intel 
> SandyBridge Family': 'SandyBridge', 'Secure Intel SandyBridge Fa
> mily': 'SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear', 'Intel IvyBridge 
> Family': 'IvyBridge', 'Secure Intel IvyBridge Family': 
> 'IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear', 'Intel Haswell Family': 
> 'Haswell-noTSX
> ', 'Secure Intel Haswell Family': 'Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear', 
> 'Intel Broadwell Family': 'Broadwell-noTSX', 'Secure Intel Broadwell Family': 
> 'Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear', 'Intel Sk
> ylake Client Family': 

[ovirt-devel] Re: OST switching back to CirrOS

2020-07-30 Thread Michal Skrivanek


> On 30 Jul 2020, at 12:13, Yedidyah Bar David  wrote:
> 
> On Wed, Jul 29, 2020 at 7:09 PM Michal Skrivanek
>  wrote:
>> 
>> Hi,
>> FYI we’ve just switched back from CentOS to CirrOS VMs in OST basic suite, 
>> saving 900MB of data transferred all the time (and copied back and forth 
>> during test) as well as lower memory requirements (down to 128MB from 384MB, 
>> times 3 VMs).
>> For a good measure a memory hot unplug test was added too.
>> 
>> Let me know if you see anything unusual, other than faster run times.
> 
> Several suites are broken right now, e.g.:
> 
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1691/testReport/junit/(root)/004_basic_sanity/vm_run/
> 
> Due to missing function get_vm0_ip_address, which you removed in your
> patch. Is there a problem to restore this function (even if basic
> suite does not use it anymore), or it won't work as-is? Perhaps it

it will work, it’s just the convoluted linking of files between suites…

> won't work if moving to cirros also other suites? Trying, anyway:
> 
> https://gerrit.ovirt.org/110559

I noticed too, sorry, forgot to add you to review of 
https://gerrit.ovirt.org/#/c/110554/
i’ll merge it now

> 
> Best regards,
> -- 
> Didi
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LEIP73YZIHH5MS6O4ATPFRVOXBAUJDBW/


[ovirt-devel] OST switching back to CirrOS

2020-07-29 Thread Michal Skrivanek
Hi,
FYI we’ve just switched back from CentOS to CirrOS VMs in OST basic suite, 
saving 900MB of data transferred all the time (and copied back and forth during 
test) as well as lower memory requirements (down to 128MB from 384MB, times 3 
VMs).
For a good measure a memory hot unplug test was added too.

Let me know if you see anything unusual, other than faster run times.

Thanks,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PDEKIWLTFAUQE3C6T7R6O7U5GCIJT2JU/


[ovirt-devel] Re: code merges in ovirt-engine repo

2020-07-14 Thread Michal Skrivanek


> On 14 Jul 2020, at 15:33, Yedidyah Bar David  wrote:
> 
> On Tue, Jul 14, 2020 at 4:21 PM Michal Skrivanek wrote:
> 
> 
> > On 14 Jul 2020, at 12:11, Yedidyah Bar David wrote:
> > 
> > On Tue, Jul 14, 2020 at 11:43 AM Michal Skrivanek wrote:
> >> 
> >> Hi all,
> >> we’re moving to 4.4.z development now and we need to keep a closer eye on 
> >> automation results and making sure the build is not broken. For these 
> >> reasons we’re considering moving to a similar model as vdsm, having a 
> >> smaller set of people with merge rights to make sure the patches get in in 
> >> the right order and they meet our sanity standards (OST, bug’s TM)
> >> Any objections/comments?
> > 
> > Any reason to not simply branch 4.4? And have the branch maintained by
> > the stable branches maintainers?
> 
> just the sheer amount of backports needed (every patch). Doesn’t sound worth 
> the effort of posting and reviewing (even if just formally) everything twice.
> 
> If you expect _every_ patch to be backported

yes

> , just do nothing - let current maintainers do their job, and revert the 
> occasional bad ones when needed.

And how would you prevent breaking the relatively frequent updates? The CI OST 
is not working very well (for a long time now), and while we are improving and 
stabilizing the infrastructure, it’s not really there yet to consider automated 
gating.
Engine is unique in the oVirt set of projects: it’s the largest one by far and 
uses maintainership per team or area (FE, BE, database, API…), so we have a 
pretty high number of people merging patches but far fewer people keeping up to 
date with the project’s planning.


> 
> Otherwise, I think branching is a good approach.
>  
> 
> > -- 
> > Didi
> > 
> 
> 
> 
> -- 
> Didi

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EDI64EBEIJC6KUZWEHOXX2JI6WYVCZVF/


[ovirt-devel] Re: code merges in ovirt-engine repo

2020-07-14 Thread Michal Skrivanek


> On 14 Jul 2020, at 12:11, Yedidyah Bar David  wrote:
> 
> On Tue, Jul 14, 2020 at 11:43 AM Michal Skrivanek
>  wrote:
>> 
>> Hi all,
>> we’re moving to 4.4.z development now and we need to keep a closer eye on 
>> automation results and making sure the build is not broken. For these 
>> reasons we’re considering moving to a similar model as vdsm, having a 
>> smaller set of people with merge rights to make sure the patches get in in 
>> the right order and they meet our sanity standards (OST, bug’s TM)
>> Any objections/comments?
> 
> Any reason to not simply branch 4.4? And have the branch maintained by
> the stable branches maintainers?

just the sheer amount of backports needed (every patch). Doesn’t sound worth 
the effort of posting and reviewing (even if just formally) everything twice.

> -- 
> Didi
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CEVY6OCRMT7IRYPMQTKCZQ2DMQHKF3GK/


[ovirt-devel] code merges in ovirt-engine repo

2020-07-14 Thread Michal Skrivanek
Hi all,
we’re moving to 4.4.z development now and we need to keep a closer eye on 
automation results and make sure the build is not broken. For these reasons 
we’re considering moving to a similar model to vdsm: having a smaller set of 
people with merge rights, to make sure the patches get in in the right order and 
meet our sanity standards (OST, bug’s TM).
Any objections/comments?

Thanks,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3RQPIROZ5NJ2BQ5C74ZGALBQN5JOW2TY/


[ovirt-devel] Re: OST failing because of modular filtering error

2020-06-30 Thread Michal Skrivanek


> On 29 Jun 2020, at 17:12, Parth Dhanjal  wrote:
> 
> Hey!
> 
> I am unable to install gluster-ansible-roles on VMs because of an error from 
> modular filtering
>   - conflicting requests
>   - package python-six-1.9.0-2.el7.noarch is filtered out by modular filtering
>   - package python2-cryptography-1.7.2-2.el7.x86_64 is filtered out by 
> modular filtering
>   - package python2-cryptography-2.1.4-2.el7.x86_64 is filtered out by 
> modular filtering
>   - package python2-six-1.10.0-9.el7.noarch is filtered out by modular 
> filtering
>   - package python-six-1.9.0-1.el7.noarch is filtered out by modular filtering
>  
> I tried disabling the modular packages but still facing this issue, can 
> someone suggest a fix?

how did you get el7 packages? what/how are you installing exactly?

> 
> Regards
> Parth Dhanjal
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3ETIG5GJFGUKTW4HL77M432I62VZNXSA/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VEPKSRD6DVDGVFBRLRCUGXHKDDXRK76A/


[ovirt-devel] Re: VM rebooted during OST test_hotplug_cpu

2020-06-30 Thread Michal Skrivanek


> On 30 Jun 2020, at 08:30, Yedidyah Bar David  wrote:
> 
> Hi all,
> 
> I am trying to verify fixes for ovirt-engine-rename, specifically for
> OVN. Engine top patch is [1], OST patch [2]. Ran the manual job on
> these [3].
> 
> In previous patches, OST failed in earlier tests. Now, it passed these
> tests, so I hope that my patches are enough for what I am trying to
> do. However, [3] did fail later, during test_hotplug_cpu - it set the
> number of CPUs, then tried to connect to the VM, and timed out.
> 
> The logs imply that right after it changed the number of CPUs, the VM
> was rebooted, apparently by libvirtd. Relevant log snippets:
> 
> vdsm [4]:
> 
> 2020-06-29 10:21:10,889-0400 DEBUG (jsonrpc/1) [virt.vm]
> (vmId='7474280d-4501-4355-9425-63898757682b') Setting number of cpus
> to : 2 (vm:3089)
> 2020-06-29 10:21:10,952-0400 INFO  (jsonrpc/1) [api.virt] FINISH
> setNumberOfCpus return={'status': {'code': 0, 'message': 'Done'},
> 'vmList': {}} from=:::192.168.201.4,54576, flow_id=7f9503ed,
> vmId=7474280d-4501-4355-9425-63898757682b (api:54)
> 2020-06-29 10:21:11,111-0400 DEBUG (periodic/0)
> [virt.sampling.VMBulkstatsMonitor] sampled timestamp 2925.602824355
> elapsed 0.160 acquired True domains all (sampling:451)
> 2020-06-29 10:21:11,430-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer]
> Return 'VM.setNumberOfCpus' in bridge with {} (__init__:356)
> 2020-06-29 10:21:11,432-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer]
> RPC call VM.setNumberOfCpus succeeded in 0.56 seconds (__init__:312)
> 2020-06-29 10:21:12,228-0400 INFO  (libvirt/events) [virt.vm]
> (vmId='7474280d-4501-4355-9425-63898757682b') reboot event (vm:1033)
> 
> qemu [5]:
> 
> 2020-06-29T14:21:12.260303Z qemu-kvm: terminating on signal 15 from
> pid 42224 ()
> 2020-06-29 14:21:12.462+: shutting down, reason=destroyed
> 
> libvirtd [6] itself does not log anything relevant AFAICT, but at
> least it shows that the above unknown process is itself:
> 
> 2020-06-29 14:18:16.212+: 42224: error : qemuMonitorIO:620 :
> internal error: End of file from qemu monitor
> 
> (Note that above line is from 3 minutes before the reboot, and the
> only place in the log with '42224'. No other log there has 42224,
> other than these and audit.log).
> 
> Any idea? Is this a bug in libvirt? vdsm? I'd at least expect
> something in the log for such a severe step.

I’d suggest rerunning. I don’t trust the CI env at all; it could be any reason.
It’s highly unlikely to be caused by your patch, and I can see on my infra that 
OST is running well on both CentOS and Stream.

> 
> [1] https://gerrit.ovirt.org/109961
> [2] https://gerrit.ovirt.org/109734
> [3] 
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7031/
> [4] 
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7031/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity_pytest.py/lago-basic-suite-master-host-0/_var_log/vdsm/vdsm.log
> [5] 
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7031/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity_pytest.py/lago-basic-suite-master-host-0/_var_log/libvirt/qemu/vm0.log
> [6] 
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7031/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity_pytest.py/lago-basic-suite-master-host-0/_var_log/libvirt.log
> -- 
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JEF5QWFZF4O2OGQFHPH7SPU6SX76KF47/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZCKCXCOG4IQWTFSSTAG43INH7WA5BHQS/


[ovirt-devel] Re: implementing hotplugCd/hotunplugCd in vdsm

2020-06-28 Thread Michal Skrivanek
Hi Fedor,

> On 27 Jun 2020, at 14:46, Fedor Gavrilov  wrote:
>
> So from what I was able to see in hotplugDisk example, for CDs I would have 
> to do similar:
>
> 1. parse disk params from stored xml metadata
> 2. call prepareVolumePath with these params
> (skip normalizeVdsmImg)
> 3. call updateDriveIndex

Why?

> (skip validating whether volume has leases)
> 4. drive.getXML() and then hooks.before_disk_hotplug with it

If only the storage stuff would ever finish moving to XML…
But it’s not right to call this hook anyway; IIUC you’re not doing a hotplug.

> 5. call attachDevice with drive XML and handle possible errors
>
> I am not sure, however, whether we need what "else" does in hotplugDisk 
> (vm.py line 3468)...
>
> Looking forward to hearing your opinions,
> Fedor
>
> - Original Message -
> From: "Fedor Gavrilov" 
> To: "devel" 
> Sent: Monday, June 22, 2020 4:37:59 PM
> Subject: [ovirt-devel] implementing hotplugCd/hotunplugCd in vdsm
>
> Hey,
>
> So in an attempt to fix change CD functionality we discovered a few other 
> potential

CD has a long history of issues caused by historical simplifications.
What is the intended fix, stop using string paths?

> issues and what Nir suggested was to implement two [somewhat] new functions 
> in VDSM: hotplug and hotunplug for CDs similar to how it works for normal 
> disks now.

This is conceptually different: a CD is removable media, while
hotplug/unplug is for the device itself. We never supported hotplug of
the device, though.

> Existing changeCD function will be left as is for backwards compatibility.

Who would use the new function?

> As I found out, engine already calculates iface and index before invoking 
> VDSM functions, so we will just pass these along with PDIV to the VDSM.
>
> Suggested flow is, let me quote:
>
>> So the complete change CD flow should be:
>>
>> 1. get the previous drivespec from vm metadata
>> 2. prepare new drivespec
>> 3. add new drivespec to vm metadata
>> 4. attach a new device to vm

Don’t call it a new device when you are just changing the media

>> 5. teardown the previous drivespec
>> 6. remove previous drivespec from vm metadata
>>
>> When the vm is stopped, it must do:
>>
>> 1. get drive spec from vm metadata
>> 2. teardown drivespec
>>
>> During attach, there are interesting races:
>> - what happens if vdsm crashes after step 2? who will teardown the volume?
>> maybe you need to add the new drivespec to the metadata first,
>> before preparing it.
>> - what happens if attach failed? who will remove the new drive from
>> the metadata?
>
> Now, what makes hotplugDisk/hotunplugDisk different? From what I understand, 
> the flow is same there, so what difference is there as far as VDSM is 
> concerned? If none, this means if I more or less copy that code, changing 
> minor details and data accordingly for CDs, this should work, shouldn't it?

Hotplug/unplug is similar, but I would like to see the proposed change
in the context of the engine as well, and I expect it to be a bit more complex
there. Without that, this is not worth it.

Thanks,
michal
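The ordering concern in the quoted flow above (record the new drivespec in VM metadata *before* preparing it, so a crash leaves something for cleanup to act on) can be sketched as plain bookkeeping. This is an illustration of the proposed flow, not vdsm's actual API; the class, method names, and metadata dict are placeholders.

```python
# Sketch of the proposed change-CD flow. The crash-safety idea: the new
# drivespec goes into metadata first, so a crash between "record" and
# "prepare" (or before teardown of the old spec) leaves a traceable record.

class ChangeCD:
    def __init__(self):
        self.metadata = {}      # stands in for the VM metadata section
        self.prepared = set()   # stands in for prepared volume paths

    def prepare(self, spec):
        self.prepared.add(spec)

    def teardown(self, spec):
        self.prepared.discard(spec)

    def change_cd(self, new_spec):
        old_spec = self.metadata.get("cdrom")   # 1. get previous drivespec
        self.metadata["cdrom"] = new_spec       # record first, for crash safety
        self.prepare(new_spec)                  # 2-3. prepare new drivespec
        # 4. updating the device on the running VM would happen here
        if old_spec is not None:
            self.teardown(old_spec)             # 5-6. teardown previous spec
        return old_spec

    def on_vm_stop(self):
        # when the VM stops: read the spec from metadata and tear it down
        spec = self.metadata.pop("cdrom", None)
        if spec is not None:
            self.teardown(spec)
```

The interesting failure windows the quoted mail lists map directly onto the gaps between these statements, which is why the metadata write comes first.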
>
> Thanks!
> Fedor
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LAQR3RW4RMTUNFUXL5T4HWLPKXJKEC3Y/
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VKPDAB7XXTQKPYWAZEHQYTV7TRCQQI2E/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/W2U55OKENZP2ZMNG7AJDYWE3NKBOPGOA/


[ovirt-devel] Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-4.3 - Build # 472 - Failure!

2020-06-25 Thread Michal Skrivanek
it is very unlikely that anything has changed in the 4.3 suite.
TBH I wouldn’t bother, and would rather invest the time to get the 4.4 
he-basic-suite on par with basic-suite-master. It needs python3, pytest, 
ost-images…

Thanks,
michal

> On 25 Jun 2020, at 09:56, Sandro Bonazzola  wrote:
> 
> 
> 
> On Mon, Jun 15, 2020 at 9:15 AM Yedidyah Bar David wrote:
> On Sat, Jun 13, 2020 at 5:01 AM  wrote:
> >
> > Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/
> > Build: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/472/
> 
> This failed in:
> 
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/474/testReport/junit/(root)/012_local_maintenance_sdk/local_maintenance/
>  
> 
> 
> Fault reason is "Operation Failed". Fault detail is "[Cannot activate
> Host. Related operation is currently in progress. Please try again
> later.]". HTTP response code is 409.
> 
> engine.log has:
> 
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/474/artifact/exported-artifacts/test_logs/he-basic-suite-4.3/post-012_local_maintenance_sdk.py/lago-he-basic-suite-4-3-engine/_var_log/ovirt-engine/engine.log
>  
> 
> 
> 2020-06-15 00:00:58,330-04 INFO
> [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand]
> (ForkJoinPool-1-worker-8) [3715b680] Running command:
> RefreshHostCapabilitiesCommand internal: true. Entities affected :
> ID: e75bc41d-c044-40f6-a645-9a86f2ef2536 Type: VDSAction group
> MANIPULATE_HOST with role type ADMIN
> 2020-06-15 00:00:58,331-04 INFO
> [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand]
> (ForkJoinPool-1-worker-8) [3715b680] Before acquiring lock in order to
> prevent monitoring for host 'lago-he-basic-suite-4-3-host-0' from
> data-center 'Default'
> 2020-06-15 00:00:58,332-04 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (ForkJoinPool-1-worker-8) [3715b680] Failed to acquire lock and wait
> lock 
> 'HostEngineLock:{exclusiveLocks='[e75bc41d-c044-40f6-a645-9a86f2ef2536=VDS_INIT]',
> sharedLocks=''}'
> 2020-06-15 00:00:58,337-04 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-281) [] Clearing cache of pool:
> 'e186881e-aeb0-11ea-8e5b-5452c0a8c863' for problematic entities of
> VDS: 'lago-he-basic-suite-4-3-host-0'.
> 2020-06-15 00:00:58,337-04 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-281) [] Removing vds
> '[e75bc41d-c044-40f6-a645-9a86f2ef2536]' from the domain in
> maintenance cache
> 2020-06-15 00:00:58,338-04 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-281) [] Removing host(s)
> '[e75bc41d-c044-40f6-a645-9a86f2ef2536]' from hosts unseen domain
> report cache
> 2020-06-15 00:00:58,382-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStoragePoolVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-60) [] START,
> DisconnectStoragePoolVDSCommand(HostName =
> lago-he-basic-suite-4-3-host-0,
> DisconnectStoragePoolVDSCommandParameters:{hostId='e75bc41d-c044-40f6-a645-9a86f2ef2536',
> storagePoolId='e186881e-aeb0-11ea-8e5b-5452c0a8c863',
> vds_spm_id='1'}), log id: e9ba207
> 2020-06-15 00:00:58,498-04 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (ForkJoinPool-1-worker-8) [3715b680] Failed to acquire lock and wait
> lock 
> 'HostEngineLock:{exclusiveLocks='[e75bc41d-c044-40f6-a645-9a86f2ef2536=VDS_INIT]',
> sharedLocks=''}'
> 2020-06-15 00:01:01,657-04 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (ForkJoinPool-1-worker-8) [3715b680] Failed to acquire lock and wait
> lock 
> 'HostEngineLock:{exclusiveLocks='[e75bc41d-c044-40f6-a645-9a86f2ef2536=VDS_INIT]',
> sharedLocks=''}'
> 2020-06-15 00:01:03,556-04 INFO
> [org.ovirt.engine.core.bll.ActivateVdsCommand] (default task-1)
> [e4c7e809-7883-47dc-b7da-3236cc754ebb] Failed to Acquire Lock to
> object 
> 'EngineLock:{exclusiveLocks='[e75bc41d-c044-40f6-a645-9a86f2ef2536=VDS]',
> sharedLocks=''}'
> 2020-06-15 00:01:03,585-04 WARN
> [org.ovirt.engine.core.bll.ActivateVdsCommand] (default task-1)
> [e4c7e809-7883-47dc-b7da-3236cc754ebb] Validation of action
> 'ActivateVds' failed for user admin@internal-authz. Reasons:
> VAR__ACTION__ACTIVATE,VAR__TYPE__HOST,ACTION_TYPE_FAILED_OBJECT_LOCKED
> 2020-06-15 00:01:03,864-04 ERROR
> 

[ovirt-devel] Re: Enabling hardware acceleration on RHEL8 VMs (non bios way)

2020-06-22 Thread Michal Skrivanek


> On 21 Jun 2020, at 20:01, Prajith Kesava Prasad  wrote:
> 
> Hi 
> 
> I was wondering if there exist any way to enable hardware acceleration on a 
> RHEL8.1 or (RHEL8)  VM.
> (the VM was created on ovirt ) , if there are any docs or anything which any 
> of you could point me to, would help me out.

There’s nothing to do in the VM itself; we have a vdsm-hook-nestedvt hook which 
just adds it to each VM launched on a hypervisor with that hook installed.
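For context, vdsm hooks of this kind are small scripts that rewrite the libvirt domain XML before the VM starts. The sketch below shows the general shape of such a transformation as a pure function so it is easy to see; on a real host the XML would come from `hooking.read_domxml()` and be written back with `hooking.write_domxml()`, and the exact feature handling of the shipped vdsm-hook-nestedvt may differ from this illustration.

```python
# Sketch of a before_vm_start-style hook requiring the vmx CPU feature
# (nested virt on Intel) in the domain XML. Illustration only, not the
# actual source of vdsm-hook-nestedvt.
from xml.dom import minidom

def require_vmx(domxml_str: str) -> str:
    dom = minidom.parseString(domxml_str)
    cpu_nodes = dom.getElementsByTagName("cpu")
    # reuse the existing <cpu> element, or create one under <domain>
    cpu = (cpu_nodes[0] if cpu_nodes
           else dom.documentElement.appendChild(dom.createElement("cpu")))
    feature = dom.createElement("feature")
    feature.setAttribute("name", "vmx")       # "svm" on AMD
    feature.setAttribute("policy", "require")
    cpu.appendChild(feature)
    return dom.toxml()

sample = "<domain><cpu mode='host-model'/></domain>"
print(require_vmx(sample))
```

Installed as an executable under the `before_vm_start` hook directory, this runs for every VM started on that host, which matches the behavior described above.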

> 
>  [ i was able to find a doc to enable nested virtualization (which is my main 
> goal)  that had asked me to enable hardware acceleration from bios] , i 
> couldn't really load bios menu, in the VM, (I'm not sure if it really loads 
> fast or it directly boots ) but its not showing the bios menu ,in the console 
> vv viewer of my RHEL8.1 VM
> 
> Any help is appreciated.! :-)
> 
> Thanks,
> Prajith K Prasad
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KGECQPIMFZYHPUR5PYYF34A7FYC75RM5/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PO3NSINMQ2F2DPHU5YSGXPJ4PGX26VTT/


[ovirt-devel] Re: [ovirt-users] Change Hosted engine VM cluster compatibility version throws error

2020-06-19 Thread Michal Skrivanek
On 19 Jun 2020, at 16:40, Ritesh Chikatwar  wrote:




On Fri, Jun 19, 2020, 7:26 PM Michal Skrivanek wrote:

>
>
> On 19 Jun 2020, at 13:41, Ritesh Chikatwar  wrote:
>
>
>
> On Thu, Jun 18, 2020 at 11:59 PM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> On 18 Jun 2020, at 08:59, Ritesh Chikatwar  wrote:
>>
>> Hello Team,
>>
>>
>> When i try to change Cluster compatible version HE it throws error As
>>
>>
>> what exactly are you changing where?
>>
>
> I am trying to change the cluster compatible version for the Hosted engine
> in Ui. The drop down did not set any value and I am trying to set to 4.4.
>
>
> which drop down?
> Why are you changing cluster compatibility level of HE?
>
> maybe that’s the best question for starts - what’s the current situation
> and what are you trying to get to?:)
>

Yeah, correct, Michal, I should have explained that at the beginning of the 
mail. Apologies for that.

I have a 4.4 RHHI setup with Gluster storage. But in this setup the Gluster 
service is not enabled by default. I can enable it from the UI by editing the 
cluster, and when I try that I get the error:

 Error while executing action: Update of cluster compatibility version
failed because there are VMs/Templates [HostedEngine]


Ah ok, that explains a lot. The message is misleading, it has nothing to do
with cluster version.
Can you please share your engine.log with that failure to check what
exactly failed there?

Lucia, the message is definitely confusing and your patch should be
finalized and merged :)

Thanks,
michal

with incorrect configuration. To fix the issue, please go to each of them,
edit, change the Custom Compatibility Version of the VM/Template to the
cluster level you want to update the cluster to and press OK. If the save
does not pass, fix the dialog validation. After successful cluster update,
you can revert your Custom Compatibility Version change.

This is the reason I am changing vm's compatibility version.


I also have one doubt here: when the VM was created, why didn’t it get a value
for the cluster compatibility version?




> Thanks,
> michal
>
>
>>
>> Error while executing action:
>>
>> HostedEngine:
>>
>>- There was an attempt to change Hosted Engine VM values that are
>>locked.
>>
>> I am trying to change the version to 4.4 it was showing blank.
>>
>> Any suggestions on how I can edit.
>>
>> The VM other than HE is able to editi.
>>
>>
>>
>> *Ritesh*
>> ___
>> Users mailing list -- us...@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/us...@ovirt.org/message/3EBGCBFDUBHNI6G5E3NG4DCD7RQJLUNC/
>>
>>
>>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3RBWL4PEJIEKNQ2GMONNJJDK2ZCWS4PO/


[ovirt-devel] Re: [ovirt-users] Change Hosted engine VM cluster compatibility version throws error

2020-06-19 Thread Michal Skrivanek


> On 19 Jun 2020, at 13:41, Ritesh Chikatwar  wrote:
> 
> 
> 
> On Thu, Jun 18, 2020 at 11:59 PM Michal Skrivanek wrote:
> 
> 
>> On 18 Jun 2020, at 08:59, Ritesh Chikatwar wrote:
>> 
>> Hello Team,
>> 
>> 
>> When i try to change Cluster compatible version HE it throws error As
> 
> what exactly are you changing where?
> 
> I am trying to change the cluster compatible version for the Hosted engine in 
> Ui. The drop down did not set any value and I am trying to set to 4.4.  

which drop down?
Why are you changing the cluster compatibility level of HE?

maybe that’s the best question to start with - what’s the current situation and 
what are you trying to get to? :)

Thanks,
michal

> 
>> 
>> Error while executing action:
>> 
>> HostedEngine:
>> There was an attempt to change Hosted Engine VM values that are locked.
>> I am trying to change the version to 4.4 it was showing blank.
>> 
>> Any suggestions on how I can edit.
>> 
>> The VM other than HE is able to editi.
>> 
>> 
>> 
>> Ritesh
>> ___
>> Users mailing list -- us...@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/us...@ovirt.org/message/3EBGCBFDUBHNI6G5E3NG4DCD7RQJLUNC/
> 

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TZZZWD76X2PAFCIMR4GXEZARNWDJBCZ7/


[ovirt-devel] Re: [ovirt-users] Change Hosted engine VM cluster compatibility version throws error

2020-06-18 Thread Michal Skrivanek


> On 18 Jun 2020, at 08:59, Ritesh Chikatwar  wrote:
> 
> Hello Team,
> 
> 
> When i try to change Cluster compatible version HE it throws error As

what exactly are you changing where?

> 
> Error while executing action:
> 
> HostedEngine:
> There was an attempt to change Hosted Engine VM values that are locked.
> I am trying to change the version to 4.4 it was showing blank.
> 
> Any suggestions on how I can edit.
> 
> The VM other than HE is able to editi.
> 
> 
> 
> Ritesh
> ___
> Users mailing list -- us...@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/us...@ovirt.org/message/3EBGCBFDUBHNI6G5E3NG4DCD7RQJLUNC/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EJDEDCQJ6DMCMMLDTQJCBUAO2TPSDG2C/


[ovirt-devel] Re: [ovirt-users] Re: oVirt-4.4 on CentOS 8.2

2020-06-16 Thread Michal Skrivanek
We need to update OST to use CentOS 8.2; there were several commented-out tests 
waiting for it.

It can be a good “excuse” to start using ost-images[1] instead of manually 
rebuilding the template image. It’s not ready for other suites yet, but it 
shouldn’t really be that hard to do.

Thanks,
michal

[1] https://gerrit.ovirt.org/#/c/109378/

> On 16 Jun 2020, at 14:48, Galit Rosenthal  wrote:
> 
> Hi Dominik
> 
> We actually don't maintain a file with all the packages.
> We maintain only the repos.
> 
> I will make sure that the repos have these packages.
> 
> Regards,
> Galit
> 
> On Tue, Jun 16, 2020 at 3:19 PM Dominik Holler  wrote:
> Hello Galit,
> are there still manual maintained package lists in OST?
> If so, can you please remove openvswitch and ovn, and add
> openvswitch2.11
> ovirt-openvswitch-*
> ovirt-python-openvswitch
> python3-openvswitch2.11
> ovn2.11*
> like in
> https://pastebin.com/Wajkbnbt 
> 
> Thanks
> Dominik
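For anyone verifying a repo against such a list, the glob entries (ovirt-openvswitch-*, ovn2.11*) have to be expanded against the actual package names. A rough sketch of that check — the repo snapshot below is invented for illustration, not the real repo listing:

```python
import fnmatch

# Package names/globs requested in the message above.
requested = [
    "openvswitch2.11",
    "ovirt-openvswitch-*",
    "ovirt-python-openvswitch",
    "python3-openvswitch2.11",
    "ovn2.11*",
]

# Hypothetical snapshot of the repo contents -- NOT the real listing.
repo_packages = [
    "openvswitch2.11",
    "ovirt-openvswitch-ovn-host",
    "python3-openvswitch2.11",
    "ovn2.11-host",
    "openvswitch",  # legacy package that should no longer be pulled in
]

def match_requests(patterns, available):
    """Map each requested name/glob to the repo packages it matches."""
    return {pattern: fnmatch.filter(available, pattern) for pattern in patterns}

for pattern, hits in match_requests(requested, repo_packages).items():
    print(pattern, "->", hits if hits else "MISSING")
```

Anything reported as MISSING would need to be added to the repo before the suites can install.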
> 
> On Tue, Jun 16, 2020 at 1:41 PM Yedidyah Bar David  wrote:
> On Tue, Jun 16, 2020 at 12:51 PM Dominik Holler  wrote:
> >
> >
> >
> > On Mon, Jun 15, 2020 at 10:20 PM Dominik Holler  wrote:
> >>
> >> Hello,
> >> CentOS 8.2 was released before oVirt was prepared for CentOS 8.2 .
> >> Currently oVirt-4.4 fails to install on CentOS 8.2 .
> >> This will be fixed soon.
> >
> >
> > An updated version of ovn and ovs is available already on some mirrors.
> > Please let me know if you have any problems regarding ovs/ovn on CentOS 8.2.
> 
> Tried OST He-basic master and it still failed:
> 
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1640/artifact/exported-artifacts/mock_logs/script/stdout_stderr.log
>  
> 
> 
> Error:
>  Problem: package
> ovirt-hosted-engine-setup-2.4.6-0.0.master.20200609105658.git1e950f2.el8.noarch
> requires vdsm-python >= 4.40.0, but none of the providers can be
> installed
>   - package vdsm-python-4.40.18-13.gitd65c4a6c2.el8.noarch requires
> vdsm-network = 4.40.18-13.gitd65c4a6c2.el8, but none of the providers
> can be installed
> ...
> 
>   - package vdsm-network-4.40.20-4.git180b82120.el8.x86_64 requires
> openvswitch >= 2.7.0, but none of the providers can be installed
>   - cannot install the best candidate for the job
>   - nothing provides librte_bitratestats.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>   - nothing provides librte_bus_pci.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
> ...
> 
> Is this the version you built? Are there other missing stuff (deps,
> dnf modules, etc.)?
> 
> Thanks,
> -- 
> Didi
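As an aside for anyone triaging similar reports: the unresolvable capabilities in a dnf transaction error like the one quoted above can be pulled out mechanically. A small illustrative sketch (not part of OST):

```python
import re

def missing_provides(dnf_output):
    """Extract the capabilities dnf reported as unresolvable
    ('nothing provides X needed by Y') from its error output."""
    pattern = re.compile(r"nothing provides (\S+) needed by (\S+)")
    return pattern.findall(dnf_output)

# Sample taken from the error quoted above.
log = """
  - nothing provides librte_bitratestats.so.2()(64bit) needed by openvswitch-2.11.1-5.el8.x86_64
  - nothing provides librte_bus_pci.so.2()(64bit) needed by openvswitch-2.11.1-5.el8.x86_64
"""

for capability, package in missing_provides(log):
    print(f"{package} is missing {capability}")
```

Here both missing capabilities point at the same openvswitch build, which is what suggested a DPDK (librte) packaging gap rather than several unrelated problems.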
> 
> 
> 
> -- 
> GALIT ROSENTHAL
> SOFTWARE ENGINEER
> Red Hat
> ga...@redhat.com T: 972-9-7692230
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FMP6Y45SA6XE7RNKGHHYD4JICJJSJE4Y/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FBWYUMLQQNQU7EKHIPWS2FD34Z2M2OSA/


[ovirt-devel] CentOS Stream support

2020-06-05 Thread Michal Skrivanek
Hi all,
we would like to gauge the community's interest in oVirt moving to CentOS 
Stream.
There have been some requests before, but it’s hard to tell how many people 
would really like to see that.

With CentOS releases lagging behind RHEL for months, it’s interesting to 
consider moving to CentOS Stream, as it is much more up to date and allows us 
to fix bugs faster, with fewer workarounds and less overhead for maintaining 
old code. E.g. our current integration tests do not really pass on CentOS 8.1 
and we can’t do much about that other than wait for more up-to-date packages. 
It would also bring us closer to making oVirt run smoothly on RHEL, as that is 
also much closer to Stream than it is to the outdated CentOS.

So... would you like us to support CentOS Stream?
We don’t really have the capacity to run on three different platforms; would 
you still want oVirt to support CentOS Stream if it means “less support” for 
regular CentOS?
There are some concerns about Stream being a bit less stable; do you share 
those concerns?

Thank you for your comments,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3B5MJKO7BS2DMQL3XOXPNO4BU3YDL52T/


[ovirt-devel] Re: OST on CentOS Stream

2020-06-02 Thread Michal Skrivanek


> On 2 Jun 2020, at 12:18, Marcin Sobczyk  wrote:
> 
> Hi,
> 
> On 6/2/20 11:08 AM, Michal Skrivanek wrote:
>> Hi there,
>> we have an issue with latest AdvancedVirt and our current CentOS 8.1 based 
>> OST. libguestfs seems to be always failing (tested by commented out 
>> virt-sparsify test, so it’s not visible normally) and we have no such issues 
>> when running same tests on RHEL 8.2. So I tried to move on and update to 
>> CentOS Stream and indeed it seems to be working ok. With CentOS 8.2 getting 
>> closer but not there yet maybe it’s best to use CentOS Stream for now. There 
>> is some interest from users as well, and we can’t really do anything to fix 
>> those 8.1 issues other than remove AV update and keep using old stuff with 
>> old bugs.
>> 
>> To run on Stream we need two things. openvswitch compatible with 8.2/Stream. 
>> I used Dominik’s [1] and to fix apparently missing ansible-runner dependency 
>> on python3-pyutils (Martin, please address. I suppose it was a transient dep 
>> before and it was just dropped)
>> 
>> I would suggest to move OST to CentOS Stream, and consider at least 
>> experimental support of CentOS Stream. It doesn’t seem like we’re missing 
>> much other than to point to the right repos.
> I think we'd also need CentOS Stream mock config to run anything in the CI.
> That, or some bare metal provisioning.

it’s for the OST guest environment, not the host. So I guess it doesn’t really 
require any change other than the right image.
I’ve set it up on my baremetal for now to see how flaky that is, but so far so 
good, first successful OST run in 3 weeks, yay!

> 
>> 
>> Thanks,
>> michal
>> 
>> [1] 
>> https://copr-be.cloud.fedoraproject.org/results/dominik/OpenvSwitch/epel-8-x86_64/
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZJIEPDVEFJNWQ4N5QRYDDQG3CERFK2ZT/
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/F2C5PDSFMAEIH4ZPJYGGYL7RZWFQYMFN/


[ovirt-devel] OST on CentOS Stream

2020-06-02 Thread Michal Skrivanek
Hi there,
we have an issue with the latest AdvancedVirt and our current CentOS 8.1 based 
OST. libguestfs seems to be always failing (tested via the commented-out 
virt-sparsify test, so it’s not visible normally) and we have no such issues 
when running the same tests on RHEL 8.2. So I tried to move on and update to 
CentOS Stream, and indeed it seems to be working OK. With CentOS 8.2 getting 
closer but not there yet, maybe it’s best to use CentOS Stream for now. There 
is some interest from users as well, and we can’t really do anything to fix 
those 8.1 issues other than remove the AV update and keep using old stuff with 
old bugs.

To run on Stream we need two things. openvswitch compatible with 8.2/Stream. I 
used Dominik’s [1] and to fix apparently missing ansible-runner dependency on 
python3-pyutils (Martin, please address. I suppose it was a transient dep 
before and it was just dropped)

I would suggest to move OST to CentOS Stream, and consider at least 
experimental support of CentOS Stream. It doesn’t seem like we’re missing much 
other than to point to the right repos.

Thanks,
michal

[1] 
https://copr-be.cloud.fedoraproject.org/results/dominik/OpenvSwitch/epel-8-x86_64/
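A dropped transient dependency like the python3-pyutils one mentioned above tends to surface only deep into a run; a cheap pre-flight check is to probe the imports up front. A minimal sketch — the module list here is an example, not the suite's real prerequisites:

```python
import importlib.util

def missing_modules(names):
    """Return the modules that cannot be found on this interpreter.
    Useful as a pre-flight check before kicking off a long OST run."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# Example only; the real prerequisite list depends on the suite.
required = ["json", "ssl", "module_that_is_not_installed"]
print(missing_modules(required))
```

Running this before the suite turns an hour-long failure into an immediate, readable one.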
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZJIEPDVEFJNWQ4N5QRYDDQG3CERFK2ZT/


[ovirt-devel] Re: oVirt and Fedora

2020-05-20 Thread Michal Skrivanek


> On 19 May 2020, at 14:06, Neal Gompa  wrote:
> 
> On Mon, May 11, 2020 at 11:45 AM Michal Skrivanek
>  wrote:
>> 
>> 
>> 
>>> On 11 May 2020, at 14:49, Neal Gompa  wrote:
>>> 
>>> On Mon, May 11, 2020 at 8:32 AM Nir Soffer  wrote:
>>>> 
>>>> On Mon, May 11, 2020 at 2:24 PM Neal Gompa  wrote:
>>>>> 
>>>>> As far as the oVirt software keeping up with Fedora, the main problem 
>>>>> here has always been that people aren't integrating their software into 
>>>>> the distribution itself.
>> 
>> it was never a good fit for oVirt to be part of other distributions. We had 
>> individual packages in Fedora in the past, but there are things which are 
>> hard to accept (like automatic enabling of installed services, UIDs), and 
>> overall it’s just too complex; we’re rather a distribution than a simple 
>> app on top of a base OS.
>> 
> 
> None of those things are hard to do in Fedora. They're incredibly easy
> to do. I know this because I've gone through this process already
> before.
> 
> But fine, let's assume I consider this argument valid. Then there's
> still no reason not to be continually providing support for Fedora as
> an add-on, as you have before.

the reason is mentioned in the original email: the lack of resources to keep 
actively supporting three different platforms.
If you want to provide a helping hand and maintain the Fedora infrastructure, 
I don’t think anyone would object. 

> 
>>>>> That's how everything can get tested together. And this comes back to the 
>>>>> old bug about fixing vdsm so that it doesn't use /rhev, but instead 
>>>>> something FHS-compliant (RHBZ#1369102). Once that is resolved, pretty 
>>>>> much the entire stack can go into Fedora. And then you benefit from the 
>>>>> Fedora community being able to use, test, and contribute to the oVirt 
>>>>> project. As it stands, why would anyone do this for you when you don't 
>>>>> even run on the cutting edge platform that feeds into Red Hat Enterprise 
>>>>> Linux?
>>>> 
>>>> This was actually fixed a long time ago. With this commit:
>>>> https://github.com/oVirt/vdsm/commit/67ba9c4bc860840d6e103fe604b16f494f60a09d
>>>> 
>>>> You can configure a compatible vdsm that does not use /rhev.
>>>> 
>>>> Of course it is not backward compatible, for this we need much more
>>>> work to support live migration
>>>> between old and new vdsm using different data-center configurations.
>>>> 
>>> 
>>> It'd probably be simpler to just *change* it to an FHS-compatible path
>>> going forward with EL8 and Fedora and set up a migration path there,
>>> but it's a bit late for that... :(
>> 
>> It wouldn’t. We always support live migration across several versions (now 
>> it’s 4.2-4.4) and it needs to stay the same, or you have to go with arcane 
>> code to mangle it back and forth, which gets a bit ugly when you consider 
>> suspend/resume, snapshots, etc.
>> 
> 
> Erk. At some point you need to bite the bullet though...

it’s about capacity as well; it’s just a matter of someone writing the code 
that can handle the (long) transition period.

Thanks,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WWMZ5SYFHRP7QZMXLWCPBDFC2VADMEDX/


[ovirt-devel] Re: OST CI basic suite failure

2020-05-19 Thread Michal Skrivanek


> On 18 May 2020, at 17:58, Galit Rosenthal  wrote:
> 
> it is ost_dc_version = os.environ.get('OST_DC_VERSION', None)
> OST_DC_VERSION is 4.3 (it is parsed from the name of the suite script)

It’s still not completely clear to me.
So - it’s running the master suite (as in basic-suite-master/ files) with a 4.3 
DC and Cluster, correct?
If so, then it cannot do [1] because of the bugs we had in BIOS type handling 
in 4.3. It needs to be one of the i440fx types, probably. 

Thanks,
michal

[1] 
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test-scenarios/002_bootstrap_pytest.py#L1467
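For reference, the “parsed from the name of the suite script” step mentioned above boils down to something like the following sketch (a hypothetical helper for illustration, not the actual OST code):

```python
import re

def dc_version_from_suite(suite_name):
    """Illustrative helper: derive a DC version such as '4.3' from a
    suite name like 'compat-4.3-suite-master'. Returns None when the
    name carries no version component (e.g. 'basic-suite-master')."""
    match = re.search(r"(\d+\.\d+)", suite_name)
    return match.group(1) if match else None

print(dc_version_from_suite("compat-4.3-suite-master"))  # → 4.3
print(dc_version_from_suite("basic-suite-master"))       # → None
```

This is why the compat-4.3 suite ends up with OST_DC_VERSION=4.3 while basic-suite-master leaves it unset and falls back to the default DC/Cluster level.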
> 
> On Mon, May 18, 2020 at 5:31 PM Michal Skrivanek  wrote:
> 
> 
>> On 18 May 2020, at 11:59, Galit Rosenthal  wrote:
>> 
>> This is from last night
>> https://jenkins.ovirt.org/job/ovirt-system-tests_compat-4.3-suite-master/443/artifact/exported-artifacts/test_logs/compat-4.3-suite-master/post-004_basic_sanity_pytest.py/lago-compat-4-3-suite-master-engine/_var_log/ovirt-engine/engine.log
>> 
> 
> How does compat-4.3-suite-master work?
> i.e. what is the actual engine and host version and which exact 002_bootstrap 
> file is being used?
> 
> 
>> 2020-05-17 21:43:46,410-04 ERROR 
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (ForkJoinPool-1-worker-17) [1769ac4b] 
>> EVENT_ID: VM_DOWN_ERROR(119), VM vm2 is down with error. Exit message: XML 
>> error: Invalid PCI address :02:01.0. slot must be <= 0.
>> 
>> 
>> On Mon, May 18, 2020 at 11:21 AM Yedidyah Bar David  wrote:
>> On Mon, May 18, 2020 at 10:47 AM Steven Rosenberg  wrote:
>> >
>> > Dear Didi,
>> >
>> > We had issues with this previously when the Bios Type and Emulation 
>> > Machine chipsets are not compatible, mixing Q35 Bios Types with i440fx 
>> > Emulation Machine Types for example. We fixed the defaults and adjusted 
>> > the OST tests accordingly. Maybe something else changed that brought the 
>> > issue back.
>> >
>> 
>> OK. We do not have the logs anymore to check which version it ran.
>> Artem - I suggest to just rebase and try again, and ping if it fails again.
>> 
>> Good luck and best regards,
>> 
>> > With Best Regards.
>> >
>> > Steven.
>> >
>> > On Sun, May 17, 2020 at 4:36 PM Yedidyah Bar David  wrote:
>> >>
>> >> On Fri, May 15, 2020 at 4:24 PM Artem Hrechanychenko  wrote:
>> >> >
>> >> > Hi,
>> >> > I'm getting failure on ci check for my patch for network suite changes
>> >> > https://gerrit.ovirt.org/#/c/108317/
>> >> >
>> >> > CI failures
>> >> > https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/9350/
>> >> >
>> >> > My patch does not change basic-suite-master, and the run passed 
>> >> > locally. Does anybody know about this or get something similar?
>> >>
>> >> engine.log has:
>> >>
>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/9350/artifact/check-patch.compat-4.3_suite_master.el7.x86_64/test_logs/compat-4.3-suite-master/post-004_basic_sanity_pytest.py/lago-compat-4-3-suite-master-engine/_var_log/ovirt-engine/
>> >>
>> >> 2020-05-15 08:38:25,062-04 ERROR
>> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> >> (ForkJoinPool-1-worker-23) [2e5b2649] EVENT_ID: VM_DOWN_ERROR(119), VM
>> >> vm2 is down with error. Exit message: XML error: Invalid PCI address
>> >> :02:01.0. slot must be <= 0.
>> >>
>> >
