[ovirt-users] Re: Recent news & oVirt future

2020-12-10 Thread Sandro Bonazzola
Il giorno gio 10 dic 2020 alle ore 21:51 Charles Kozler <
char...@fixflyer.com> ha scritto:

> I guess this is probably a question for all current open source projects
> that Red Hat runs, but -
>
> Does this mean oVirt will effectively become a rolling release type
> situation as well?
>

There's no plan to make oVirt a rolling release.


>
> How exactly is oVirt going to stay open source and stay in cadence with
> all the other updates happening around it on packages/etc that it depends
> on if the streams are rolling release? Do they now need to fork every piece
> of dependency?
>

We are going to test oVirt regularly on CentOS Stream, releasing oVirt Node
and the oVirt appliance after testing them, with no difference from what we
are doing right now with CentOS Linux.
Any issues raised will be handled as usual.

> What exactly does this mean for oVirt going forward and its overall
> stability?
>

oVirt's plans for CentOS Stream were communicated a year ago here:
https://blogs.ovirt.org/2019/09/ovirt-and-centos-stream/

That said, please note that the oVirt documentation says "Enterprise Linux"
almost everywhere rather than explicitly CentOS Linux.
As far as I can tell, any RHEL binary-compatible rebuild should just work
with oVirt, although I would recommend following what will be done within
oVirt Node and the oVirt Appliance.



>
> *Notice to Recipient*: https://www.fixflyer.com/disclaimer
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7IUWGES2IG4BELLUPMYGEKN3GC6XVCHA/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-users] Re: Nodes in CentOS 8.3 and oVirt 4.4.3.12-1.el8 but not able to update cluster version

2020-12-10 Thread Gianluca Cecchi
On Fri, Dec 11, 2020 at 7:08 AM Alex McWhirter  wrote:

> You have to put all hosts in the cluster to maintenance mode first, then
> you can change the compat version.
>
>
No, it never worked that way. You only get a triangle on the VM's icon telling
you that, for that VM to acquire the change, it needs to be powered off and
then on again.
Full maintenance was never required.
See also here:
https://www.ovirt.org/documentation/upgrade_guide/#Changing_the_Cluster_Compatibility_Version_4-2_local_db
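For anyone scripting this kind of change: the cluster compatibility version can in principle also be bumped through the engine REST API. A minimal sketch that only builds the request body one would PUT to the clusters endpoint; the engine URL and cluster id are placeholders, and the exact endpoint/body shape is an assumption here, so check your engine's API reference before using it:

```python
import xml.etree.ElementTree as ET

# Hypothetical placeholders -- substitute your own engine URL and cluster id.
ENGINE_API = "https://engine.example.com/ovirt-engine/api"
CLUSTER_ID = "00000000-0000-0000-0000-000000000000"

def cluster_version_body(major: int, minor: int) -> str:
    """Build the XML body for PUT {ENGINE_API}/clusters/{CLUSTER_ID}."""
    cluster = ET.Element("cluster")
    version = ET.SubElement(cluster, "version")
    ET.SubElement(version, "major").text = str(major)
    ET.SubElement(version, "minor").text = str(minor)
    return ET.tostring(cluster, encoding="unicode")

body = cluster_version_body(4, 5)
print(body)  # <cluster><version><major>4</major><minor>5</minor></version></cluster>
```

The same update can of course be done from the web UI; building the body separately just makes it easy to review before sending.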

Gianluca


[ovirt-users] Re: Nodes in CentOS 8.3 and oVirt 4.4.3.12-1.el8 but not able to update cluster version

2020-12-10 Thread Gianluca Cecchi
On Fri, Dec 11, 2020 at 6:39 AM Parth Dhanjal  wrote:

> Hello!
>
> To my knowledge, the latest build is oVirt/RHVH 4.4.3
> So there is no oVirt/RHVH - 4.5 as of now.
>
>
When 4.4.3 was released, it was reported several times to the list that a
4.5 option for the cluster level had also appeared.
The answer was that this particular version (its logic did seem quite
strange to me) would be dedicated to Linux nodes on RHEL 8.3
or CentOS 8.3, or to RHVH/ovirt-node-ng built on top of that version.
Now my 3 nodes are all on CentOS 8.3, but no such selection is possible.
I heard from Sandro off-list, and I think he or one of the developers
will soon give a more precise answer about this.

Gianluca


[ovirt-users] Re: CentOS 8 is dead

2020-12-10 Thread Alex McWhirter

On 2020-12-10 15:02, tho...@hoberg.net wrote:

[...quoted post trimmed...]


oVirt has more or less always been the upstream of RHEV. While not
necessarily beta, people using oVirt rather than RHEV have been subject to
the occasional broken feature or two, at least early in a release. If you
need the stability and support, RHEV is the answer. However, we use oVirt
and CentOS 8 in production at a fairly large scale, and it's not an
unreasonable amount of work to keep running. It's certainly not a
set-it-and-forget-it scenario, but it works very well for us.


There are also a ton of moving parts to running oVirt at scale:
hardware, firmware, network configuration, software [message truncated]

[ovirt-users] Re: Recent news & oVirt future

2020-12-10 Thread Alex McWhirter

On 2020-12-10 15:47, Charles Kozler wrote:

I guess this is probably a question for all current open source projects that red hat runs but -  

Does this mean oVirt will effectively become a rolling release type situation as well? 

How exactly is oVirt going to stay open source and stay in cadence with all the other updates happening around it on packages/etc that it depends on if the streams are rolling release? Do they now need to fork every piece of dependency? 


What exactly does this mean for oVirt going forward and its overall stability?

Notice to Recipient: https://www.fixflyer.com/disclaimer [1] 


Time will tell, but I suspect that, if anything, this will make oVirt
development easier in some regards. oVirt already enables multiple SIG
streams, unofficial updates, etc. A lot of that would be streamlined.
oVirt would be able to target future RHEL much more easily by targeting
CentOS Stream.


Links:
--
[1] https://www.fixflyer.com/disclaimer


[ovirt-users] Re: Nodes in CentOS 8.3 and oVirt 4.4.3.12-1.el8 but not able to update cluster version

2020-12-10 Thread Alex McWhirter

You have to put all hosts in the cluster to maintenance mode first, then
you can change the compat version. 


On 2020-12-10 11:09, Gianluca Cecchi wrote:

Hello, 
my engine is 4.4.3.12-1.el8 and my 3 oVirt nodes (based on plain CentOS due to megaraid_sas kernel module needed) have been updated, bringing them to CentOS 8.3. 
But if I try to put cluster into 4.5 I get the error: 
" 
Error while executing action: Cannot change Cluster Compatibility Version to higher version when there are active Hosts with lower version. 
-Please move Host ov300, ov301, ov200 with lower version to maintenance first. 
" 
Do I have to wait until final 4.4.4 or what is the problem? 
Does 4.5 gives anything more apart the second number... (joking..)? 

On nodes: 


[...host software version details trimmed...]

Gianluca 


[ovirt-users] Re: Nodes in CentOS 8.3 and oVirt 4.4.3.12-1.el8 but not able to update cluster version

2020-12-10 Thread Parth Dhanjal
Hello!

To my knowledge, the latest build is oVirt/RHVH 4.4.3,
so there is no oVirt/RHVH 4.5 as of now.

On Thu, Dec 10, 2020 at 9:40 PM Gianluca Cecchi wrote:

> Hello,
> my engine is 4.4.3.12-1.el8 and my 3 oVirt nodes (based on plain CentOS
> due to megaraid_sas kernel module needed) have been updated, bringing them
> to CentOS 8.3.
> But if I try to put cluster into 4.5 I get the error:
> "
> Error while executing action: Cannot change Cluster Compatibility Version
> to higher version when there are active Hosts with lower version.
> -Please move Host ov300, ov301, ov200 with lower version to maintenance
> first.
> "
> Do I have to wait until final 4.4.4 or what is the problem?
> Does 4.5 gives anything more apart the second number... (joking..)?
>
> On nodes:
>
> [...host software version details trimmed...]
>
> Gianluca


[ovirt-users] oVirt and RHEV

2020-12-10 Thread tommy
1. Can oVirt be used to manage RHEV?

2. What is the relation between oVirt and RHEV?

Thanks!



[ovirt-users] Network Teamd support

2020-12-10 Thread Carlos C
Hi folks,

Does oVirt 4.4.4 support, or will it support, network teaming (teamd)? Or only bonding?

regards
Carlos


[ovirt-users] Re: Deploy hosted engine from backup fails

2020-12-10 Thread JCampos
I did the downgrade using the arg, and it worked perfectly!

Thanks, you saved my day :D


[ovirt-users] Recent news & oVirt future

2020-12-10 Thread Charles Kozler
I guess this is probably a question for all current open source projects
that Red Hat runs, but -

Does this mean oVirt will effectively become a rolling release type
situation as well?

How exactly is oVirt going to stay open source and stay in cadence with all
the other updates happening around it on packages/etc that it depends on if
the streams are rolling release? Do they now need to fork every piece of
dependency?

What exactly does this mean for oVirt going forward and its overall
stability?

-- 
*Notice to Recipient*: https://www.fixflyer.com/disclaimer 



[ovirt-users] Re: CentOS 8 is dead

2020-12-10 Thread thomas
I came to oVirt thinking that it was like CentOS: there might be bugs, but 
given the mainline usage in home and corporate labs with light workloads and 
nothing special, the chances of hitting one should be pretty minor. I like 
looking for new frontiers on top of my OS, not inside it.

I ran CentOS/OpenVZ for years in a previous job, mission-critical 24x7 
stuff where minutes of outage meant being grilled for hours in meetings 
afterwards, and with certified PCI-DSS compliance. I never had an issue with 
OpenVZ/CentOS; all those minor goofs were human error or Oracle inventing 
execution plans.

Boy, was I wrong about oVirt! Just setting it up took weeks. Ansible loves 
eating gigahertz, and I was running on Atoms; I had to learn how to switch to 
an i7 mid-installation to have it finish at all. In the end I had learned 
tons of new things, but all I wanted was a cluster that would work as much out 
of the box as CentOS or OpenVZ.

Something as fundamental as exporting and importing a VM might simply not work 
and not even get fixed.

Migrating HCI from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4 is anything but 
smooth, a complete rebuild seems the lesser evil: Now if only exports and 
imports worked reliably!

Rebooting an HCI node seems to involve an "I am dying!" aria on the network, 
where the whole switch becomes unresponsive for 10 minutes and the 
fault-tolerant cluster on it is 100% unresponsive (including all other machines 
on that switch). I had so much fun resyncing Gluster file systems and searching 
through all those log files for signs of what was going on!
And the instructions on how to fix Gluster issues are so wonderfully detailed 
and yet vague that it seems one could spend days trying to fix things, or 
rebuild and restore. It doesn't help that the fate of Gluster very much seems 
to hang in the air, when the scalable HCI aspect was the only reason I ever 
wanted oVirt.

It could just be an issue with Realtek adapters, because I never observed 
anything like that with Intel NICs or on (recycled old) enterprise hardware.

I guess official support for a 3-node HCI cluster on passive Atoms isn't 
going to happen unless I make it happen 100% myself: it's open source, after all!

Just think what 3/6/9-node HCI based on the Raspberry Pi would do for the 
project! A 9-node HCI should deliver better 10Gbit GlusterFS performance than 
most QNAP units at the same cost with a single 10Gbit interface, even with 7:2 
erasure coding!

I really think the future of oVirt may be at the edge, not in the datacenter 
core.

In short: oVirt is very much beta software and quite simply a full-time job if 
you depend on it working over time.

I can't see that getting any better when one beta gets to run on top of another 
beta. At the moment my oVirt experience has me doubt RHV on RHEL would work any 
better, even if it's cheaper than VMware.

OpenVZ was simply the far better alternative to KVM for most of the things I 
needed from virtualization, and it was mainly the hassle of trying to make 
that work with RHEL that had me switching to CentOS. CentOS with OpenVZ was 
the bedrock of that business for 15 years and proved to me that Red Hat was 
hell-bent on making bad decisions on technological direction.

I would have actually liked to pay a license for each of the physical hosts we 
used, but it turned out much less of a bother to forget about negotiating 
licensing conditions for OpenVZ containers and use CentOS instead.

BTW: I am going into a meeting tomorrow, where after two years of pilot usage, 
we might just decide to kill our current oVirt farms, because they didn't 
deliver on "a free open-source virtualization solution for your entire 
enterprise".

I'll keep my Atoms running a little longer, mostly because I have nothing 
else to use them for. For the first time in months, they show zero Gluster 
replication errors, perhaps because, for lack of updates, there have been no 
node reboots. CentOS 7 is stable, but oVirt 4.3 is out of support.


[ovirt-users] Re: CentOS 8 is dead

2020-12-10 Thread Jayme
It looks like a few forks are popping up already: a new project called
Rocky Linux, and now CloudLinux announced an RHEL fork today, which sounds
promising:
https://blog.cloudlinux.com/announcing-open-sourced-community-driven-rhel-fork-by-cloudlinux

On Thu, Dec 10, 2020 at 5:42 AM Jorick Astrego  wrote:

> Hi,
>
> Personally, I don't really see the problem with the CentOS Stream switch.
> Not trying to start a long discussion, but I think it will even be an
> improvement.
>
> Currently we use different combinations of EPEL, SCL, ELRepo, etc. just
> to get some newer packages; a lot of people do the same and have no
> issues with this. oVirt even uses EPEL packages as dependencies.
>
> Most of these will become redundant because of Stream...
>
> Actually, Red Hat has the same strategy for oVirt: it's an upstream for
> Red Hat Virtualization. So with the new CentOS strategy you will be one
> step ahead of the paid version on both the OS and the virtualization
> manager.
>
> Testing is always required, and with tooling like Katello you can easily
> push packages to production after testing them. If you need
> enterprise-grade stability and support that much, then you should buy it
> or hire people to do it in-house.
>
> Just my 2c, as I see a lot of people getting really worked up about it.
>
> Jorick Astrego
>
> On 12/9/20 2:25 PM, Michal Skrivanek wrote:
> >
> >> On 9 Dec 2020, at 01:21, thilb...@generalpacific.com wrote:
> >>
> >> I too would like to see Ubuntu become a bit more mainstream
> with oVirt now that CentOS is gone. I'm sure we won't hear anything until
> 2021; the oVirt staff need to figure out what to do now.
> > Right now we’re happy that CentOS 8.3 is finally here. That aligns 4.4.3
> and 4.4.4 again, makes the 4.5 cluster level usable, and brings tons of bug
> fixes.
> > Afterwards… well, I think Stream is not a bad option; we already have it
> in some form. I suppose it’s going to be the most feasible option.
> > For anything else *someone* would need to do all the work. And I don’t
> mean it in a way that we - all the people with @redhat.com address - are
> forbidden to do that or something, it’s really about the sheer amount of
> work and dedication required, doubling the integration efforts. oVirt is
> (maybe surprisingly) complex and testing it on any new platform means a lot
> of extra manpower.
> >
> >
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae Virtualization Experts *
> --
> Tel: 053 20 30 270 i...@netbulae.eu Staalsteden 4-3A KvK 08198180
> Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
> --
>


[ovirt-users] Nodes in CentOS 8.3 and oVirt 4.4.3.12-1.el8 but not able to update cluster version

2020-12-10 Thread Gianluca Cecchi
Hello,
my engine is 4.4.3.12-1.el8 and my 3 oVirt nodes (based on plain CentOS
because the megaraid_sas kernel module is needed) have been updated,
bringing them to CentOS 8.3.
But if I try to set the cluster to 4.5 I get the error:
"
Error while executing action: Cannot change Cluster Compatibility Version
to higher version when there are active Hosts with lower version.
-Please move Host ov300, ov301, ov200 with lower version to maintenance
first.
"
Do I have to wait for the final 4.4.4, or what is the problem?
Does 4.5 give anything more apart from the second number... (joking)?

On nodes:

Software

OS Version: RHEL - 8.3 - 1.2011.el8
OS Description: CentOS Linux 8
Kernel Version: 4.18.0 - 240.1.1.el8_3.x86_64
KVM Version: 4.2.0 - 34.module_el8.3.0+555+a55c8938
LIBVIRT Version: libvirt-6.0.0-28.module_el8.3.0+555+a55c8938
VDSM Version: vdsm-4.40.35.1-1.el8
SPICE Version: 0.14.3 - 3.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.3.6-2.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no
microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable), SRBDS: (Not affected),
MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs
barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full
generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB
filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages),
TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation:
Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled

Gianluca
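As an aside for anyone collecting host details like the listing above: the "Kernel Features" field packs all the CPU-vulnerability mitigations into one string. A small sketch that splits it into a per-vulnerability mapping; the field format (comma-separated `NAME: (status)` entries with no nested parentheses) is inferred from the listing above, not from any documented schema:

```python
import re

# The "Kernel Features" value from the host details above, reflowed to one line.
KERNEL_FEATURES = (
    "MDS: (Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable), "
    "L1TF: (Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable), "
    "SRBDS: (Not affected), MELTDOWN: (Mitigation: PTI), "
    "SPECTRE_V1: (Mitigation: usercopy/swapgs barriers and __user pointer sanitization), "
    "SPECTRE_V2: (Mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, "
    "STIBP: conditional, RSB filling), "
    "ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages), "
    "TSX_ASYNC_ABORT: (Not affected), "
    "SPEC_STORE_BYPASS: (Mitigation: Speculative Store Bypass disabled via prctl and seccomp)"
)

def parse_kernel_features(text: str) -> dict:
    """Split 'NAME: (status)' entries into a {name: status} mapping."""
    return dict(re.findall(r"([A-Z0-9_]+): \(([^)]+)\)", text))

features = parse_kernel_features(KERNEL_FEATURES)
print(features["MELTDOWN"])  # Mitigation: PTI
```

This makes it easy to diff mitigation status across several hosts instead of eyeballing the raw string.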


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-10 Thread Benny Zlotnik
Yes, the VM looks fine... to investigate this further I'd need the
full vdsm log with the error; please share it.
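For anyone digging through a busy vdsm.log, filtering by flow_id isolates a single operation. A minimal sketch; the sample lines are abbreviated copies of the log entries quoted in this thread (real vdsm.log lines are much longer, and the second sample line is an invented distractor):

```python
# Pull out every vdsm.log line belonging to one operation via its flow_id.
FLOW_ID = "c149117a-1080-424c-85d8-3de2103ac4ae"

SAMPLE_LOG = """\
2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] START merge(...) flow_id=c149117a-1080-424c-85d8-3de2103ac4ae (api:48)
2020-12-09 22:01:00,130+1000 INFO (jsonrpc/7) [api.host] START getAllVmStats() flow_id=00000000-aaaa-bbbb-cccc-dddddddddddd (api:48)
2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] FINISH merge return={'status': {'message': 'Drive image file could not be found', 'code': 13}} flow_id=c149117a-1080-424c-85d8-3de2103ac4ae (api:54)
"""

def lines_for_flow(log_text: str, flow_id: str) -> list:
    """Keep only the lines tagged with the given flow_id."""
    needle = "flow_id=" + flow_id
    return [line for line in log_text.splitlines() if needle in line]

matches = lines_for_flow(SAMPLE_LOG, FLOW_ID)
for line in matches:
    print(line)
```

Against a real file, `grep 'flow_id=<uuid>' /var/log/vdsm/vdsm.log` achieves the same thing; the point is that one flow_id ties the START and FINISH entries of an operation together.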

On Wed, Dec 9, 2020 at 3:01 PM Joseph Goldman  wrote:
>
> Attached XML dump.
>
> Looks like it's let me run a 'reboot', but I'm afraid to do a shutdown at
> this point.
>
> I have taken a raw copy of the whole image group folder in the hope
> that, if worse came to worst, I'd be able to recreate the disk from the
> actual files.
>
> All existing files seem to be referenced in the xmldump.
>
> On 9/12/2020 11:54 pm, Benny Zlotnik wrote:
> > The VM is running, right?
> > Can you run:
> > $ virsh -r dumpxml 
> >
> > On Wed, Dec 9, 2020 at 2:01 PM Joseph Goldman  wrote:
> >> Looks like the physical files dont exist:
> >>
> >> 2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] START
> >> merge(drive={u'imageID': u'23710238-07c2-46f3-96c0-9061fe1c3e0d',
> >> u'volumeID': u'4b6f7ca1-b70d-4893-b473-d8d30138bb6b', u'domainID':
> >> u'74c06ce1-94e6-4064-9d7d-69e1d956645b', u'poolID':
> >> u'e2540c6a-33c7-4ac7-b2a2-175cf51994c2'},
> >> baseVolUUID=u'c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1',
> >> topVolUUID=u'a6d4533b-b0b0-475d-a436-26ce99a38d94', bandwidth=u'0',
> >> jobUUID=u'ff193892-356b-4db8-b525-e543e8e69d6a')
> >> from=:::192.168.5.10,56030,
> >> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> >> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:48)
> >>
> >> 2020-12-09 22:01:00,122+1000 INFO  (jsonrpc/4) [api.virt] FINISH merge
> >> return={'status': {'message': 'Drive image file could not be found',
> >> 'code': 13}} from=:::192.168.5.10,56030,
> >> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> >> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:54)
> >>
> >> Although looking on the physical file system they seem to exist:
> >>
> >> [root@ov-node1 23710238-07c2-46f3-96c0-9061fe1c3e0d]# ll
> >> total 56637572
> >> -rw-rw. 1 vdsm kvm  15936061440 Dec  9 21:51
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
> >> -rw-rw. 1 vdsm kvm  1048576 Dec  8 01:11
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.lease
> >> -rw-r--r--. 1 vdsm kvm  252 Dec  9 21:37
> >> 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.meta
> >> -rw-rw. 1 vdsm kvm  21521825792 Dec  8 01:47
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94
> >> -rw-rw. 1 vdsm kvm  1048576 May 17  2020
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94.lease
> >> -rw-r--r--. 1 vdsm kvm  256 Dec  8 01:49
> >> a6d4533b-b0b0-475d-a436-26ce99a38d94.meta
> >> -rw-rw. 1 vdsm kvm 107374182400 Dec  9 01:13
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1
> >> -rw-rw. 1 vdsm kvm  1048576 Feb 24  2020
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.lease
> >> -rw-r--r--. 1 vdsm kvm  320 May 17  2020
> >> c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.meta
> >>
> >> The UUIDs match the UUIDs in the snapshot list.
> >>
> >> So much stuff happens in vdsm.log that it's hard to pinpoint what's going
> >> on, but grepping for 'c149117a-1080-424c-85d8-3de2103ac4ae' (flow_id) shows
> >> pretty much those 2 calls and then the XML dump.
> >>
> >> Still a bit lost on the most comfortable way forward unfortunately.
> >>
> >> On 8/12/2020 11:15 pm, Benny Zlotnik wrote:
>  [root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
> >>> grepping for ERROR is OK, but it does not show the reason for the failure,
> >>> which will probably be on the vdsm host (you can use flow_id
> >>> 9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file).
> >>> Need to see the underlying error causing: VDSGenericException:
> >>> VDSErrorException: Failed to SnapshotVDS, error =
> >>> Snapshot failed, code = 48
> >>>
>  Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
>  DB) and then trying to create does not change it back to illegal, but
>  trying to delete that snapshot fails and sets it back to 4.
> >>> I see, can you share the removal failure log (similar information to
> >>> what was requested above)?
> >>>
> >>> regarding backup, I don't have a good answer, hopefully someone else
> >>> has suggestions
> >>> ___
> >>> Users mailing list -- users@ovirt.org
> >>> To unsubscribe send an email to users-le...@ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >>> oVirt Code of Conduct: 
> >>> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives: 
> >>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MJHKYBPBTINAWY4VDSLLZZPWYI2O3SHB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGM4MNKBHS7EWOIPS6WGVQSKEKLKDAQ7/


[ovirt-users] Re: Deploy hosted engine from backup fails

2020-12-10 Thread Yedidyah Bar David
On Thu, Dec 10, 2020 at 11:09 AM JCampos  wrote:
>
> Command to execute backup:
> #engine-backup --mode=backup --file=backup1 --log=backup1.log
>
> Command to restore backup:
> #hosted-engine --deploy --restore-from-file=backup1
>
> Error:
> 2020-12-09 22:26:25,679+ ERROR 
> otopi.ovirt_hosted_engine_setup.ansible_utils 
> ansible_utils._process_output:107 fatal: [localhost]: FAILED! => {"changed": 
> false, "msg": "Available memory ( {'failed': False, 'changed': False, 
> 'ansible_facts': {u'max_mem': u'33211'}}MB ) is less then the minimal 
> requirement (4096MB). Be aware that 512MB is reserved for the host and cannot 
> be allocated to the engine VM."}

I didn't check the details, but I think this is a result of using a
too-new Ansible. You can try again with 2.9.13.
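A quick way to check whether the installed Ansible is newer than that before retrying the deploy (a sketch; the `ver_le` helper is our own name, and the commented invocation assumes `ansible` is on the PATH):

```shell
# ver_le A B -> exit 0 if dot-separated version A is <= version B.
# Uses sort -V (GNU coreutils version sort) for version-aware ordering:
# if A sorts first (or equal), A is the lower or same version.
ver_le() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Sketch of the check (ansible invocation assumed):
#   have=$(ansible --version | awk 'NR==1{print $2}')
#   ver_le "$have" 2.9.13 || echo "ansible $have may be too new; try 2.9.13"
```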

>
> Ovirt version:
> 4.3.10.4-1.el7
>
> # cat /etc/redhat-release
> CentOS Linux release 7.9.2009 (Core)
>
> Memory available:
> #free -m
>               total        used        free      shared  buff/cache   available
> Mem:          64132       30478         494         145       33158       32961
> Swap: 16383   0   16383
>
>
> Is there a way to ignore this check?

There should be. Assuming you use the CLI (not cockpit), you can try this:

hosted-engine --deploy \
    --otopi-environment=OVEHOSTED_CORE/memCheckRequirements=bool:False

or

hosted-engine --deploy \
    --otopi-environment=OVEHOSTED_CORE/checkRequirements=bool:False

The latter will likely skip over a few more requirements, but I didn't check which.

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3O2K7I5PZ2UVMMMF5ZZJWD6TEEHYVNFR/


[ovirt-users] Re: CentOS 8 is dead

2020-12-10 Thread Jorick Astrego
Hi,

Personally I don't really see the problem with the CentOS Stream switch.
Not trying to start a long discussion, but I think it will even be an
improvement.

Currently we use different combinations of EPEL, SCL, ELRepo, etc. just
to get some newer packages, and a lot of people do the same with no
issues. oVirt even uses EPEL packages as dependencies.

Most of these will become redundant because of Stream...

Actually, Red Hat has the same strategy for oVirt: it's the upstream for
Red Hat Virtualization. So with the new CentOS strategy you will be one
step ahead of the paid version on both the OS and the virtualization manager.

Testing is always required, and with tooling like Katello you can easily
push packages to production after testing. If you need enterprise-grade
stability and support that much, then you should buy it or hire people
to do it in house.

Just my 2c as I see a lot of people getting really worked up about it.

Jorick Astrego

On 12/9/20 2:25 PM, Michal Skrivanek wrote:
>
>> On 9 Dec 2020, at 01:21, thilb...@generalpacific.com wrote:
>>
>> I too would like to see if Ubuntu could become a bit more mainstream with 
>> oVirt now that CentOS is gone. I'm sure we won't hear anything until 2021; 
>> the oVirt staff need to figure out what to do now.
> Right now we’re happy that CentOS 8.3 is finally here. That aligns 4.4.3 and 
> 4.4.4 again, makes the 4.5 cluster level usable, and brings tons of bug fixes. 
> Afterwards…well, I think Stream is not a bad option, we already have it in 
> some form. I suppose it’s going to be the most feasible option.
> For anything else *someone* would need to do all the work. And I don’t mean 
> it in a way that we - all the people with @redhat.com address - are forbidden 
> to do that or something, it’s really about the sheer amount of work and 
> dedication required, doubling the integration efforts. oVirt is (maybe 
> surprisingly) complex and testing it on any new platform means a lot of extra 
> manpower. 
>
>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HHQH2XIHK2VPV4TTERO2NH7DGEYUWV4/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JCX24N4UYHZ6HAIE32D2FV4ZT3BAIZYH/




Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A    KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede    BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2KYL7E76HKVP2F2JPCCUI6BRUARLAGCT/


[ovirt-users] Deploy hosted engine from backup fails

2020-12-10 Thread JCampos
Command to execute backup:
#engine-backup --mode=backup --file=backup1 --log=backup1.log

Command to restore backup: 
#hosted-engine --deploy --restore-from-file=backup1

Error:
2020-12-09 22:26:25,679+ ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Available memory ( 
{'failed': False, 'changed': False, 'ansible_facts': {u'max_mem': u'33211'}}MB 
) is less then the minimal requirement (4096MB). Be aware that 512MB is 
reserved for the host and cannot be allocated to the engine VM."}

Ovirt version:
4.3.10.4-1.el7

# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

Memory available:
#free -m
              total        used        free      shared  buff/cache   available
Mem:  64132   30478 494 145   33158   32961
Swap: 16383   0   16383


Is there a way to ignore this check?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7UWBBJ7IKJMLX6BW6XMCTVRH4QFDTPOG/