[ovirt-users] Re: how to convert centos 8 to centos 8 Stream

2022-02-22 Thread Adam Xu

I have upgraded ovirt-engine to 4.4.10.

These are my steps:
dnf --disablerepo appstream --disablerepo baseos --disablerepo powertools \
    --disablerepo ovirt-4.4-centos-gluster8 --disablerepo ovirt-4.4-centos-ovirt44 \
    --disablerepo ovirt-4.4-centos-opstools --disablerepo ovirt-4.4-centos-nfv-openvswitch \
    --disablerepo ovirt-4.4-openstack-victoria install centos-release-stream
dnf --disablerepo appstream --disablerepo baseos --disablerepo powertools \
    --disablerepo ovirt-4.4-centos-gluster8 --disablerepo ovirt-4.4-centos-ovirt44 \
    --disablerepo ovirt-4.4-centos-opstools --disablerepo ovirt-4.4-centos-nfv-openvswitch \
    --disablerepo ovirt-4.4-openstack-victoria swap centos-{linux,stream}-repos
dnf install --disablerepo='*' \
    https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm


# upgrade the engine
engine-upgrade-check
.

dnf distro-sync -x ansible-core    # ansible has some issues in the current repos

reboot
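A minimal sketch of the steps above, generating the long --disablerepo list once instead of repeating it per command. The repo names are the ones from this thread and may differ on your engine; the dnf commands are printed rather than executed so the sketch is safe to run anywhere.

```shell
# Repo names taken from the steps above (assumption: yours may differ).
OVIRT_REPOS="appstream baseos powertools ovirt-4.4-centos-gluster8 \
ovirt-4.4-centos-ovirt44 ovirt-4.4-centos-opstools \
ovirt-4.4-centos-nfv-openvswitch ovirt-4.4-openstack-victoria"

# Build the repeated flag list once.
DISABLE=""
for r in $OVIRT_REPOS; do
  DISABLE="$DISABLE --disablerepo $r"
done

# The same three operations, echoed for inspection:
echo "dnf$DISABLE install centos-release-stream"
echo "dnf$DISABLE swap centos-{linux,stream}-repos"
echo "dnf install --disablerepo='*' https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm"
```

On a real engine host you would drop the echoes and run the dnf commands directly.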

On 2022/2/23 12:46, Sketch wrote:

On Wed, 23 Feb 2022, Adam Xu wrote:


How can we convert centos 8 to centos 8 stream? Thanks.


dnf install centos-release-stream
dnf swap centos-{linux,stream}-repos
dnf distro-sync

Note that the last command is effectively a yum update that syncs your 
packages with all of the installed repos, so make sure you install the 
latest ovirt-release44 package with the working mirror URLs before you 
run it, or you might end up with some (or all) oVirt packages removed.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OJF2F2YU7AUJYL72NM7VXJJIAZAI4SQF/



[ovirt-users] Re: Migrating VMs from 4.3 Ovirt Env to new 4.4 Env

2022-02-22 Thread Patrick Hibbs
You're welcome. Good to hear you didn't lose anything.

Export to OVA has one of the hosts write an OVA file to the selected
host's filesystem. (As long as VDSM has write privileges to do so.) The
OVA file contains the *entire* VM configuration (Number of CPUs /
Chipset / Network Adapters / etc.) in addition to a copy of the attached
disks. The intent of this is to allow importing the VM into another
hypervisor. (oVirt, or even other tech entirely such as VMWare /
Virtualbox / etc.) This tends to not work very well however due to the
previously discussed failures, and the fact that oVirt's conversion
code isn't very good. Even between oVirt versions on the same cluster,
an exported OVA may not re-import cleanly, if at all. (This is probably
related to the lack of a generic CPU target in oVirt, as no other
hypervisor solution is going to put forth the effort to maintain
compatibility with oVirt's overly-specific CPU model selections, flags,
and custom virtual hardware layouts.)

Export to Export Domain simply writes the disk images and VM domain XML
to the Export Domain for later import into another oVirt instance.
(This was the original way oVirt handled backups, and migration between
oVirt clusters.) This has been deprecated for a while now, and oVirt
4.4 can't even create a new Export Domain. (Although 4.4 will allow
importing an existing one.) Essentially, it's no different than OVA
Exports beyond the oVirt specific nature of it. (Both make a complete
copy of the VM and attached storage, OVA just adds a translation /
compatibility layer step, and its output is meant to be handled
directly by the administrator.)

Exporting a template simply exports the template. It has a copy of the
disks just like the other two options, with the added caveat that you
need to instantiate a new VM from it prior to use. That may or may not
be what you need for backups, depending on whether you have
organizational policies or regulatory compliance to worry about, as
this will necessitate a bunch of changes in the virtualized hardware.
That being said, I can't speak to its reliability, as I've never really
used templates for backup purposes.

If you really need a backup, OVA is fine. (Though you should test re-
importing it before calling the job finished.) If you really want to be
sure, use an external cloning solution like Clonezilla, (If you only
need to back up a few VMs) or something like FOG. (If you need more
advanced options, like a regular backup schedule, automated backups /
restorations, logging, access control, etc.)

- Patrick Hibbs

On Tue, 2022-02-22 at 01:24 +, Abe E wrote:
> Hey Patrick, First of all thank you for your help these past few
> days, I see you are quite experienced haha.
> 
> I thought so too, because i have like 10 VMs that are just clones of
> 1 main rockylinux vm I built and some failed while others passed,
> quite random. I almost destroyed the old VMs too so I am happy I
> discovered it when I did.
> 
> Im still trying to wrap my head around it all but please correct me
> if i am wrong.
> Export to OVA - Simply exports to OVA whereas Export to Export Domain
> will export the actual image files?
> 
> Another Idea I had was making the VM into a template and exporting
> that.
> Does this idea makes sense, it seems it generates the same disks as
> the original and then allows you to build a VM from them -- Or it is
> the exact same as Exporting a VM to EDomain in the first place.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LW4ZLFI7LAMBLYDRQ4EORJSCHNLK2HI4/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JBECARLVGKODCC4GLWSUHVS35GBFKN36/


[ovirt-users] how to convert centos 8 to centos 8 Stream

2022-02-22 Thread Adam Xu

Dear ovirt community,

I can't upgrade ovirt to version 4.4.10 because of EOL of centos8.

How can we convert centos 8 to centos 8 stream? Thanks.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IURQQPQXKK2KHVDHZKS6BYLW6MJNAKSQ/


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Michal Skrivanek


> On 22. 2. 2022, at 14:16, Roman Mohr  wrote:
> 
> 
> 
> On Tue, Feb 22, 2022 at 1:25 PM Thomas Hoberg wrote:
> > On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi wrote:
> > 
> > 
> > Just to clarify the state of things a little: It is not only technically
> > there. KubeVirt supports pci passthrough, GPU passthrough and
> > SRIOV (including live-migration for SRIOV). I can't say if the OpenShift UI
> > can compete with oVirt at this stage.
> > 
> > 
> > Best regards,
> > Roman
> 
> Well, I guess it's there, mostly because they didn't have to do anything new, 
> it's part of KVM/libvirt and more inherited than added.

You can say that about oVirt and OpenStack and basically anyone else as well.
The foundation for basically every virtualization feature is always in qemu-kvm.

> 
> The main reason I "don't see it coming" is that it may create more problems 
> than it solves.
> 
> To my understanding K8 is all about truly elastic workloads, including 
> "mobility" to avoid constraints (including memory overcommit). Mobility in 
> quotes, because I don't even know if it migrates containers or just shuts 
> instances down in one place and launches them in another: migration itself 
> has a significant cost after all.
> 
> We implemented live migrations for VMs quite some time ago. In practice that 
> means that we are migrating qemu processes between pods on different nodes.
> k8s does not dictate anything regarding the workload. There is just a 
> scheduler which can or can not schedule your workload to nodes.
>  
> 
> But if it were to migrate them (e.g. via CRIU for containers and "vMotion" 
> for VMs) it would then also have to understand (via KubeVirt), which 
> devices are tied,
> 
> As far as I know pci passthrough and live migration do not mix well in 
> general because neither oVirt nor OpenStack or other platforms can migrate 
> the pci device state, since it is not in a place where it can be copied. Only 
> SRIOV allows that via explicit unplug and re-plug.

Slowly but surely it is coming for other devices as well; it's been in development 
for a couple of years now, and a topic at every KVM Forum in the past ~5 years.

>  
> because they use a device that has too big a state (e.g. a multi-gig CUDA 
> workloads), a hard physical dependence (e.g. USB with connected devices) or 
> something that could move with the VM (e.g. SR-IOV FC/NIC/INF with a fabric 
> that can be re-configured to match or is also virtualized).
> 
> A proper negotiation between the not-so-dynamic physically available assets 
> of the DC and the much more dynamic resources required by the application are 
> the full scope of a virt-stack/k8 hybrid, encompassing a DC/Cloud-OS 
> (infrastructure) and K8 (platform) aspects.
> 
> While KubeVirt does not offer everything which oVirt has at the moment, like 
> Sandro indicated, the cases you mentioned are mostly solved and considered 
> stable.

indeed!
When speaking of kubevirt itself I really think it's "just" the lack of 
virt-specific UI that makes it look like a too low level tool compared to the 
oVirt experience. Openshift/OKD UI is fixing this gap...and there's always a 
long way towards more maturity and more niche features and use cases to add, 
sure, but it is getting better and better every day. 

Thanks,
michal
>  
> 
> While I'd love to have that, I can see how that won't be maintained by anyone 
> as a full free-to-use open-source turn-key solution.
> 
> There are nice projects to install k8s easily, for installing kubevirt with 
> its operator you just apply some manifests on the  (bare-metal) cluster and 
> you can start right away.
> 
> I can understand that a new system like k8s may look intimidating.
> 
> Best regards,
> Roman
>  
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPAKEXBS4LZSG2HIU3AWGJJCLT22FFGF/
>  
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RXSWPU5MQHNRXBKOPWSDMJR4C6S2DPP5/


[ovirt-users] [ANN] Schedule for oVirt 4.5.0

2022-02-22 Thread Sandro Bonazzola
The oVirt development team leads are pleased to inform that the
schedule for oVirt 4.5.0 has been finalized.

The key dates follow:

* Feature Freeze - String Freeze - Alpha release: 2022-03-15
* Alpha release test day: 2022-03-17
* Code freeze - Beta release: 2022-03-29
* Beta release test day: 2022-03-31
* General Availability release: 2022-04-12

A release management draft page has been created at:
https://www.ovirt.org/release/4.5.0/

If you're willing to help testing the release during the test days
please join the oVirt development mailing list at
https://lists.ovirt.org/archives/list/de...@ovirt.org/ and report your
feedback there.
Instructions for installing oVirt 4.5.0 Alpha and oVirt 4.5.0 Beta for
testing will be added to the release page
https://www.ovirt.org/release/4.5.0/ when the corresponding versions
are released.

Professional Services, Integrators and Backup vendors: please plan a
test session against your additional services, integrated solutions,
downstream rebuilds, and backup solutions accordingly.
If you're not listed here:
https://ovirt.org/community/user-stories/users-and-providers.html
consider adding your company there.

If you're willing to help updating the localization for oVirt 4.5.0
please follow https://ovirt.org/develop/localization.html

If you're willing to help promoting the oVirt 4.5.0 release you can
submit your banner proposals for the oVirt home page and for the
social media advertising at https://github.com/oVirt/ovirt-site/issues
As an alternative please consider submitting a case study as in
https://ovirt.org/community/user-stories/user-stories.html

Feature owners: please start planning a presentation of your feature
for oVirt Youtube channel: https://www.youtube.com/c/ovirtproject

Do you want to contribute to getting ready for this release?
Read more about oVirt community at https://ovirt.org/community/ and
join the oVirt developers https://ovirt.org/develop/

Thanks,
-- 

Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7646LEQIHL76HIJTAZWCXWAHT3M6V47C/


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Thomas Hoberg
> k8s does not dictate anything regarding the workload. There is just a
> scheduler which can or can not schedule your workload to nodes.
> 
One of these days I'll have to dig deep and see what it does.

"Scheduling" can encompass quite a few activities and I don't know which of 
them K8 covers.

Batch scheduling (as in Singularity/HPC) also involves the creation (and 
thus consumption) of RAM/storage/GPUs, so real instances/reservations are 
created, which in the case of pass-through would include some hard 
dependencies.

Normal OS scheduling is mostly about CPU, while processes and storage are already 
there and occupy resources; an equivalent here would be traffic steering, 
where the number of nodes that receive traffic is expanded or reduced. K8 to my 
understanding would do the traffic steering as a minimum and then have actions 
for instance creation and deletion.

But given a host with hybrid loads, some with tied resources, others generic 
without: to manage the decisions/allocations properly you need to come up with 
a plan that covers
1. Given a capacity bottleneck on a host, do I ask the lower layer (DC-OS) to 
create additional containers elsewhere and shut down the local ones, or do I 
migrate the running one to a new host?
2. Given capacity underutilization on a host, how do I best go about shutting 
down hosts that aren't going to be needed for the next hours, in a way where 
the migration costs do not exceed the power savings?

To my naive current understanding virt-stacks won't create and shut-down VMs, 
their typical (or only?) load management instrument is VM migration.

Kubernetes (and Docker swarm etc.) won't migrate node instances (nor VMs), but 
create and destroy them to manage load.

At large scales (scale out) this swarm approach is obviously better, migration 
creates too much of an overhead.

In the home domain of the virt-stacks (scale in), live migration is perhaps 
necessary, because the application stacks aren't ready to deal with instance 
destruction without service disruption or just because it is rare enough to be 
cheaper than instance re-creation.

In the past it was more clear-cut, because there was no live migration support 
for containers. But with CRIU (and its predecessors in OpenVZ), that could be 
done just as seamlessly as with VMs. And instance creations/deletions were more 
like fail-over scenarios, where (rare) service disruptions were accepted.

Today the two approaches can be mingled more easily, but they don't mix well 
yet, and the negotiating element between them is missing.
> 
> I can understand that a new system like k8s may look intimidating.

Just understanding the two approaches and how they mix is already filling the 
brain capacity I can give them.

Operating that mix is currently quite beyond me, given that it's only a small 
part of my job.
> 
> Best regards,
> Roman
Likewise!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LFNGOUG7BEKA7QSHSO5BCZPPLAT7IUXP/


[ovirt-users] Re: Unable to update self hosted Engine due to missing mirrors

2022-02-22 Thread Thomas Hoberg
> On 21/02/2022 at 17:15, Klaas Demter wrote:
> Thank you, it is ok now, but... we
> are facing the first side effects of 
> an upstream distribution that continuously ships newer packages and 
> finally breaks dependencies (at least repos) in a stable oVirt release.

which is the effect I was (so pessimistically) pointing to with regards to 
oVirt's future. And I'd say the repos are the least of your worries, it's the 
unavoidable code gaps in a fully agile multiverse that I'm more sceptical about.

It's fully [up]stream or nothing, unless someone outside IBM/Redhat takes that 
fully into their hands.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SOZG67ULC4FQ5CKEMPF33XP4TGRXHJZ5/


[ovirt-users] Re: live storage migration & export speed

2022-02-22 Thread Thomas Hoberg
This is very cryptic: care to expand a little?

oVirt supports live migration of VMs, meaning the (smaller) RAM contents, and 
tries to avoid (larger) storage migration.

The speed for VM migration has the network as an upper bound, not sure how 
intelligently unused (ballooned?) RAM is excluded or if there is some type of 
compression/sparsity optimization going on.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HBAW3XCRNFCRTN24AYZ32JAAALRSEF3C/


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Roman Mohr
On Tue, Feb 22, 2022 at 1:25 PM Thomas Hoberg  wrote:

> > On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi wrote:
> >
> >
> > Just to clarify the state of things a little: It is not only technically
> > there. KubeVirt supports pci passthrough, GPU passthrough and
> > SRIOV (including live-migration for SRIOV). I can't say if the OpenShift
> UI
> > can compete with oVirt at this stage.
> >
> >
> > Best regards,
> > Roman
>
> Well, I guess it's there, mostly because they didn't have to do anything
> new, it's part of KVM/libvirt and more inherited than added.
>
> The main reason I "don't see it coming" is that it may create more problems
> than it solves.
>
> To my understanding K8 is all about truly elastic workloads, including
> "mobility" to avoid constraints (including memory overcommit). Mobility in
> quotes, because I don't even know if it migrates containers or just shuts
> instances down in one place and launches them in another: migration itself
> has a significant cost after all.
>

We implemented live migrations for VMs quite some time ago. In practice
that means that we are migrating qemu processes between pods on different
nodes.
k8s does not dictate anything regarding the workload. There is just a
scheduler which can or can not schedule your workload to nodes.
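As a concrete illustration of the live migration described above: in KubeVirt a migration is itself a Kubernetes resource that you apply. A sketch only; the VMI name "testvm" is made up, and the kubectl calls are shown as comments since they need a running cluster.

```shell
# Write a VirtualMachineInstanceMigration manifest for a hypothetical
# VMI named "testvm".
cat <<'EOF' > migration.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-testvm
spec:
  vmiName: testvm
EOF

# On a real cluster you would submit and watch it with:
#   kubectl apply -f migration.yaml
#   kubectl get vmim migrate-testvm   # shows the migration phase
```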


>
> But if it were to migrate them (e.g. via CRIU for containers and "vMotion"
> for VMs) it would then to also have to understand (via KubeVirt), which
> devices are tied,


As far as I know pci passthrough and live migration do not mix well in
general because neither oVirt nor OpenStack or other platforms can migrate
the pci device state, since it is not in a place where it can be copied.
Only SRIOV allows that via explicit unplug and re-plug.


> because they use a device that has too big a state (e.g. a multi-gig CUDA
> workloads), a hard physical dependence (e.g. USB with connected devices) or
> something that could move with the VM (e.g. SR-IOV FC/NIC/INF with a fabric
> that can be re-configured to match or is also virtualized).
>
> A proper negotiation between the not-so-dynamic physically available
> assets of the DC and the much more dynamic resources required by the
> application are the full scope of a virt-stack/k8 hybrid, encompassing a
> DC/Cloud-OS (infrastructure) and K8 (platform) aspects.
>

While KubeVirt does not offer everything which oVirt has at the moment,
like Sandro indicated, the cases you mentioned are mostly solved and
considered stable.


>
> While I'd love to have that, I can see how that won't be maintained by
> anyone as a full free-to-use open-source turn-key solution.
>

There are nice projects to install k8s easily, for installing kubevirt with
its operator you just apply some manifests on the  (bare-metal) cluster and
you can start right away.
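The "apply some manifests" install can be sketched as follows. The URL pattern is KubeVirt's release download path, but the version shown is only an example; pin a current release from kubevirt.io yourself. The kubectl commands are comments since they require a working cluster.

```shell
# Example release version (assumption: check kubevirt.io for a current one).
VERSION="v0.51.0"
BASE="https://github.com/kubevirt/kubevirt/releases/download/${VERSION}"

# On a working (bare-metal) cluster you would run:
#   kubectl apply -f "${BASE}/kubevirt-operator.yaml"   # the operator
#   kubectl apply -f "${BASE}/kubevirt-cr.yaml"         # the KubeVirt CR
#   kubectl -n kubevirt wait kv kubevirt --for condition=Available
echo "${BASE}/kubevirt-operator.yaml"
```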

I can understand that a new system like k8s may look intimidating.

Best regards,
Roman


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPAKEXBS4LZSG2HIU3AWGJJCLT22FFGF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RXSWPU5MQHNRXBKOPWSDMJR4C6S2DPP5/


[ovirt-users] Re: Broke my GlusterFS somehow

2022-02-22 Thread Thomas Hoberg
I'm glad you made it work!

My main lesson from oVirt from the last two years is: It's not a turnkey 
solution.

Unless you are willing to dive deep and understand how it works (not so easy, 
because there are few up-to-date materials explaining the concepts) *AND* spend 
a significant amount of time running it through its paces, you may not survive 
the first thing that goes wrong.

And if you do, that may be even worse: Trying to fix a busy production farm 
that has a core problem is stuff for nightmares.

So make sure you have a test environment or two to try things.

And it doesn't have to be physical, especially for the more complex things 
that require a lot of restart-from-scratch before you do the thing that breaks 
things.

You can run oVirt on oVirt or any other hypervisor that supports nested 
virtualization and play with that before you deal with a "real patient".

I've gone through catastrophic hardware failures which would have 
resulted in a total loss without the resilience oVirt HCI with Gluster 
replicas provided, but I've also lost a lot of hair and nails just running it.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SDBFJNIAMKEH4BSVI6KEHZQKXQX2L4KV/


[ovirt-users] Re: Migrating VMs from 4.3 Ovirt Env to new 4.4 Env

2022-02-22 Thread Thomas Hoberg
Sorry, a typo there: s/have both ends move/have both ends BOOT/ the Clonezilla 
ISO...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGMGYAXDQ3IO4WGAPFAKWPYQOQGTRMUV/


[ovirt-users] Re: Migrating VMs from 4.3 Ovirt Env to new 4.4 Env

2022-02-22 Thread Thomas Hoberg
> So as title states I am moving VMs from an old system of ours with alot of 
> issues to a new
> 4.4 HC Gluster envr although it seems I am running into what I have learnt is 
> a 4.3 bug of
> some sort with exporting OVAs. 
The latest release of 4.3 still contained a bug, essentially a race condition 
which resulted in the OVA disk not being exported at all. Unfortunately you 
can't tell from the file size, it's only when you look inside the OVA that 
you'll see the disk image is actually empty.

I hand patched the (single line) fix to make OVA exports work, because they did 
not update 4.3 at that point any more.
https://bugzilla.redhat.com/show_bug.cgi?id=1813028
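Since an exported OVA is just a tar archive, you can check whether the disk image inside is empty without importing it. A self-contained sketch; the OVA and disk here are fabricated for illustration, so on a real export you would skip the first two lines and use your actual filenames.

```shell
# Fabricate a tiny OVA-like tar with one 1 MiB "disk" for illustration.
dd if=/dev/zero of=disk.img bs=1024 count=1024 2>/dev/null
tar cf myvm.ova disk.img

# A healthy export lists a large disk image; one hit by the 4.3 bug shows
# a (near-)zero-size member despite a plausible overall OVA file size.
tar tvf myvm.ova

# Extract the member to stdout and count its bytes.
disk_size=$(tar -xOf myvm.ova disk.img | wc -c)
echo "disk image size inside OVA: ${disk_size} bytes"
```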

Since the ability to migrate from one major release to another without such 
problems should be a given, I have since struggled with the oVirt team's motto 
"for the entire enterprise".
> 
> Some of my OVAs are showing large GB sizes although their actual size may be 
> 20K so they
> are showing no boot device as well, theres really no data there. Some OVAs 
> are fine but
> some are broken. So I was wondering if anyone has dealt with this and whats 
> the best way
> around this issue?
Yes, see above
> 
> I have no access remotely at the moment but when I get to them, I read 
> somewhere its
> better to just detach disks and download them to a HDD for example and build 
> new VMs and
> upload those disks and attach them instead this way? 
You can actually export the disks via the browser, but beware that sparse disks 
may wind up quite big. Unfortunately I relied on VDO/thin to not have me think 
about disk sizes at all (nasty habit picked up from VMware)...

> 
> Please do share if you have any better ways, usually I just Export OVA to an 
> external HDD,
> remount to new ovirt and import OVA but seems a few of these VMs out of alot 
> did not
> succeed and I am running out of downtime window.

While Clonezilla wasn't so great for backing up complete oVirt hosts, which 
might have VDO and Gluster bricks on them, it's turned out a great solution for 
backing up or beaming VMs from one "farm" to another.

E.g. I've used it to move VMs from my oVirt 4.3 HCI cluster to my new XCP-NG 
8.2 HCI cluster or sometimes even to a temporary VMware VM without storing it 
externally: just create a target VM, have both ends boot the Clonezilla ISO and 
start the transfer.

I believe backup domains were new in 4.4 so they only work for 4.4 -> 4.4 
mobility, but export domains, while deprecated, still work on both sides. So 
you can create and fill a backup domain on 4.3, disconnect and reconnect it on 
4.4 and import machines from there.

I believe I've faced some issues with VMs that were too old, or based on 
templates or whatnot, so it's best to try the export/import fully before taking 
down and rebuilding the 4.3, so the VMs in the backup domain can actually be 
recovered.

Generally any VM that is Q35 already on 4.3 seems to be less problematic on 4.4.
> 
> Thanks a bunch, I have noticed repeated names helping me, I am very grateful 
> for your
> help.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6AU3JORW5PC4MWGCMTOE5MNKN2J2TFFS/


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Thomas Hoberg
> On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi  wrote:
> 
> 
> Just to clarify the state of things a little: It is not only technically
> there. KubeVirt supports pci passthrough, GPU passthrough and
> SRIOV (including live-migration for SRIOV). I can't say if the OpenShift UI
> can compete with oVirt at this stage.
> 
> 
> Best regards,
> Roman

Well, I guess it's there, mostly because they didn't have to do anything new, 
it's part of KVM/libvirt and more inherited than added.

The main reason I "don't see it coming" is that it may create more problems than 
it solves.

To my understanding K8 is all about truly elastic workloads, including 
"mobility" to avoid constraints (including memory overcommit). Mobility in 
quotes, because I don't even know if it migrates containers or just shuts 
instances down in one place and launches them in another: migration itself has 
a significant cost after all.

But if it were to migrate them (e.g. via CRIU for containers and "vMotion" for 
VMs) it would then also have to understand (via KubeVirt), which devices are 
tied, because they use a device that has too big a state (e.g. a multi-gig CUDA 
workloads), a hard physical dependence (e.g. USB with connected devices) or 
something that could move with the VM (e.g. SR-IOV FC/NIC/INF with a fabric 
that can be re-configured to match or is also virtualized).

A proper negotiation between the not-so-dynamic physically available assets of 
the DC and the much more dynamic resources required by the application is the 
full scope of a virt-stack/k8 hybrid, encompassing DC/Cloud-OS 
(infrastructure) and K8 (platform) aspects.

While I'd love to have that, I can see how that won't be maintained by anyone 
as a full, free-to-use, open-source turn-key solution.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPAKEXBS4LZSG2HIU3AWGJJCLT22FFGF/


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Roman Mohr
On Mon, Feb 21, 2022 at 12:27 PM Thomas Hoberg  wrote:

> That's exactly the direction I originally understood oVirt would go, with
> the ability to run VMs and container side-by-side on the bare metal or
> nested with containers inside VMs for stronger resource or security
> isolation and network virtualization. To me it sounded especially
> attractive with an HCI underpinning so you could deploy it also in the
> field with small 3 node clusters.
>

I think in general a big part of the industry is going down the path of
moving most things behind the k8s API/resource model. This means different
things for different companies. For instance, VMware keeps its traditional
virt stack and adds k8s APIs in front of it, crossing the bridge to k8s
clusters behind the scenes to get a unified view, while other players choose
k8s itself (be it vanilla k8s, OpenShift, Harvester, ...) and then use, for
instance, KubeVirt to deploy additional k8s clusters on top of it, unifying
the stack that way.

It is definitely true that k8s works significantly differently from other
solutions like oVirt or OpenStack, but once you get into it, I think one
would be surprised how simple the architecture of k8s actually is, and also
how few resources core k8s actually takes.

Having said that, as an ex oVirt engineer I would be glad to see oVirt
continue to thrive. The simplicity of oVirt was always appealing to me.

Best regards,
Roman



>
> But combining all those features evidently comes at too high a cost for
> all the integration and the customer base is either too small or too poor:
> the cloud players are all out on making sure you no longer run any hardware
> and then it's really just about pushing your applications there as cloud
> native or "IaaS" compatible as needed.
>
> E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use,
> because it ties the machine to a specific host and goes against the grain
> of K8 as I understand it.
>
> Memory overcommit is quite funny, really, because it's the same issue as
> the original virtual memory: essentially you lie to your consumer about the
> resources available and then swap pages forth and back in an attempt to
> make all your consumers happy. It was processes for virtual memory, it's
> VMs now for the hypervisor and in both cases it's about the consumer and
> the provider not continuously negotiating for the resources they need and
> the price they are willing to pay.
>
> That negotiation is always better at the highest level of abstraction, the
> application itself, which is why implementing it at the lower levels (e.g.
> VMs) becomes less useful and needed.
>
> And then there is technology like CXL which essentially turns RAM into a
> fabric and your local CPU will just get RAM from another piece of hardware
> when your application needs more RAM and is willing to pay the premium
> something will charge for it.
>
> With that type of hardware much of what hypervisors used to do goes into
> DPUs/IPUs and CPUs are just running applications making hypercalls. The
> kernel is just there to bootstrap.
>
> Not sure we'll see that type of hardware at home or in the edge, though...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IOI2KD7MNKFS3QQ5W4RVYPKBRZIDF5L/


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Roman Mohr
On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi 
wrote:

>
>
> On Mon, Feb 21, 2022 at 12:27 PM Thomas Hoberg  wrote:
>
>> That's exactly the direction I originally understood oVirt would go, with
>> the ability to run VMs and container side-by-side on the bare metal or
>> nested with containers inside VMs for stronger resource or security
>> isolation and network virtualization. To me it sounded especially
>> attractive with an HCI underpinning so you could deploy it also in the
>> field with small 3 node clusters.
>>
>> But combining all those features evidently comes at too high a cost for
>> all the integration and the customer base is either too small or too poor:
>> the cloud players are all out on making sure you no longer run any hardware
>> and then it's really just about pushing your applications there as cloud
>> native or "IaaS" compatible as needed.
>>
>> E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use,
>> because it ties the machine to a specific host and goes against the grain
>> of K8 as I understand it.
>>
>
> technically it's already there:
> https://kubevirt.io/user-guide/virtual_machines/host-devices/
>

Just to clarify the state of things a little: It is not only technically
there. KubeVirt supports PCI passthrough, GPU passthrough and
SR-IOV (including live migration for SR-IOV). I can't say if the OpenShift UI
can compete with oVirt at this stage.
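For context, a host device first has to be whitelisted in the KubeVirt custom
resource before any VM can claim it; a minimal sketch (the PCI vendor:product
ID and resource name below are example values for an NVIDIA Tesla T4, adjust
them to your hardware):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    permittedHostDevices:
      pciHostDevices:
        # vendor:product ID as reported by `lspci -nn` (example: NVIDIA Tesla T4)
        - pciVendorSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
```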


Best regards,
Roman



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSX4I4K47SZYX6SZD5SO2KZ7DKT33IUS/


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Simone Tiraboschi
On Mon, Feb 21, 2022 at 12:27 PM Thomas Hoberg  wrote:

> That's exactly the direction I originally understood oVirt would go, with
> the ability to run VMs and container side-by-side on the bare metal or
> nested with containers inside VMs for stronger resource or security
> isolation and network virtualization. To me it sounded especially
> attractive with an HCI underpinning so you could deploy it also in the
> field with small 3 node clusters.
>
> But combining all those features evidently comes at too high a cost for
> all the integration and the customer base is either too small or too poor:
> the cloud players are all out on making sure you no longer run any hardware
> and then it's really just about pushing your applications there as cloud
> native or "IaaS" compatible as needed.
>
> E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use,
> because it ties the machine to a specific host and goes against the grain
> of K8 as I understand it.
>

technically it's already there:
https://kubevirt.io/user-guide/virtual_machines/host-devices/
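The guide above boils down to referencing a whitelisted host device from the
VMI spec; a minimal sketch (the resource names are example values and must
match what the cluster admin listed under permittedHostDevices):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-gpu-example
spec:
  domain:
    devices:
      gpus:
        # deviceName must match a resourceName permitted in the KubeVirt CR
        - name: gpu1
          deviceName: nvidia.com/TU104GL_Tesla_T4
      hostDevices:
        - name: usb-storage
          deviceName: devices.kubevirt.io/usb_storage
    resources:
      requests:
        memory: 1Gi
```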


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YDNE7EL3R36WGYDTGBLZJT33P4OSG2UC/