[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Michal Skrivanek


> On 22. 2. 2022, at 14:16, Roman Mohr  wrote:
> 
> 
> 
> > On Tue, Feb 22, 2022 at 1:25 PM Thomas Hoberg wrote:
> > > On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi wrote:
> > 
> > 
> > Just to clarify the state of things a little: It is not only technically
> > there. KubeVirt supports pci passthrough, GPU passthrough and
> > SRIOV (including live-migration for SRIOV). I can't say if the OpenShift UI
> > can compete with oVirt at this stage.
> > 
> > 
> > Best regards,
> > Roman
> 
> Well, I guess it's there, mostly because they didn't have to do anything new; 
> it's part of KVM/libvirt and more inherited than added.

You can say that about oVirt, OpenStack, and basically everyone else as well. 
The foundation for basically every virtualization feature is always qemu-kvm.

> 
> The main reason I "don't see it coming" is that it may create more problems 
> than it solves.
> 
> To my understanding K8 is all about truly elastic workloads, including 
> "mobility" to avoid constraints (including memory overcommit). Mobility in 
> quotes, because I don't even know if it migrates containers or just shuts 
> instances down in one place and launches them in another: migration itself 
> has a significant cost after all.
> 
> We implemented live migrations for VMs quite some time ago. In practice that 
> means that we are migrating qemu processes between pods on different nodes.
> k8s does not dictate anything regarding the workload. There is just a 
> scheduler which can or cannot schedule your workload to nodes.
>  
> 
> But if it were to migrate them (e.g. via CRIU for containers and "vMotion" 
> for VMs) it would then also have to understand (via KubeVirt) which 
> devices are tied,
> 
> As far as I know pci passthrough and live migration do not mix well in 
> general, because neither oVirt nor OpenStack nor other platforms can migrate 
> the pci device state, since it is not in a place where it can be copied. Only 
> SRIOV allows that via explicit unplug and re-plug.

Slowly but surely it is coming for other devices as well; it has been in 
development for a couple of years now, and has been a topic at every KVM Forum 
for the past ~5 years.

>  
> because they use a device that has too big a state (e.g. a multi-gig CUDA 
> workload), a hard physical dependence (e.g. USB with connected devices) or 
> something that could move with the VM (e.g. SR-IOV FC/NIC/INF with a fabric 
> that can be re-configured to match or is also virtualized).
> 
> A proper negotiation between the not-so-dynamic physically available assets 
> of the DC and the much more dynamic resources required by the application is 
> the full scope of a virt-stack/k8 hybrid, encompassing DC/Cloud-OS 
> (infrastructure) and K8 (platform) aspects.
> 
> While KubeVirt does not offer everything which oVirt has at the moment, as 
> Sandro indicated, the cases you mentioned are mostly solved and considered 
> stable.

indeed!
When speaking of kubevirt itself I really think it's "just" the lack of a 
virt-specific UI that makes it look like too low-level a tool compared to the 
oVirt experience. The Openshift/OKD UI is fixing this gap... and there's always 
a long way towards more maturity and more niche features and use cases to add, 
sure, but it is getting better and better every day.

Thanks,
michal
>  
> 
> While I'd love to have that, I can see how that won't be maintained by anyone 
> as a full free-to-use open-source turn-key solution.
> 
> There are nice projects to install k8s easily; for installing KubeVirt with 
> its operator you just apply some manifests on the (bare-metal) cluster and 
> you can start right away.
> 
> I can understand that a new system like k8s may look intimidating.
> 
> Best regards,
> Roman
>  
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPAKEXBS4LZSG2HIU3AWGJJCLT22FFGF/
>  
> 

[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Thomas Hoberg
> On Tue, Feb 22, 2022 at 1:25 PM Thomas Hoberg wrote: [...]
> k8s does not dictate anything regarding the workload. There is just a
> scheduler which can or cannot schedule your workload to nodes.
> 
One of these days I'll have to dig deep and see what it does.

"Scheduling" can encompass quite a few activities and I don't know which of 
them K8 covers.

Batch scheduling (as in Singularity/HPC-style scheduling) also involves the 
creation (and thus consumption) of RAM/storage/GPUs, so real 
instances/reservations are created, which in the case of pass-through would 
include some hard dependencies.
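In k8s terms such reservations are expressed as resource requests on a pod. A minimal sketch (the image name is hypothetical); the extended GPU resource in particular creates the kind of hard placement dependency mentioned above, since the pod can only land on a node that actually exposes that device:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reserved-workload
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    resources:
      requests:
        cpu: "2"        # reserved on the chosen node at scheduling time
        memory: 4Gi
      limits:
        nvidia.com/gpu: "1"   # extended resource: hard dependency on a GPU node
```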

Normal OS scheduling is mostly about CPU, while processes and storage are 
already there and occupy resources; it could find an equivalent in traffic 
steering, where the number of nodes that receive traffic is expanded or 
reduced. K8 to my understanding would do the traffic steering as a minimum and 
then have actions for instance creation and deletion.

But given a host with hybrid loads, some with tied resources, others generic 
without: to manage the decisions/allocations properly you need to come up with 
a plan that includes
1. Given a capacity bottleneck on a host, do I ask the lower layer (DC-OS) to 
create additional containers elsewhere and shut down the local ones, or do I 
migrate the running one to a new host?
2. Given capacity underutilization on a host, how do I best go about shutting 
down hosts that aren't going to be needed for the next few hours, in a way 
where the migration costs do not exceed the power savings?

To my naive current understanding virt-stacks won't create and shut down VMs; 
their typical (or only?) load management instrument is VM migration.

Kubernetes (and Docker Swarm etc.) won't migrate node instances (nor VMs), but 
create and destroy them to manage load.

At large scales (scale-out) this swarm approach is obviously better; migration 
creates too much of an overhead.

In the home domain of the virt-stacks (scale in), live migration is perhaps 
necessary, because the application stacks aren't ready to deal with instance 
destruction without service disruption or just because it is rare enough to be 
cheaper than instance re-creation.

In the past it was more clear-cut, because there was no live migration support 
for containers. But with CRIU (and its predecessors in OpenVZ), that could be 
done just as seamlessly as with VMs. And instance creations/deletions were more 
like fail-over scenarios, where (rare) service disruptions were accepted.

Today the two approaches can be mingled more easily, but they don't truly mix 
yet, and the negotiating element between them is missing.
> 
> I can understand that a new system like k8s may look intimidating.

Just understanding the two approaches and how they mix already fills the 
brain capacity I can give them.

Operating that mix is currently quite beyond me, given that it's only a small 
part of my job.
> 
> Best regards,
> Roman
Likewise!


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Roman Mohr
On Tue, Feb 22, 2022 at 1:25 PM Thomas Hoberg  wrote:

> > On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi  
> > wrote:
> >
> >
> > Just to clarify the state of things a little: It is not only technically
> > there. KubeVirt supports pci passthrough, GPU passthrough and
> > SRIOV (including live-migration for SRIOV). I can't say if the OpenShift
> UI
> > can compete with oVirt at this stage.
> >
> >
> > Best regards,
> > Roman
>
> Well, I guess it's there, mostly because they didn't have to do anything
> new; it's part of KVM/libvirt and more inherited than added.
>
> The main reason I "don't see it coming" is that it may create more problems
> than it solves.
>
> To my understanding K8 is all about truly elastic workloads, including
> "mobility" to avoid constraints (including memory overcommit). Mobility in
> quotes, because I don't even know if it migrates containers or just shuts
> instances down in one place and launches them in another: migration itself
> has a significant cost after all.
>

We implemented live migrations for VMs quite some time ago. In practice
that means that we are migrating qemu processes between pods on different
nodes.
k8s does not dictate anything regarding the workload. There is just a
scheduler which can or cannot schedule your workload to nodes.
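For illustration, such a live migration in KubeVirt is itself just another k8s object; a sketch per the KubeVirt API (the VM name is hypothetical):

```yaml
# Asks KubeVirt to live-migrate the running VMI "my-vm" to another node;
# the qemu process is moved between virt-launcher pods as described above.
# Equivalent to `virtctl migrate my-vm`.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-my-vm
spec:
  vmiName: my-vm
```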


>
> But if it were to migrate them (e.g. via CRIU for containers and "vMotion"
> for VMs) it would then also have to understand (via KubeVirt) which
> devices are tied,


As far as I know pci passthrough and live migration do not mix well in
general, because neither oVirt nor OpenStack nor other platforms can migrate
the pci device state, since it is not in a place where it can be copied.
Only SRIOV allows that via explicit unplug and re-plug.
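As a reference point, the SR-IOV attachment that makes this unplug/re-plug possible looks roughly like this in a KubeVirt VMI spec (a sketch; the network names are hypothetical and assume a NetworkAttachmentDefinition backed by the SR-IOV device plugin):

```yaml
# Fragment of a VirtualMachineInstance spec: the VF is detached before
# migration and re-attached on the target node.
spec:
  domain:
    devices:
      interfaces:
      - name: sriov-net
        sriov: {}                        # SR-IOV binding for this interface
  networks:
  - name: sriov-net
    multus:
      networkName: sriov-network         # hypothetical NetworkAttachmentDefinition
```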


> because they use a device that has too big a state (e.g. a multi-gig CUDA
> workload), a hard physical dependence (e.g. USB with connected devices) or
> something that could move with the VM (e.g. SR-IOV FC/NIC/INF with a fabric
> that can be re-configured to match or is also virtualized).
>
> A proper negotiation between the not-so-dynamic physically available
> assets of the DC and the much more dynamic resources required by the
> application is the full scope of a virt-stack/k8 hybrid, encompassing
> DC/Cloud-OS (infrastructure) and K8 (platform) aspects.
>

While KubeVirt does not offer everything which oVirt has at the moment,
as Sandro indicated, the cases you mentioned are mostly solved and
considered stable.


>
> While I'd love to have that, I can see how that won't be maintained by
> anyone as a full free-to-use open-source turn-key solution.
>

There are nice projects to install k8s easily; for installing KubeVirt with
its operator you just apply some manifests on the (bare-metal) cluster and
you can start right away.

I can understand that a new system like k8s may look intimidating.

Best regards,
Roman




[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Thomas Hoberg
> On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi  wrote:
> 
> 
> Just to clarify the state of things a little: It is not only technically
> there. KubeVirt supports pci passthrough, GPU passthrough and
> SRIOV (including live-migration for SRIOV). I can't say if the OpenShift UI
> can compete with oVirt at this stage.
> 
> 
> Best regards,
> Roman

Well, I guess it's there, mostly because they didn't have to do anything new; 
it's part of KVM/libvirt and more inherited than added.

The main reason I "don't see it coming" is that it may create more problems 
than it solves.

To my understanding K8 is all about truly elastic workloads, including 
"mobility" to avoid constraints (including memory overcommit). Mobility in 
quotes, because I don't even know if it migrates containers or just shuts 
instances down in one place and launches them in another: migration itself has 
a significant cost after all.

But if it were to migrate them (e.g. via CRIU for containers and "vMotion" for 
VMs) it would then also have to understand (via KubeVirt) which devices are 
tied, because they use a device that has too big a state (e.g. a multi-gig CUDA 
workload), a hard physical dependence (e.g. USB with connected devices) or 
something that could move with the VM (e.g. SR-IOV FC/NIC/INF with a fabric 
that can be re-configured to match or is also virtualized).

A proper negotiation between the not-so-dynamic physically available assets of 
the DC and the much more dynamic resources required by the application is the 
full scope of a virt-stack/k8 hybrid, encompassing DC/Cloud-OS 
(infrastructure) and K8 (platform) aspects.

While I'd love to have that, I can see how that won't be maintained by anyone 
as a full free-to-use open-source turn-key solution.


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Roman Mohr
On Mon, Feb 21, 2022 at 12:27 PM Thomas Hoberg  wrote:

> That's exactly the direction I originally understood oVirt would go, with
> the ability to run VMs and containers side-by-side on the bare metal or
> nested with containers inside VMs for stronger resource or security
> isolation and network virtualization. To me it sounded especially
> attractive with an HCI underpinning so you could deploy it also in the
> field with small 3 node clusters.
>

I think in general a big part of the industry is going down the path of
moving most things behind the k8s API/resource model. This means different
things for different companies. For instance VMware keeps its traditional
virt-stack, adding additional k8s APIs in front of it and crossing the
bridges to k8s clusters behind the scenes to get a unified view, while
others are choosing k8s (be it vanilla k8s, OpenShift, Harvester, ...)
and then take, for instance, KubeVirt to deploy additional k8s clusters on
top of it, unifying the stack this way.

It is definitely true that k8s works significantly differently from other
solutions like oVirt or OpenStack, but once you get into it, I think one
would be surprised how simple the architecture of k8s actually is, and also
how few resources core k8s actually takes.

Having said that, as an ex oVirt engineer I would be glad to see oVirt
continue to thrive. The simplicity of oVirt was always appealing to me.

Best regards,
Roman



>
> But combining all those features evidently comes at too high a cost for
> all the integration and the customer base is either too small or too poor:
> the cloud players are all out on making sure you no longer run any hardware
> and then it's really just about pushing your applications there as cloud
> native or "IaaS" compatible as needed.
>
> E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use,
> because it ties the machine to a specific host and goes against the grain
> of K8 as I understand it.
>
> Memory overcommit is quite funny, really, because it's the same issue as
> the original virtual memory: essentially you lie to your consumer about the
> resources available and then swap pages back and forth in an attempt to
> make all your consumers happy. It was processes for virtual memory, it's
> VMs now for the hypervisor and in both cases it's about the consumer and
> the provider not continuously negotiating for the resources they need and
> the price they are willing to pay.
>
> That negotiation is always better at the highest level of abstraction, the
> application itself, which is why implementing it at the lower levels (e.g.
> VMs) becomes less useful and needed.
>
> And then there is technology like CXL which essentially turns RAM into a
> fabric and your local CPU will just get RAM from another piece of hardware
> when your application needs more RAM and is willing to pay the premium
> something will charge for it.
>
> With that type of hardware much of what hypervisors used to do goes into
> DPUs/IPUs and CPUs are just running applications making hypercalls. The
> kernel is just there to bootstrap.
>
> Not sure we'll see that type of hardware at home or in the edge, though...


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Roman Mohr
On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi 
wrote:

>
>
> On Mon, Feb 21, 2022 at 12:27 PM Thomas Hoberg  wrote:
>
>> That's exactly the direction I originally understood oVirt would go, with
>> the ability to run VMs and containers side-by-side on the bare metal or
>> nested with containers inside VMs for stronger resource or security
>> isolation and network virtualization. To me it sounded especially
>> attractive with an HCI underpinning so you could deploy it also in the
>> field with small 3 node clusters.
>>
>> But combining all those features evidently comes at too high a cost for
>> all the integration and the customer base is either too small or too poor:
>> the cloud players are all out on making sure you no longer run any hardware
>> and then it's really just about pushing your applications there as cloud
>> native or "IaaS" compatible as needed.
>>
>> E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use,
>> because it ties the machine to a specific host and goes against the grain
>> of K8 as I understand it.
>>
>
> technically it's already there:
> https://kubevirt.io/user-guide/virtual_machines/host-devices/
>

Just to clarify the state of things a little: It is not only technically
there. KubeVirt supports pci passthrough, GPU passthrough and
SRIOV (including live-migration for SRIOV). I can't say if the OpenShift UI
can compete with oVirt at this stage.


Best regards,
Roman



>
>
>>
>> Memory overcommit is quite funny, really, because it's the same issue as
>> the original virtual memory: essentially you lie to your consumer about the
>> resources available and then swap pages back and forth in an attempt to
>> make all your consumers happy. It was processes for virtual memory, it's
>> VMs now for the hypervisor and in both cases it's about the consumer and
>> the provider not continuously negotiating for the resources they need and
>> the price they are willing to pay.
>>
>> That negotiation is always better at the highest level of abstraction,
>> the application itself, which is why implementing it at the lower levels (e.g.
>> VMs) becomes less useful and needed.
>>
>> And then there is technology like CXL which essentially turns RAM into a
>> fabric and your local CPU will just get RAM from another piece of hardware
>> when your application needs more RAM and is willing to pay the premium
>> something will charge for it.
>>
>> With that type of hardware much of what hypervisors used to do goes into
>> DPUs/IPUs and CPUs are just running applications making hypercalls. The
>> kernel is just there to bootstrap.
>>
>> Not sure we'll see that type of hardware at home or in the edge, though...


[ovirt-users] Re: oVirt alternatives

2022-02-22 Thread Simone Tiraboschi
On Mon, Feb 21, 2022 at 12:27 PM Thomas Hoberg  wrote:

> That's exactly the direction I originally understood oVirt would go, with
> the ability to run VMs and containers side-by-side on the bare metal or
> nested with containers inside VMs for stronger resource or security
> isolation and network virtualization. To me it sounded especially
> attractive with an HCI underpinning so you could deploy it also in the
> field with small 3 node clusters.
>
> But combining all those features evidently comes at too high a cost for
> all the integration and the customer base is either too small or too poor:
> the cloud players are all out on making sure you no longer run any hardware
> and then it's really just about pushing your applications there as cloud
> native or "IaaS" compatible as needed.
>
> E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use,
> because it ties the machine to a specific host and goes against the grain
> of K8 as I understand it.
>

technically it's already there:
https://kubevirt.io/user-guide/virtual_machines/host-devices/
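In outline, per that user guide (the PCI IDs and resource names below are only examples): the admin whitelists the device cluster-wide in the KubeVirt CR, and a VM then requests it by resource name:

```yaml
# 1) KubeVirt CR: permit a host PCI device (example vendor:device ID)
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    permittedHostDevices:
      pciHostDevices:
      - pciVendorSelector: "10de:1eb8"               # example: NVIDIA T4
        resourceName: "nvidia.com/TU104GL_Tesla_T4"  # exposed as extended resource
---
# 2) VirtualMachineInstance spec fragment: request the device
spec:
  domain:
    devices:
      gpus:
      - deviceName: nvidia.com/TU104GL_Tesla_T4
        name: gpu1
```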


>
> Memory overcommit is quite funny, really, because it's the same issue as
> the original virtual memory: essentially you lie to your consumer about the
> resources available and then swap pages back and forth in an attempt to
> make all your consumers happy. It was processes for virtual memory, it's
> VMs now for the hypervisor and in both cases it's about the consumer and
> the provider not continuously negotiating for the resources they need and
> the price they are willing to pay.
>
> That negotiation is always better at the highest level of abstraction, the
> application itself, which is why implementing it at the lower levels (e.g.
> VMs) becomes less useful and needed.
>
> And then there is technology like CXL which essentially turns RAM into a
> fabric and your local CPU will just get RAM from another piece of hardware
> when your application needs more RAM and is willing to pay the premium
> something will charge for it.
>
> With that type of hardware much of what hypervisors used to do goes into
> DPUs/IPUs and CPUs are just running applications making hypercalls. The
> kernel is just there to bootstrap.
>
> Not sure we'll see that type of hardware at home or in the edge, though...


[ovirt-users] Re: oVirt alternatives

2022-02-21 Thread Thomas Hoberg
That's exactly the direction I originally understood oVirt would go, with the 
ability to run VMs and containers side-by-side on the bare metal or nested with 
containers inside VMs for stronger resource or security isolation and network 
virtualization. To me it sounded especially attractive with an HCI underpinning 
so you could deploy it also in the field with small 3 node clusters.

But combining all those features evidently comes at too high a cost for all the 
integration and the customer base is either too small or too poor: the cloud 
players are all out on making sure you no longer run any hardware and then it's 
really just about pushing your applications there as cloud native or "IaaS" 
compatible as needed.

E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use, because 
it ties the machine to a specific host and goes against the grain of K8 as I 
understand it.

Memory overcommit is quite funny, really, because it's the same issue as the 
original virtual memory: essentially you lie to your consumer about the 
resources available and then swap pages back and forth in an attempt to make 
all your consumers happy. It was processes for virtual memory, it's VMs now for 
the hypervisor and in both cases it's about the consumer and the provider not 
continuously negotiating for the resources they need and the price they are 
willing to pay.

That negotiation is always better at the highest level of abstraction, the 
application itself, which is why implementing it at the lower levels (e.g. VMs) 
becomes less useful and needed.

And then there is technology like CXL which essentially turns RAM into a 
fabric and your local CPU will just get RAM from another piece of hardware when 
your application needs more RAM and is willing to pay the premium something 
will charge for it.

With that type of hardware much of what hypervisors used to do goes into 
DPUs/IPUs and CPUs are just running applications making hypercalls. The kernel 
is just there to bootstrap.

Not sure we'll see that type of hardware at home or in the edge, though...


[ovirt-users] Re: oVirt alternatives

2022-02-21 Thread Sandro Bonazzola
On Mon, Feb 21, 2022 at 09:18, Nathanaël Blanchet <blanc...@abes.fr> wrote:

>
>
> On Feb 21, 2022 08:31, Sandro Bonazzola wrote:
>
>
>
> On Sun, Feb 20, 2022 at 22:47, Nathanaël Blanchet <blanc...@abes.fr> wrote:
>
> Hello, Is okd/openshift virtualization designed to be a full replacement
> of ovirt/redhat by embedding the same level of advanced
>
>
> oVirt is a very mature project, integrated with most of the Red Hat
> ecosystem, mostly being maintained without any new big features.
> It has live-snapshot, live-storage-migration, memory overcommit
> management, passthrough of a very specific PCI device on a particular host,
> a VM portal, OpenShift IPI.
> It lacks integrated container management.
>
> OKD Virtualization is being very actively developed, quickly closing gaps.
> It has integrated container management, ability to leverage the k8s
> distributed architecture/infrastructure and to leverage k8s assets like
> exclusive CPU placements.
> It currently lacks live-snapshot, live-storage-migration, memory
> overcommit management, passthrough of a very specific PCI device on a
> particular host, VM portal (OKD UI is more similar to Admin portal),
> thin-provisioning (of VMs on top of templates), hot (un)plug
> (disk/memory/NIC), high availability with VM leases, incremental backup,
> VDI features like template versions, sealing (virt-sysprep).
>
> So OKD is not a feature-complete replacement for oVirt yet.
>
> So OKD Virtualization aims to be a replacement in the coming years, once all
> oVirt features are achieved in KubeVirt; that's why oVirt continues to be
> maintained for the moment. Great news!
> OpenShift IPI being maintained is also great news, because bare-metal OKD
> installation is not as easy as oVirt's (even though I know we need bare metal
> to consume OKD Virtualization). Can you confirm we can merge bare-metal and
> virtualized CoreOS hosts? Is nested virtualization available?
>

CC @Fabian Deutsch  who can elaborate more on the
Kubevirt side






-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-users] Re: oVirt alternatives

2022-02-21 Thread Angus Clarke
Hi Sandro

Thanks for sharing. Probably the stand-out question to the uninitiated would 
be: why not integrate oVirt as OpenShift's virtualization stack?

Thanks
Angus



From: Sandro Bonazzola 
Sent: Monday, 21 February 2022, 8:31 am
To: Nathanaël Blanchet
Cc: users
Subject: [ovirt-users] Re: oVirt alternatives



On Sun, Feb 20, 2022 at 22:47, Nathanaël Blanchet <blanc...@abes.fr> wrote:
Hello, Is okd/openshift virtualization designed to be a full replacement of 
ovirt/redhat by embedding the same level of advanced

oVirt is a very mature project, integrated with most of the Red Hat ecosystem, 
mostly being maintained without any new big features.
It has live-snapshot, live-storage-migration, memory overcommit management, 
passthrough of a very specific PCI device on a particular host, a VM portal, 
OpenShift IPI.
It lacks integrated container management.

OKD Virtualization is being very actively developed, quickly closing gaps.
It has integrated container management, ability to leverage the k8s distributed 
architecture/infrastructure and to leverage k8s assets like exclusive CPU 
placements.
It currently lacks live-snapshot, live-storage-migration, memory overcommit 
management, passthrough of a very specific PCI device on a particular host, VM 
portal (OKD UI is more similar to Admin portal), thin-provisioning (of VMs on 
top of templates), hot (un)plug (disk/memory/NIC), high availability with VM 
leases, incremental backup, VDI features like template versions, sealing 
(virt-sysprep).

So OKD is not a feature-complete replacement for oVirt yet.

--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com

Red Hat respects your work life balance. Therefore there is no need to answer 
this email out of your office hours.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQ5FF44JV4K7VVHDMVDKDHFM47X52Q7K/


[ovirt-users] Re: oVirt alternatives

2022-02-21 Thread Nathanaël Blanchet
Le 21 févr. 2022 08:31, Sandro Bonazzola a écrit :
> Il giorno dom 20 feb 2022 alle ore 22:47 Nathanaël Blanchet ha scritto:
>> Hello, Is okd/openshift virtualization designed to be a full replacement of ovirt/redhat by embedding the same level of advanced
>
> oVirt is a very mature project, integrated with most of the Red Hat ecosystem, mostly being maintained without any new big features.
> It has live-snapshot, live-storage-migration, memory overcommit management, passthrough of a very specific PCI device on a particular host, a VM portal, OpenShift IPI.
> It lacks integrated container management.
>
> OKD Virtualization is being very actively developed, quickly closing the gaps.
> It has integrated container management, ability to leverage the k8s distributed architecture/infrastructure and to leverage k8s assets like exclusive CPU placements.
> It currently lacks live-snapshot, live-storage-migration, memory overcommit management, passthrough of a very specific PCI device on a particular host, VM portal (OKD UI is more similar to Admin portal), thin-provisioning (of VMs on top of templates), hot (un)plug (disk/memory/NIC), high availability with VM leases, incremental backup, VDI features like template versions, sealing (virt-sysprep).
>
> So OKD is not a feature-complete replacement for oVirt yet.

So OKD Virtualization aims to be a replacement in the coming years, once all oVirt features have been achieved in KubeVirt; that's why oVirt continues to be maintained for the moment. Great news! OpenShift IPI also being maintained is great news, because bare-metal OKD installation is not as easy as oVirt's (even though I know we need bare metal to consume OKD Virtualization). Can you confirm we can merge bare-metal and virtualized CoreOS hosts? Is nested virtualization available?

> --
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com
> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILMHSVVLZPQIOHDPNONEZOL3ZNX4UVHI/


[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Sandro Bonazzola
Il giorno dom 20 feb 2022 alle ore 22:47 Nathanaël Blanchet <
blanc...@abes.fr> ha scritto:

> Hello, Is okd/openshift virtualization designed to be a full replacement
> of ovirt/redhat by embedding the same level of advanced
>

oVirt is a very mature project, integrated with most of the Red Hat
ecosystem, mostly being maintained without any new big features.
It has live-snapshot, live-storage-migration, memory overcommit
management, passthrough of a very specific PCI device on a particular host,
a VM portal, OpenShift IPI.
It lacks integrated container management.

OKD Virtualization is being very actively developed, quickly closing the gaps.
It has integrated container management, ability to leverage the k8s
distributed architecture/infrastructure and to leverage k8s assets like
exclusive CPU placements.
It currently lacks live-snapshot, live-storage-migration, memory overcommit
management, passthrough of a very specific PCI device on a particular
host, VM portal (OKD UI is more similar to Admin portal), thin-provisioning
(of VMs on top of templates), hot (un)plug (disk/memory/NIC), high
availability with VM leases, incremental backup, VDI features like template
versions, sealing (virt-sysprep).

So OKD is not a feature-complete replacement for oVirt yet.

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FJLPNFV6ITEB3ZDARLIM5MBY4EJU6TY2/


[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Strahil Nikolov via Users
Openshift/Kubernetes virtualization is not as feature-rich as oVirt (based on 
what I read).
Best Regards,
Strahil Nikolov
 
 
  On Sun, Feb 20, 2022 at 23:49, Nathanaël Blanchet wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LINUBUPE5CSTT5BHNCONVBODEDVZ627X/


[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Nathanaël Blanchet
Hello, Is okd/openshift virtualization designed to be a full replacement of 
ovirt/redhat by embedding the same level of advanced

Il giorno dom 6 feb 2022 alle ore 14:06 Wesley Stewart 
ha scritto:

> Has anyone tried the open shift upstream old?  Looks like they support
> virtualization now.  Which I'm guessing is the upstream for openshift
> virtualization?
>
> https://docs.okd.io/latest/virt/about-virt.html
>

I gave a presentation about it 2 days ago at FOSDEM:
https://fosdem.org/2022/schedule/event/vai_intro_okd/
but looks like recordings are not yet available at
https://video.fosdem.org/2022/
Slides are here:
https://fosdem.org/2022/schedule/event/vai_intro_okd/attachments/slides/4843/export/events/attachments/vai_intro_okd/slides/4843/OKD_Virtualization_Community.pdf




>
>
>
> On Sat, Feb 5, 2022, 10:34 PM Alex McWhirter  wrote:
>
>> Oh i have spent years looking.
>>
>> ProxMox is probably the closest option, but has no multi-clustering
>> support. The clusters are more or less isolated from each other, and
>> would need another layer if you needed the ability to migrate between
>> them.
>>
>> XCP-ng, cool. No spice support. No UI for managing clustered storage
>> that is open source.
>>
>> Harvester, probably the closest / newest contender. Needs a lot more
>> attention / work.
>>
>> OpenNebula, more like a DIY AWS than anything else, but was functional
>> last i played with it.
>>
>>
>>
>> Has anyone actually played with OpenShift virtualization (replaces RHV)?
>> Wonder if OKD supports it with a similar model?
>>
>> On 2022-02-05 07:40, Thomas Hoberg wrote:
>> > There is unfortunately no formal announcement on the fate of oVirt,
>> > but with RHGS and RHV having a known end-of-life, oVirt may well shut
>> > down in Q2.
>> >
>> > So it's time to hunt for an alternative for those of us to came to
>> > oVirt because they had already rejected vSAN or Nutanix.
>> >
>> > Let's post what we find here in this thread.


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JSNPCQPTN32WIFEDI3LR6X63ELE57GMM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QZKA2VEMG6B3MOJL2RV6RS5W6XSHNQDB/


[ovirt-users] Re: oVirt alternatives

2022-02-20 Thread Wesley Stewart
Thanks, I'll check them out.

On Mon, Feb 7, 2022, 3:56 AM Sandro Bonazzola  wrote:

>
>
> Il giorno dom 6 feb 2022 alle ore 14:06 Wesley Stewart <
> wstewa...@gmail.com> ha scritto:
>
>> Has anyone tried the open shift upstream old?  Looks like they support
>> virtualization now.  Which I'm guessing is the upstream for openshift
>> virtualization?
>>
>> https://docs.okd.io/latest/virt/about-virt.html
>>
>
> I gave a presentation about it 2 days ago at FOSDEM:
> https://fosdem.org/2022/schedule/event/vai_intro_okd/
> but looks like recordings are not yet available at
> https://video.fosdem.org/2022/
> Slides are here:
> https://fosdem.org/2022/schedule/event/vai_intro_okd/attachments/slides/4843/export/events/attachments/vai_intro_okd/slides/4843/OKD_Virtualization_Community.pdf
>
>
>
>
>>
>>
>>
>> On Sat, Feb 5, 2022, 10:34 PM Alex McWhirter  wrote:
>>
>>> Oh i have spent years looking.
>>>
>>> ProxMox is probably the closest option, but has no multi-clustering
>>> support. The clusters are more or less isolated from each other, and
>>> would need another layer if you needed the ability to migrate between
>>> them.
>>>
>>> XCP-ng, cool. No spice support. No UI for managing clustered storage
>>> that is open source.
>>>
>>> Harvester, probably the closest / newest contender. Needs a lot more
>>> attention / work.
>>>
>>> OpenNebula, more like a DIY AWS than anything else, but was functional
>>> last i played with it.
>>>
>>>
>>>
>>> Has anyone actually played with OpenShift virtualization (replaces RHV)?
>>> Wonder if OKD supports it with a similar model?
>>>
>>> On 2022-02-05 07:40, Thomas Hoberg wrote:
>>> > There is unfortunately no formal announcement on the fate of oVirt,
>>> > but with RHGS and RHV having a known end-of-life, oVirt may well shut
>>> > down in Q2.
>>> >
>>> > So it's time to hunt for an alternative for those of us to came to
>>> > oVirt because they had already rejected vSAN or Nutanix.
>>> >
>>> > Let's post what we find here in this thread.
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KAJFH5BOH6PZ3JMZI7UE2VSLQPMROTFQ/


[ovirt-users] Re: oVirt alternatives

2022-02-07 Thread Sandro Bonazzola
Il giorno dom 6 feb 2022 alle ore 14:06 Wesley Stewart 
ha scritto:

> Has anyone tried the open shift upstream old?  Looks like they support
> virtualization now.  Which I'm guessing is the upstream for openshift
> virtualization?
>
> https://docs.okd.io/latest/virt/about-virt.html
>

I gave a presentation about it 2 days ago at FOSDEM:
https://fosdem.org/2022/schedule/event/vai_intro_okd/
but looks like recordings are not yet available at
https://video.fosdem.org/2022/
Slides are here:
https://fosdem.org/2022/schedule/event/vai_intro_okd/attachments/slides/4843/export/events/attachments/vai_intro_okd/slides/4843/OKD_Virtualization_Community.pdf




>
>
>
> On Sat, Feb 5, 2022, 10:34 PM Alex McWhirter  wrote:
>
>> Oh i have spent years looking.
>>
>> ProxMox is probably the closest option, but has no multi-clustering
>> support. The clusters are more or less isolated from each other, and
>> would need another layer if you needed the ability to migrate between
>> them.
>>
>> XCP-ng, cool. No spice support. No UI for managing clustered storage
>> that is open source.
>>
>> Harvester, probably the closest / newest contender. Needs a lot more
>> attention / work.
>>
>> OpenNebula, more like a DIY AWS than anything else, but was functional
>> last i played with it.
>>
>>
>>
>> Has anyone actually played with OpenShift virtualization (replaces RHV)?
>> Wonder if OKD supports it with a similar model?
>>
>> On 2022-02-05 07:40, Thomas Hoberg wrote:
>> > There is unfortunately no formal announcement on the fate of oVirt,
>> > but with RHGS and RHV having a known end-of-life, oVirt may well shut
>> > down in Q2.
>> >
>> > So it's time to hunt for an alternative for those of us to came to
>> > oVirt because they had already rejected vSAN or Nutanix.
>> >
>> > Let's post what we find here in this thread.


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JSNPCQPTN32WIFEDI3LR6X63ELE57GMM/


[ovirt-users] Re: oVirt alternatives

2022-02-07 Thread Sandro Bonazzola
Il giorno sab 5 feb 2022 alle ore 13:42 Thomas Hoberg 
ha scritto:

> There is unfortunately no formal announcement on the fate of oVirt, but
> with RHGS and RHV having a known end-of-life, oVirt may well shut down in
> Q2.
>

I believe the fate of oVirt has been discussed already several times but
anyway, no, oVirt is not going to shut down.


> So it's time to hunt for an alternative for those of us to came to oVirt
> because they had already rejected vSAN or Nutanix.
>
> Let's post what we find here in this thread.
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FXV5KERO3SRAOV2BZIC7SKOMJBTGI52N/


[ovirt-users] Re: oVirt alternatives

2022-02-06 Thread Wesley Stewart
Good insight.

I use containers a lot, but the virtualization sections are surely
separate?  I might have to try out a node and see how it goes.

On Sun, Feb 6, 2022, 12:25 PM Strahil Nikolov  wrote:

> I've set up a test cluster with Kadalu (which is actually containerized
> GlusterFS) for storage.
>
> To be honest, it is far more complex. I remember my first day with oVirt
> -> the UI doesn't need any explanation, while working with OKD requires at
> least some knowledge of the k8s world.
>
> Best Regards,
> Strahil Nikolov
>
> On Sun, Feb 6, 2022 at 16:46, Wesley Stewart
>  wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RZHTSOE636IIRPWRVO7WKNWP4QLPU4AG/


[ovirt-users] Re: oVirt alternatives

2022-02-06 Thread Strahil Nikolov via Users
I've set up a test cluster with Kadalu (which is actually containerized 
GlusterFS) for storage.
To be honest, it is far more complex. I remember my first day with oVirt -> the 
UI doesn't need any explanation, while working with OKD requires at least some 
knowledge of the k8s world.
Best Regards,
Strahil Nikolov
 
 
  On Sun, Feb 6, 2022 at 16:46, Wesley Stewart wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSKWJSX7UGIB5KJCEKMNZMQIB27ZL542/


[ovirt-users] Re: oVirt alternatives

2022-02-06 Thread Wesley Stewart
Sorry, autocorrect on a cellphone.

Has anyone tried the openshift upstream okd?*

On Sun, Feb 6, 2022, 8:04 AM Wesley Stewart  wrote:

> Has anyone tried the open shift upstream old?  Looks like they support
> virtualization now.  Which I'm guessing is the upstream for openshift
> virtualization?
>
> https://docs.okd.io/latest/virt/about-virt.html
>
>
>
> On Sat, Feb 5, 2022, 10:34 PM Alex McWhirter  wrote:
>
>> Oh i have spent years looking.
>>
>> ProxMox is probably the closest option, but has no multi-clustering
>> support. The clusters are more or less isolated from each other, and
>> would need another layer if you needed the ability to migrate between
>> them.
>>
>> XCP-ng, cool. No spice support. No UI for managing clustered storage
>> that is open source.
>>
>> Harvester, probably the closest / newest contender. Needs a lot more
>> attention / work.
>>
>> OpenNebula, more like a DIY AWS than anything else, but was functional
>> last i played with it.
>>
>>
>>
>> Has anyone actually played with OpenShift virtualization (replaces RHV)?
>> Wonder if OKD supports it with a similar model?
>>
>> On 2022-02-05 07:40, Thomas Hoberg wrote:
>> > There is unfortunately no formal announcement on the fate of oVirt,
>> > but with RHGS and RHV having a known end-of-life, oVirt may well shut
>> > down in Q2.
>> >
>> > So it's time to hunt for an alternative for those of us to came to
>> > oVirt because they had already rejected vSAN or Nutanix.
>> >
>> > Let's post what we find here in this thread.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GD3MQT6L6G5XO3JRVRKBUTUSCMWJ4HUO/


[ovirt-users] Re: oVirt alternatives

2022-02-06 Thread Wesley Stewart
Has anyone tried the open shift upstream old?  Looks like they support
virtualization now.  Which I'm guessing is the upstream for openshift
virtualization?

https://docs.okd.io/latest/virt/about-virt.html



On Sat, Feb 5, 2022, 10:34 PM Alex McWhirter  wrote:

> Oh i have spent years looking.
>
> ProxMox is probably the closest option, but has no multi-clustering
> support. The clusters are more or less isolated from each other, and
> would need another layer if you needed the ability to migrate between
> them.
>
> XCP-ng, cool. No spice support. No UI for managing clustered storage
> that is open source.
>
> Harvester, probably the closest / newest contender. Needs a lot more
> attention / work.
>
> OpenNebula, more like a DIY AWS than anything else, but was functional
> last i played with it.
>
>
>
> Has anyone actually played with OpenShift virtualization (replaces RHV)?
> Wonder if OKD supports it with a similar model?
>
> On 2022-02-05 07:40, Thomas Hoberg wrote:
> > There is unfortunately no formal announcement on the fate of oVirt,
> > but with RHGS and RHV having a known end-of-life, oVirt may well shut
> > down in Q2.
> >
> > So it's time to hunt for an alternative for those of us to came to
> > oVirt because they had already rejected vSAN or Nutanix.
> >
> > Let's post what we find here in this thread.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6JTGNYABYPZHHZ3F5Y75KF3KYDWV5OC/


[ovirt-users] Re: oVirt alternatives

2022-02-06 Thread Florian Schmid via Users
Hi

We have been using LXC, before we went to oVirt 4.0. We started with 3.6, but 
production use was 4.0.

The problems we had with LXC were the missing live-migration feature and, the 
biggest issue, the shared storage on NFS.
With hundreds of millions of files on the NFS storage, performance got worse 
and worse with each new container, and we had hundreds of them.

In my former company, we were using OpenVZ, but I thought it was dead after 
Red Hat 7 was released.

Now, with oVirt 4.3, we are really satisfied with it. We have several NFS 
storage domains and nearly 1000 VMs running on them.

I have experience with Proxmox, but the big problem we would have with it is 
that we would have to split our clusters and manage each of them separately.
Also, one of our clusters has nearly 40 hosts, which might also be problematic 
with their management approach and corosync.
At least in older versions, before the latest corosync, I think 30 hosts was 
the maximum supported and tested in one cluster.
I also want to use my network for important data and not completely for 
corosync traffic...

We also had a look at OpenNebula before we went to oVirt, but I wasn't 
satisfied with it at the time.

We are still on oVirt 4.3 because of the issues with the CentOS 8 EOL, and we 
don't want to use CentOS Stream for our production.
I was also never happy with oVirt's new "rolling release" model with 
data-center versions 4.5, 4.6 and whatever is there now.

In the past, when we were on 4.2, we upgraded directly to 4.3.8 and it was 
perfectly stable.
Now I really don't know if it is stable enough, because of the new features 
added with every new data-center version.

The good thing about oVirt 4.3 is that you can use CentOS 7.9, which is the 
last 7.x release. With oVirt 4.4, by the time it is deprecated we may be at 8.7 
or so, but you may not be able to upgrade to a later version.


Proxmox, I think, has an HCI option with Ceph.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/236A7PNIUHTOJ6CRRAYNXMHYOUE4JW7M/


[ovirt-users] Re: oVirt alternatives

2022-02-06 Thread Thomas Hoberg
> Oh i have spent years looking.
> 
> ProxMox is probably the closest option, but has no multi-clustering 
> support. The clusters are more or less isolated from each other, and 
> would need another layer if you needed the ability to migrate between 
> them.
Also been looking at ProxMox for ages. We were using OpenVZ in compliance-heavy 
production, so a nice GUI and a VM option seemed ideal. But 
SWsoft/Parallels/Virtuozzo and ProxMox were competing rather than collaborating, 
diverging with distinct hypervisors and IaaS containers, which seems a silly 
tribal squabble in the face of today's cloud invasion. Never thought LXC might do 
better than OpenVZ (or that I might return to Xen from KVM). Red Hat fought both 
IaaS containers with nothing but VMs and only got saved by (PaaS) Docker, which 
they then tried to smother with podman and Kubernetes.

But Proxmox is not HCI, or only via DIY.
> 
> XCP-ng, cool. No spice support. No UI for managing clustered storage 
> that is open source.
True, and the most attractive option seems to be a paid upgrade (and not ready yet).
I don't think I'd miss SPICE, or that it has much of a future.
But no HCI product has ever deployed so quickly and easily, including vSphere as 
the supposed market leader.
> 
> Harvester, probably the closest / newest contender. Needs a lot more 
> attention / work.
It looks great, thanks for the tip. No idea if they'll survive the next three 
months.
I am adding the link here, because that name needs to be searched for just right ;-) 
https://harvesterhci.io/
> 
> OpenNebula, more like a DIY AWS than anything else, but was functional 
> last i played with it.
Lots of material, but is it actually open source? And do they have a free tier? 
They seem to have everything... which makes me more suspicious than happy these 
days.
> 
> 
> Has anyone actually played with OpenShift virtualization (replaces RHV)? 
> Wonder if OKD supports it with a similar model?
The oVirt team had hinted at an integrated container-VM solution when I started 
with oVirt. Obviously the pods and etcd daemons need to run somewhere, and VMs or 
IaaS containers like OpenVZ or LXC would be a good start. That never 
materialized, and evidently they chose to abandon IaaS and HCI completely.

But there are plenty of workloads still out there that are more comfortable 
with an IaaS abstraction and more concerned with scale-in than scale-out, or 
which live at the real edge: in the field, on tracks or roads, or in the middle 
of an ocean.

I don't quite see how OpenShift replaces RHV, especially not at the [real] 
edge. The software industry may be transitioning towards cloudy application 
models, but it's not quite all there yet.

My impression is that Red Hat is very carefully avoiding a CentOS repeat for 
every other open source project they do. Their upstream variants, OKD and 
KubeVirt, seem far more beta than oVirt ever was.

"No Free Lunch" seems to have been chiseled into the three letters of their new 
owners for almost a century now.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FUUDMTAAHS3IQGRDWE77SOFC2GLBKJGG/


[ovirt-users] Re: oVirt alternatives

2022-02-06 Thread Nathaniel Roach via Users
Just a point of clarification here - Hyper-V as a standalone ISO is EOL 
- the technology and services will continue in the main distribution.


On 5/2/22 21:03, marcel d'heureuse wrote:

Moin,

We will take a look into proxmox.
Hyper-V is also EOL, if Server 2022 is the standard.


Br
Marcel

On 5 February 2022 at 13:40:30 CET, Thomas Hoberg wrote:


There is unfortunately no formal announcement on the fate of oVirt, but 
with RHGS and RHV having a known end-of-life, oVirt may well shut down in Q2.

So it's time to hunt for an alternative, for those of us who came to oVirt 
because we had already rejected vSAN or Nutanix.

Let's post what we find here in this thread.


--

*Nathaniel Roach*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/INI6QOAQQOVKY66F5GL5A3V7R52KECAK/


[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Thomas Hoberg
> I wonder if Oracle would not be interested in keeping oVirt alive. It would
> really be too bad if oVirt were discontinued.
> 
> https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-man...
> 
> 
> On Sat, 5 Feb 2022 at 09:43, Thomas Hoberg wrote:

I've been looking there, once I discovered they had axed their own original 
product (which used to be the only hypervisor officially sanctioned to get CPU 
partitioning good enough for core-based licensing of their SQL servers) and 
gone with oVirt for their commercial offering.

But the most telling evidence is that they never made the transition to 
4.4, almost two years after 4.3 support stopped at oVirt. The only news in 
ages is a couple of videos from November: they are up the creek without a paddle, 
too, and that's the only aspect of this EOL that I find slightly amusing.

oVirt is currently made up of so many components over which Redhat has 
exclusive control that anyone who isn't Redhat's special friend would be crazy 
to take it on, because they couldn't keep things coordinated enough to create a 
product.

Actually, my personal impression is that this is what killed oVirt, especially 
the HCI variant, even inside Redhat.

I've just set up three XCP-ng nodes using nested virtualization on one of my 
home-lab workstations, and I've been dumbstruck by just how fast and painless it 
went. I then added a Xen Orchestra appliance (the equivalent of the management 
engine), which again dumbfounded me with just how easy and quick it was (a 
single command grabs the appliance off the Internet and installs it on the 
node).
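For the record, the "single command" referred to above is, as far as I know, Xen Orchestra's quick-deploy script; the URL and flow below are what the XCP-ng docs describe (verify against them before piping anything into a shell), and the wrapper around it is just my own dry-run guard:

```shell
#!/usr/bin/env bash
# Xen Orchestra (XOA) quick deploy, wrapped as a dry run by default.
# Set RUN_DEPLOY=1 on a machine that can reach your XCP-ng pool master
# to actually fetch and execute the script (inspect it first!).
XOA_DEPLOY_URL="https://xoa.io/deploy"
DEPLOY_CMD="bash -c \"\$(wget -qO- $XOA_DEPLOY_URL)\""

if [ "${RUN_DEPLOY:-0}" = "1" ]; then
  eval "$DEPLOY_CMD"   # imports and boots the XOA management VM on the host
else
  echo "dry run: $DEPLOY_CMD"
fi
```

The script walks you through importing the XOA VM onto the pool; after that the web UI is reachable on the appliance's IP.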

Of course, then came the inevitable: the nagware! Every other button in the UI 
is nothing but a hint to upgrade to one of the many paid variants. At least one 
of those hidden-away buttons actually allows you to upgrade the (freshly 
downloaded..?) nodes, so perhaps the free variant is minimally usable.

The XOSAN HCI variant at €6000/year is definitely out of my home-lab range, but 
might compare favorably to RHV and certainly vSphere or Nutanix. I don't think 
they are quite in the same league yet, but I'll keep checking as much as I can.

It's rather ironic that XCP-ng does support Gluster for HCI...

One of the things I loved about oVirt was that you could follow what's going 
on. In parts of our business we have SLAs where outside help will always come 
too late. What I didn't like was that you had to dig deep far too often, and 
that it was full of bugs in the setup phase, which made me doubt its 
operational reliability.

In retrospect I have to concede that it didn't do so badly there, even if I had 
plenty of scares, which is why I still have 4.3 running in the corporate labs: 
until EOL of CentOS 7 do us part.


[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Sketch

On Sat, 5 Feb 2022, Alex McWhirter wrote:

> Proxmox is probably the closest option, but has no multi-cluster support. 
> The clusters are more or less isolated from each other, and would need 
> another layer if you needed the ability to migrate between them.


It's also Debian-based, so if you're an EL shop, it may not play well with 
a lot of your existing infrastructure.  This was the main reason we went 
with oVirt in the first place.


> OpenNebula, more like a DIY AWS than anything else, but was functional last I 
> played with it.


I never tried it, but it sounds more like a lighter weight version of 
OpenStack.  I guess that's sort of the same thing...


> Has anyone actually played with OpenShift virtualization (replaces RHV)? 
> Wonder if OKD supports it with a similar model?


Not yet, but it looks like it's a plugin for OKD.

https://docs.okd.io/latest/virt/about-virt.html

One other possible replacement, if you don't use the more advanced 
capabilities of oVirt, which some may not think of, is Foreman. It 
allows you to provision VMs on libvirt, and post-provisioning it gives you 
the ability to launch a graphical console on them from a single web-based 
management interface. You will need to log directly into the hosts and use 
virsh for things like migration or adjusting VM parameters, so it's not 
a complete replacement, but it may work for those comfortable with 
CLI-based management tools.
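Since migration and parameter changes keep coming up as the gap, here is a rough sketch of what those virsh one-liners look like; the hostnames and VM name are made-up placeholders, and it assumes SSH trust between the KVM hosts. Nothing below talks to a real hypervisor, it only composes the command:

```shell
#!/usr/bin/env bash
# Build the virsh live-migration command for a guest between two libvirt hosts.
migrate_cmd() {
  local vm="$1" dest="$2"
  # --live keeps the guest running during the move; --persistent defines it
  # on the target host; --undefinesource drops the old definition afterwards.
  printf 'virsh migrate --live --persistent --undefinesource %s qemu+ssh://%s/system' \
    "$vm" "$dest"
}

# Show the command you would run on the source host:
migrate_cmd "webvm01" "kvmhost02.example.com"; echo

# Parameter changes are similarly plain virsh calls, e.g.:
#   virsh setvcpus webvm01 4 --config --maximum
#   virsh setmem webvm01 8192M --config
```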



[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Alex McWhirter

Oh, I have spent years looking.

Proxmox is probably the closest option, but has no multi-cluster 
support. The clusters are more or less isolated from each other, and you 
would need another layer if you wanted the ability to migrate between 
them.
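For what it's worth, that "another layer" for moving guests between isolated Proxmox clusters is often just shared backup storage plus vzdump/qmrestore. A sketch under those assumptions (the VM ID, mount path, and storage name are all illustrative; the commands are composed here, not executed):

```shell
#!/usr/bin/env bash
# Cross-cluster move via backup/restore over a mount both clusters can reach.
VMID=100
BACKUP_DIR=/mnt/shared-backups

# Run on a node in the source cluster (snapshot mode keeps the VM running):
DUMP_CMD="vzdump $VMID --mode snapshot --dumpdir $BACKUP_DIR"

# Run on a node in the target cluster (pick a free VM ID and storage there):
RESTORE_CMD="qmrestore $BACKUP_DIR/vzdump-qemu-$VMID-*.vma.zst $VMID --storage local-lvm"

echo "source cluster: $DUMP_CMD"
echo "target cluster: $RESTORE_CMD"
```

It's cold migration only, of course, which is exactly the limitation being complained about.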


XCP-ng, cool. No spice support. No UI for managing clustered storage 
that is open source.


Harvester, probably the closest / newest contender. Needs a lot more 
attention / work.


OpenNebula, more like a DIY AWS than anything else, but was functional 
last I played with it.




Has anyone actually played with OpenShift virtualization (replaces RHV)? 
Wonder if OKD supports it with a similar model?


On 2022-02-05 07:40, Thomas Hoberg wrote:

There is unfortunately no formal announcement on the fate of oVirt,
but with RHGS and RHV having a known end-of-life, oVirt may well shut
down in Q2.

So it's time to hunt for an alternative, for those of us who came to
oVirt because we had already rejected vSAN or Nutanix.

Let's post what we find here in this thread.


[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Maxlen Santos
I wonder if Oracle would not be interested in keeping oVirt alive. It would
really be too bad if oVirt were discontinued.

https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-manager/


On Sat, 5 Feb 2022 at 09:43, Thomas Hoberg wrote:

> There is unfortunately no formal announcement on the fate of oVirt, but
> with RHGS and RHV having a known end-of-life, oVirt may well shut down in
> Q2.
>
> So it's time to hunt for an alternative, for those of us who came to oVirt
> because we had already rejected vSAN or Nutanix.
>
> Let's post what we find here in this thread.


[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread marcel d'heureuse
Moin,

We will take a look into Proxmox. 
Hyper-V is also EOL, if Server 2022 is the standard.


Br
Marcel

On 5 February 2022 at 13:40:30 CET, Thomas Hoberg wrote:
>There is unfortunately no formal announcement on the fate of oVirt, but with 
>RHGS and RHV having a known end-of-life, oVirt may well shut down in Q2.
>
>So it's time to hunt for an alternative, for those of us who came to oVirt 
>because we had already rejected vSAN or Nutanix.
>
>Let's post what we find here in this thread.


[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Thomas Hoberg
Xen came before KVM, but ultimately Redhat played a heavy hand to swing much of 
the market its way; with Citrix behind it, Xen managed to survive (so far).

XCP-ng is a recent open source spin-off which attempts to gather a larger 
community.
Their XOSAN storage aims to deliver an HCI solution somewhat like Gluster, but 
since it is designed solely as a companion to XCP-ng, it may be easier to 
maintain and faster. v2 is in the works and is not backward compatible with v1, 
so you may want to start with other alternatives, of which there are a few.

https://xcp-ng.org/docs/

https://xen-orchestra.com/#!/xo-home