[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Alex McWhirter
I would personally put CloudStack in the same category as OpenStack. 
Really the only difference is monolithic vs. microservices. Both scale 
quite well regardless, just different designs. People can weigh the pros 
and cons of either to figure out what suits them. CloudStack is 
certainly an easier thing to wrap your head around initially.
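
To give a feel for why it's easier to wrap your head around: everything 
in CloudStack goes through one monolithic API endpoint with key-signed 
requests. A rough sketch in Python below, using CloudStack's documented 
HMAC-SHA1 request signing; the endpoint URL and keys are placeholders, 
not anything real.

import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

ENDPOINT = "https://cloudstack.example.com/client/api"  # placeholder
API_KEY = "api-key-here"        # placeholder
SECRET_KEY = "secret-key-here"  # placeholder

def signed_url(command, **params):
    params.update({"command": command, "apikey": API_KEY,
                   "response": "json"})
    # CloudStack signs the alphabetically sorted, URL-encoded query
    # string, lowercased, with HMAC-SHA1 over the secret key.
    query = "&".join(f"{k}={urllib.parse.quote(str(v), safe='')}"
                     for k, v in sorted(params.items()))
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    sig = urllib.parse.quote(base64.b64encode(digest), safe="")
    return f"{ENDPOINT}?{query}&signature={sig}"

with urllib.request.urlopen(signed_url("listVirtualMachines")) as resp:
    print(resp.read().decode())

OpenStack, by contrast, splits the same surface across separate services 
(Keystone, Nova, Neutron, and so on), each with its own endpoint; that 
is the monolithic vs. microservices trade-off in practice.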


There is also OpenNebula, similar to CloudStack in design. However, 
their upgrade tool is closed source, so I can't recommend them as much 
as I would like to, as I think they have a decent product otherwise.


Proxmox currently doesn't work as an oVirt replacement for us, as it 
cannot scale beyond a single cluster and has limited multi-tenancy 
support. I hear they have plans for multi-cluster support at some 
point, but currently I regard it as more akin to ESXi without vSphere, 
whereas I would compare oVirt closer to ESXi with vSphere.


I think XCP-ng has a bright future at the rate they are going; it just 
doesn't currently offer all the features of oVirt.


OKD (OpenShift) virtualization is more or less just KubeVirt. If all you 
need VMs for is a few server instances, it works well enough. Feature-wise 
it's not comparable to everything above, but if you can live with your 
VMs being treated more or less like containers workflow- and 
lifecycle-wise, and don't need much more than that, it probably gets the 
job done.
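
To make "treated like containers" concrete: in KubeVirt a VM is just 
another Kubernetes custom resource you create through the API, the same 
way you would a Deployment. A rough sketch, assuming the kubernetes 
Python client and a cluster with KubeVirt installed; the VM name and 
disk image are illustrative only.

from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},  # illustrative name
    "spec": {
        "running": True,
        "template": {"spec": {
            "domain": {
                "devices": {"disks": [
                    {"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                "resources": {"requests": {"memory": "1Gi"}},
            },
            # containerDisk: the VM image ships as a container image
            "volumes": [{"name": "rootdisk", "containerDisk": {
                "image": "quay.io/containerdisks/fedora:latest"}}],
        }},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm)

From there the VM's lifecycle is driven the Kubernetes way (toggle the 
spec to start or stop, delete the resource to remove the VM), which is 
exactly the workflow trade-off I mean.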


Harvester is another consideration if you think the OKD solution will 
work but don't like the complexity. It's based on K3s, provides 
KubeVirt, and is a much simpler install. I think they even provide an 
ISO installer.



The reality is, there is no great oVirt replacement so to speak. If you 
only needed oVirt to manage a single cluster, Proxmox probably fits the 
bill. XCP-ng as well, if you are willing to do some legwork.


Otherwise, the closest FOSS thing I have found (having spent over a year 
evaluating) is CloudStack. I am still hopeful someone will step up to 
maintain oVirt (Oracle?), but it's clear to me at this point it will 
need to be rebased onto Fedora or something else to keep its feature set 
fully alive.



[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Volenbovskyi, Konstantin via Users
Hi,
We switched from Gluster to NFS provided by a SAN array: maybe it was a 
matter of a combination of factors (configuration/version/whatever), but 
it was unstable for us.
SPICE/QXL in RHEL 9: yeah, I understand that for some people it is 
important (I saw that someone is maintaining forks and such).

I think that oVirt 4.5 (nightly builds) might be OK for some time, but I 
think that the alternatives are:
-OpenStack for larger setups (but be careful with the distribution: as I 
remember, Red Hat is abandoning TripleO and introducing OpenShift-based 
tooling for the installation of OpenStack)
-Proxmox and CloudStack for all sizes
-Maybe XCP-ng + (paid?) Xen Orchestra, but I trust KVM/QEMU more than Xen
OpenShift Virtualization/OKD Virtualization: I don't know...
It might actually be good if someone specifically commented on going 
from oVirt to OpenShift Virtualization/OKD Virtualization.

Not sure if the statement below, from 
https://news.ycombinator.com/item?id=32832999, is still correct, and 
what exactly the consequences are of 'OpenShift Virtualization is just 
to give a path/time to migrate to containers':
"The whole purpose behind OpenShift Virtualization is to aid in 
organization modernization as a way to consolidate workloads onto a 
single platform while giving app dev time to migrate their work to 
containers and microservice based deployments."



BR,
Konstantin


[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Alex McWhirter
We still have a few oVirt and RHV installs kicking around, but between 
this and some core features we use being removed from EL8/9 (Gluster, 
SPICE/QXL, and probably others soon at this rate), we've been shifting 
heavily away from both Red Hat and oVirt. Not to mention the recent 
drama...


In the past we toyed with the idea of helping maintain oVirt, but with 
the list of things we'd need to support growing beyond oVirt and into 
other bits as well, we aren't equipped to fight on multiple fronts, so 
to speak.


For the moment we've found a home with SUSE / Apache CloudStack, and 
when EL7 reaches EOL, that's likely going to be our entire stack moving 
forward.




[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread eshwayri
I am beginning to have very similar thoughts. It's working fine for me 
now, but at some point something big is going to break. I already have 
VMware running, and in fact, my two ESXi nodes have the exact same 
hardware as my two KVM nodes. It would be simple to do, but I really 
don't want to go just yet. At the same time, I don't want to be the last 
person turning off the lights. Difficult times.