Stephan Wenderlich <[email protected]> wrote:

> Proxmox is on its own. *Based* on Debian 10 with a customized Ubuntu
> Kernel.
>
Don't you mean Proxmox VE?!

> Own repositories, own ISO and own developer team. Yes, you can install it
> on top of a default Debian, but this is not the recommended way.
>
That's non-VE then.  That's like oVirt, and one can use many different
distros for oVirt.  Again, oVirt != libvirt but oVirt = { libvirt, vdsm,
and dozens of other components }.  I see some are even used by Proxmox.

I also see Proxmox avoids talking about and comparing Proxmox VE to Red Hat
[Enterprise] Virtualization (RH[E]V) in its documentation.  RHV-H is built
from minimal RHEL images, while non-H (akin to ESX, not ESXi) can be
installed on top of an existing RHEL install (ESX really doesn't exist any
more, basically since they stopped using RHEL5).  The RHV-H keys are
distributed with OEMs (e.g., Dell, HP, Lenovo, and tier-2s that use
SuperMicro, etc.), and the Self-Hosted Engine (RHV-M[anager] ~ vSphere)
is set up just the first time, and is completely fault-tolerant from then on
(as long as at least one RHV-H host is up).

Yes, I can go to Dell, HP and Lenovo, or even tier-2s that use SuperMicro,
and get servers with RHV-H keys or even RHV firmware built-in.  I just plug
them into my network and use them.  I can use oVirt instead as well, and
save money.  One does *not* buy 'RHEL' for virtualization.  Rather, one buys
RHEL entitlements with a virtualization entitlement for a host, and that
includes RHV, *not* RHEL.  Although one can run RHEL on VMware if so
entitled, and not use RHV.

This is what really frustrates me on this list, especially when people say
what Red Hat does ... and it's just "base" CentOS/RHEL.  That's _not_ what
Red Hat 'sells' for distributed virtualization or containers.  Red Hat
doesn't even support Docker on RHEL now, and Podman is not for distributed
deployments.

What also really frustrates me is that we are talking _products_ instead of
FLOSS technologies.  We need to talk about FLOSS technologies and
implementations, about _all_ solutions, not 'brand names' of products.  And
we need to really avoid proprietary software, even if free (as in beer).
But we _should_ talk about the FLOSS components that support them.

At least that's my view and understanding.  I can be completely wrong.

> I know of companies who use multiple Proxmox clusters and hundreds of
> VMs, some in conjunction with LXC.
>

But not KVM.  I've noted a lot of things about that, especially HA.  It's
one of my frustrations with OpenStack as well, which uses the same
corosync-pacemaker solution.  It's really designed for eight (8) or fewer
nodes.  I've been through that so many times, and know most of the core
developers at Red Hat, even a few at SuSE.
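For context, a corosync cluster is defined by a flat node list plus quorum
settings, and every node participates in the totem messaging ring; keeping
that membership small is part of where the eight-or-fewer guidance comes
from.  A minimal corosync.conf sketch (cluster name and addresses are
invented for illustration):

```
totem {
    version: 2
    cluster_name: example-cluster
    transport: knet
}

nodelist {
    node {
        name: node1
        nodeid: 1
        ring0_addr: 192.0.2.11
    }
    node {
        name: node2
        nodeid: 2
        ring0_addr: 192.0.2.12
    }
    node {
        name: node3
        nodeid: 3
        ring0_addr: 192.0.2.13
    }
}

quorum {
    provider: corosync_votequorum
}
```

Each entry in nodelist is a full member of the ring and a quorum vote, so
the membership and failure-detection traffic grows with every node you add.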


> And aside that my ex employer and my current employer do use proxmox and
> we are not a small mom and pop shop :-)
>

And Origin (OpenShift) has a lot of users too.  A big reason is that it
existed before Docker, leveraging kernel cgroups and namespaces early on
(along with what would become SELinux MCS).  This was late last decade.  ;)


> It's not free when you need support; only the unsupported version is free
> (but with the same features).
>

But how much is 'free (as in beer)'?  We really need to _avoid_ that.
Again, if there are FLOSS components to Proxmox, like QEMU, KVM and other
components, as well as LXC, let's focus on those.  We should _not_ talk
about _proprietary_ products, even if they are free (as in beer).

> Perhaps you should tell your observation to the trainer in Saarbrücken, or
> to one of their huge customers (*coughtüvcough*).

>> It abstracts a lot of the internals
>> involved in setting up complex things like a Ceph cluster.
>
> Nope, the admin still needs to know Ceph in and out. Yes, you get help
> here and there, but in a big clustered environment you need skills, and
> especially you must understand the techniques used internally, or you
> can't maintain and troubleshoot. The GUI is simply an adjustable interface
> that offers a lot but not everything.
>

Just like oVirt hyperconverged and the concept of adding another node to
get more compute + storage.  At some point one wants to just manage one's
own Ceph, instead of using a basic Ceph RBD or GlusterFS or other 'easy to
set up' option.


> Just because you can use some assisted help does not mean that you can now
> hire a monkey. Call Dr. Weil, who invented Ceph, and tell him that you can
> just click some buttons ... or tell that to the CERN team in Switzerland.
>

Speaking of which, Red Hat bought Inktank in April 2014.

Professional experience: I know, because I was Red Hat's primary post-sales
architect trying to get GlusterFS to work reliably for OpenStack, for just
Glance storage, let alone Swift, and Ceph RBD was much better at ... well
... the 'B' (block) in RBD.  ;)  I spent 6 weeks with a Tier-1 PC OEM
server vendor in early 2014, and finally, I literally woke up on a Sunday
and was told they had finally done it.  I'm under NDA on the other details,
but I can share that much.

> Not even close. What are ESXi or Hyper-V for you? They all offer some
> kind of GUI and management tools, including an absurd price tag.
>
> The major difference of Proxmox to the RH world is its orchestration,
> called qemu-server, which they use instead of libvirt:
> https://github.com/proxmox/qemu-server
>
> The Proxmox Cluster File System (pmxcfs)
> <https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)> is
> built by them, which is basically a replication feature to maintain a
> cluster in conjunction with other tools like corosync.
>
> As Brian mentioned before, they use many standard tools to facilitate a
> whole virtualization package, and they also develop their own interfaces,
> wrappers and tools that are 100% Proxmox's own builds.
>
Yes.  That's what we need to focus on.  But not on Proxmox, Proxmox VE, et
al., if they are merely free (as in beer).
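To make the qemu-server vs. libvirt contrast concrete: Proxmox VE keeps
each VM definition as a flat key/value file under pmxcfs (at
/etc/pve/qemu-server/<vmid>.conf), where libvirt would express the same
information as a structured XML <domain> document.  An illustrative sketch
only; the path convention is real, but the values below are invented:

```
# /etc/pve/qemu-server/100.conf -- qemu-server style VM definition,
# replicated cluster-wide by pmxcfs
cores: 2
memory: 2048
name: testvm
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
scsi0: local-lvm:vm-100-disk-0,size=16G
```

That difference in definition format is part of why tooling built around
libvirt (virsh, virt-manager, oVirt) does not carry over to Proxmox VE, and
vice versa.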

> A higher standard than XEN classic, but maybe that is only my experience. I
> only hear "we use or want to use Proxmox instead of VMware" or "KVM but in
> combination with Proxmox".
>

Then you haven't been exposed to oVirt (RHV) at all.

Also, we need people to separate SLAT-based virtualization (whether
hardware- or software-assisted) from same-kernel isolation (containers).
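That distinction can be stated mechanically.  The toy Python sketch below
is my own illustration (not from any product): it classifies an isolation
mechanism by the features it relies on.  SLAT features such as Intel EPT or
AMD NPT imply separate guest kernels, while cgroups and namespaces imply
one shared kernel.

```python
def isolation_kind(features):
    """Classify an isolation mechanism by the features it relies on.

    A toy simplification: SLAT (second-level address translation)
    features mean full virtualization with separate guest kernels;
    cgroup/namespace primitives mean same-kernel containers.
    """
    slat = {"ept", "npt"}                 # Intel EPT / AMD NPT (RVI)
    same_kernel = {"cgroups", "namespaces"}
    f = set(features)
    if f & slat:
        return "SLAT virtualization"      # each guest runs its own kernel
    if f & same_kernel:
        return "containers"               # all guests share the host kernel
    return "unknown"

print(isolation_kind(["vmx", "ept"]))             # SLAT virtualization
print(isolation_kind(["cgroups", "namespaces"]))  # containers
```

KVM lands on the first branch; LXC, Podman and OpenShift's runtimes land on
the second, which is exactly why comparing them feature-for-feature is
misleading.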


> We drift away from the topic and, as Brian wrote, such topics become
> emotional and fanboyish without deep technical details.
> Questions like "what is the Cluster File System?"
>

That's now a separate exam.  ;)

- bjs
_______________________________________________
lpi-examdev mailing list
[email protected]
https://list.lpi.org/mailman/listinfo/lpi-examdev
