Stephan Wenderlich <[email protected]> wrote:
> XEN does live on, in the form of Citrix and xcp-ng. The latter is FLOSS as
> far as I know. XEN classic...well...RHEL dropped support when RHEL 6 was born ;-).
>
Slight nitpick: actually, it started with RHEL 5.4. The biggest issue was
that Red Hat stopped doing the deep upstream development needed to get Xen
Hypervisor Dom0 support into the mainline kernel. That forced a lot of
things. But that aside ...
> A good alternative for lightweight VMs in terms of PVM (not HVM) - meaning a
> shared kernel - is LXC/LXD, which has its own IP on the SAME subnet as the
> host, plus systemd, plus all major distros as small packages
>
Er, um, LXC is not a Paravirtualization Hypervisor. I think you're
crossing concepts.
Paravirtualization hypervisors do virtualization without requiring SLAT,
which was one of Xen's early advantages. But it's still virtualization,
still loading another kernel (DomU) on a host hypervisor (Dom0).
Linux Containers run on the same kernel, not a kernel atop a kernel, and
use kernel control groups (cgroups) and kernel namespaces, with optional
security like [SELinux] Multi-Category Security (MCS) or another option,
plus registries -- aka 'vertical application packaging' -- optional
orchestration, et al. to 'segment.'
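Since the distinction is kernel-level, you can see it directly in /proc. A
minimal sketch (standard library only, Linux-specific paths) that prints the
namespace IDs and cgroup membership a process is 'segmented' under:

```python
# Minimal sketch (Linux only): the kernel facilities containers are built
# from. No second kernel is involved -- just namespaces and cgroups that
# the host kernel already applies to every process.
import os

def namespace_ids(pid="self"):
    """Map namespace name -> identifier for a process (from /proc/<pid>/ns)."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

def cgroup_membership(pid="self"):
    """Return the cgroup hierarchy lines for a process (/proc/<pid>/cgroup)."""
    with open(f"/proc/{pid}/cgroup") as f:
        return f.read().splitlines()

if __name__ == "__main__":
    # Two processes in the same container report identical namespace IDs;
    # a containerized process differs from the host in pid/mnt/net -- but
    # everything resolves against the one shared kernel.
    for name, ident in namespace_ids().items():
        print(name, ident)
    for line in cgroup_membership():
        print(line)
```

Compare the output inside and outside a container: the pid/mnt/net entries
differ, while a DomU under Xen would be running an entirely separate kernel.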
Side Note: These concepts actually existed in Origin (OpenShift) before LXC
too, which made it easy for Red Hat to adopt Docker, Kubernetes, et al. over
time. That's why Origin (OpenShift) has a huge, commercial following,
especially security-wise.
Further Side Note: We also need to give credit that a lot of these
concepts are evolutions from advanced BSD chroot jails, AIX LPARs and other
solutions, including those on yet other UNIX flavors. I'm biased and
prefer solutions on GNU/Linux with SELinux MCS implementations for security
reasons.
> plus pre-built apps from TurnKey.
>
Yes, that's how registries work. The applications are 'vertically
packaged' so they can run atop a generic, minimal host, sharing the same
kernel but keeping a minimal base and attack surface.
Infrastructure-wise it's taking over from the classic 'pre-built VMs'
because instead of shipping a whole OS image in a VM, the core components
and setup are handled at the end site. Security-wise, there are pros and
cons: pros for the company's setup, cons if there isn't something like MCS
labeling or similar, as everyone has root on the same kernel.
This has *nothing* to do with Proxmox. People really need to 'step back'
and get out of the 'branding' aspect of technology, and focus on the
underlying 'technologies' that *all* solutions share. That's LPI's target.
Side Note: In the Red Hat world, that was Fedora/RHEL(CentOS) Atomic Host,
then CoreOS -- which is a bit more 'bloated' with RHEL8 CoreOS /
OpenShift4, but less than prior Atomic -- plus the VARs, ISVs and others
with 'registries.' Again, I'm biased preferring SELinux MCS enabled
solutions.
LPI Focus: From an LPI standpoint, we should limit our coverage to maybe
basic Proxmox, Docker, Kubernetes, oVirt and Origin, and even then, it's
really more libvirtd (the daemon, not libvirt 'end-user' tools like
'virt-manager') for VMs, and LXC, Docker, Podman, et al.
Otherwise we're going to get into a lot of the 'flavor of the month.'
> Of course it does. But why should you place such a high weight on it?
> Docker-Swarm has (imho) no acceptance out there, while LXD has its own
> orchestrator. However, what I miss is some hint about WHEN to use a
> container (like docker) and when not.
>
Reading the objectives, I see ...
352.4 Container Orchestration Platforms (weight: 2)
*Weight* 2
*Description* Candidates should understand the importance of container
orchestration and the key concepts Docker Swarm and Kubernetes provide to
implement container orchestration.
*Key Knowledge Areas:*
- Understand the relevance of container orchestration
- Understand the key concepts of Docker Compose and Docker Swarm
- Understand the key concepts of Kubernetes and Helm
- Awareness of OpenShift, Rancher and Mesosphere DC/OS
It's not just Swarm. If you have another orchestration solution to add to
the list, please tell Fabian, or update the Wiki directly. That's what
he's asking for. Focus on _where_ this should go, not that 'Proxmox is
great.' Otherwise I can have a huge conversation of how Red Hat's JBoss
division was a dozen years ahead of everyone on Containerization ... all
100% FLOSS too.
> Proxmox is a "I can do everything better and cheaper package" to me.
>
Great! Now break that down into LPI objectives for Fabian et al. That's
what he's asking. Not, "We need to cover Proxmox instead of various Docker
technologies." It should be _where_ Proxmox fits in, like the underlying
services it uses in _other_ objectives, not just 'Proxmox' branding fits in
351.1. That's what Fabian et al. need!
> Ceph Nautilus for shared VM and/or LXC storage in just a few clicks,
> interconnected with 10G (or more) in a 3-piece quorum instance without the
> need of an expensive 10G switch. It includes nftables, backups and a ton
> more features like replication, failover, monitoring, its own custom
> kernel, great documentation ...
>
There are a lot of solutions in this space. It's not just Proxmox. I'm
not saying Proxmox isn't a solution, far from it! I'm just saying it's not
the only solution that does all sorts of stuff. There are pluses and
minuses to 'all-in-one' front-ends, including granularity.
People assume otherwise because they've never touched oVirt (RH[E]V) or
Origin (OpenShift). These are established solutions of a good decade-plus
now. There are others.
Yes, Origin (OpenShift) pre-dates Docker and even LXC. A lot of this stuff
was Red Hat developers working on Upstream. Even OpenStack got a serious
boost when Red Hat and HP (for their limited time -- I was heavily
involved first-hand) added their developers. Same with Docker for
Containers, Kubernetes for orchestration (e.g., Brian Stevens, Red Hat CTO,
moved to Google -- not a shocker what happened).
In fact, that's why Red Hat ignored OpenStack for so long, and OpenStack v.
oVirt v. Origin is a serious debate at times, let alone Origin (OpenShift)
on bare-metal is very fast and powerful, with a complete DevOps solution,
control of registries and security without sacrificing performance. I have
this debate just in the Red Hat world alone.
Proxmox is just the latest 'front-end' to show up. It's using underlying
back-ends.
> PLUS: It has a powerful REST API to fully automated VM/LXC creation
> ...and all that is FLOSS.
>
You've obviously never used Red Hat's FLOSS stacks then. Proxmox is not
the only 'powerful' REST API. In fact, looking at some of Proxmox's
libraries, guess who the main developers and copyright holders are on
quite a bit of it? ;)
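For concreteness, here is a hedged sketch of what 'fully automated LXC
creation over REST' looks like. The endpoint shape (`/api2/json/nodes/<node>/lxc`)
follows Proxmox VE's published HTTP API, but the host, node name, VMID and
template below are all made up -- verify against your version's API viewer.
The sketch only builds the request; it doesn't send it:

```python
# Hedged sketch: building a Proxmox-VE-style REST request to create an LXC
# container. The path shape follows the published Proxmox API; the host,
# node, vmid and template values are hypothetical examples.
import json

API_BASE = "https://pve.example.org:8006/api2/json"  # hypothetical host

def lxc_create_request(node, vmid, ostemplate, hostname, memory=512):
    """Return (url, payload) for a POST that would create an LXC container."""
    url = f"{API_BASE}/nodes/{node}/lxc"
    payload = {
        "vmid": vmid,              # cluster-unique container ID
        "ostemplate": ostemplate,  # template in a storage, e.g. local:vztmpl/...
        "hostname": hostname,
        "memory": memory,          # MiB
    }
    return url, payload

if __name__ == "__main__":
    url, payload = lxc_create_request(
        "pve1", 101, "local:vztmpl/debian-12-standard_amd64.tar.zst", "web01")
    print(url)
    print(json.dumps(payload, indent=2))
```

In real automation the POST also carries an authentication ticket and CSRF
token (or an API token header) -- and every serious solution in this space,
oVirt (RH[E]V) included, exposes the same kind of programmable interface.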
You may have also heard of CloudForms, which was both natively developed
and also evolved after Red Hat purchased -- and open sourced -- one of the
top VM management tools at the time, ManageIQ. There are so many FLOSS
tools, even outside of Red Hat.
Yes, I get it, Proxmox is the bomb. Please give Fabian details on _where_
to put that into objectives.
> <Fanboy on>
> Once you know Proxmox, you won't use Cockpit (primitive), VMWare or
> anything else
> <Fanboy off>
>
Your post is very fanboy'ish, without any _technical_ details. That's the
problem I've had with Proxmox, without digging too deep into the code.
It's like watching people say "use OpenLDAP," utterly ignorant that Red Hat
bought the #1 LDAP project in 2004 and open sourced it in 2005 as 389 (some
code under 3rd-party licenses that couldn't be acquired had to be
rewritten), and then built IPA around that to solve a lot of issues. Even
Microsoft tells people to use IPA, especially for "Zero Trust."
As far as "Cockpit" -- that's a 'built-in' tool on Fedora/RHEL for managing
_standalone_ servers _individually_. It is _never_ meant for large,
distributed Enterprise use on its own. So why you would compare Cockpit to
Proxmox, ignoring a half-dozen Red Hat 'technology components' that are in
so many other "Enterprise Infrastructure" solutions is beyond me.
That's like saying using Docker v. Origin (OpenShift), or KVM and
virt-manager v. oVirt (RH[E]V).
Technologies, underlying technologies, should be LPI's focus, not
'branding.' We want to focus on the components. Again, I'm looking at
what Proxmox uses, and I see a lot of things they didn't develop. They are
largely leveraging other things, and it's a 'front end.' It's difficult to
cut through it all with the fanboy'isms without deep dives.
> Sure it is, but LXC is even supported on some Cisco Nexus switches (which
> run NX-OS).
>
And Cisco is throwing its weight behind Ansible for general orchestration
too. I'm in the middle of this right now, with both network and security
looking to Red Hat for this. So they are getting heavily entrenched into
Red Hat tooling for a reason, just like Microsoft (who is way, way behind
-- don't get me started).
Side Note: I've known DeHaan since the mid '00s, before even Cobbler-Koan,
not that it matters. But I knew where he was 'going' with this since he
created 'FUNC' (the Fedora Unified Network Controller). Again, this is
late '00s stuff, just like Origin (OpenShift) _knew_ where Containers were
going, let alone BSD, AIX and others had 'been there' too.
> I know from some serious big companies, that they would love to adopt
> Proxmox but can't because VMWare is supported by various products they use
> and Proxmox officially not (yet). Again, it is far more than just a front
> end.
>
And so is oVirt (RH[E]V). I also have trouble getting Origin (OpenShift)
on bare metal, especially since VMware pushes ...
And I also don't like dealing with VRA, when oVirt (RH[E]V) was doing fully
automated deployments, and had a full REST API, when VMware was still
offering just a _broken_ Perl API. It was also my work at Red Hat at a
major customer that finally led VMware to take it seriously, and before
that I was doing VDI in '08, before RH[E]V VDI was an option yet, which
woke Microsoft up (and cost them 8 figures).
Proxmox is just another 'front-end' solution using a lot of _existing_
subsystems. Break it down into components for Fabian et al. in the
objectives.
> Topics I miss in *"Topic 353: VM Deployment and Provisioning"*
>
> Kickstart, Preseed and FAI
> Those tools can truly automate the VM deployment
>
> Vagrant: Weight 3 ...justified ?
>
It's FLOSS and still being used, especially for VMware, VirtualBox and
other non-FLOSS or non-upstream-adopted hypervisors.
Kickstart can probably be dropped though. I'm not sure where Kickstart
fits any more. For VMs, most of us haven't written many Kickstarts with
Cobbler snippets in them; we just have a base build, then deploy via
Ansible.
E.g., because we cannot get rid of VMware either, we have VRA drop a JSON
file, and our Ansible deployment reads it to know the 'super-role' for the
server, which breaks down into Ansible roles. We also use these roles and
components for post-build playbooks too, unifying the design.
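The pattern above can be sketched in a few lines; the "super_role" key and
the role map below are hypothetical stand-ins, not our actual internal names:

```python
# Hedged sketch of the 'VRA drops a JSON file, Ansible reads it' pattern.
# The "super_role" key and the SUPER_ROLES mapping are hypothetical
# examples, not real internal names.
import json

# A coarse 'super-role' breaks down into concrete Ansible roles, which are
# reused by post-build playbooks too, unifying the design.
SUPER_ROLES = {
    "web": ["common", "firewall", "nginx"],
    "db":  ["common", "firewall", "postgresql", "backup-agent"],
}

def roles_for(metadata_json):
    """Read the dropped metadata and return the roles to apply."""
    meta = json.loads(metadata_json)
    return SUPER_ROLES[meta["super_role"]]

if __name__ == "__main__":
    dropped = '{"super_role": "web", "env": "prod"}'  # what the drop might hold
    # An Ansible playbook would loop include_role over this list.
    print(roles_for(dropped))
```

The base image stays generic; everything machine-specific comes from that one
dropped file, which is exactly what made the 20-page Kickstart %post scripts
unnecessary.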
Side Note: Again, DeHaan had a much grander vision than just Cobbler (and
Koan) back in '08 when I first met him, which really started with FUNC.
Kickstart is 20 years old, and while Cobbler turned 20 pages of Kickstart
%post scripts into 5 lines, Ansible was the future.
Kubernetes and other developments are also fast-changing. It's going
to be difficult to 'snapshot' everything.
- bjs
_______________________________________________
lpi-examdev mailing list
[email protected]
https://list.lpi.org/mailman/listinfo/lpi-examdev