Ian Shields wrote:

> So really neither memory nor CPU power should be reasons for building a
> custom kernel. Add to that the fact that a huge number of USB devices
> exist, so kernel modules really do much of the I/O work that might have
> formerly been embedded in the kernel.


IoT and network appliances are about the only places I think are left,
other than just 'prepping' a tree for building kernel modules or
subsystems the vendor leaves out, or adding kernel modules where some
additional preparation of the source is needed.

Security-wise, there is still the legacy, but still valid, argument for a
module-less kernel, which prevents any modules from loading at all.  Then
again, SELinux has long had integrity enforcement (of binaries, modules and
other files), and the chain of trust can run not just through the kernel
but through the bootloader (GRUB) and down to the chip level (e.g., a
Trusted Platform Module, TPM) with Secure Boot, which renders a lot of this
moot (if one really wants to go there).
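
As an aside, you can approximate that on a stock modular kernel at runtime
via the kernel.modules_disabled sysctl; a minimal sketch (my illustration,
not something from this thread, assuming root and Linux 2.6.31+):

    # Sketch: flip the one-way kernel.modules_disabled switch so that no
    # further modules can load until reboot.  Requires root; this is an
    # alternative to building a module-less kernel with CONFIG_MODULES=n.
    from pathlib import Path

    Path("/proc/sys/kernel/modules_disabled").write_text("1\n")

Once flipped, not even root can load modules again until reboot, which gets
you close to the module-less-kernel guarantee without a custom build.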

Although with kprobes and kernel instrumentation, as I'm finding out, you
can still wreak havoc with the kernel, even via official ABIs in the kernel
itself (don't get me started ... security-related).
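
To make that concrete, here's a minimal sketch using the BCC toolkit (my
choice purely for illustration; nobody in this thread mentioned BCC)
showing how little it takes, as root, to attach a kprobe to a syscall
through entirely official interfaces:

    # Sketch: attach a kprobe to the openat() syscall via BCC/eBPF.
    # Assumes the bcc Python bindings and a BPF-enabled kernel; run as root.
    from bcc import BPF

    prog = """
    int trace_openat(struct pt_regs *ctx) {
        bpf_trace_printk("openat() called\\n");
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event=b.get_syscall_fnname("openat"),
                    fn_name="trace_openat")
    b.trace_print()  # stream the trace output; Ctrl-C to stop

That's a read-only probe, but the same mechanism reaches nearly every
function in the kernel, which is why instrumentation cuts both ways.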

Then there are the new 'stateless' server solutions, like CoreOS, and the
RHEL implementation of it (to some former CoreOS, now Red Hat, employees,
'RHEL CoreOS' is a dirty word).

So I think we're at the point where it's specialized.  I just don't know
how much, which brings me back to the Survey, JTA, etc.


> I don't think LPI sysadmin certification targets embedded processor
> applications where I can see a possible reason to build a very compact
> custom kernel.


Unfortunately a lot of Linux/aarch64 systems are still _not_ uEFI, so we're
still dealing with custom kernels, U-Boot, et al.  How much that gets
addressed, I don't know.  I would rather limit the LPIC program to uEFI,
and maybe leave non-uEFI to IoT or other plausible initiatives.

But we are seeing more and more high-core-count A53-A55 solutions, and even
some A72-A73 options, that best the highest-end Xeon core counts in
benchmarks at less than half the power.  Many of those are not inexpensive
servers, though, and they usually come from the OEMs as uEFI with
IPMI-like, out-of-band (OoB) management.

Amazon has just made a huge investment in these solutions, with many
different players.  For the vector market there are nVidia and AMD -- both
long-time ARM licensees -- including their GPUs.  And once one goes outside
of x86-64, the GPU players multiply: legacy PC GPU vendor PowerVR (yes,
they are still around, and huge in the ARM space) reappears, as does ARM
itself with Mali.

Heck, with the next iteration of GPUs, we're looking at 36-48 A at 12 V --
roughly 430-580 W -- just for the GPU.  I warned in 2003 that the PC
architecture, including even the then-forthcoming PCIe (still a 'peripheral
interconnect'), was 'too legacy,' and that GPUs not only belonged on the
system interconnect, but should be leading it.  We're there now.  I think
ARM, via nVidia and AMD, is going to force the shift, and the PC will be
'left behind.'

SIDE NOTE:  I won't go into the blame *cough*Intel*cough* (I was right
about that in 2007, and it showed up in 2017 as countless security exploits
I had known about for 10 years under NDA).

So ... again ...

I think -- again, a 100% PEER, overbearing (wrong?) opinion -- this is
where that Survey and JTA come in.  We need to get more Cloud users
involved and survey them on what they are doing, from Amazon to Google to
Rackspace to Tier-1 hosting providers, many on everything from Debian and
custom builds to CentOS and, where required for SLAs, Red Hat, SuSE and
Canonical.

We're probably really out of date here, and it's going to shift this decade
... hard.  The PC as we know it is dying, and within two generations there
will be a console that shows us why -- one that totally transforms system
building not just for the enthusiast, but even for the cloud.

After all, Amazon EC2 F1 has shown the market for it.  Heck, I was with a
small, Red Hat-based FedRAMP Medium/High and DoD IL4/IL5 cloud provider
that was a niche competitor to Amazon and others, with far more security
controls.  GPUs are the bomb.  FPGAs are another.  And these don't require
custom kernels.

So, back to Survey, JTA, et al.  Especially of providers.  Some do have
custom kernels.  They aren't using stock Red Hat solutions like we were.

> So I'd be interested to know what the real case
> is for building a custom kernel in the space that LPI certification
> targets. I've worked with Linux for about 25 years or so and I have built
> kernels, but I'd probably fail your test if you wanted me to build
> one in an interview.

Your experience is always greatly appreciated.

- bjs
_______________________________________________
lpi-examdev mailing list
[email protected]
https://list.lpi.org/mailman/listinfo/lpi-examdev
