Hello everybody.
I would like to share a few considerations of my own.

This topic comes at just the right time, as for the first time since before the pandemic I have been asked to hold an LPIC-2 course.


On 23/10/23 20:25, Marc Baudoin via lpi-examdev wrote:

[...]

- Topic 206 (old)/205 (new) was updated and extended to include
resource management, along with some aspects of the former
topic 200.
A weight of 2 for installations from source code is way too low
because this is the heart and soul of free software (or open
source).


I do not agree, even though this mostly depends on the chosen primary target of the LPI certification: developers or data-center specialists, which are the two main fields of corporate use of Linux I am acquainted with. Linux today owes most of its fortune to being a platform for massive, easy, quick and cheap deployment of customized VMs. As far as I know, apart from Linux-specific development, it is not the development platform of choice for business software: most of that software is Java code developed on Windows and tested in a Linux VM. As far as I know, a lot of Python AI-related code is developed the same way.

I do know that traditionally C development and Unix system management were considered, and for the most part actually were, distinct skills that Unix gurus were nevertheless expected to master at the same time. Unix sysadmins were often able to debug application software to spot the cause of an appliance's low performance, or to understand why a mailserver subprocess implementing security scanning of forwarded email crashed randomly every week.
But this is no longer the case today.
In the '90s and early 2000s kernel coding and recompilation skills were necessary. Today they no longer are. The reasons are the same on both fronts. On one side, Linux system administration has become a lot more complex: not just users, filesystems, processes, logs, updates and access rights, but also virtualization, containers, cryptography, Trusted Computing, Mandatory Access Control infrastructure (SELinux and AppArmor), clusters, RAID/LVM and whatever else a sysadmin in a corporate environment is expected to master. Plus networking.
The same is true for software development and debugging.
Today, when a sysadmin spots an application software malfunction, he no longer tries to fix it. He is not authorised to do so even if he could. He just opens a ticket for the DevOps team.

On the other side, kernel development is no longer necessary to run Linux on corporate iron (the hardware is purchased from vendors that certify it to run a given Enterprise Linux distribution), and on-the-spot development to cut corners or extend an enterprise installation is actually a security liability. You are simply not allowed to do any development of your own on production servers; doing so would void the vendor's warranty. The only ones who can do any coding are the members of the development team specifically tasked with designing and implementing the functionality requested by the management and planned and detailed by the Project Manager.

I am not dismissing current DevOps principles; they are, however, used (at least in the places where I have worked) within the development teams, to let them run through all the software development and testing cycles without tasking the sysadmin team with the repeated deployments of the alpha, beta, release-candidate, pre-production and customer-acceptance packages in the testing cluster.

So, unless LPIC-2 is going to cover DevOps principles or to make enterprise use of Linux a minor target, I think that development should be put aside and, specifically, that kernel configuration and recompilation should be gradually reduced in weight and eventually dropped. LPIC should focus on what is relevant today to the enterprise deployment of Linux.

Remembering the days before Linux, system administrators *had* to
build software from source code,

This is no longer the case, even for personal use on hardware not certified for Linux. Snap, Flatpak and AppImage are increasingly the preferred way to go for third-party, non-system software installs.

What, then, do the "days before Linux" have to do with what the industry requires of Linux professionals today?


  having to modify it along the
way because the author used a different kind of UNIX.

Today, in the corporate OS industry, Unix means Linux.
Full stop.

The only non-Linux Unix installations I have come across in the past two years were a few legacy AIX 6 installations (End-of-Support May 2017), left lingering a little while longer to run non-critical but unportable software until they are definitively switched off and decommissioned sometime soon.


   System
administrators also had to know C and the POSIX API for that.

You wrote it right: "HAD to".
Please, let's leave the nostalgia of the '90s out of the definition of the sysadmin skills that are relevant in a third-millennium Linux Professional Certification.


Since Linux has binary packages, the level of knowledge system
administrators have about their system has dramatically dwindled

It has shifted: it no longer includes knowing the full software stack from the iron up to the top level of userland abstraction.
If anything, because most of the time the iron is not even there.
And even when it is there, I'll say it again: a sysadmin is not allowed to deal with it in any way. It is against corporate rules and against the service and warranty agreement that management signed with the OS vendor and paid for. Tampering with the bootloader, let alone the kernel, could make the production system unbootable, because the underlying TPM-based boot verification would no longer recognize it and would stop the boot for violating the local security policies.

Also, even in the lucky case where you could actually do it, corporate systems are typically shared between several groups: the sysadmins, the app developers, the backup and security team, the business metrics team (e.g., those who measure the actual use and load of the server or cluster in order to charge the customer). In such cases no alteration is ever accepted unless it was previously agreed on, planned and approved by the management, because it is hard to tell which changes that look irrelevant to one group's operations actually impact another's.

You made a DB run 5% faster by implementing or installing an alternative socket I/O library?
Great.
Now management wants to know how come the customer is paying 3.6% less than before, even though the hosting business is paying the same for the hardware that runs their DB cluster.
Oh, you made it more efficient?
Did you know the customer has a pay-as-you-consume contract, and that a lower load to run the same application means the hosting business gets a lower margin on its operations?


and most of them don't understand correctly the basic tools
provided by the system (I often see that about things as simple
as redirects and pipes).


People in corporate environments are not expected to know everything down to the nuts and bolts of the system. They are expected to keep the damn complex stuff up and running and to close the tickets that surface as quickly as possible. It would be nice to have both, but the economic constraints most IT technical operators work under leave little time and little return for developing under-the-hood knowledge. Teams are often understaffed, and human resource strategies prefer people who do more tasks faster, even if they understand little of why things work the way they do, over people who can explain why Solaris' nawk throws an error on a script that works fine on RHEL 8 but who are slower at managing the corporate web-based ticketing system (which is also used to meter people's productivity on the job).

I do know that in the long run this exposes the corporate infrastructure to the occasional devilish problem that takes longer to fix because the IT people cannot promptly understand what is going on. Take the one-in-a-million case where a DB slows to a crawl under medium-high CPU usage and the culprit turns out to be a low-level network card rx/tx buffer overflowing under rare circumstances.

The cost of hiring deeply qualified and proficient IT managers who could solve such problems in a matter of minutes instead of hours? 5% of the personnel budget.

The potential cost of the one-in-a-million occurrence causing the hosting business to lose money due to SLA overshoots?
1%, maybe 2%, of the annual budget; 3% in the worst-case scenario.

The business will go for the 5% saving.
It's the economy, darling.
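
Just to make the trade-off explicit, here is a back-of-the-envelope sketch in Python; every figure in it is made up and purely illustrative of the kind of comparison the business makes:

    # All figures below are hypothetical, purely illustrative.
    personnel_budget = 2_000_000    # yearly personnel budget (assumed)
    annual_budget = 10_000_000      # overall yearly budget (assumed)

    # Option A: hire deeply qualified people -> certain extra cost
    extra_staff_cost = 0.05 * personnel_budget

    # Option B: accept the risk of the rare devilish outage
    incident_probability = 0.10               # chance it bites this year (assumed)
    incident_cost = 0.02 * annual_budget      # SLA overshoots, ~1-2% of annual budget
    expected_incident_cost = incident_probability * incident_cost

    print(f"Certain cost of hiring: {extra_staff_cost:,.0f}")
    print(f"Expected cost of the outage: {expected_incident_cost:,.0f}")
    # With these made-up numbers the expected loss (20,000) is far below the
    # certain extra cost (100,000), so the business goes for the 5% saving.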

...



Ansible seems to have the largest adoption nowadays, along with
the most convenient learning curve of the major tools in this
field. This topic mostly fills weights that became vacant due
to the reduction of the resource management and kernel
compilation objectives.
On the contrary, the weight on this is way too much. Don't get me wrong, Ansible is a wonderful tool, I wish I had had it 20 years ago, but it is not Linux-specific. SQL was taken out of exam 102 when going from 4.0 to 5.0 (it is not Linux-specific either), so the weight of 206 should be lowered as well. Or is Red Hat going to acquire LPI?

I cannot make up my mind on this topic.
I agree Ansible is not Linux-specific, but then HTTP, Apache, NGINX, PKI, DNS and Samba are not Linux-specific either.
Yet knowledge of them is indeed useful to a Linux sysadmin.
I do detest the idea of having to study Ansible to certify as a Linux professional: I never came across it professionally, and an Ansible workshop a former employer of mine sent me to made me hate it.

I would like to read as many opinions on its relevance as possible; maybe my experience is the corner case here.


Alessandro


