> On 1. Feb 2020, at 04:35, Konstantin Olchanski <[email protected]> wrote:
> 
>> 
>>> - direction is very stable, each next release is "the same as the previous 
>>> release",
>>> no surprises, no strange changes, no confusion.
>> 
>> Mir -> Wayland, Upstart -> systemd, Unity -> Gnome Shell... those sure were 
>> surprises to me; the direction seems rather "not that stable", etc.
> 
> I agree, but Red Hat went the same direction. Wayland, systemd, gnome.

Yes, but with the exception of the "Upstart" detour in EL6, Red Hat's direction 
has been much more stable than Ubuntu's. It's simply not a point in Ubuntu's favor.

>>> - many hardware vendors now supply Ubuntu and Debian centric drivers and 
>>> support
>> 
>> Which vendors support upgrading the firmware on servers (I'm not talking 
>> about desktops or laptops) from a running Ubuntu system? (That's a genuine 
>> question - this is the main reason keeping me from considering generally 
>> running the latest Ubuntu LTS on all our bare metal and doing all the rest 
>> in VMs or containers.)
> 
> Since I write firmware myself, the function to upgrade the firmware on
> a running system without having to reboot the OS is pretty much
> the first thing that I implement (during firmware development,
> rebooting the OS to load each new firmware test version gets old very 
> quickly).
> 
> So I find it annoying that hardware vendors create special "mystique" about
> firmware updates, requiring special magical tools, dances with rubber chickens, 
> etc.
> 
> If it were up to me, I would always provide the firmware binary file and a
> statically linked linux executable (so it runs on any linux, no matter what
> userland shared libraries are available) to load the binary file into the
> hardware, to read the firmware back from the device (so you can clone
> firmware from one device to another) and to verify the loaded firmware
> (so you can check that the correct stuff is loaded).
> 
> That said, "bum" firmware and "bricked" hardware is a fact of life that one
> has to deal with regardless. Against the first one, you first update your test
> machine; against the second one, you learn to use the JTAG programmer tools
> and dongles.

Thanks a lot for your insights, but I'll still take this as "none that I'd know 
of".
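For the record, the load/read/verify cycle described above can be sketched in a 
few lines of plain shell. This is purely illustrative: a regular file stands in 
for the real device node, and `dd` stands in for whatever vendor tool actually 
talks to the hardware.

```shell
#!/bin/sh
# Illustrative sketch of the load/read/verify firmware cycle.
# A plain file ("device.bin") stands in for the real device node;
# a vendor flashing tool would replace the dd calls against hardware.
set -e

# dummy firmware image for this sketch (a real one comes from the vendor)
head -c 4096 /dev/urandom > firmware.bin

# "load": write the image into the (stand-in) device
dd if=firmware.bin of=device.bin bs=4096 2>/dev/null

# "read": clone the firmware back out of the device
dd if=device.bin of=readback.bin bs=4096 2>/dev/null

# "verify": what was read back must checksum the same as the image
a=$(sha256sum firmware.bin | cut -d' ' -f1)
b=$(sha256sum readback.bin | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "firmware verified OK"
```

The same three operations are exactly what lets you clone firmware from one 
device to another: read from the first, load into the second, verify both.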

>>> Now that both Ubuntu and Red Hat use systemd, NetworkManager & co management
>>> of both has become very similar.
>> 
>> Well, yes, but then Ubuntu invented "netplan"...
>> 
> 
> What tends to work well and is reasonably reliable and foolproof
> is "ifconfig" commands in /etc/rc.local. Not for everybody and not in every
> situation. But it works ok in "build once, run for 5 years; touch nothing,
> change nothing" physics experiments.
> 
> systemd and NetworkManager add unwanted dynamism and unpredictability
> to booting, running and shutdown.
> 
> (FWIW, I was going to look at Ubuntu's netplan next week...)

I'm afraid netplan is a (YAML - shudder...) frontend to NetworkManager. Maybe 
in the future a frontend to systemd-networkd (shudder...). NB EL8 still ships 
the good old static network scripts. They're deprecated, but they still exist. 
On Ubuntu, they're gone for good.
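For comparison, here's roughly the same static address in both styles. The 
device name and addresses below are made up for illustration. First the 
EL-style script:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (EL-style static configuration)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1
```

and the netplan YAML that replaces it on Ubuntu:

```
# /etc/netplan/01-static.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
```

Same information either way; the difference is just which daemon consumes it 
at boot.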

>>> Only big remaining difference is the package manager - apt vs rpm/yum, but 
>>> even
>>> here Red Hat have muddied the waters by switching to dnf and a new package 
>>> format
>>> (new checksum algorithms).
>> 
>> dnf is a replacement for yum, not rpm (which corresponds to dpkg, not apt), 
>> and is supposed to be backward compatible. RPM package checksums have evolved 
>> in the past without serious issues, with backward compatibility and options 
>> to create packages on newer systems that remain usable on older ones. I 
>> haven't tried EL8 in this respect, but I doubt this has all changed.
>> 
> 
> My typical need is to create a package that will install 1 perl script into 
> /usr/bin.
> 
> 15 years ago, RPM did not have an example of how to do this easily;
> today, RPM still does not have an example of how to do it easily. (No, I am
> not building gcc from a tarball with local patches.)

Come on, "Maximum RPM" has been around for decades. Besides a lot more 
background and more sophisticated packaging, it has always had a Chapter 11, 
"Building Packages: A Simple Example" 
(http://ftp.rpm.org/max-rpm/ch-rpm-build.html), which I'm very certain is just 
a fly-by for someone like you.
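For what it's worth, the one-script case really is only a few lines of spec 
file. Everything below (package name, script name, license) is made up for 
illustration:

```
# mytool.spec -- hypothetical minimal package: one perl script into /usr/bin
Name:           mytool
Version:        1.0
Release:        1%{?dist}
Summary:        Example package installing a single perl script
License:        MIT
BuildArch:      noarch
Source0:        mytool.pl

%description
Installs one perl script into %{_bindir}.

%install
mkdir -p %{buildroot}%{_bindir}
install -m 0755 %{SOURCE0} %{buildroot}%{_bindir}/mytool

%files
%{_bindir}/mytool
```

Drop mytool.pl into the SOURCES directory and build with 
`rpmbuild -bb mytool.spec`; no tarball or %build step is needed for a 
noarch script-only package.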

>> There are a few more major differences (and the different package managing 
>> systems are rather peanuts compared to those):
>> 
>> * Support lifetime. EL: 10 years (admittedly, with limitations during the 
>> last 3). Ubuntu: 5 years (but limited to 3 years for a great many packages - 
>> especially quite a few important on desktops/laptops).
>> 
> 
> This turned out to be the wrong life expectancy measure. What we see is the
> perfectly serviceable SL6 being obsoleted overnight by ROOT (root.cern.ch)
> suddenly starting to require C++11. Then things like MediaWiki ask for newer
> PHP, certbot asks for newer python, etc. So you install devtoolset for C++11,
> PHP from webtatic, python3 from EPEL, but can this mongrel OS still be called
> SL6?
> 
> The usual mantra is "never upgrade!", but users demand "upgrade early, 
> upgrade often!". Rock, hard place, etc.

Hmm, our users are different from yours then. They demand "the latest and 
greatest" when they start out with their project, but then want complete 
stability ~forever. EL fits that nicely, except that the .0 typically is a bit 
dated already.

>> * Stable kernel ABI. EL: stable over the whole lifetime for whitelisted 
>> interfaces, stable within minor releases for others. Ubuntu: no such thing. 
>> Actually, may backport ABI-changing changes from the latest mainline kernel 
>> anytime.
> 
> Not in SL-6, not in CentOS-7. Each new kernel update breaks nvidia and/or ZFS 
> drivers.

Not sure that's correct, but I lack recent hands-on experience with those.

> True, the (very simple) kernel drivers that I write and maintain do load and
> work with each kernel, so some stability, for sure. (But latest kernels 
> complain,
> "you must recompile your driver with new GCC to deal with latest Intel CPU 
> exploits").

Well, that is due to the Spectre/Meltdown mess and can't be blamed on Red Hat's 
kernel engineers. Also note that drivers which are not recompiled will still 
work perfectly; they'll just not protect you against attacks generally unknown 
at the time of the GA release. I'd rather call this an impressive success than 
a failure.

>> * Hardware enablement: EL: at least 5-7 years, with a single kernel flavour 
>> (with the above advantages regarding ABI stability). Ubuntu: May require 
>> "HWE" kernels (different base version) rather early. Actually will upgrade 
>> you automatically to those unless you're using a server installation.
> 
> No way around this, if stock kernel does not have a driver for your hardware,
> you have to use some other kernel, by hook or crook. Build kernel from 
> sources,
> in the worst case. True for any Linux, for any OS, Windows, BSD, etc. Only 
> MacOS has hardware and software in lock-step. (severe and inescapable 
> lock-step).

Er, no. Red Hat has, for many years, provided kernels that include support for 
new hardware - without breaking compatibility with older systems, older 
software and even older kernel modules. That's one major feature distinguishing 
an "enterprise" distribution from the others.

>> Again, I like Ubuntu, and I don't want to start a big argument. I just felt 
>> compelled to add a few minor pieces of information to your statements.
>> 
> 
> Yes, good discussion. A shift away from Red Hat Linux after 15 years with SL,
> and not quite 10 years with RHL (before the "E"), is a big deal, worthy of a
> big argument.
