[email protected] writes:

> On Tue, Aug 27, 2019 at 05:18:25PM -0700, Russ Allbery wrote:
>> [email protected] writes:
>>> 1. building appstores or repositories that can be used by different
>>> Linux distributions, conforming to different levels of LSB, and then
>>> populated by different app developers, hopefully including big
>>> packages like gnome and kde, and possibly also packagers picking up
>>> sources, maybe even debian packagers. In this way even smaller
>>> distros could have a large set of packages, and developers could
>>> have one place to address a lot of distros. This could be built for
>>> the different architectures including i386, amd64 and arm.

>> This is a dying mechanism of software distribution. You can achieve
>> the same goal by shipping a container or some container-like thing
>> that includes all the shared libraries you care about.

> I am puzzled. I run a Linux distro mirror, and most of the distros
> have vast binary repositories or appstores, some have source
> repositories. I don't see them going away. They are vital for the
> distro infrastructure,

The point of a distro is that you *don't* do what LSB is doing and try
to maintain a stable ABI. The point of a distro is that all of the
software changes regularly, and a whole bunch of people work to update
it as necessary to cope with those changes. As part of that process,
all of the software is rebuilt from source against the latest versions,
periodically cohering into a release, and the ABI is fixed only within
a release and changes again in the next release. LSB is not useful
within this world.

Putting on my hat as a Debian Policy editor, I can say that LSB is a
net *cost* to the distribution, because it requires maintaining
unnatural invariants and retaining obsolete versions of libraries.
It's something we would do grudgingly for our users who need it, not
something we would do voluntarily or with any enthusiasm.

Maintaining a stable ABI is quite expensive. There has to be some
corresponding benefit to that cost. It's not something a distribution
naturally wants to do, since a distribution has available to it the
tool of simply recompiling the world.

> So this is key technology for Linux/unix systems, or am I wrong?

I think the piece that you may be missing is that the world has
diverged into two much clearer camps, now that we all have more
collective experience with how to put together a Linux software
ecosystem.

In one camp are the distribution packagers. Their goal is to create a
single coherent base system whose pieces work together. LSB would be
the ABI that they provide to *other* people; it is an undesirable
design constraint, not a useful tool in doing this work. The natural
way to do this work is to get the latest version of everything, make
world, and iterate on fixing the resulting bugs until you get something
stable.

In the other camp are software developers who want to distribute
binary packages that run on multiple distributions. (I'm not including
software developers who distribute their software as source that the
end user compiles themselves; I think LSB is orthogonal to their
world.) Back when LSB started, we collectively (and quite reasonably)
thought that a good way to make life easier for these folks would be
to have all the various Linux distributions provide a common ABI, so
that software developers could compile their binaries once and be
assured that they would run on every LSB-compatible Linux
distribution. In retrospect, this was always difficult, and required
extensive engineering effort and verification.

Part of why this was difficult is that the maintainers of the
libraries incorporated into the ABI are by and large uninterested in,
or even hostile to, doing this, because maintaining a stable ABI has a
high cost for them. It imposes all sorts of restrictions on how they
make changes to their libraries, restrictions which are often
undesirable from their perspective, compared to a world in which they
can release a new library with a new SONAME and have the distributions
rebuild the world.
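To make that cost concrete, here is a minimal sketch (the library and
all of the names in it are hypothetical, not from any real project) of
the kind of routine change that breaks a frozen ABI: adding one field
to a struct that applications allocate themselves shifts the memory
layout that every already-compiled binary assumes.

    /* abi_sketch.c -- why routine library changes break a frozen ABI.
     * All names are hypothetical.  Compile with: cc abi_sketch.c */
    #include <stddef.h>
    #include <stdio.h>

    /* The public struct as "libfoo" 1.x shipped it; this layout is
     * baked into every application binary compiled against it. */
    struct foo_options_v1 {
        int  verbose;
        char name[32];
    };

    /* In 2.x the maintainer adds one field -- a perfectly ordinary
     * change, but it moves the offset of everything after it. */
    struct foo_options_v2 {
        int  verbose;
        int  log_level;   /* new in 2.x */
        char name[32];
    };

    int main(void)
    {
        /* An old binary reads `name` at the v1 offset; a new library
         * writes it at the v2 offset.  Same type name, silently
         * incompatible memory layout. */
        printf("offset of name in v1: %zu\n",
               offsetof(struct foo_options_v1, name));
        printf("offset of name in v2: %zu\n",
               offsetof(struct foo_options_v2, name));
        return 0;
    }

The conventional escape hatch is exactly the one described above: bump
the SONAME (libfoo.so.1 to libfoo.so.2) so the dynamic linker never
pairs old binaries with the new library, and let the distribution
rebuild everything that depends on it. A frozen cross-distribution ABI
takes that escape hatch away for years at a time.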
Therefore, even back at the height of LSB, it only ever sort of
worked, and a lot of binary software providers still included the
libraries they cared about in their own deployments.

Since then, the Linux world has developed pre-built VMs, containers,
and all sorts of related technology that allow a software vendor to
start from a distribution-maintained image, layer their software on
top, and ship the corresponding bundle to customers to run in any
environment capable of running a VM or container, without regard to
what distribution that environment is using. This achieves the same
goal that LSB was striving towards -- application package portability
-- at considerably less human engineering cost, since generating those
VMs or containers is much, much easier than maintaining a stable ABI.

This comes with some downsides, such as harder security patching and
increased package size. However, I'm seeing no appetite for reversing
this trend because of those downsides. Instead, people are pouring
resources into continuous build systems and upgrade systems to address
the security patching issues, and increased package size is simply
being absorbed by increased storage and network capacity, or addressed
by container-focused distributions that try to be as small as
possible.

If anything, the trend towards supplying all prebuilt software
packages as containers is accelerating, with huge app stores devoted
to supplying, as containers, even open source packages that could be
handled through the distribution rebuild-the-world approach.

So, to summarize: for distributions, LSB was always an engineering
*cost*, not a benefit, one that was paid in order to retain customers
and that would be gladly dropped as soon as customers no longer cared.
And the customers that cared the most have now switched to VM images
and containers, and no longer care.
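As a concrete footnote to the container point above: the vendor-side
replacement for an entire LSB build-and-certify pipeline can be a
recipe as small as the following sketch (the base image tag, package
names, and paths are illustrative, not any real product's
configuration):

    # Hypothetical vendor recipe; image tag, packages, and paths are
    # illustrative only.
    FROM debian:10

    # Take the needed shared libraries from the distribution, pinned
    # by the image tag rather than by a cross-distribution frozen ABI.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends \
            libssl1.1 libpng16-16 && \
        rm -rf /var/lib/apt/lists/*

    # Layer the vendor's prebuilt binaries on top of the base system.
    COPY dist/ /opt/example/

    ENTRYPOINT ["/opt/example/bin/example-app"]

The customer can run the resulting image on any distribution that has
a container runtime; nothing on the host ever has to promise a stable
library ABI.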
-- 
Russ Allbery ([email protected])             <http://www.eyrie.org/~eagle/>