On Thu, Jan 09, at 05:43 Ken Moffat via lfs-dev wrote:
> On Wed, Jan 08, 2020 at 10:01:48PM -0600, Bruce Dubbs via lfs-dev wrote:
> > 
> > I do agree that we could classify this as 'might be needed in {,B}LFS one
> > day', but I am anticipating what I think upstream will do.  I just don't
> > know when.  The issue really is whether we should be proactive or reactive.

It's usually wise to be proactive, especially when there is a compelling
reason, like adopting a superior technology. Unfortunately, in our world
it's a little bit more complicated, as timing and/or marketing are also
quite important factors, so superiority by itself is not always enough.

For this kind of application, three things matter:
  - compression speed
  - decompression speed
  - compression ratio/size

What we are really interested in here are the latter two. And rightly so,
as compression is about economy, both when transferring bytes over the
network and when spending CPU cycles during decompression.
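
As a rough illustration of how those three metrics could be compared, here
is a minimal Python sketch (my own, not from the book; it assumes the
third-party zstandard binding is installed, and the file name is only an
example):

  # Compare xz (stdlib lzma) with zstd on one file: ratio and speeds.
  import lzma
  import time

  import zstandard  # assumption: installed via `pip install zstandard`

  def measure(name, compress, decompress, data):
      t0 = time.perf_counter()
      packed = compress(data)          # compression speed
      t1 = time.perf_counter()
      unpacked = decompress(packed)    # decompression speed
      t2 = time.perf_counter()
      assert unpacked == data
      print(f"{name}: ratio {len(data) / len(packed):.2f}, "
            f"compress {t1 - t0:.2f}s, decompress {t2 - t1:.2f}s")

  with open("linux-5.4.tar", "rb") as f:  # any large file will do
      data = f.read()

  measure("xz  ", lzma.compress, lzma.decompress, data)
  cctx = zstandard.ZstdCompressor(level=19)  # one of zstd's high levels
  dctx = zstandard.ZstdDecompressor()
  measure("zstd", cctx.compress, dctx.decompress, data)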

I'm curious, though, to see what the main advantages of this algorithm
are. Douglas mentioned a benchmark; a brief summary that lists those
advantages would be good, I think.

>> The reason is consistency with gzip, bzip2, and xz.  All of these
>> are compression/decompression utilities and should be treated in the
>> same way.

I'm afraid this is not that strong an argument. The prerequisite for a
package to be included in the LFS book, unless the rules have been relaxed
over time, was that it is a dependency of a base package.
Or, as in this particular case, that a package in BLFS does not offer its
distributed sources in one of the established formats.

Is there any such package?

> OK, go for it.

This isn't to say do not go for it. I say, go ahead if you feel quite
confident about your instinct, and if you can anticipate the future.

But we can be almost positive that, when the engineers designed the
algorithm, what they had in mind were network protocols, since that is
their business, and not so much distributing sources over the network
(though if there are major advantages, of course it should be adopted).
Thus we can be almost certain that it will be used mostly by
applications/libraries, especially if developers know that it is backed
by a major corporation. I would bet (without even looking) that people
have already developed bindings for zstd (for sure Python and Java).
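
The zstandard package on PyPI, for one, already exposes a one-shot API;
a small sketch of what using such a binding looks like (the string is
just an example):

  import zstandard

  cctx = zstandard.ZstdCompressor()
  compressed = cctx.compress(b"Linux From Scratch")
  dctx = zstandard.ZstdDecompressor()
  assert dctx.decompress(compressed) == b"Linux From Scratch"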

So, in my humble opinion, its place is in BLFS, and if things change
drastically we can be ready to react instantly (there is nothing wrong
with being reactive, either).

As a side note:
It wasn't long ago that the number of LFS packages was a little more than
forty. I remember that I could have a full-blown desktop, with X and
Firefox, in a little more than 400 MB (stripped of debugging symbols),
using a little more than 25 MB of memory without Firefox running, of
course, and not too much more with it running.
Compare that with today's standards, where the Chrome browser alone
requires almost a gigabyte of memory and spawns a zillion processes,
making 5-6 year old computers almost unusable, with performance
suffering.
And there is no alternative, as Firefox followed the same route, even
though in 2004 (in their Phoenix days) they promised the first beta
testers a lean browser (and it really was one).
That's why we should be skeptical about letting business lead the
standards, though I reckon this case is not that kind of thing.
