On Thu, Nov 08, 2007 at 02:28:51PM -0800, Bryan J. Smith wrote:
> So you would advocate repacking and rebuilding everything?  And when
> another package is built against a library that it is incompatible
> with, just deal with it?

Sure.  You have to deal with the incompatibility anyhow.

> My points are ...
> 
> 1.  Debian ships multiple kernels, GLibC and other package versions
> in a single release, quite an undertaking.

Debian ships ONE kernel version and ONE glibc version in a release.
There may be multiple versions of some other libraries, though, if a
library has multiple ABI versions.
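
For example (hypothetical package names, but this is how the naming
convention works), two ABI versions of a library can be installed side
by side because the ABI version is part of the package name:

  $ dpkg -l | grep libbar
  ii  libbar1    1.4-2    bar shared library (ABI 1)
  ii  libbar2    2.0-1    bar shared library (ABI 2)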

> 2.  Red Hat just picks one and sticks with it, backporting as
> necessary, quite limiting.
> 
> A ports-based distro allows any combination you can build, not what
> packagers decided to standardize on or offer options for.
> 
> As far as the "lower bar" comment, understand that Debian and many
> other distros "play it safe" and wait for another distro to retarget
> a new GLibC version, or finally throw the switch that forces ANSI C++
> compliance or adopts Mandatory Access Controls (MAC) and puts forth
> the efforts to integrate such.

Debian decides when the change will occur, documents it, and makes it
policy so that no one should be surprised by the change.

> You can "play it safe" and wait on someone else, or you can "lead"
> and actually put forth the real, extensive and difficult efforts to
> actually get something new to work.  That's the reality of one
> distro's history over the last 10 years, and why they also have a
> "trailing edge" one that is supported for 7+ years, longer than
> anyone else (for a price, of course, which people pay for).
> 
> Of which I have argued can be _significant_ sometimes.

It can be, but doing it all outside the package manager doesn't make ANY
real difference to the amount of work involved.

> Not often with a ports approach.

If you want to compile foo version X and it depends on a feature of lib
bar that only exists in version Y, then you have to deal with that.
Ports makes no difference whatsoever.
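
The dependency ends up encoded in the package metadata either way; a
sketch (using the hypothetical foo and bar from above) of what that
looks like in a debian/control stanza:

  Package: foo
  Depends: libbar2 (>= 2.1)

That way, if the installed libbar2 is too old, the package manager tells
you up front instead of foo failing at build or run time.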

> Like?

BIND crashed multiple times a day on Red Hat 6.0, while on 5.2 it had
been running very well.

> If you quote me Red Hat Linux 5's switch to GLibC 2, then you're just
> ignorant.  People are still complaining about that, not realizing it
> was the hard efforts of Red Hat that moved us to GLibC 2 -- efforts
> that other distros "piggy-backed" off of after the work was done.

I actually thought the move from Red Hat 4 to 5 went quite well.

> Same can be said about various GCC changes.  There were many issues
> with GCC 2.8, 2.95.x, etc...  People also forget that Red Hat was the
> official maintainer of GCC at the time people were complaining. 
> Forcing ANSI C++ compliance _broke_ GCC 2.x code (let alone GCC 2.7,
> 2.8 and 2.95.x C++ implementations conflicted).

Red Hat also released 7.x with GCC 2.96, which the GCC developers they
were to some extent employing said wasn't ready.  And I believe it
shipped a glibc from a development snapshot as well.  I think marketing
insisted on having another release six months after the previous one,
whether the software was ready or not.

> Today people say SELinux is "broken."  What don't they understand
> about "MAC"?  It purposely breaks things!  It's not "buggy."

SELinux certainly takes some work to understand and configure.

> > I most certainly won't move to Gentoo ever because ports are
> > clearly not better than what Debian does,
> 
> Yet more subjective dribble.
> 
> LPI is *NOT* about subjective dribble, it's about what enterprises
> use even if *YOU* do not.  That's the definition of a "standards
> organization."  Try being involved with an IEEE subcommittee
> sometime.  ;)

Well, a problem with many certification tests (including the LPI, at
least when I took the first one) is that to a large extent they test
whether you have memorized command line arguments to a bunch of stuff,
while what you really want to know is whether the person knows how to
solve problems and how to find the answers they need.  It doesn't matter
if you remember what -a or --all does on a command, as long as you have
a good idea that the command has an option to do something and you know
how to look up what that option is.
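
For example, no one needs to memorize that -a and --all are the same
option on ls; knowing how to check is the actual skill.  Something like
(output will vary by version):

  $ ls --help | grep -- --all
    -a, --all                  do not ignore entries starting with .

or just reading the man page gets you there in seconds.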

> The Pentium 4 was an engineering feat -- completely redesign a core
> in 18 months when 40+ months is typical.  But you wouldn't know the
> first thing about that.

Doing it quicker doesn't mean the design is sound.  It was an impressive
design job on a bad design concept.

> What you define as "fundamentally wrong" is not what others see.  You
> had better open your mind to looking at something differently, or
> you're just going to be yet another of the 97% of Linux users who
> piss "brand name" everywhere they go.

The pipeline on the Pentium 4 was way, way too long, and had every
indication of having been designed simply to post the highest clock
rates for marketing purposes.  The design did work well for linear tasks
like video encoding and other stream processing, but for anything branch
heavy the long pipeline was a serious killer.  Most literature on
pipeline design talks about 5 to 10 stage pipelines and how, past that,
there tends to be a serious performance hit, and sure enough the Pentium
4 had serious performance hits on a lot of application types, just as
anyone could have predicted by looking at the specs.  The Core 2, on the
other hand, has a much more normal pipeline length, and the performance
is amazing.
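
The back-of-the-envelope math makes the point (illustrative numbers, not
measurements): a mispredicted branch costs roughly a pipeline refill, so

  penalty per instruction ~ branch frequency x mispredict rate x depth
                          ~ 0.20 x 0.05 x 20 cycles
                          ~ 0.2 cycles/instruction

Double the pipeline depth and that overhead roughly doubles with it,
which is exactly what branch-heavy code saw on the Pentium 4.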

> I consider people who think "one distro fits all" to be a trait of
> people who can't realize the problem is different for different
> people.

There are quite a few nice distributions with different intended
targets.  Gentoo as a concept offends me.  I hate waste, and source-based
distribution is wasteful.  If one person can compile something and get
the same result that thousands of others would get if they compiled the
same thing, then those thousands of other people should not be compiling
it from source.  I know Gentoo gets a bad reputation in general from the
very vocal users who think tweaking compiler options is the greatest
thing ever, but I don't actually care about that.  I just care about the
tons of wasted compile time and CPU cycles.

> Leading-edge development is typically done from source within the
> first 6 months.  As things mature, it moves to a packages distro for
> the next year.  As software is released and goes into sustainment, it
> goes into a backport distro -- up to 7 years!
> 
> No one distro fits all development, sorry.  Been involved with way
> too many software projects.  They may eventually target RHEL for 7+
> years of support, but they don't start at RHEL -- or you've wasted 2
> years of longevity.

Not everyone thinks RHEL is the only target in the world.

> You don't have people running your code in leading-edge development. 
> You're changing things so much that you can't track things.
> 
> When the architecture is well defined and prototyped, that's when it
> hits more established and mainstream development and goes into a more
> ECM controlled distro.  In Red Hat terms, that is Fedora -- which is
> what the next version of RHEL will be based on.
> 
> You throw out anything that is not compatible with the libraries and
> programs you have chosen.  That can be very, very significant at
> times!  ;)

Sure, and sometimes it is too big and makes you question why one would
want this giant change at all.  If a library changes that much, it must
get a new ABI version, in which case you can keep both installed at
once.  At least you can with a decent packaging policy.
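
That is exactly what library sonames are for; a hypothetical example of
what coexistence looks like on disk:

  $ ls /usr/lib/libbar.so.*
  /usr/lib/libbar.so.1  /usr/lib/libbar.so.1.4.2
  /usr/lib/libbar.so.2  /usr/lib/libbar.so.2.0.1

Old binaries keep loading libbar.so.1 while newly built ones use
libbar.so.2, and neither steps on the other.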

> Open your mind.  Stop Debian distro pissing.  Your viewpoint is not
> the same as everyone involved with LPI, and we do not subscribe to
> your definitions of "buggy" and other _subjective_ comments.  ;)

I am very open minded, and very opinionated.  It is possible to be both.

--
Len Sorensen
_______________________________________________
lpi-examdev mailing list
[email protected]
http://list.lpi.org/cgi-bin/mailman/listinfo/lpi-examdev
