I have recovered sufficiently from my last fall to climb back up onto my
soapbox again.  I'll have to keep an eye peeled for incoming spooge from
Derek, though.  ;-)

On Tue, 1 Feb 2000, Scott Stirling wrote:
> Linux, or what we loosely refer to as "Linux," is probably going to diverge
> along several lines of specialization, viz., desktop/workstation, server,
> embedded/portable devices.

  I'm not so sure "diverge" is the right word.  I think what we will see is
different distributions and projects specializing in different things.  We
already kinda-sorta see this now.  Red Hat is marketing Linux as a commercial
solution, Debian is making a "totally free" distribution, there's a
distribution optimized for Pentiums, etc.  Fundamentally, though, they are all
based on the same software, and I think that will keep any problems at an
acceptable level.

> The one I left out last time was embedded devices, but that's clearly an
> emerging context for Linux.  Witness Linus Torvald's involvement working
> for Transmeta, creating Mobile Linux.

  'Tis worth pointing out that Linus has indicated that any changes made for
Mobile Linux are intended to be merged back into the mainstream kernel.

> My main interest is in a workstation-cum-server for development and fun.  I
> have that.  The next version of GNOME and Enlightenment will be totally
> kick-ass, but I don't really need it.  It's just nice to have.

  For sure.  One of the best things about Linux is that it just keeps getting
better and better, very quickly.  Three years ago, ease-of-setup was going to
kill us.  Enter Caldera with a much easier installation process.  Two years
ago, the lack of a good GUI was going to kill us.  Enter KDE and GNOME.  A
year ago, the lack of an MS-Word-like word processor was going to kill us.  
Enter StarOffice, Applixware, Corel, KOffice, etc.  This year, the lack of an
MS-Access-like database front-end is the killer.  I wonder what the pundits
will choose next year?

> There are definitely issues, and they definitely involve Linux.

  Of course they do.  However, nothing I've heard voiced anywhere is a
fundamental problem with Linux.  It has always been either that the particular
feature desired simply hasn't been finished yet, or that someone is
complaining about their OEM or ISV not supporting them.

> Without the applications, user-friendliness, eye-candy, etc., Linux is
> just a kernel.

  You can say the same about *any* OS.  :-)

> As seen in the original post in this thread, we tend to think of "Linux"
> the phenomenon or "Linux" the full distribution rather than Linux, the
> kernel.

  That's because "The Linux kernel plus the GNU tools plus XFree plus Netscape
plus KDE plus GNOME plus Perl plus Python plus TCL plus Apache plus
PostgreSQL plus RPM plus sendmail plus..." takes too long to say.  :-)

> So anyway, for an enterprise server (instead of a bunch of throw-aways
> clustered into server farms), you need better threading than Linux has.

  Yup!  Fortunately, this is being worked on, and the 2.3 kernel is supposed
to be much better in this area.

> Java is becoming the programming language of choice for enterprise
> applications.  Any application server worth its salt has to have a Java
> servlet engine, JSP engine, and an EJB server/container.

  I thought the Apache project was working on that?

> Java is inherently multi-threaded and when using native threads (a must on
> multi-CPUs and on production JVMs from Sun) it inherits the threading
> model of the OS it runs on.  The "every thread is a process" model doesn't
> seem to scale under big time loads running multi-threaded apps.

  I don't agree.  Fundamentally, there does not have to be any difference
between a thread and a process.  Indeed, threads are often called "lightweight
processes".  Because threads can share resources like file descriptors and
memory space, people sometimes think of them as something radically different
from regular processes.  But they're not.  Read on.

> Two major problems are the algorithms used by the kernel scheduler, which
> wastes lots of time scheduling all these individual processes...

  Um, *something* is going to have to schedule those threads.  If it isn't the
process scheduler, it is going to be some other code, be it a separate kernel
thread scheduler or a userland thread library.  You can't make that scheduling
overhead go away simply by moving it out of the kernel.  :)

> and the related problem of a lack of a user-level threads library.

  It has been noted that *working* with Linux threads is not as easy as it
could be.  But I imagine a library to handle the messy details for you isn't
that far off.

> Most major Unixen have adopted the many-to-many threading model.  Linux
> may never because Linus and his cronies are dead set against the
> complexity of it.

  Right!  They realize that writing another scheduler just because people see
threads differently from traditional processes is silly.  Instead, the same
scheduler handles everything.  When you call clone(), you select which
elements (file descriptors, signal handlers, memory space, etc.) you want
your child to share with you.  Not only is this conceptually elegant, it
avoids many of the performance penalties you would otherwise encounter.

  At least, that's the justification the kernel team gives, and it makes sense
to me.  A little of their logic can be seen here, in this Kernel Traffic
article:

  http://kt.linuxcare.com/kt20000103_49.html#1

--
Ben Scott
[EMAIL PROTECTED]


**********************************************************
To unsubscribe from this list, send mail to
[EMAIL PROTECTED] with the following text in the
*body* (*not* the subject line) of the letter:
unsubscribe gnhlug
**********************************************************