On Sat, Apr 7, 2012 at 7:19 AM, Zbigniew <zbigniew2...@gmail.com> wrote:

> At the time of cheap computers, do we really need "multiuser" OS? It
> was reasonable 20 years ago, when fast machine was really expensive -
> but is it still today, when every average user can have his own
> computer (and - in fact - has several without even realizing it, e.g.
> his mobile phone, iPhone, watch, etc.)?

We need a *multitasking* OS, where more than one thing can be running
at one time.  Multi-user kind of comes along as part of the package.
When you have a multitasking system, you need to distinguish between
different classes of processes.  Some will be system processes,
executed by the OS kernel, and some will exist in user space.
Different processes will have different priority levels and different
things they are allowed to affect.  The easiest way to make such
distinctions is by who owns the process, and what permissions the
owner has on the system, hence, multiple users.  You also need a file
system where you can store such attributes, which lets FAT out of the

> No idea however, how much overhead/complexity could be disposed of, if
> Linux were single-user - but indeed after establishing my own user
> account ("root" is always present immediately after installation) I
> don't need to create the third one on the same machine.

The Puppy flavor of Linux *is* single user.  You are always running as
root.  There are a couple of other IDs defined by the system, but you
can't log in as them, and the infrastructure that allows you to create
and maintain other IDs has been removed.

Puppy gets away with it because it is intended for older, lower end
hardware, to replace things like older versions of Windows.  I've been
a Unix admin, and the notion of always running as root gave me hives.
But then, MS-DOS and Windows up until WinNT assumed the user at the
machine was an administrator with all powers, and more or less got
away with it, and Puppy does for the same reasons.

The machine that I have Puppy on also runs Ubuntu, and I spend most of
my time in it.  I *prefer* to run as a normal user, and become root
(or use sudo to gain temporary administrative privileges) when I need
to do something that will affect the underlying installation.
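That workflow looks like this in practice (the commented-out commands
are illustrative examples, not anything specific to my setup):

```shell
# Who am I?  uid 0 is root; anything else is an ordinary user.
id -un    # login name
id -u     # numeric UID

# One-off administrative command, prompting for *your* password:
#   sudo nano /etc/fstab
# A full root shell when several commands are needed ('exit'
# drops back to the normal user):
#   sudo -i
```

The nice part is that the privilege is scoped: you type one command
as root and go back to being harmless.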

The amount of overhead and complexity "saved" by Puppy being single
user is minimal.  A Puppy user laboriously put multi-user support back
*in* a custom Puppy version, and estimated it added about a megabyte
to the size of a Puppy ISO image.  It saves very little in terms of
executing processes.  There are all sorts of things running in the
background on a Linux system, mostly daemons started by the system as
part of the boot process, and they'll all be there regardless.  On
the old notebook I put it on, Puppy is a little faster than Ubuntu,
but not by enough to justify the compromises.

There are actually a boatload of single-user Linux systems out there:
half the wireless routers in existence use a Linux kernel.  So do
things like the Amazon Kindle and B&N Nook ebook readers.  You don't
see Linux unless you "root" them, but it's there behind the scenes.

> I remember, that it has been advertised as serious advantage of Win
> 3.x, that the drivers from now on shall be created "for Windows", and
> not for every single program separately, like it was before - but
> never found information, why the drivers weren't made "for DOS"
> earlier. Not "for AutoCAD", "for WordPerfect" and so on - but "for
> DOS" in general (then available for every program/application).
> Anyway, if there really were pure technical reasons - it can be taken
> into consideration during development of such DOS/32 (or DOS/64).
> Anyway: having a choice between using a shipped driver, or accessing
> the hardware directly, would be an advantage to me.

Drivers *were* made for DOS, and you loaded them in CONFIG.SYS.  (I
loaded things like a mouse driver, ramdisk, and disk cache that way.)
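A CONFIG.SYS of that era might have read something like this (driver
names, paths, and options varied by DOS version and vendor, so treat
these lines as illustrative):

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\RAMDRIVE.SYS 2048 /E
DEVICE=C:\MOUSE\MOUSE.SYS
DEVICE=C:\DOS\SMARTDRV.SYS 1024
FILES=30
BUFFERS=20
```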
But a driver is a layer between the OS and the hardware.  Video was
the big issue: it was possible to update the screen via BIOS calls
(and some "compatibles" running MS-DOS did), but it was slow.  Many
programs (especially games) wrote directly to the video hardware to
get performance.  (The rule-of-thumb test back then for whether your
machine was truly IBM-PC compatible was whether you could run Lotus
1-2-3 and Flight Simulator.  If you *could*, it was.)
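Writing straight to video memory looked roughly like this (a sketch
in the style of Borland Turbo C; `MK_FP` and `far` pointers are
real-mode constructs no modern compiler accepts, and 0xB800 is the
color text-mode segment):

```c
#include <dos.h>   /* MK_FP() -- Borland real-mode helper */

/* 80x25 color text mode: two bytes per cell (character, then
   attribute byte) starting at segment 0xB800.  This bypasses
   both DOS and the BIOS entirely. */
void put_cell(int row, int col, char ch, unsigned char attr)
{
    unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0);
    video[(row * 80 + col) * 2]     = ch;
    video[(row * 80 + col) * 2 + 1] = attr;
}
```

The BIOS route meant an INT 10h call per character; per-cell pokes
like this were far faster, which is why games used them.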

You could get away with this on a PC under DOS because it was a
single-tasking environment, and the executing program owned the
machine.  You *cannot* get away with it on a multi-tasking system
where more than one thing might be going on at a time.

Windows forced you to access the hardware through drivers rather than
directly so that context could be maintained and programs wouldn't
step on each other's toes.  And the hardware requirements needed to
run Windows were such that you didn't *need* to access the hardware
directly to get performance.  If the box was capable of running
Windows, you could access the hardware through drivers.

> Z.

Freedos-user mailing list
