On 5/4/2012 9:29 PM, Rugxulo wrote:
> In other words, original UNIX (tm) didn't have all the GUIs,
> multi-threading, 64-bit, job control, networking, tons of memory, etc.
> that we take for granted today. Heck, the PDP was 16-bit!

Grr.

The original Unix didn't have all of the things you list.  But the point 
was, the original architecture was extensible in a sane way!  You had an 
operating system with multiple processes in independent address spaces, 
kernel level vs. user space protection, a well designed system call API, 
etc.

Adding a GUI (X windows) was easy - that was an application.  Networking 
was more integral to the OS, but it used the file descriptor APIs that 
were already established, allowing re-use of the existing system call 
APIs.  Job control was already in the base part of the OS, along with 
the multi-threading.

Compare to DOS.  Single address space.  No kernel vs. user space 
protection.  No ability to keep user space applications from touching 
hardware (and trashing it).  Extensibility encouraged through TSRs in an 
ad-hoc manner instead of user space daemons running concurrently.  It's 
just not built to be extensible - it reflects the entry level PC 
hardware and the needs of home users at the time.  It is slightly 
extensible, but to continue the bad analogy, the foundation of the 
building can't support all of the advanced features.

>> The bad analogy to use here would be that Unix and the variants endure
>> because the foundation of the building was good.
> You see anybody still writing K&R C? Using the ed editor? Even plain
> vi has been replaced by huge additions like VIM or VILE. GNU prefers
> Emacs, which alone is bigger than the first UNIX (tm). Nobody uses
> a.out or a.outb anymore, only ELF (or barely, COFF). The original
> Bourne shell is not supported by most as it's not POSIX compliant (as
> most assume Korn or Bash extensions). So a lot has changed.

What is the point here?  I'm talking about the structure and base 
features of the OSes.  Multi-tasking, independent address spaces for 
processes, the file descriptor style API, etc. all exist independently 
of the compilers and specific applications.  We're talking about the 
architecture of the building and the foundation, not the curtains.

>> In comparison, DOS is
>> the hut out in the woods - primitive and good at the time when you
>> needed shelter, but it was never designed to compete with full operating
>> systems that do multitasking, demand paging for memory management,
>> decent network stack hooks, device driver/module support, etc.
> The original PCs only had like 64k RAM! I know you of all people know
> that. Try porting Linux to that. Even ELKS can barely do anything
> useful in 640k. Compilers (and authors) just aren't dedicated enough
> to waste hours trying to cram things into small amounts of RAM. So the
> increase continues. And when your kernel demands 128 MB or more just
> to boot up, you have no reason to use uber lean and mean things like
> DOS (though you can barely use anything else either these days,
> sheesh).
>
> It was the 286 that had hardware support for task switching, not the
> 8088 (though a limited 8086 Desqview version existed, right?). You did
> have device drivers for DOS, but they too were limited in RAM and
> features depending on which hardware extensions you had. Demand
> paging? Did even the 286 have an MMU? Can't remember, doubt it. That's
> why EMM386 exists at all. Face it, the 386 is a monster machine. It
> doesn't make everything else crap, but it does support a lot of
> things.
>
> But so-called "386" OSes these days ... ugh. Even 386BSD could run in
> a handful of RAM originally. Linux too used to run in 2 MB. Well not
> anymore, sadly. GCC alone probably won't work anymore with less than
> 256 MB without heavy paging.
>
> And BTW, networking ... what networking? In 1981? At best you could
> maybe dial up a BBS, heh. Those were fun (during mid '90s for me), and
> they were well-supported for DOS, but that was a far cry from what we
> have today (as you know). You can't blame DOS for not supporting what
> didn't exist.

You are making my case for me - PC hardware and DOS reflected what was 
available at the time.  And as things got slightly better, the new 
functions were layered on top of the existing code base, mostly to 
preserve backward compatibility.  Anything that changes that 
compatibility might result in a nice operating system, but it would be 
hard to call it DOS if it didn't run WordPerfect 5.1 and the various 
versions of the Flight Simulator that are out there.

>> Windows is not a relative of DOS.  Even something primitive like Win 31
>> or Win 95 basically looks at DOS like a boot loader.  They are not in
>> the same league.
> Both of those 100% relied on DOS to do various calls behind the
> scenes. They would not run without DOS. It's already been proven that
> Win 3.1 could run atop DR-DOS (and even run Win95 GUI with a small,
> proprietary hack). There is more stuff going on, yes, but it's still
> heavily dependent on DOS proper. Was it perfect? No, but it worked.

Time out.

Windows does not use DOS for memory management.  Windows does not use 
DOS for semaphores, task scheduling, etc.  Windows does not use DOS for 
most things.  Yes, those classic versions of Windows used DOS for 
bootstrapping and to run DOS applications, but once again it was for 
backwards compatibility.  Break that compatibility and nobody wants to 
talk to you.

Win 95 and Win 98 are big pieces of code with a lot of functionality in them.  
Once they are loaded and running, DOS is fairly irrelevant.

> Win 3.0 was originally going to be a DPMI client, I think, but that
> idea was later abandoned. However, internally their Win16 apps (not
> counting the very few that were 8086-based, which was soon abandoned)
> used DPMI themselves!
>
> BTW, the bugs in NT's DPMI were never fixed by MS (e.g. for Quake 1)
> because they didn't intend NT (at that time) for home users or games.
> Of course that target (but without bugfix) was changed later with
> Win2000 (added FAT32, LFNs) because they decided to abandon Win9x.
> When WinXP came out for home users, they officially declared DOS dead
> (as it didn't use DOS at all at bootup, only a patched v5 version for
> its buggy old NTVDM).
>
>> Here is the crux of my argument - if you need modern hardware support
>> and modern features, you probably should use something that is current
>> and maintained.
> Everything fades, so you can't rely on anybody, sadly. DEC was long
> ago bought out by Compaq (which got bought by HP), for instance.
> Novell bought and sold Digital Research, which spun off a bunch of
> times. Even your beloved IBM no longer messes with home computers
> directly as it spun off to Lenovo. It's quite a brittle industry.
>
>> It is not feasible to "grow" DOS to do all of these
>> other things;
> It's already been done, more or less.

No, DOS really doesn't multi-task well.  DOS doesn't have a concept of 
networking.  DOS isn't keeping up with BIOS changes, USB, and other 
devices that require something more advanced than the original device 
drivers we are carrying around.  DOS doesn't do 32 bits - DOS extenders 
do, in a hacked up kludgey way.

>> what would be left would be something that barely runs any
>> DOS applications and doesn't do all of the new stuff well either.  There
>> is not enough demand or qualified people with free time.
> Probably not enough demand, no, but qualified people always find a way
> to extend Linux. Nothing is stopping those same people (or similar)
> from improving things. If I said, "Linux will never support xyz", I'd
> be short-sighted and probably proven wrong. But if I say the same
> about FreeDOS, somehow I'm being fair? Why are people so stubborn?
>
>> Want to keep DOS relevant?  We need applications ...  today.
> Yes, but our bigger problem is the entire architecture is dying. Not
> x86, that's thriving big time, but just 16-bit in general. They want
> to replace the BIOS with UEFI, and who knows how long before cpus no
> longer support legacy mode. VT-X is better than nothing, I guess, but
> that isn't universal yet.

So where should we spend our time?  Trying to recreate all of the modern 
conveniences of Linux (32/64 bit support, multi-tasking, networking, 
modern device support, etc.) or writing some compatibility layer code so 
that we can boot our existing OS that runs our existing code?

A lot of the DOS applications that we are using date back quite a few 
years.  They need refreshing, and given that a lot of them don't have 
source code available, it is not going to happen.  Case in point - my 
TCP/IP code.  Not to throw mud on the existing networking code out 
there, but it was pretty old, pretty buggy, and pretty slow.  A few 
modernized applications can go a long way to keeping DOS useful for people.

>> (Different topic: In an earlier note you misinterpreted me.  I said you
>> could re-implement DOS with the int 21 interface easily, and some well
>> behaved code might even run.  But if you don't have the direct access to
>> all of the hardware, TSRs, device drivers, segmented memory tricks,
>> etc., it's not really DOS anymore - it would run so few applications.)
> VMs are too slow and buggy. Maybe we should focus on improving those
> (VirtualBox, Bochs, QEMU, DOSBox). But I doubt that will be easy.
>
> Long story short: you think the answer to everything is "more apps"??
> DOS already has 30 years of apps, and it didn't phase MS (of all
> people) one bit. Win64 still doesn't support it, they didn't even try
> (and yes, they do know how). DOSEMU isn't in most distros at all.
> DOSBox is only treated as a lark for rare games. So much good stuff
> seems to be thrown out the window.  :-(
>
> Could we use more apps? Sure. Would I be willing to help? Of course,
> in whatever little way I can. But will it solve anything long-term?
> No. The kernel isn't the problem (though it can always be improved),
> it's the ecosystem, the world, developers, everything has changed so
> much. Everybody has their own (incompatible) solutions to everything,
> which sadly often doesn't involve FreeDOS at all, not even when
> appropriate.
>
> Saying we can't fix or improve anything is very pessimistic. No, I
> don't see any technical reason to drop compatibility, and I don't want
> anybody to do so, but I don't necessarily think we should rest on our
> laurels.

Once again, where should we focus scarce resources - keeping what we 
have running and refreshing some of the apps, or trying to do a complete 
redesign from the ground up and re-inventing a lot of code that has 
already gone into Linux?

Anybody can go off and start another SourceForge project.  There are 
lots out there.  But we need another 32 bit "DOS like" operating system 
that doesn't run the existing applications like we need a hole in the head.

Like it or not, the path for the future is going to be running DOS 
inside of virtual machines hosted by an operating system that is kept 
current and up to date.  It's easier to keep the emulators going than it 
is to keep up with all of the device driver and hardware changes that we 
are going to need.  Virtual environments are going to look pretty 
attractive when you can't boot a machine with DOS running on the bare 
metal in a few years.


Mike


_______________________________________________
Freedos-devel mailing list
Freedos-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freedos-devel
