> by the way, the 2 programs are the setup stuff and edlin. hardly
> rocket science.

Even those two "simple" programs would probably be considered "rocket science", 
or at least akin to "learning a foreign language", by most students who are 
only used to modern technology and web apps.  Low-level stuff like I/O and IRQ 
and BIOS emulation isn't flamboyant enough for most younger people to get 
involved with these days.

****

> Perhaps it could be used to solve one of the most frequent problems
> I hear. Running FreeDOS on modern UEFI hardware.

I think that is the main point of it all.

> As we are all well aware, this cannot be directly accomplished and
> would require an abstraction layer between the OS and the actual
> hardware.

It actually can, at least in certain situations.  The fundamental problem, at 
least as I see it, is I/O.  I know a lot of the talk seems to be around sound 
cards, but let's discuss it in terms of something a little simpler: joysticks.

A regular, old-fashioned joystick uses an I/O port, specifically port 201h.  
There is also a BIOS function that lets you access the joystick, INT 15.84.  
Over the years as computers got faster and joysticks got more complicated (more 
buttons and axes), the INT 15.84 BIOS interface effectively became useless.  
Modern programs that want to use a joystick almost never use INT 15.84 and 
instead try to talk to I/O port 201h directly.  That has caused all kinds of 
problems.
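
To make the hardware dependence concrete, here's a minimal sketch of how a DOS 
program typically polls the game port directly (assuming a Watcom/MS-style 
compiler with inp()/outp() in conio.h):

    #include <conio.h>

    /* Poll the game port at 201h for joystick A.  Writing anything to the
       port fires the axis one-shots; the program counts how long the low
       bits stay set (which is why this is inherently CPU-speed dependent).
       Buttons are active-low in bits 4-7. */
    void read_joystick(unsigned *x, unsigned *y, unsigned char *buttons)
    {
        unsigned char v;

        *x = *y = 0;
        outp(0x201, 0);                 /* start the one-shot timers */
        do {
            v = (unsigned char)inp(0x201);
            if (v & 0x01) (*x)++;       /* X axis still timing out */
            if (v & 0x02) (*y)++;       /* Y axis still timing out */
        } while (v & 0x03);
        *buttons = (unsigned char)((~inp(0x201) >> 4) & 0x0F);  /* 1 = pressed */
    }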

Sound cards have a similar, yet slightly different, problem.  There was NEVER a 
widely supported BIOS-level interface for sound cards.  The de facto 
"standards" that everyone seemed to base their sound card designs on were Ad 
Lib and Sound Blaster, which were fundamentally hardware (I/O) based and 
proprietary (they also used things like DMA channels and IRQs).
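
As an illustration of how hardware-bound those "standards" were, here is a 
sketch of the classic Sound Blaster DSP reset handshake (base address 220h is 
an assumption here; it was jumper- or BLASTER-variable-configurable).  It is 
raw port I/O from start to finish, with nothing BIOS-like to call instead:

    #include <conio.h>

    /* Returns 1 if a DSP answers at the given base port (e.g., 0x220). */
    int sb_dsp_reset(unsigned base)
    {
        int i;

        outp(base + 0x6, 1);            /* pulse the DSP reset line */
        for (i = 0; i < 100; i++)       /* crude stand-in for the ~3 us delay */
            ;
        outp(base + 0x6, 0);

        for (i = 0; i < 1000; i++) {    /* wait for data-ready, expect 0xAA */
            if ((inp(base + 0xE) & 0x80) && inp(base + 0xA) == 0xAA)
                return 1;
        }
        return 0;                       /* nothing Sound Blaster-ish here */
    }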

Now, we're in a world where USB is pretty much the standard port for everything 
plugged into the computer (certainly for joysticks and to a large extent for 
sound devices).  From a hardware perspective, the USB host controllers are the 
only things that actually talk to the "real" I/O ports (sometimes PIO but 
usually MMIO).  DOS programs still expect to see the joysticks on I/O port 201h 
and the sound cards at various I/O ports, but those ports don't exist on the 
real hardware any more and must somehow be virtualized.

The problem is even worse when you're using something like DPMI.  DPMI does 
allow thunking down to the lower level (the DOS/16-bit "layer") for interrupts 
but doesn't provide a standard way to thunk for I/O ports.  Since a way to 
thunk I/O isn't provided, every single DPMI-based program would need to 
understand things like USB and all the different kinds of devices that can be 
USB-attached in order to work on modern hardware.  Of course, none of them 
understand USB and none of them thunk the I/O, either.
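
The asymmetry is easy to see from a DJGPP program: DPMI's "simulate real mode 
interrupt" service (INT 31h AX=0300h, wrapped by the library call below) will 
reflect an interrupt down to the real-mode layer, but a port access has no 
such service and goes straight at the (possibly nonexistent) hardware:

    #include <string.h>
    #include <dpmi.h>
    #include <pc.h>

    void joystick_example(void)
    {
        __dpmi_regs r;
        unsigned char bits;

        /* Interrupts CAN be thunked down to the 16-bit layer: */
        memset(&r, 0, sizeof r);
        r.x.ax = 0x8400;            /* INT 15h AH=84h, DX=0: read switches */
        r.x.dx = 0;
        __dpmi_simulate_real_mode_interrupt(0x15, &r);

        /* ...but port I/O can't -- this goes directly to "port 201h",
           which doesn't exist on modern hardware: */
        bits = inportb(0x201);
        (void)bits;
    }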

It is possible to virtualize I/O in DOS with MS EMM386 and with Qualitas' 
386MAX.  JEMM also has a proprietary way to virtualize I/O with JLOAD, and I am 
currently trying to add the MS/Qualitas-style capability to JEMM.  But those 
methods will only work with programs that don't use DPMI or that thunk the I/O 
from DPMI.  IOW, many old programs would need to be rewritten to change the way 
they do things to work properly on modern hardware without a VM.  That ain't 
gonna happen.
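
For the curious, the mechanism those memory managers use looks roughly like 
the following (a conceptual sketch, not any real product's code; 
virtual_port_read/write are hypothetical names).  With the port marked in the 
TSS I/O permission bitmap, an IN or OUT from the V86 task raises a general 
protection fault, and the monitor decodes and emulates the instruction:

    struct v86_regs { unsigned long eax; unsigned long eip; /* ... */ };

    extern unsigned char virtual_port_read(unsigned port);
    extern void virtual_port_write(unsigned port, unsigned char val);

    /* Called on #GP from V86 mode; ip points at the faulting instruction. */
    void gp_fault_handler(struct v86_regs *r, const unsigned char *ip)
    {
        switch (ip[0]) {
        case 0xE4:                          /* IN  AL, imm8 */
            r->eax = (r->eax & ~0xFFUL) | virtual_port_read(ip[1]);
            r->eip += 2;
            break;
        case 0xE6:                          /* OUT imm8, AL */
            virtual_port_write(ip[1], (unsigned char)r->eax);
            r->eip += 2;
            break;
        /* 0xEC/0xEE (IN/OUT via DX), string and 16-bit forms omitted */
        }
    }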

What a Virtual Machine does is provide another level of abstraction between the 
real hardware (e.g., the I/O ports associated with the USB hardware) and the 
virtualized hardware (e.g., I/O port 201h that a DPMI program is expecting to 
see a joystick attached to).  With this extra layer of abstraction, the DPMI 
program doesn't need to do the "thunking" -- it is handled "automatically" by 
the layer that sits between the real I/O ports and the virtualized ones.
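
On the virtual side, the layer's handler for port 201h might look something 
like this (a hypothetical sketch; struct usb_joystick_state and its fields are 
assumed names, not any real VM's API).  It synthesizes the game-port bit 
layout from whatever the USB HID joystick actually reported:

    /* State the VM keeps for the emulated game port. */
    struct usb_joystick_state {
        unsigned buttons;            /* bit 0 = button 1, bit 1 = button 2 */
        int x_timing, y_timing;      /* one-shot counters derived from axes */
    };

    unsigned char virtual_port_201_read(const struct usb_joystick_state *js)
    {
        unsigned char v = 0xF0;              /* all buttons up (active-low) */

        if (js->buttons & 1) v &= ~0x10;     /* button 1 pressed */
        if (js->buttons & 2) v &= ~0x20;     /* button 2 pressed */
        if (js->x_timing > 0) v |= 0x01;     /* X one-shot still timing out */
        if (js->y_timing > 0) v |= 0x02;     /* Y one-shot still timing out */
        return v;
    }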

That hardware abstraction layer is one of the main things that Virtual Machines 
provide.  Of course, different VMs virtualize different hardware devices.  I 
think they all do mice and keyboards and disks, a lot of them do Ethernet cards 
and printers, but only some of them will do things like PC Speakers and 
joysticks and sound devices.

One of the main things that Windows has done is to move the BIOS _function_ (an 
abstraction layer between the hardware and the software) up into the OS itself. 
 In fact, Microsoft at least used to call it the "Hardware Abstraction Layer" 
or HAL, but I don't know if they still call it that or not.  *nix does 
something similar -- they have hardware-specific drivers (that talk to devices 
at the I/O level) and then have a "layer" that a program "talks to" when it 
wants to send or receive data from some particular device.  The program doesn't 
need to know what kind of hardware port the device is attached to.

DOS has always expected that abstraction layer to be provided by the BIOS.  For 
example, if you use the BIOS INT 13h function for disk access you don't need to 
know if the DISK is MFM/RLL, ESDI, SCSI, IDE/ATA, USB, or whatever.  The INT 
13h abstraction layer hides all that from you.  The Ethernet packet driver 
interface is similar -- it provides a standard software interface that Ethernet 
programs can talk to and they don't need to understand the nuances of every 
different kind of Ethernet card.  While the Ethernet packet driver interface 
doesn't actually exist in the BIOS, it provides an abstraction layer similar to 
what a BIOS would provide if there were one.  My DOS USB drivers provide a 
similar BIOS-level interface for the USB host controllers (but a specific USB 
device, like a joystick, needs an additional layer before a program can "see" a 
USB joystick).
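
Going back to the INT 13h example, here is a minimal sketch of a sector read 
through the BIOS (Borland/Watcom-style dos.h assumed).  Nothing in it depends 
on what kind of disk is actually attached:

    #include <dos.h>
    #include <stdio.h>

    int main(void)
    {
        static unsigned char buf[512];
        void far *p = buf;
        union REGS r;
        struct SREGS s;

        segread(&s);                /* initialize segment registers */
        r.h.ah = 0x02;              /* INT 13h function 02h: read sectors */
        r.h.al = 1;                 /* one sector */
        r.h.ch = 0;                 /* cylinder 0 */
        r.h.cl = 1;                 /* sector 1 (sectors are 1-based) */
        r.h.dh = 0;                 /* head 0 */
        r.h.dl = 0x80;              /* first hard disk */
        s.es   = FP_SEG(p);         /* ES:BX -> buffer */
        r.x.bx = FP_OFF(p);
        int86x(0x13, &r, &r, &s);

        printf(r.x.cflag ? "read failed\n" : "read OK\n");
        return 0;
    }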

In modern computers the BIOS abstraction layer no longer exists -- it has been 
moved up into the (modern) OS itself.  From a user perspective, I think it was 
a very bad mistake to let that happen but we're way too far down the road to go 
back.

> A project could be created to provide a very thin Linux based system
> (possibly using an RTOS kernel) whose only job is to manage the
> abstraction layer and implement the virtual machine to run FreeDOS. 
>
> This could be done almost transparently. Booting straight to DOS
> unless the user pressed a specific key (Like F1) during boot. 
>
> Pressing such a key would bring up a BIOS like interface that could
> be used to change the virtual BIOS settings and configure drivers
> and such aspects of the host OS.
>
> Their job would be to create that interface and make it all work
> seamlessly. Most of the pieces required exist. But, it would not be a
> small task to implement.

No, it wouldn't.  And I think you're also assuming it would be based on some 
existing Linux VM (like QEMU or KVM or something like that).  Some existing VMs 
provide enough of a BIOS emulation that it may be a partial solution, but it 
would need to provide enough specific hardware support that I'm not exactly 
sure how "thin" it could actually be.  It will still need to include all the 
hardware drivers for all the modern hardware that could potentially need to be 
virtualized to be used by the DOS VM.  One advantage would be standardization 
(e.g., all sound devices could be virtualized as one of the SoundBlaster 
models).

> It could yield much better performance and more accurate emulation
> than traditional virtual machines. With todays multi-core systems,
> individual cores could be dedicated to emulating various aspects of
> PC hardware. For example, one core could be dedicated to performing
> the tasks handled by a sound card.

In theory, yes, but probably not the way to go.  Modern systems generally run 
too fast to run older programs correctly (especially games).  I think such a VM 
would still need to accurately emulate various types and speeds of hardware 
rather than just running as fast as it can all the time, similar to what some 
(but not all) modern VMs do.
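
DOSBox is an example of a VM that does throttle: its config file lets you pin 
the emulation to a fixed number of emulated instructions per millisecond 
instead of running flat out, e.g.:

    [cpu]
    core=normal
    cycles=3000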

> I think such a project could appeal to many. There is a lot of
> interest in playing old games. Also, since this would be a generic
> legacy PC emulation layer. It could be used to install other
> Operating Systems like MS-DOS, PC-DOS, etc.

Or even older versions of Windows.

****

> I think I actually proposed a similar idea some time ago except that
> it was *itself* the OS, rather than a stripped-down Linux distro.
> Sort-of a pre-AMD64 PC emulator, virtualizing where possible,
> emulating where not.

I think a "thin" Linux distro has the best chance of working and also the best 
chance of getting volunteers to work on.  You still need all the 
real-to-virtual I/O port stuff to be handled somewhere/somehow, and I think 
Linux would be the best place to find existing resources that could provide the 
necessary level of support.  You certainly can't expect to get it from MS.

> As for all the people saying "make FreeDOS more like Linux", they
> don't seem to understand FreeDOS *or* Linux.

Indeed.

****

> This is a very sketchy thought, but...
>
> AIUI, the way that 386 memory managers for DOS work is that they put
> the CPU into protect mode, map RAM into upper memory blocks as DOS
> wants, then start a single, non-multitasking V86 mode VM for DOS
> itself.
>
> That basic process would be enough to boot a DOS instance, wouldn't
> it?

No, not really.  Before you can run a Memory Manager you need a BIOS and an OS 
to install it on.

> "All" it needs is a memory manager that can start as a 32-bit
> process, set up a few interrupts -- INT 11 for the hard disk, for
> instance -- start a single V86 process, and then kick DOS off in
> that process. Then a stub program for DOS to load in CONFIG.SYS to
> enable the memory manager.

No, not quite.  For example, the way MS Windows worked (back before NT when 
Windows was actually a DOS program) was that when Windows started it would send 
a special call to EMM386 that told EMM386 to shut itself off.  Windows would 
transfer a little bit of information from EMM386 (but it would NOT, e.g., 
transfer the I/O virtualization information) and then would handle the EMM386 
stuff "The Windows Way".  When Windows exited, it would tell EMM386 to turn on 
again.  Exactly how that all worked was proprietary to MS which is why other 
memory managers are incompatible with Windows.  Bob Smith and Qualitas managed 
to work through some of those issues, but I'm not sure they ever figured out 
the whole thing.  I do believe 386MAX was able to work with Windows, at least 
partially.  BTW, this process was called GEMMIS (if you want to try and look it 
up).
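
The public half of that handshake is at least visible: Windows announces 
itself with an INT 2Fh AX=1605h startup broadcast (and AX=1606h on exit), 
which any resident software can watch for.  A Borland-style sketch of just the 
watching part (the GEMMIS handover itself is the proprietary piece, so it only 
appears as a comment here):

    #include <dos.h>

    static void interrupt (*old_int2f)();

    static void interrupt int2f_hook(unsigned bp, unsigned di, unsigned si,
                                     unsigned ds, unsigned es, unsigned dx,
                                     unsigned cx, unsigned bx, unsigned ax)
    {
        if (ax == 0x1605) {
            /* Windows is starting: a GEMMIS-aware memory manager would
               hand over its state and shut itself off here. */
        } else if (ax == 0x1606) {
            /* Windows has exited: turn the memory manager back on. */
        }
        _chain_intr(old_int2f);     /* pass the broadcast down the chain */
    }

    void install_hook(void)
    {
        old_int2f = getvect(0x2F);
        setvect(0x2F, int2f_hook);
    }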

> Normally, DOS starts EMM386 or JEMM386 or whatever. This way round,
> JEMM386 starts DOS.

Nice thought, but I don't think it will work.  You need much more than a memory 
manager -- you really need a BIOS and an OS also.  Even though I don't 
particularly like Jerome's idea of a "Thin Linux" (I really would like things 
to run on real hardware), it may be the most reasonable solution.

****

> There could be a small image containing a thin Linux host that is
> booted by the system. Possibly in its own partition or image file.
> This host then provides an abstraction layer which then boots the
> system like it was a PC with a legacy BIOS. Most likely providing
> some abstracted and emulated hardware like a SoundBlaster compatible
> audio card. The Client OS (FreeDOS) would be installed to a normal
> partition on the drive. 
>
> Providing support to also map things like I/O ports from the host OS
> to the client, to allow the possibility of connecting real legacy
> hardware to the machine (like CNC machines and TTY devices).

Again, I think the I/O stuff is the real point to emphasize, though I'm not 
sure how effective you can be in mapping I/O ports for old hardware.  You don't 
even have a real ISA bus any more (at least not any ISA card slots) -- even 
that is "virtualized" and only exists in the software/firmware.   

****

Here's maybe another way to think about it.  DOS was originally written to sit 
"on top" of the BIOS, and the BIOS sat directly "on top" of the hardware.  In 
DOS you could bypass the BIOS and go directly to the hardware if you wanted, 
but it was usually better to go through the BIOS (at least if the BIOS did what 
you wanted it to do).  The way modern hardware works, that's not really 
possible any more.  Now, the "BIOS" is simply an "abstraction layer" in the OS 
itself.  To virtualize a BIOS for DOS to use, the OS must inject a new 
"translation layer" between the real hardware and the BIOS that DOS needs -- a 
layer that virtualizes the old types of hardware (I/O ports) that no longer 
exist.  The way this is done on modern machines is to completely "encircle" 
DOS in a Virtual Machine rather than simply providing a "translation layer" 
between the BIOS and the hardware.

For a while, the UEFI manufacturers provided a CSM (Compatibility Support 
Module) as the "translation layer" so you didn't need a VM.  But they've even 
stopped doing that nowadays.  So, we'll either need to come up with a "generic 
CSM" that doesn't need a VM but still provides the needed level of hardware 
support both now and in the future, or we'll need to do some kind of "thin VM" 
as Jerome is suggesting.  I think the second option has a better chance of 
long-term viability even though I would prefer the first.

I also think the second option might be more interesting to Google 
aficionados, but it is FAR from trivial. 

