On Fri, Sep 7, 2012 at 3:36 PM, Rugxulo <rugx...@gmail.com> wrote:
> On Fri, Sep 7, 2012 at 11:19 AM, dmccunney <dennis.mccun...@gmail.com> wrote:
>> I'd be interested in what Felix is doing and where he sees visible slowness.
>> As you mention, FreeDOS is 16 bit code.  But how fast things will
>> appear to be will have more to do with the hardware you are running on
>> than whether the code is 16 or 32 (or 64) bit.
> Hardware and software coexist. So if one is hobbled, the other
> suffers. That's why we need good compilers and non-crippled computers
> that can run all instructions at decent speeds. I'm afraid that some
> newer ones neglect legacy in lieu of newer stuff. (Though I guess slow
> is better than nothing!) Using appropriate switches in GCC can really
> make a noticeable difference. (Though I still think optimization is an
> almost impossible task, too cryptic.)

Hand-optimizing is deprecated.  The compiler can generally do a better
job than you can.  Optimizing for speed generally involves profiling
the code and writing assembler for the speed-critical parts,
particularly loops where the operation will be executed many times.
Optimizing for size is better left to the compiler, but you face a
trade-off: the fastest code is inlined, but bigger.  Smaller code is
slower due to the overhead of calling functions.

>> HD access varies depending upon the drive and the BIOS.  The old
>> notebook I run FreeDOS on is hobbled, because the HD is IDE 4, with an
>> 18 mbit/sec transfer rate.  This is a BIOS limitation, so I can't just
>> swap in a faster drive.  FreeDOS flies, but Windows and Linux notice.
> Also there's an inherent 64k limit in size of reads.

That too.  It's a reason I went to ext4 for Puppy and Ubuntu, because
ext4 extents provide a 25-33% I/O speedup on big files, and the box
needs all the help it can get.  Even with ext4 I/O is the bottleneck.

(I found an open source driver that lets me read/write the Linux
slices from Win2K, and Puppy and Ubuntu mount each other's slices,
with some big apps living on one slice or the other and shared between
them.  FreeDOS can't see anything else, but if I'm in it I don't
care.)

>> I remember the old MS-DOS days where I would do things like change the
>> interleave value on a drive for faster operation. It gave the
>> machine time to finish loading the first block before the second came
>> under the drive head, and you didn't have to wait a full drive
>> rotation for it to happen.  You got a nice I/O boost.  It has been a
>> long time since I was concerned with such things.  :-)
> I think newer file systems, e.g. HPFS, were designed to correct such
> limitations in FAT. Unfortunately, nobody in DOS world ever cared
> about anything but FAT. Though it's not easy, so I somewhat
> sympathize!

By the time newer file systems were becoming available, DOS was on the
way out.  There was no point in adding DOS support: how many would
actually need it?  The reward was not worth the effort.

>> On a current machine with a multi-ghz CPU and an IDE 6 or SATA drive,
>> I'd be a bit startled at perceptible slowness in file I/O, even with
>> 16 bit code running.  The faster the underlying machine, the faster
>> the code will run, even if it's 16 bit.
> The AMD Athlon's and even Pentium IIIs were faster in some ways than
> Pentium IVs. It's not just raw clock speed, it's not just newness,
> it's the whole underlying architecture. Computers are very very weird
> and sensitive to certain instructions and pairings, pipelines, or
> whatever. I don't claim to understand it, just saying, you can't
> expect the same software to run at the same relative speed on a
> different computer.

Oh, true enough.  But in general: faster machine, better performance.
Remember all the folks running 25 MHz 80286 AT boxes as fast DOS
machines?

>> Better performance under something like DOSEMU isn't really a
>> surprise, as it's essentially a virtual machine, and the actual file
>> I/O is being passed to and performed by the native routines of the
>> underlying OS.
> Well, presumably it's easier to use LFNs there than otherwise.

I use LFNs in FreeDOS, and haven't noticed a problem, but I'm likely
not doing things where I *would* notice.

> Obviously due to the FreeDOS kernel lacking direct VFAT support
> (patents ... ugh), it's going to be slower with a TSR mirroring all
> file access. This is why I wanted to do it with SFN only to prove the
> point.

And it's as fast as it can be, at the cost of some features you might
ordinarily want.

> BTW, since you mentioned old old machines, I'm linking to this (thanks
> to rr for helping me rediscover it):
> http://www.bitsavers.org/pdf/borland/Turbo_Languages_Brochure_1988.pdf
> "best programming tools in the world" ... heh. It's true, they are
> very good, and you can get a lot done with them. So it's not like DOS
> doesn't and hasn't had good tools for the time. If all you wanted to
> do was program a little C or Pascal or play a few games, esp. on
> really old hardware which no other OS can run on (barely), I would
> really not hesitate on recommending FreeDOS. Even on newer hardware,
> it still (more or less) works! And we have a lot more tools these days
> than we did 20 years ago. Too bad DOS is actually less (!) popular now
> with more tools than previously.

Several of the old Borland tools are available free from Embarcadero
as legacy tools, like Turbo C and Turbo Pascal.  (Don't recall which
revisions offhand.)

For the time, Borland tools *were* good.  But it's worth noting
Borland didn't create them - they acquired rights to and repackaged
stuff created elsewhere.

Meanwhile, I don't get upset, because I don't *expect* much from DOS.
There are reasons I was perfectly happy to move to Windows, Unix, and
Linux.

Freedos-user mailing list