Hi,

On Fri, Sep 7, 2012 at 3:17 PM, dmccunney <dennis.mccun...@gmail.com> wrote:
>
> Hand-optimizing is deprecated.  The compiler can generally do a better
> job than you can.

Depends on the CPU. GCC is fairly good for newer ones (Pentium IV) but
bad at older ones (486). And most compilers aren't up to snuff with
SIMD, to say the least.

Besides, somebody has to write the compilers! That alone means
hand-optimizing!   ;-)

> Optimization for speed generally involves profiling
> the code and writing assembler for speed critical parts, particularly
> those done in loops where the operation will be called many times.

Don't forget instruction timings, caching, pipelines, etc. It's quite
arcane, and I don't claim to understand it all. But I think most
average apps don't need to worry about it (hence the abundance of
dynamic scripting languages, which aren't exactly super speedy).

> Optimizing for size is better done by the compiler, but you are faced
> with the trade-off - the fastest code is inline, but bigger.  Smaller
> code is slower due to the overhead of calling functions.

Depends on the CPU. On the 8086 (and maybe 386), smaller meant faster.
The 486 was quite different (pipelined), the 586 even more different
(superscalar), the PPro more different still (slow 16-bit code), the
PII different again (4-1-1 uops), the PIV very different (no barrel
shifter), etc.

Different ones have different strengths and weaknesses. It's not easy
to keep it all straight. I do agree compilers are useful (or else why
have them?), but they are far from perfect. And GCC isn't exactly the
best for small size, not by a long shot! For size, assembly will kick
C's rear end any day of the week. In days past, my horribly silly
opinion was that C was potentially 10x bloatier than plain assembly.
(But you can blame the libc or linker or ABI for that. Though, again,
a good compiler can make a difference. But all the world's a GCC,
sigh.)
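
To make that quoted inline-versus-call trade-off concrete, here's a
tiny C sketch (purely illustrative, not from any real project): the
inlined version gets a copy of its body at every call site, so the
binary grows, while the plain function stays as one copy but pays
call/return overhead in the loop.

#include <stdio.h>

/* asking for inlining: usually faster in a hot loop, but the body is
   duplicated at every call site, so the binary grows */
static inline int sq_inline(int x) { return x * x; }

/* plain function: one copy in the binary, but every use pays the
   call/return overhead (how much depends on the CPU generation) */
static int sq_call(int x) { return x * x; }

int main(void)
{
    long a = 0, b = 0;
    int i;

    for (i = 0; i < 1000000; i++) {
        a += sq_inline(i & 0xff);
        b += sq_call(i & 0xff);
    }
    printf("%ld %ld\n", a, b);
    return 0;
}

(Of course, at -O2 a modern GCC may inline both anyway, which is
rather the point: the compiler decides that trade-off per call site.)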

Honestly, it's hard (or impossible) to write one blended app that is
optimized for several different CPUs. I think Agner Fog calls this CPU
dispatching (picking code paths at run time via CPUID, etc.). Most
people don't bother. The kludgy workaround is separate .EXEs for each
target, but in a perfect world, the 1% of code that actually differs
would be built in once and called as needed (saving space).
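
Roughly like this, assuming GCC 4.8 or newer on x86 (so not vintage
DJGPP), which provides __builtin_cpu_supports(); the routine names
here are made up purely for illustration:

#include <stdio.h>

/* generic fallback that runs on any x86 */
static long sum_generic(const int *a, int n)
{
    long s = 0;
    int i;
    for (i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* stand-in for an SSE2-tuned version; a real program would use
   intrinsics or hand-written assembly here */
static long sum_sse2(const int *a, int n)
{
    return sum_generic(a, n);
}

/* one function pointer, chosen once at startup */
static long (*sum)(const int *, int);

static void init_dispatch(void)
{
    __builtin_cpu_init();
    if (__builtin_cpu_supports("sse2"))
        sum = sum_sse2;       /* newer CPUs get the tuned path */
    else
        sum = sum_generic;    /* everything older */
}

int main(void)
{
    int data[] = { 1, 2, 3, 4 };

    init_dispatch();
    printf("%ld\n", sum(data, 4));
    return 0;
}

One .EXE, and the CPU-specific bit gets picked once at run time
instead of shipping separate builds per target.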

>> Also there's an inherent 64k limit in size of reads.
>
> That too.  It's a reason I went to ext4 for Puppy and Ubuntu, because
> ext4 extents provide a 25-33% I/O speedup on big files, and the box
> needs all the help it can get.  Even with ext4 I/O is the bottleneck.

Dunno, I've not followed all the various Linux file systems and
benchmarks, even though I run one or two of them myself (barely)!
Something like DJGPP's 16k transfer buffer, plus stdio buffering via
setvbuf(), hides the 64k limit somewhat (unlike 16-bit Watcom, where
you have to do it manually, eh??).
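
For the Watcom case, here's a minimal sketch of doing it by hand with
plain ANSI C setvbuf(), which any DOS compiler should accept; the file
name and the 16k size are just placeholders:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    static char buf[16 * 1024];   /* 16k, same ballpark as DJGPP's default */
    long total = 0;
    int c;
    FILE *fp = fopen("bigfile.dat", "rb");   /* placeholder name */

    if (!fp) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    /* full buffering through our own buffer; this must happen before
       the first read or write on the stream */
    setvbuf(fp, buf, _IOFBF, sizeof buf);

    while ((c = getc(fp)) != EOF)   /* served from the big buffer */
        total++;

    printf("%ld bytes read\n", total);
    fclose(fp);
    return 0;
}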

> (I found an open source driver that lets me read/write the Linux
> slices from Win2K, and Puppy and Ubuntu mount each other's slices,
> with some big apps living on one slice or the other and shared between
> them.  FreeDOS can't see anything else, but if I'm in it I don't
> care.)

Since MS isn't very supportive of other file systems, most people just
use FAT (e.g. FAT32) for sharing stuff. I forget now, but IIRC, FreeBSD
supported HPFS too (at least read-only), and so did Linux.

Most of the DOS workarounds for other file systems seem buggy and
don't work half the time, sadly. LTOOLS [doesn't work here] and
TestDisk are the ones that come to mind (not counting that read-only
HPFS one or the shareware Paragon thingy). All of the NTFS ones were
slow and buggy, last I checked (though admittedly I only checked very
weakly).

>> I think newer file systems, e.g. HPFS, were designed to correct such
>> limitations in FAT. Unfortunately, nobody in DOS world ever cared
>> about anything but FAT. Though it's not easy, so I somewhat
>> sympathize!
>
> By the time newer file systems were becoming available, DOS was on the
> way out.  No point to add DOS support, as how many would actually need
> it?  The reward was not worth the effort.

OS/2 was originally called "DOS 5", I think. And yes, they'd rather
you upgraded the whole system than just the file system. And they
basically dropped OS/2 in favor of Windows and FAT (later Win9x and
FAT32) and then later XP (NTFS).

It's probably not an accident that NT only supports NTFS and (barely)
FAT these days. They should probably natively support a few other file
systems, but I guess that's not in their best interest (sigh).

>> The AMD Athlon's and even Pentium IIIs were faster in some ways than
>> Pentium IVs. It's not just raw clock speed, it's not just newness,
>> it's the whole underlying architecture.
>
> Oh, true enough.  But in general, faster machine, better performance.
> Remember all the folks running 25mhz 80286 AT boxes as fast DOS
> workstations?

A Core 2 at 3 GHz will outperform a P4 at 3.6 GHz. Heck, I'd bet even
a 2 GHz Core 2-ish laptop will also. Then again, it totally depends on
how smart the compiler is. If you recompile (some) things, they will
speed up somewhat. But most of us aren't quite willing to go Gentoo
(GNU/Linux) exclusively (ugh, me neither). That's just asking for lots
of bugs.

Last I checked, Blair's 16-bit md5sum was 2x or 3x slower than
DOS386's FBMD5 (FreeBasic, non-optimized, 32-bit). But again, that
totally depends on how well your CPU supports "legacy" 16-bit code.
Some are better than others. (And of course DJGPP has md5sum too in
one of its pre-CoreUtils packages, but it's built with a fairly old
compiler, so a recompile would probably help a lot, in case anyone
here actually wants to benchmark it.)

>> Well, presumably it's easier to use LFNs there than otherwise.
>
> I use LFNs in FreeDOS, and haven't noticed a problem, but I'm likely
> not doing things where I *would* notice.

It's not bad, it's just slower than without. Juan on the DJGPP list
recompiled libc with and without LFN support, and it was 2x slower
with LFN enabled. But most DJGPP apps these days are ported from *nix,
so we don't really have the luxury of being picky, i.e. LFNs are often
required.   :-/

>> BTW, since you mentioned old old machines, I'm linking to this (thanks
>> to rr for helping me rediscover it):
>> http://www.bitsavers.org/pdf/borland/Turbo_Languages_Brochure_1988.pdf
>>
>> "best programming tools in the world" ... heh. It's true, they are
>> very good, and you can get a lot done with them.
>
> Several of the old Borland tools are available free from Embarcadero
> as legacy tools, like Turbo C and Turbo Pascal.  (Don't recall which
> revisions offhand.)

Yes. TP1, TP3, TP55, and TC2 are freeware downloads, last I checked,
if you sign up (give them your info), but you can't redistribute them
(so we can't mirror them for FreeDOS).

We have other similar tools that are more free, though, thankfully.

> For the time, Borland tools *were* good.  But it's worth noting
> Borland didn't create them - they acquired rights to and repackaged
> stuff created elsewhere.

They acquired old, original versions but heavily modified them. TP1
and TP3 are a far, far cry from TP55 and TP7.

> Meanwhile, I don't get upset, because I don't *expect* much from DOS.
> There are reasons I was perfectly happy to move to Windows, Unix, and
> Linux.

Naively, I expect DOS to work as well as it used to in its heyday. But
it doesn't, due to bugs, omissions, incompatible hardware, and lots of
devs having deprecated everything.
