On Saturday, 27 May 2017, Rugxulo wrote:

> >> > http://akfoerster.de/dl/akf-software/row4.zip
> >
> > Please look for updates. I'm still working on it.
> > Though most work is for ports to other systems...
> >
> > I always wanted to make it as portable as I could. So I started
> > with the most limited environment I could find.
> 
> I agree that portability is nice.
> 
> > I also plan to maybe make graphical versions.
> > Doing graphics with bcc could be very hard. There is no lib, yet.
> 
> Then use a different compiler for the (separate) graphical version.
> OpenWatcom or DJGPP would be fine. (GRX? Allegro?)

No, that wouldn't be fun. ;-)
I still want to explore what is possible with those 64kB.

Note that bcc also had no code to support screen-oriented programs.
I had to write that in assembly myself. Meanwhile it can also do
colors. That's what I meant when I said it could be useful for
others... Look at the source code.

When someone wants to integrate that with bcc, I'd be willing to
re-release those parts under GPLv2+ and help with the integration...

I think graphics should not be much harder. But I wouldn't want to
do it if I have the impression that no one will use it...

> FreeDOS is open to accepting any Free (or OSI) software in its repos.
> It does welcome games now, so yes.
> 
> > Well, that compiler has many limitations and quirks.
> > I used it mainly to gain experience.
> > And it is, as far as I know, the only 16 bit compiler for x86 that is
> > free software.
> 
> No, there are others (of varying quality). In particular, Free Pascal
> (since v3) has supported i8086-msdos cross-target.

Pascal is a different language.
Actually I like Pascal, but I don't want to rewrite everything.
C is much more widely supported.

> >> > My first thought was that this is very
> >> > small... But then I thought, wait a minute... my very first computer
> >> > only had 16kB of RAM. At that time 64kB wasn't considered small at all!
> >>
> >> 64 kb is indeed a lot of code ... but only in assembly. HLLs tend to
> >> bloat up a lot more (esp. due to "dumb" linkers or big and complicated
> >> functions like printf).
> >
> > Yes, note that I took great pains to avoid stdio completely.
> 
> I don't think printf is the main problem here. But for other compilers
> (OW or DJGPP), definitely yes. Still, if you don't need it, don't
> require it.

Well, when you mentioned printf I thought you understood...
Actually, not using it made the bcc binaries several kilobytes
smaller. It's not that printing wasn't needed: I had to write my
own function to print integers. And even so, the result is still
several kilobytes smaller. ;-)
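A minimal sketch of such a function (my illustration, not the actual row4 code, which targets bcc and may do this differently; this version assumes a POSIX-style write()):

```c
#include <unistd.h>  /* write() */

/* Hypothetical sketch: convert n to decimal in buf without stdio.
 * Returns the length of the resulting string. */
int int_to_str(int n, char *buf)
{
    char tmp[12];
    int i = 0, len = 0;
    unsigned int u;

    if (n < 0) {
        buf[len++] = '-';
        u = 0u - (unsigned int)n;   /* safe even for INT_MIN */
    } else {
        u = (unsigned int)n;
    }
    do {                            /* digits come out in reverse order */
        tmp[i++] = (char)('0' + u % 10u);
        u /= 10u;
    } while (u != 0u);
    while (i > 0)
        buf[len++] = tmp[--i];
    buf[len] = '\0';
    return len;
}

/* Print an integer to stdout, bypassing printf entirely. */
void print_int(int n)
{
    char buf[13];
    int len = int_to_str(n, buf);
    write(1, buf, (unsigned)len);
}
```

Pulling in printf drags in the whole formatting machinery; a routine like this costs a few dozen bytes instead.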

> > And it's not dumb linkers, but dumb libraries, that use printf
> > and everything else internally. If everything depends on everything,
> > the linker can do nothing. Worst case: see glibc. Static linking with
> > glibc? Forget it. It always drags in nearly everything. And most
> > of it is never used.
> 
> There are other compilers that compile statically (e.g. (T)ACK or
> OpenWatcom or FPC).

I think you are confused here. It is not about the compiler, but
about the libc. glibc is not a part of gcc. On most systems gcc
uses a different libc, and even on Linux there are other libcs
you can use with gcc. In fact, gcc is pretty good at optimizing
(djgpp is a different matter).

The GNU/Linux binaries in the row4.zip package are statically
linked with the diet libc. That is a libc that is optimized for
size to the extreme! It even warns you when you use printf or
stdio. That's where I learned that trick.
https://www.fefe.de/dietlibc/

> It's not much (if any) speed increase. And alignment can indeed waste
> far too much, more than just a few kb. You have to be very very
> careful.

I am. ;-)

> >> (UPX, FTW!)
> >
> > Well, UPX made sense when we still used floppy disks...
> 
> Well, if you already have terabytes free and extremely fast Internet
> speeds, then it doesn't matter as much. Otherwise, it's still good to
> have (for various other reasons too).

But it's not the kind of small that I mean.

When you compress with UPX, the code has to be decompressed each
and every time the program is started. So the code that needs to be
executed gets larger rather than smaller. That's how I see it.

Disk space is not so scarce these days. You can get small USB keys
or SD cards that have megabytes or even gigabytes of space.

In fact I often don't even strip the symbol table anymore. That
is not executable code and I don't know about other systems, but
on GNU/Linux it's not even loaded into RAM. It's just additional
information, and that's good. Those who are really low on disk
space can still strip it easily.

And most people don't understand the importance of distributing
source code. They are socialized on systems that don't do it.
So the source code can easily get lost. Then the symbol table
can be useful to those who want to analyze the program.

Maybe UPX has a place in the world of IoT...?


Did you say Internet? When I put software on the Internet, I compress
it with zip or gzip. You don't want to execute code directly over
the Internet, do you?

-- 
AKFoerster <https://AKFoerster.de/>

_______________________________________________
Freedos-devel mailing list
Freedos-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freedos-devel