> The UPX issue would be irrelevant if we didn't ship binaries at all. Hear
> me out. :)
>
> Repeatedly, I have seen stressed here the importance of making sure all
> packages we ship are 100% open source. And rightfully so - I don't
> disagree with that.

Be careful not to conflate things here -- Open Source and Free are not the same 
thing.

> However, shipping binaries could be seen as a form of bloat because they
> are completely redundant when each package is supposed to have complete
> source code included.

Not necessarily.  Not only do you need the source code, you also need all the 
appropriate compilers/assemblers (at the correct versions), the correct 
environment in which to run them, and knowledge of how compilers/assemblers 
work.  I've seen several threads in this forum discussing problems with trying 
to compile/assemble programs.  In addition, users generally don't want the 
source code.  Even though I'm a developer, I'm also a user, and as a user I 
normally don't want to have anything to do with the source code.  I just want 
the program to work, and I don't want to be forced to customize the source 
code and recompile.  The program should have enough options that there is no 
need to compile a custom version.

>From a "bloat" perspective, I think you actually have your reasoning 
>backwards.  Bloat is when you have something that you don't actually need/want 
>and is a waste of space.  Source code is usually orders of magnitude larger 
>than the executable, and if you don't need or want the source code it's a 
>complete waste of space (not to mention the waste of space for the 
>compilers/assemblers you don't want to use).  As a developer, there are times 
>when I download parts of the source code if there's something 
>interesting/unusual the program does, but as a user that almost never happens.

In terms of speed, let me relay a story.  Back in the early days of virtual 
machines and emulators, one of the first I used was VirtualPC.  It was made by 
Connectix and was originally just for Macs.  It was later ported to Windows, 
and then Microsoft bought Connectix and VirtualPC became a Microsoft program.  
Eventually Microsoft got rid of VirtualPC in favor of Hyper-V.

I used VirtualPC back in the day because it was one of the only VMs that would 
let you access a real hard drive instead of a virtual hard drive (which is 
something I desperately wanted, and still do).  VirtualPC had a way of 
accessing real hard drives from within a VM running DOS, but it used a network 
setup and was horribly slow -- too slow to be useful.  The way I tried to get 
around it: when the VM started, I would create a RAM disk and copy the slow 
network hard drive to the RAM drive, run from the RAM drive, and on shutdown 
copy things (slowly) back to the real hard drive.  I would do lots of other 
things while I waited for all the slow copying to take place.  It did work, 
but it was painful.
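In DOS batch terms, that startup/shutdown dance looked roughly like the sketch 
below.  The drive letters and directory names are just for illustration (they 
weren't in my actual setup exactly like this): R: is the RAM drive, created at 
boot by a driver such as XMSDSK, and E: is the slow network-mapped real drive.

    REM STARTUP.BAT -- illustrative sketch only, not my actual batch file.
    REM E: = slow network-mapped real hard drive (assumed letter)
    REM R: = RAM drive created at boot, e.g. by XMSDSK (assumed letter)

    REM On startup: copy the working tree from the slow drive to the RAM drive.
    XCOPY E:\WORK\*.* R:\WORK\ /S /E

    REM ... then work on R: all day ...

    REM SHUTDOWN.BAT -- copy everything (slowly) back before powering off.
    XCOPY R:\WORK\*.* E:\WORK\ /S /E
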

What you're suggesting is somewhat analogous to what I went through with 
VirtualPC.  Everything is too slow, too big, and unwanted, although it can be 
made to work.

