[EMAIL PROTECTED] wrote:
> Garrett:
>
>
>> But one should not, I think, ignore the overhead that adding new
>> runtimes (and even new compile-time languages) adds. Each language
>> needs (typically) either or both of a full time runtime environment and
>> a development tool chain.
>>
>
> I'm not suggesting that we ignore the overhead. However, in your
> previous post you lamented the fact that Perl 5.8.x was now at 38mb.
> With 500gb disks showing up as standard in most systems, 38mb doesn't
> really seem that large any more.
>
500GB disks are commonplace, but so are systems with considerably
smaller flash memories. It is useful to be able to run a minimized
system out of 32 or 64MB of CompactFlash (or a USB thumbdrive), and it
is also useful to consider systems that run largely from ramdisk.
I've got a fair bit of experience with embedded UNIX now, and I can tell
you that (practically) nobody is serious about using Solaris in an
embedded context because it is just too huge. Assumptions about
practically unlimited amounts of storage and core memory are, IMO, at
the root of this.
There is also the problem of upgrading systems. A lot of systems out
there can't be upgraded to more recent versions of Solaris simply
because the disk requirements have continued to grow. While growth is
inevitable, I think it is useful to take a careful look, at least, at
the biggest consumers. Perl is certainly one of the larger ones.
(Especially as a piece of software that resides in /usr, rather than
/opt.)
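Taking that careful look is easy to sketch in shell. Something like the
following (my own illustration -- the /usr default and the top-10 cutoff
are arbitrary) ranks the subdirectories of a root by disk usage so the
biggest consumers stand out:

```shell
#!/bin/sh
# Sketch: rank the subdirectories of a root by disk usage, so the
# biggest consumers stand out.  ROOT defaults to /usr; pass another
# path as the first argument.  Sizes are printed in KB.
ROOT="${1:-/usr}"
du -sk "$ROOT"/* 2>/dev/null | sort -rn | head -10
```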
I don't think I'm alone, either, in my frustration at the industry's
trend toward "bloat". By this I mean that I still use a computer in much
the same way that I did back in the mid-90's ... I use a mailer, a web
browser, a bunch of terminals, etc. But now I need a gigabyte of RAM to
be productive, whereas previously the whole system ran as smoothly (in
some cases even _more_ smoothly) with just 32 or 64MB. A lot of the
"waste" is for visual effects, etc, but I think a lot of it is also lost
to simple carelessness ... the notion that "it's just a megabyte",
repeated over hundreds of different programs, ultimately sucking up
pretty much all of my swap.
I seriously consider some of my biggest improvements to the packages
I've maintained over the years to be _negative LOC_. In my earlier days
as a Sun employee, I think I eliminated nearly 100KLOC of "bloat":
replicated functionality and redundant or otherwise useless code. All
without removing any user-visible functionality.
I wish more people would think harder about the cumulative impact of
each kilobyte of core consumed. A lot of stuff in ON is "smart" in this
regard, but there is still a lot that isn't. Java programs that consume
100MB fall into the latter category. So does burning 38MB of storage to
provide functionality that could easily be coded into a few tens of KB
of compiled C.
Observers will find that a lot of my thoughts on software development
are driven by KISS and minimizing waste.
-- Garrett
_______________________________________________
opensolaris-code mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code