Thomas Billert wrote:
> 
> great job with Warpzilla, thanks! But: it takes away 25 MB of RAM
> without even loading a website and needs about 3 times as long to load
> compared to Netscape 4.61. I know it's huge, and it's not very fast on
> other platforms either, but why did Mozilla become such a fat animal?
> 
> Is there a lot of debug code in it? What can we expect for a final
> release? I'm mostly using weaker machines (Pentium class, 64+ MB RAM),
> so for me it's really an issue...

About 25MB at startup is what I've seen in every version I've tried,
even with non-debug, optimized builds. It's a recognized issue, and I
understand people are working on reducing it, but I would be surprised
if the reduction ultimately turned out to be substantial.

First, I think it's part of the price of using C++. For example, in C
a string is simply a sequence of characters, and the memory it occupies
equals its length plus the terminating '\0'. In C++, that same string
becomes an instance of a class, which gives it all kinds of powers in
the form of methods: it might be able to measure its own length,
replicate itself, change case, reverse itself, convert itself into
Unicode or any other character set, find a character or substring
within itself, and have all sorts of nifty moves. Moreover, its class
might be the fourth-generation descendant of a base class and inherit
more primitive methods from its progenitors. So the same string now
has to carry at least an address book (in practice, a pointer to a
table of its methods) as baggage, and that obviously takes up space.
Put another way, C programs tend to be more like sports cars, and C++
ones like deluxe Detroitmobiles, which may have every imaginable
feature but can end up handling like a tank (and rolling over the
sports car like one).
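To make that concrete, here's a minimal sketch; the class names are
invented for illustration and have nothing to do with Mozilla's actual
string classes. A bare C string costs its characters plus the '\0',
while an object of a class with virtual, inherited methods also carries
a hidden pointer to its class's method table:

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // Plain C-style string: the characters plus the terminating '\0'.
    const char raw[] = "warpzilla";          // sizeof(raw) == 10 bytes

    // Toy class hierarchy with virtual methods.  Every instance now
    // carries a hidden pointer to its class's method table (the
    // "address book") on top of the data it actually stores.
    class Text {
    public:
        virtual ~Text() {}
        virtual std::size_t Length() const = 0;
    };

    class String : public Text {
    public:
        explicit String(const char* s) : mData(s) {}
        std::size_t Length() const { return std::strlen(mData); }
    private:
        const char* mData;   // the object still points at the raw bytes
    };

    int main() {
        String s("warpzilla");
        std::printf("C string  : %u bytes\n", (unsigned)sizeof(raw));
        std::printf("C++ object: %u bytes (vtable ptr + data ptr), not\n"
                    "            counting the 10 bytes it points at\n",
                    (unsigned)sizeof(s));
        return 0;
    }

A pointer or two per string isn't much by itself, but a browser juggles
a great many small objects, and the baggage adds up.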

At the same time, there are soft spots in the code. More than once,
in debug step-throughs, I've come across what I would consider needless
buck-passing or bureaucratic bloat: Class A has to open a file, so it
calls Class B, which in turn calls Class C, and this goes on several
letters further down the alphabet, during which I've sat here screaming
"Open the #%^*^%$ file!!!", until we eventually hit a worker that
actually does the job (often by calling the C library function). In
this example, I would think having Class A call the worker directly
could lead to gains in both speed and memory consumption. But fat
fuels growth, and there is a season for trimming.
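Here's a contrived sketch of that pattern; the class names are made up
for the example and don't correspond to any real Mozilla interfaces:

    #include <cstdio>

    class FileWorker {               // the one that finally does the job
    public:
        std::FILE* Open(const char* path) { return std::fopen(path, "r"); }
    };

    class FileService {              // pure pass-through layer
    public:
        std::FILE* Open(const char* path) { return mWorker.Open(path); }
    private:
        FileWorker mWorker;
    };

    class ClassA {                   // could call the worker directly
    public:
        std::FILE* OpenConfig(const char* path) { return mService.Open(path); }
    private:
        FileService mService;
    };

    int main() {
        ClassA a;
        std::FILE* f = a.OpenConfig("config.txt");  // three hops to fopen()
        if (f) std::fclose(f);
        return 0;
    }

In real code each layer usually exists for some reason (portability,
abstraction, a clean interface boundary), but a layer that does nothing
except forward the call still costs an extra object and an extra call,
and that's where the trimming could happen.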

Yet another factor is the increasing cheapness of memory and the
rising minimum bar. Most machines now come with 32 or 64MB. When I
bought this one three years ago, I put in 160MB, figuring 70 or so for
Warp, about 40
for apps, and 50 to spare. Nowadays I'm regularly swapping 20 to 40
megs. Last month a 40-gig hard drive cost what I paid for a 20-gig one
(same brand and manufacturer) in September, and not so very long ago a
40-meg disk was humungous. This type of inflation has been a fact of
life since the advent of computers and will likely only plateau once the
difference in responsiveness between running a program or accessing data
on your machine and one on the other side of the planet becomes
practically imperceptible. Ten years from now? In any case, the trend
can lead to a tendency to shrug off large memory usage with a sort of
Marie-Antoinettish "Let them buy chips!", especially if your priorities
lie elsewhere. In the long run, however, modularity or portability or
some other goal may prove well worth the cost of greater memory
consumption and temporary inconvenience for users until the base model
packs 128MB or a gig or whatever.

The bottom line: this problem is more likely to go away on its own,
as people gradually upgrade to more spacious machines, than to be
solved by a deliberate effort to reduce memory consumption
significantly. If, for instance, using less memory means slower layout
even on the fastest box, and a code downgrade that would have to be
undone in a year or two when that amount of RAM is no longer an issue,
it doesn't quite make sense to sacrifice design or performance or other
long-term gains. If you build a high-speed train that needs special
tracks, engineering a slow version for ordinary rails kind of defeats
the purpose.

Meanwhile, it's all too easy to forget that Mozilla/Warpzilla is not
intended to be an end-user product, but rather an open-source platform
for developers. The end product is whatever Netscape or IBM or anyone
else packages and offers. You might think of the nightlies as a generic
demo, not unlike those that generally accompany a programming library.
In this regard, Mozilla is really one step removed from the problem of
end-user machine capacity; its clients are the entities that do
something with the code base, not the people who use the resulting
products. And the extent to which those entities worry about memory
consumption may well be colored by other considerations; one that also
sells memory chips, for instance, would be stupid not to see the higher
memory requirement as a means of creating demand for those chips, and
weigh that against the resentment forced upgrades often engender among
consumers.

Last but not least, when you run mozilla.exe, remember you're previewing
tomorrow's product, which will naturally perform at its best on
tomorrow's machine.

h~
