Rob, Michael and Nathan all have excellent points, and I agree with
every one of them; and as Linux users this is an issue that hits us
harder than most (when was the last time you saw slowaris running on
16Mb RAM? or less?); however, it shouldn't be as big an issue as it
is. (Yes, I've seen X thrashing a 256Mb system with well over 200Mb
of X swapped out - it is not a fun thing to watch.)

There seems to be a feeling around Java that just because there is
garbage collection you don't have to even think about memory
management. There are similar thoughts around threading... that
because Java makes it so easy to create threads you don't have to
think about concurrency at all. I see both of these thoughts expressed
on a regular basis. (note that I am NOT saying that this is what Rob
meant to say or was implying by any stretch of the imagination... I'm
backing up a bit here to look at this from a distance... in fact he
clearly is trying to take the opposite track.)  I think everyone here
would agree that both of these sentiments are "just plain wrong" for
any facet of software development that is "interestingly non-trivial"
and/or beyond the domain of education.

So, working from the assertion that developers must manage memory (at
least for the _big_stuff_) we need to ask ourselves exactly what it
is that:
(1) causes resources to be leaked*.
(2) causes leaked resources to be reclaimed.
(3) we can do to cause #2 to happen at our call, rather than whenever
     the collector decides to clean up.

     (* note for #1... memory is only one resource outside of the
        heap that we could be talking about here... things like XA
        resources in an RDB are another - less abundant - resource
        we could be talking about.)

For #1 we can say that the leak is temporary while we wait for GC
to realize that the object is garbage and do something about it.

For #2 then in the case of most everything that holds onto resources
let's start with the GC path. Can we release the object by simply
calling System.gc()? Well the answer is "yes and no". gc() is merely
a *suggestion* "that the Java Virtual Machine expend effort toward
recycling unused objects in order to make the memory they currently
occupy available for quick reuse." I have seen this call behave as a
no-op on many platforms under many circumstances. Besides... that
is still relying on GC to find and dispose of the object... is
there a way for us to more directly control our object's destiny?
After all, we KNOW the object we want garbage collected!

So what part of garbage collection actually releases the resources
of an object? java/lang/Object.finalize(); The way gc frees up
resources is that it calls finalize on any object that it has
detected is garbage (is not reachable) before it goes through and
takes the object apart. So can we call System.runFinalization()
and let it clean up these resources? Well... all that does is
*suggest* that the VM "expend effort" toward running the finalizers
of objects on the finalizeMeQ. How does an object get put on the
queue... why by garbage collection of course. :) So again we're
back to relying on gc, and if we could do that then we wouldn't be
here in the first place. (and again, I've seen this as a no-op.)
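Both of those "suggestions" can be sketched concretely. In the hypothetical class below (the class name and the flag are my own invention, purely for illustration) the spec makes no promise that either call actually runs the finalizer -- calling finalize ourselves, on the other hand, is deterministic:

```java
// Hypothetical sketch: System.gc() and System.runFinalization() are
// only *suggestions* to the VM. Nothing below is guaranteed to run
// the finalizer; on some VMs both calls are effectively no-ops.
public class GcHints {
    static boolean finalized = false;

    protected void finalize() throws Throwable {
        finalized = true;          // stands in for real resource cleanup
        super.finalize();
    }

    public static void main(String[] args) {
        GcHints h = new GcHints();
        h = null;                  // drop the last reference
        System.gc();               // merely a suggestion...
        System.runFinalization();  // ...and so is this one
        // 'finalized' may still be false here -- the spec makes no
        // promise that either call did anything at all.
        System.out.println("finalized after hints? " + finalized);
    }
}
```

Run it a few times on different VMs and you may see either answer, which is exactly the point.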

So what are we to do? We're sitting here with an object that we
have just finished with... we're about to null out the last
reference to it and hand all those resources over to the care of
the GC. What can we do for #3?

Why not finalize the object ourselves?
"It's supposed to be automatic." - go reread that assertion above....
"It's dangerous to call something meant for the JVM." - a method
  call is a method call; who calls it is irrelevant. If you are
  trusting the author of this class to hold onto native resources
  (that is, ones outside the JVM) then you'd better also be trusting
  them to clean them up correctly... and part of creating a proper
  finalizer is to handle the case of it being called more than once,
  or on an already cleaned up instance.
"It's 'protected'." - yeah... so? For one thing 'protected' isn't
  really protected in the OO sense it is supposed to be... it's
  more like package access plus subclass access... barely stronger
  than the default... but that's a whole 'nother [OT] discussion.
  For another thing just about every (well written) class that holds
  significant native resources also has a public clean-up method
  (for example java/awt/Image.flush();) as well as finalize (they
  may not be the same, BTW)
"We may not be done with it." - if you can't figure out that you're
  done with it then why are you saying the GC is leaking? Memory
  leaks may be the fault of the VM, but object leaks are purely your
  own fault. (OK RMI objectTables may be the exception here... :( )
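On that "called more than once" point, here's a minimal sketch of what a proper cleanup path looks like, assuming a hypothetical wrapper around some scarce native handle (every name here is invented). The public cleanup method and finalize() funnel into one guarded release, so calling either of them early, twice, or both is harmless:

```java
// Hypothetical resource wrapper demonstrating idempotent cleanup.
public class ScarceHandle {
    private boolean released = false;

    /** Public cleanup method, in the spirit of java.awt.Image.flush(). */
    public synchronized void close() {
        if (released) return;          // second call: safe no-op
        // ... release the native resource here ...
        released = true;
    }

    /** The GC's hook reuses the same guarded path. */
    public void finalize() throws Throwable {  // widened to public on purpose
        try {
            close();
        } finally {
            super.finalize();
        }
    }

    public boolean isReleased() { return released; }
}
```

Because close() guards itself, finalizing by hand and then letting GC call finalize again later costs nothing.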

So then... we've got the object and are finished with it from the
perspective of our application logic... we're about to null out
the last reference so what do we do? well . . .

does it hold a lot of resource? {
    do we care about the resource? {
        does it have a cleanup method? {
            call it!
        } else does it have an accessible finalizer? {
            call it!
        } else punt!
    } else liar.
} else let GC handle the small stuff.
null out the LAST reference

several options abound for punt! including subclassing and
making finalize accessible in your package... aggressive memory
scrubbing... do nothing at all... I guess it depends on your
situation.
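As a sketch of that first punt! option (both classes below are made up): Java lets an override *widen* access, never narrow it, so a subclass can promote finalize() from protected to public and let any caller trigger cleanup on demand:

```java
// Hypothetical third-party class we can't modify: cleanup is locked
// behind the protected finalize() with no public equivalent.
class StingyHolder {
    boolean cleaned = false;
    protected void finalize() {        // stands in for real cleanup
        cleaned = true;
    }
}

// Our subclass widens the access so callers in any package can run
// the cleanup at will instead of waiting for the collector.
class FinalizableHolder extends StingyHolder {
    @Override
    public void finalize() {           // protected -> public is legal
        super.finalize();
    }
}
```

Use it, call finalize() yourself, then null the last reference; the GC later finds an already-clean object.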

*We* know how to avoid the meltdown... but that's because we're
smarter than the JVM; we as developers have lots of information
at the application layer that the VM doesn't grok... so why would
we let *it*, as dumb as it can be, take us to Three Mile Island?
I think that's the "solution"; not quite a full step back to new,
malloc, free and delete.

I'm reminded of the US Marine Corps statement #7P:
"Proper Prior Planning Prevents Piss Poor Performance".

Oi. I did it again; went to jot off a quick note with a
couple points and wrote a blasted treatise. I'll be
leaving on holiday in the morning and so miss the discussion
that I'm bound to kick up... but I'll read it all when I get
back. (after routing the usual flames for going off topic and
"preaching" to the nearest empty bit bucket.) Cheers all -=Chris



  cabbey at home dot net <*> http://members.home.net/cabbey
           I want a binary interface to the brain!
Today's opto-mechanical digital interfaces are just too slow!

