You've missed nothing - I just have not communicated clearly.

>> If the cached object is reachable from a root and it is dirty, then
>> the incoming object is discarded.

>...so you're throwing away the new incoming data?

Yes, that is correct.  The cached object is dirty and the incoming data is
not, which means the user has performed updates on the object in the cache
and would be less than pleased if their changes were overwritten.

>Or:

>> However, if the cached object is not reachable from a root, then the
>> incoming object should replace it.

>Why bother replacing it if you know it's not reachable?  If it's not
>reachable it's never going to be used again, and will be discarded at the
>next GC, so it really doesn't matter what you do.  Which suggests that you
>will get away with always doing the first thing in all cases, i.e. to ignore
>the incoming data.  So it sounds like you don't even need to be receiving
>that data at all, since the two outcomes you describe are (a) ignore the new
>data or (b) store the new data in a doomed location.

Sorry.  When I said replace, I should have been clearer.  The cache works
with weak references indexed by a key.  If the target of the cached
weak reference is not reachable from a root, then the cached weak reference
should be discarded and a new weak reference should be created using the
incoming object.  As I'd mentioned, these objects have the same logical
identity, so the new weak reference would be stored using the same key.
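
To make the intended behaviour concrete, here is a minimal sketch of the
merge path I have in mind.  The Hashtable-backed store, the CacheEntry type
and its IsDirty flag are illustrative names only, not the real code:

    using System;
    using System.Collections;

    // Illustrative only: "CacheEntry" and "IsDirty" stand in for whatever
    // type and change-tracking the real cached objects use.
    public class CacheEntry
    {
        public bool IsDirty;
    }

    public class WeakCache
    {
        // Weak references to cached objects, indexed by logical identity.
        private Hashtable entries = new Hashtable();

        // Called when a new instance arrives that has the same logical
        // identity as an object that may already be in the cache.
        public void Merge(object key, CacheEntry incoming)
        {
            WeakReference weakRef = (WeakReference)entries[key];
            CacheEntry cached =
                (weakRef == null) ? null : (CacheEntry)weakRef.Target;

            if (cached == null)
            {
                // No entry yet, or the old target has already been
                // collected: discard the dead weak reference and store a
                // new one pointing at the incoming object, under the same key.
                entries[key] = new WeakReference(incoming);
            }
            else if (cached.IsDirty)
            {
                // The cached object is still alive and the user has
                // modified it, so the incoming object is discarded.
                return;
            }
            // (An alive-but-clean entry is a separate case, not covered here.)
        }
    }

Of course, weakRef.Target only goes null once a collection has actually run,
which is why I was hoping to get at the reachability information more
directly.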



-----Original Message-----
From: Griffiths, Ian [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 07, 2003 9:23 PM
To: [EMAIL PROTECTED]

> Interesting.  I thought that the graph calculation was comparatively
> less expensive than compacting the memory.

It'll depend.  If there are no unreachable objects, then it's pretty clear
that the graph calculation will take longer, because there will be no
compaction...  And in general, the fewer objects that actually need
collecting, the more the graph calculation time will tend to dominate.
(OK, so it's not a straightforward function - the compact cost will vary
according to how much of the generation being compacted stays still and how
much has to move, so it's possible to contrive examples where freeing even a
single object makes for a costly compact.  But on average, if fewer objects
get freed, the compactor will have to move less stuff.)

I'm prepared to believe that the graph calculation is usually cheaper than
the compact (especially so for a gen 1 or gen 2 compact) but a large part of
that will be because the GC algorithms try not to run a collect all that
often.

I suspect that if you wanted to run 20 heap graph walks a second, the cost
of the heap walk might start to dominate...

In any case, as far as I know it's not possible to do the first phase of a
GC but not the actual compaction.
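
To illustrate (a trivial sketch, nothing to do with your cache specifically):
a weak reference won't tell you that an object has become unreachable until a
collection has actually run, mark phase, compact and all:

    using System;

    class Demo
    {
        static void Main()
        {
            object obj = new object();
            WeakReference weak = new WeakReference(obj);

            obj = null;  // no strong references remain; the object is
                         // now unreachable

            // But nothing reports that yet (assuming no collection has
            // happened in the meantime).
            Console.WriteLine(weak.IsAlive);   // True

            GC.Collect();  // forces a full collection - mark *and* compact

            Console.WriteLine(weak.IsAlive);   // False - only now do we
                                               // find out
        }
    }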

I think you're going to have to come up with a different strategy.  I don't
really understand what you're trying to achieve to be honest - neither of
the two behaviours you described made sense to me, so I must have missed
something.  You say that when a new object comes in, then
either:

> If the cached object is reachable from a root and it is dirty, then
> the incoming object is discarded.

...so you're throwing away the new incoming data?

Or:

> However, if the cached object is not reachable from a root, then the
> incoming object should replace it.

Why bother replacing it if you know it's not reachable?  If it's not
reachable it's never going to be used again, and will be discarded at the
next GC, so it really doesn't matter what you do.  Which suggests that you
will get away with always doing the first thing in all cases, i.e. to ignore
the incoming data.  So it sounds like you don't even need to be receiving
that data at all, since the two outcomes you describe are (a) ignore the new
data or (b) store the new data in a doomed location.

So I think I must have missed something...

--
Ian Griffiths
DevelopMentor


> -----Original Message-----
> From: Pinto, Ed [mailto:[EMAIL PROTECTED]
>
> Yes I mean reachable from a root.
>
> <snip>If it was able to determine whether objects were reachable
> without performing a GC, it wouldn't need to do most of the work
> involved in performing a GC!</snip>
>
> Interesting.  I thought that the graph calculation was comparatively
> less expensive than compacting the memory.
>
> <snip>Why do you need to know this?</snip>
>
> I cannot determine the lifetime of the objects I am caching (otherwise
> I would make them disposable).  Many different objects may have picked
> up references to them.  The cache will periodically be passed an object
> that is a different instance, but has the same logical identity as an
> object already in the cache.  If the cached object is reachable from a
> root and it is dirty, then the incoming object is discarded.  However,
> if the cached object is not reachable from a root, then the incoming
> object should replace it.
