--- Comment #8 from Steven Schveighoffer <schvei...@yahoo.com> 2010-03-12 04:39:55 PST ---
(In reply to comment #7)
> Well, there are really two issues here: What happens when GC.free() gets
> called and what happens when the GC collects. As much as people (Andrei comes
> to mind) hate it from a theoretical purity point of view, I believe it's
> absolutely necessary to be able to GC.free() a large array while the GC sucks
> as bad as it currently does.
GC.free, I don't agree with. Deleting an array, yes. The issue is that the GC
is unaware of the runtime's array features; it's just a mechanism to allocate
and free memory. It's the same issue as in C++, where you should never call
C's free() on something you allocated with new. Deleting an array calls a
runtime function that is in the perfect place -- where all my other fixes are.
One thing I have thought of which should help somewhat is to mark arrays with a
flag in the blockinfo attributes, thereby disallowing in-place appending to a
memory block that was not allocated via the arraynew routines. My biggest
worry (identified indirectly while hunting down this nasty bug we just fixed)
is that someone will try to append to a stack-allocated item, but because the
function is a closure, it's actually heap-allocated and the append succeeds.
It also gets rid of a bizarre consequence of requiring padding for class
allocations.
> For the GC collection case I still don't understand what's wrong with clearing
> the LRU. If I understand how this stuff works correctly, the information is
> also stored at the end of every block, so on the next append the cache will be
> repopulated. It will only cost one non-cached lookup per array per GC
> collection.
I just don't like how it would affect array-append performance across threads
in a strangely coupled way. If you have 15 threads all appending, as soon as
one triggers a collection cycle, they are all affected. That has the potential
to degrade 8 x 15 = 120 arrays' worth of cached lookups. While it would not be
a hugely noticeable degradation, any *avoidable* degradation should be
discouraged.
What I would be willing to look at is having the collection cycle *selectively*
remove freed blocks from the cache and leave allocated blocks alone. Since
the GC already has to stop the world, this shouldn't be too much of an extra
cost.
What I don't know is how it will deal with threading issues. I think I can
make sure it's OK by erasing an entry only if the entire blockinfo matches. A
block shouldn't be collected *and* inserted into the cache at the same time.
Does that make sense?
> For the GC.free() case you raise a very good point about thread safety. I
> really don't have a good answer for it. Calling free() doesn't have to be
> cheap, but stopping the world is a little too expensive.
Clearing all the caches on every free is not an option. Removing the
associated block from the cache on an array delete is a good option, and
should work well enough to satisfy that case.
BTW, I'm changing this to enhancement, because that's what it really is.