Andreas L Delmelle wrote:
On Nov 14, 2007, at 21:38, Jeremias Maerki wrote:
Hi Jeremias, Chris,
My proposal, incorporating the changes in Jeremias' diff, below.
Thanks for the diff. Unfortunately I have been unsuccessful in applying
it after several attempts. First I tried using the TortoiseSVN client,
then I downloaded GnuWin32 Patch, which fails to apply all but hunk 7.
I also asked a colleague working on Linux to try and apply the patch,
but it fails for him too (although one more hunk is successful).
I guess I could manually make the updates, but I would prefer to work
out what's going wrong here to avoid similar problems in the future and
to minimize the risk of error.
To sum it up:
Only one CacheCleaner per PropertyCache, and one accompanying thread.
If, after a put(), cleanup seems to be needed *and* the thread is not
alive, lock on the cleaner, and start the thread.
If the thread is busy, I guess it suffices to continue, assuming that
the hash distribution should eventually lead some put() back to the
same segment and trigger cleanup again.
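
In rough code, the idea is something like this (just an illustration;
the names, i.e. cleanerThread, cleanupNeeded(), CacheCleaner, are
placeholders, not the actual implementation):

  // Illustration only; structure and names are placeholders.
  public final class PropertyCache {

      private final CacheCleaner cleaner = new CacheCleaner();
      private volatile Thread cleanerThread;

      public void put(Object key, Object value) {
          // ... insert the instance into its segment/bucket first ...
          if (cleanupNeeded()
                  && (cleanerThread == null || !cleanerThread.isAlive())) {
              synchronized (cleaner) {
                  // re-check under the lock, so two put()s don't both
                  // start a thread
                  if (cleanerThread == null || !cleanerThread.isAlive()) {
                      cleanerThread = new Thread(cleaner, "PropertyCache cleanup");
                      cleanerThread.setDaemon(true);
                      cleanerThread.start();
                  }
              }
          }
          // if the cleaner is still busy, we simply continue; a later
          // put() into a crowded segment will trigger it again
      }

      private boolean cleanupNeeded() {
          return false;  // placeholder for the per-segment threshold test
      }

      private class CacheCleaner implements Runnable {
          public void run() {
              // see the cleanup/rehash sketch further down
          }
      }
  }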
As Jeremias noted, rehash is currently called every time an attempt to
clean up fails. Maybe this needs improvement... OTOH, rehash now
becomes a bit simpler, since it does not need to take into account
interfering cleaners. There is only one cleaner remaining:
CacheCleaner.run() is now synchronized on the cleaner, and rehash()
itself is done within that synchronized block.
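
Spelled out in the same sketchy fashion, that single cleaner could look
roughly like this (cleanSegment() and rehash() are again placeholders,
not the real method names):

  // Continuing the sketch above: the one-and-only CacheCleaner.
  private class CacheCleaner implements Runnable {

      public void run() {
          // synchronizing on the cleaner itself means at most one cleanup
          // runs at a time, so rehash() no longer has to guard against
          // other cleaners interfering with it
          synchronized (this) {
              boolean stillCrowded = cleanSegment();
              if (stillCrowded) {
                  // as it stands, every failed cleanup attempt leads to a rehash
                  rehash();
              }
          }
      }

      private boolean cleanSegment() {
          // drop stale entries (e.g. cleared references) from the segment
          // that triggered the cleanup; return true if it still exceeds
          // its threshold afterwards
          return false;
      }

      private void rehash() {
          // double the bucket array and redistribute the entries; done
          // inside the synchronized block above, so no concurrent cleaner
          // can interfere
      }
  }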
What I see as a possible issue, though, is that there is a theoretical
limit to rehash() having any effect whatsoever. If the cache grows to
64 buckets, then the maximum number of segments that exceed the
threshold can never be greater than half the table size... This might
be a non-issue, as that situation would only arise if the cache holds
at least 2048 instances (not counting the elements in the buckets that
don't exceed the threshold). No problem for enums or keeps; Strings and
integers could be a different matter.
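
To put a number on that (back-of-the-envelope only; the per-segment
threshold of 64 is an assumed value, not necessarily the real
constant):

  // Illustrative arithmetic only; the threshold is assumed.
  public final class RehashTriggerPoint {
      public static void main(String[] args) {
          int buckets = 64;            // maximum table size mentioned above
          int crowded = buckets / 2;   // at most half can exceed the threshold
          int threshold = 64;          // assumed per-segment threshold
          // minimum number of cached instances (in the crowded segments
          // alone) before that situation can occur at all
          System.out.println(crowded * threshold);   // 2048
      }
  }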
2048 doesn't sound good enough as a maximum number of instances if
Strings and integers are included. Why can't this number be increased by
having more buckets and/or segments?