Hi Chris and other TC guys,

I know this is a pretty old post, but I'm running into problems with
the LockStore implementation again, and unfortunately this time it
doesn't seem to be a bug.

The LockStore behavior seems to be the one described by Chris as follows:

> In your case although you have 2M entries, you will only have at most as
> many locks in the L2 as there are live values in the L1s.  (The actual
> number could be less than this due to hash collisions, and the possibility
> of live objects that have no associated greedy lock).
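
Just so we're talking about the same thing, here is a toy sketch of
how I understand that behavior; the class and method names are mine,
not the actual LockStore internals:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy model of the behavior Chris describes: at most one greedy lock
// per live L1 value, keyed by the value's hash, so two values whose
// hashes collide share a single lock record (hence "could be less").
class ToyLockStore {
    private final ConcurrentMap<Integer, Object> locks =
        new ConcurrentHashMap<Integer, Object>();

    // Called (conceptually) whenever an L1 holds a live value for 'key'.
    Object greedyLockFor(Object key) {
        int hash = key.hashCode(); // collisions => shared lock record
        Object lock = locks.get(hash);
        if (lock == null) {
            Object fresh = new Object();
            Object prev = locks.putIfAbsent(hash, fresh);
            lock = (prev != null) ? prev : fresh;
        }
        return lock;
    }

    // Never larger than the number of live values across the L1s.
    int size() { return locks.size(); }
}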

The problem now is that I have ~3M total entries (in a Toolkit map
rather than an Ehcache cache, but that shouldn't make any difference),
of which ~2M are live in the L1 heaps: this is causing the LockStore
on the L2 to hold 2M locks (confirmed by heap dumps) and to take one
third of the total heap, which is pretty huge.
My biggest concern here is that 2M live entries is not that much at
all, and that L1s scale pretty poorly this way because they are
limited by the L2's memory requirements; even with the enterprise
Server Array, we would need tens of servers just to store 10M-20M
objects!
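
To put some rough numbers on that, here is a back-of-envelope
calculation; the 2 GB L2 heap is a hypothetical figure of mine, while
the 2M lock count and the one-third-of-heap share are what I actually
see in the heap dumps:

// Back-of-envelope LockStore sizing. The 2 GB heap is hypothetical;
// the 2M lock count and the one-third share come from my heap dumps.
public class LockStoreEstimate {
    public static void main(String[] args) {
        long heapBytes = 2L * 1024 * 1024 * 1024;      // assumed L2 heap
        long lockCount = 2000000L;                     // seen in heap dumps
        long lockFootprint = heapBytes / 3;            // observed share
        long bytesPerLock = lockFootprint / lockCount; // ~357 bytes each

        long projected = 20000000L * bytesPerLock;     // 20M live entries
        System.out.println("bytes/lock ~= " + bytesPerLock);
        System.out.println("20M live entries ~= "
                + (projected / (1024 * 1024)) + " MB of locks alone");
    }
}

At roughly 350 bytes per lock, 20M live entries would mean close to
7 GB of heap for lock records alone, before storing any actual data.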

So, are there any plans to reduce the LockStore's memory footprint?
Maybe a disk-based implementation?
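
To make the suggestion a bit more concrete, here is the rough shape I
had in mind: cap the number of lock records kept on heap, evict the
least-recently-used idle ones (spill them to disk, or just drop and
lazily recreate them), and accept some extra latency on a miss. This
is only a sketch of the idea, not of the real greedy-lock machinery,
which obviously carries state that can't simply be thrown away:

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a bounded LockStore: keep at most 'maxOnHeap' lock records
// on heap, evicting the least-recently-used idle ones and recreating
// them lazily on the next request. A real implementation would have to
// spill greedy-lock state to disk instead of dropping it; this only
// illustrates the memory/latency trade-off.
class BoundedLockStore {
    private final int maxOnHeap;
    private final Map<Object, Object> locks;

    BoundedLockStore(int maxOnHeap) {
        this.maxOnHeap = maxOnHeap;
        // Access-ordered LinkedHashMap gives us LRU eviction for free.
        this.locks = new LinkedHashMap<Object, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                return size() > BoundedLockStore.this.maxOnHeap;
            }
        };
    }

    synchronized Object lockFor(Object key) {
        Object lock = locks.get(key);
        if (lock == null) {
            lock = new Object(); // recreated on demand after eviction
            locks.put(key, lock);
        }
        return lock;
    }
}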

-- 
Sergio Bossa
http://www.linkedin.com/in/sergiob
_______________________________________________
tc-dev mailing list
tc-dev@lists.terracotta.org
http://lists.terracotta.org/mailman/listinfo/tc-dev
