The latter: I created my own tc locks and striped them across nodes.
I get some collisions but no significant drop in throughput, because
locks don't cross the node boundary; that is, there are no lock
recalls, since I have specific data partitioning rules and each lock
is assigned to, and only used by, a single node.
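
For reference, a minimal sketch of this scheme in plain Java, using
java.util.concurrent locks rather than tc locks; the class name, the
partitioning rule, and the stripe sizing are illustrative assumptions,
not the actual implementation:

import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only (hypothetical names, not the toolkit API).
// Locks are striped per key, and a fixed partitioning rule guarantees
// that each stripe is only ever used by one node, so no lock crosses
// a node boundary and no lock recalls can occur.
public final class NodeLocalStripedLocks {

    private final ReentrantLock[] stripes;
    private final int nodeId;
    private final int nodeCount;

    public NodeLocalStripedLocks(int nodeId, int nodeCount, int stripeCount) {
        this.nodeId = nodeId;
        this.nodeCount = nodeCount;
        this.stripes = new ReentrantLock[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    // Assumed partitioning rule: each key belongs to exactly one node.
    private int partitionOf(Object key) {
        return (key.hashCode() & 0x7fffffff) % nodeCount;
    }

    public ReentrantLock lockFor(Object key) {
        if (partitionOf(key) != nodeId) {
            throw new IllegalArgumentException("key is not owned by this node");
        }
        // Hash collisions may map two keys to the same stripe, but the
        // contention is node-local and therefore cheap.
        return stripes[(key.hashCode() & 0x7fffffff) % stripes.length];
    }
}

A caller would then guard map access with the node-local lock:

ReentrantLock lock = locks.lockFor(key);
lock.lock();
try {
    map.put(key, value);
} finally {
    lock.unlock();
}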

Sergio Bossa
Sent from my iPhone

On 30 Nov 2010, at 16:05, Chris Dennis
<cden...@terracottatech.com> wrote:

> I assume by 'external lock management' that you are manually
> controlling the lock pin/unpin calls (or are you reducing the number
> of locks and coping with the drop in throughput on collisions?).
> There have been a couple of recent developments on this, and it's
> looking like we're probably going to do away with the pin/unpin
> functionality for TC locks... There are some developments in the
> enterprise (non open source) code that alleviate some of these
> problems in the L2, and I think the server team has been doing some
> work on optimizing the way that locks are stored in the server... but
> I've not been following that too closely.
> 
> Chris
> 
> On Nov 29, 2010, at 1:22 AM, Sergio Bossa wrote:
> 
>> Resolved by using the NullLockStrategy and implementing my own  
>> external lock management.
>> 
>> Anyway, I still think a disk-based lock store would be the way to go
>> to scale up easily.
>> 
>> Sergio Bossa
>> Sent from my iPhone
>> 
>> On 25 Nov 2010, at 19:20, Sergio Bossa
>> <sergio.bo...@gmail.com> wrote:
>> 
>>> Hi Chris and other TC guys,
>>> 
>>> I know this is a pretty old post but I'm running into problems again
>>> with the LockStore implementation, and unfortunately, this time it
>>> doesn't seem to be a bug.
>>> 
>>> The LockStore behavior seems to be the one described by Chris as  
>>> follows:
>>> 
>>>> In your case, although you have 2M entries, you will only have at
>>>> most as many locks in the L2 as there are live values in the L1s.
>>>> (The actual number could be less than this due to hash collisions,
>>>> and the possibility of live objects that have no associated greedy
>>>> lock).
>>> 
>>> The problem now is that I have ~3M total entries (using a toolkit
>>> map rather than ehcache, but this shouldn't make any difference) and
>>> ~2M live entries in the L1 heaps: this causes the LockStore on the
>>> L2 to hold 2M locks (confirmed by heap dumps) and to take up one
>>> third of the total heap, which is pretty huge.
>>> My biggest concern here is that 2M live entries is not that much at
>>> all, and that L1s scale pretty poorly this way, because they are
>>> limited by the L2's memory requirements; even using the enterprise
>>> Server Array, we would need tens of servers just to store 10M - 20M
>>> objects!
>>> 
>>> So, are there any plans to reduce the LockStore memory footprint?
>>> Maybe a disk-based implementation?
>>> 
>>> -- 
>>> Sergio Bossa
>>> http://www.linkedin.com/in/sergiob
_______________________________________________
tc-dev mailing list
tc-dev@lists.terracotta.org
http://lists.terracotta.org/mailman/listinfo/tc-dev
