Dean, since you seem satisfied with the result of increasing index_interval from 128 to 512, did you observe any drawbacks of the change? I remember you mentioned no change in read latency and a significant drop in heap size, but did you check any other metrics?

I did the opposite (512 -> 128; we raised it earlier because of heap-size problems, but those are gone now, so I'm checking whether reverting makes sense) and I likewise see almost no difference in read latency. However, the number of dropped READ messages has fallen significantly: it is one or even two orders of magnitude lower on the nodes where I set index_interval = 128 than on the nodes still running 512 (about 0.005/sec versus roughly 0.01 - 0.2/sec on the other nodes), and netstat's Munin plugin reports far fewer connection resets. In other words, as I understand it, there are far fewer timeouts, which should improve overall C* performance, even if I can't see it in the per-CF read latency graphs (unfortunately I don't have a graph for StorageProxy latencies to check that easily).
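For reference (and in case anyone wants to reproduce the comparison), this is roughly how I'm toggling the setting and watching the drops. I'm assuming a 1.x-style cassandra.yaml where index_interval is still a global key, and the tpstats layout may differ across versions, so treat this as a sketch rather than a recipe:

  # cassandra.yaml on the test nodes (takes effect for sstables opened after a restart, as I understand it)
  index_interval: 128    # the control nodes keep 512

  # dropped READ messages show up at the bottom of tpstats, in the "Message type / Dropped" section
  nodetool -h <node> tpstats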

To confirm the reason for these differences and their effect on C* performance, I'm looking for some "references" in other people's experience / observations :-)

M.

On 22.03.2013 17:17, Hiller, Dean wrote:
I was just curious.  Our RAM usage has dropped significantly, but the *Index.db files 
are the same size as before.

Any ideas why this would be the case?

Basically, why is our disk usage not reduced, given that RAM usage is way lower?  We have 
been running strong with index_interval = 512 for the past 2-3 days and RAM has never 
looked better.  We were pushing 10G before; now we sit at 2G, slowly increasing 
to 8G until GC collects the long-lived stuff and brings it back down to 2G 
again... very pleased with LCS in our system!

Thanks,
Dean

