I think this came up recently in another thread.  If you're getting large
numbers of SSTables after repairs, that means your nodes are diverging from
the data they're supposed to have.  Likely you're dropping mutations.  Do a
nodetool tpstats on each of your nodes and look at the dropped mutation
counters.  If you're seeing dropped messages, my money is on a non-zero
FlushWriter "All time blocked" stat, which is causing mutations to be
dropped.
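
In case it helps, a quick way to run that check across the cluster is a
small script that calls nodetool tpstats against each node and pulls out
the FlushWriter pool line and the dropped MUTATION counter. Rough sketch
only (untested; the host list is a placeholder and the exact tpstats
column layout varies a bit between versions, so treat the string matching
as an assumption):

    #!/usr/bin/env python
    # Rough sketch: collect dropped-mutation and FlushWriter stats per node.
    # Assumes nodetool is on PATH and JMX is reachable from this machine;
    # replace the placeholder host list with your own node addresses.
    import subprocess

    HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # placeholder addresses

    for host in HOSTS:
        out = subprocess.check_output(["nodetool", "-h", host, "tpstats"]).decode()
        for line in out.splitlines():
            stripped = line.strip()
            # Thread pool section: Pool Name, Active, Pending, Completed,
            # Blocked, All time blocked -- the last column is the one to watch.
            if stripped.startswith("FlushWriter"):
                print("%s  %s" % (host, stripped))
            # "Message type / Dropped" section at the bottom of the output.
            if stripped.startswith("MUTATION"):
                print("%s  %s" % (host, stripped))

A non-zero FlushWriter "All time blocked" value together with a growing
dropped MUTATION count points at flushes not keeping up with the write
load.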



On Tue, Jun 9, 2015 at 10:35 AM, Anuj Wadehra <anujw_2...@yahoo.co.in>
wrote:

> Any suggestions or comments on this one?
>
> Thanks
> Anuj Wadehra
>
> ------------------------------
> From: "Anuj Wadehra" <anujw_2...@yahoo.co.in>
> Date: Sun, 7 Jun, 2015 at 1:54 am
> Subject: Hundreds of sstables after every Repair
>
> Hi,
>
> We are using 2.0.3 and vnodes. After every repair -pr operation, 50+ tiny
> sstables (<10K) get created, and these sstables never get compacted due to
> the coldness issue. I have raised
> https://issues.apache.org/jira/browse/CASSANDRA-9146 for this, but I have
> been told to upgrade. Until we upgrade to the latest 2.0.x, we are stuck.
> Upgrading takes time, testing and planning in production systems :(
>
> I have observed that even if vnodes are NOT damaged, hundreds of tiny
> sstables are created during repair for a wide-row CF. This is beyond my
> understanding. If everything is consistent, and for the entire repair
> process Cassandra is saying "Endpoints /x.x.x.x and /x.x.x.y are consistent
> for <CF>", what's the need of creating sstables?
>
> Is there any alternative to regular major compaction to deal with this
> situation?
>
>
> Thanks
> Anuj Wadehra
>
>
