Re: How to deal with too many sstables

2015-02-02 Thread Roland Etzenhammer
Hi, maybe you are running into an issue that I also had on my test cluster. Since there were almost no reads on it, Cassandra did not run any minor compactions at all. The solution for me (in this case) was: ALTER TABLE tablename WITH compaction = {'class': 'SizeTieredCompactionStrategy',
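
The statement above is cut off in the preview. On 2.1, the usual way to stop size-tiered compaction from skipping cold (read-less) sstables was the cold_reads_to_omit option, roughly as sketched below; the keyspace and table names are placeholders, and the option itself is an assumption based on the symptom described, not taken from the original mail:

    cqlsh -e "ALTER TABLE mykeyspace.tablename          -- placeholder names
              WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                                 'cold_reads_to_omit': 0.0};"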

Re: Many really small SSTables

2015-01-18 Thread Roland Etzenhammer
Hi, just as a short follow-up: it worked - all nodes now have 20-30 sstables instead of thousands. Cheers, Roland

Re: Compaction failing to trigger

2015-01-18 Thread Roland Etzenhammer
Hi Flavien, I hit a problem with minor compactions recently (just a few days ago) - but with many more tables. In my case compactions did not get triggered; you can check this with nodetool compactionstats. The reason for me was that those minor compactions did not get triggered since there were
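
A minimal way to check whether minor compactions are running at all, assuming a stock nodetool setup:

    nodetool compactionstats     # pending tasks plus anything compacting right now
    nodetool compactionhistory   # what has actually been compacted recently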

Many really small SSTables

2015-01-15 Thread Roland Etzenhammer
Hi, I'm testing around with Cassandra quite a bit, using 2.1.2, which I know has some major issues, but it is a test environment. After some bulk loading, testing with incremental repairs and running out of heap once, I found that I now have a quite large number of sstables which are really
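
A quick way to see how many sstables a table has piled up - keyspace and table names below are placeholders:

    nodetool cfstats mykeyspace.mytable | grep 'SSTable count'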

Re: Many pending compactions

2015-02-16 Thread Roland Etzenhammer
Hi, 1) Actual Cassandra 2.1.3, it was upgraded from 2.1.0 (suggested by Al Tobey from DataStax) 7) minimal reads (usually none, sometimes a few) - those two points keep me repeating an answer I got. First, where did you get 2.1.3 from? Maybe I missed it, I will have a look. But if it is 2.1.2

Re: Many pending compactions

2015-02-19 Thread Roland Etzenhammer
Hi, 2.1.3 is now the official latest release - I checked this morning and got this nice surprise. Now it's update time - thanks to all the guys involved; if I meet anyone, one beer from me :-) The changelist is rather long:

Re: can't delete tmp file

2015-02-19 Thread Roland Etzenhammer
Hi, try 2.1.3 - with 2.1.2 this is normal. From the changelog:
* Make sure we don't add tmplink files to the compaction strategy (CASSANDRA-8580)
* Remove tmplink files for offline compactions (CASSANDRA-8321)
In most cases they are safe to delete; I did this when the node was down. Cheers,
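
A sketch for locating those leftover temporary sstable files while the node is down; the data path is assumed to be the default:

    sudo service cassandra stop
    # on 2.1, tmp/tmplink sstable components carry 'tmp' in the file name
    find /var/lib/cassandra/data -type f -name '*tmp*'
    # review the list before deleting anything, then start the node again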

Re: Data tiered compaction and data model question

2015-02-19 Thread Roland Etzenhammer
Hi Cass, just a hint from the sidelines - if I got it right you have: Table 1: PRIMARY KEY ((event_day, event_hr), event_time) Table 2: PRIMARY KEY (event_day, event_time) Assuming your incoming events arrive in wall-clock order, the first table design will have a hotspot on a specific node getting
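
Spelled out as full table definitions (column types and the payload column are assumed), the difference is the partition key: the first component of the PRIMARY KEY, or the inner parenthesised group, decides which node owns the write.

    cqlsh -e "
    CREATE TABLE events_1 (
        event_day  text,
        event_hr   int,
        event_time timestamp,
        payload    text,                                  -- placeholder column
        PRIMARY KEY ((event_day, event_hr), event_time)   -- partition = day + hour
    );
    CREATE TABLE events_2 (
        event_day  text,
        event_time timestamp,
        payload    text,                                  -- placeholder column
        PRIMARY KEY (event_day, event_time)               -- partition = whole day
    );"

In both layouts, all writes for the current wall-clock period land in a single partition: an hour's worth in the first table, a whole day's worth in the second.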

Re: incremental repairs - again

2015-01-28 Thread Roland Etzenhammer
Hi, by "automatically" I meant this reply from earlier: If you are on 2.1.2+ (or using STCS) you don't need those steps (should probably update the blog post). Now we keep separate levelings for the repaired/unrepaired data and move the sstables over after the first incremental repair. My understanding

incremental repairs - again

2015-01-28 Thread Roland Etzenhammer
Hi, a short question about the new incremental repairs again. I am running 2.1.2 (for testing). Marcus pointed out to me that 2.1.2 should do incremental repairs automatically, so I rolled back all the steps taken. I expect that routine repair times will decrease when I do not put much new data on the
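
For reference: on 2.1 a plain nodetool repair runs a full repair; the incremental variant has to be requested explicitly. The flags below are as remembered for 2.1 and worth double-checking against nodetool help repair:

    nodetool repair -par -inc   # parallel, incremental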

Re: SSTables can't compact automatically

2015-01-26 Thread Roland Etzenhammer
Hi, are you running 2.1.2 by any chance? I had this problem recently and there were two topics here about this. The problem was that my test cluster had almost no reads and did not compact sstables. The reason was that those minor compactions did not get triggered since there were almost no
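
To double-check which compaction strategy and options a table actually carries, the schema dump is the quickest route (names are placeholders):

    cqlsh -e "DESCRIBE TABLE mykeyspace.mytable;"   # look for the compaction = {...} map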

incremental repairs

2015-01-08 Thread Roland Etzenhammer
Hi, I am currently trying to migrate my test cluster to incremental repairs. These are the steps I'm doing on every node:
- touch marker
- nodetool disableautocompaction
- nodetool repair
- cassandra stop
- find all *Data*.db files older than marker
- invoke sstablerepairedset on those -
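
Put together as a script, the sequence could look roughly like this; the paths, the service commands and the re-enable step at the end are assumptions based on a default 2.1 install:

    touch /tmp/repair_marker                     # marker (path assumed)
    nodetool disableautocompaction
    nodetool repair
    sudo service cassandra stop
    # flag every sstable that predates the repair as repaired
    find /var/lib/cassandra/data -name '*Data*.db' ! -newer /tmp/repair_marker \
        | xargs sstablerepairedset --really-set --is-repaired
    sudo service cassandra start
    nodetool enableautocompaction                # not in the original list, assumed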

Re: incremental repairs

2015-01-08 Thread Roland Etzenhammer
Hi Marcus, thanks for that quick reply. I also looked at: http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_repair_nodes_c.html which describes the same process; it's for 2.1.x, so I see that 2.1.2+ is not covered there. I did upgrade my test cluster to 2.1.2 and with

Re: incremental repairs

2015-01-08 Thread Roland Etzenhammer
Hi Marcus, thanks a lot for those pointers. Now further testing can begin - and I'll wait for 2.1.3. Right now repair times on production are really painful; maybe that will get better. At least I hope so :-)

Re: Possible problem with disk latency

2015-02-26 Thread Roland Etzenhammer
Hi, 8GB heap is a good value already - going above 8GB will often result in noticeable GC pause times in Java, but you can give 12G a try just to see if that helps (and turn it back down again). You can add a Heap Used graph in OpsCenter to get a quick overview of your heap state. Best
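
For the experiment, the heap is set in cassandra-env.sh; a sketch of what to change (the values are just the 12G trial from above, not defaults):

    # in conf/cassandra-env.sh
    MAX_HEAP_SIZE="12G"
    HEAP_NEWSIZE="1200M"   # the env file suggests roughly 100 MB per CPU core

After a restart, GCInspector entries in system.log (or the Heap Used graph in OpsCenter) show whether pauses got worse.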

Re: Possible problem with disk latency

2015-02-26 Thread Roland Etzenhammer
Hi Piotrek, your disks are mostly idle as far as I can see (the one at 17% busy isn't that high a load). One thing that came to my mind: did you look at the sizes of your sstables? I did this with something like find /var/lib/cassandra/data -type f -size -1k -name '*Data.db' | wc -l find
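
The snippet is cut off after the pipe; a plausible full pair of commands, keeping the 1k threshold from above, counts the tiny sstables against the total:

    find /var/lib/cassandra/data -type f -size -1k -name '*Data.db' | wc -l   # tiny ones
    find /var/lib/cassandra/data -type f -name '*Data.db' | wc -l             # all sstables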