The reason for this is probably
https://issues.apache.org/jira/browse/CASSANDRA-10831 (which only affects
2.1).

So, if you had problems with incremental repair and LCS before, upgrade to
2.1.13 and try again.
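
Something like the following should do it; both are plain nodetool
commands, and the keyspace name is taken from Jean's schema below:

    nodetool version
    nodetool repair -inc -par pns_nonreg_bench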

/Marcus

On Wed, Feb 10, 2016 at 2:59 PM, horschi <hors...@gmail.com> wrote:

> Hi Jean,
>
> we had the same issue, but with SizeTieredCompactionStrategy. During
> repair, the number of SSTables and pending compactions exploded.
>
> It not only affected latencies; at some point Cassandra ran out of heap.
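>
> If you want to watch it happen, the pending compaction count is visible
> with plain nodetool (nothing specific to our setup), and the heap
> pressure shows up as GCInspector lines in system.log:
>
>     nodetool compactionstats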
>
> After the upgrade to 2.2 things got much better.
>
> regards,
> Christian
>
>
> On Wed, Feb 10, 2016 at 2:46 PM, Jean Carlo <jean.jeancar...@gmail.com>
> wrote:
> > Hi Horschi !!!
> >
> > I have 2.1.12, but I think it is something related to
> > LeveledCompactionStrategy. It is striking that we went from 6 SSTables
> > to 3k SSTables. I think this will affect latency in production because
> > of the number of compactions going on.
> >
> >
> >
> > Best regards
> >
> > Jean Carlo
> >
> > "The best way to predict the future is to invent it" Alan Kay
> >
> > On Wed, Feb 10, 2016 at 2:37 PM, horschi <hors...@gmail.com> wrote:
> >>
> >> Hi Jean,
> >>
> >> Which Cassandra version are you using?
> >>
> >> Incremental repair got much better in 2.2 (for us at least).
> >>
> >> kind regards,
> >> Christian
> >>
> >> On Wed, Feb 10, 2016 at 2:33 PM, Jean Carlo <jean.jeancar...@gmail.com>
> >> wrote:
> >> > Hello guys!
> >> >
> >> > I am testing incremental repair on my Cassandra cluster. I am
> >> > running my tests on these tables:
> >> >
> >> > CREATE TABLE pns_nonreg_bench.cf3 (
> >> >     s text,
> >> >     sp int,
> >> >     d text,
> >> >     dp int,
> >> >     m map<text, text>,
> >> >     t timestamp,
> >> >     PRIMARY KEY (s, sp, d, dp)
> >> > ) WITH CLUSTERING ORDER BY (sp ASC, d ASC, dp ASC)
> >> >     AND compaction = {'class':
> >> >         'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> >> >     AND compression = {'sstable_compression':
> >> >         'org.apache.cassandra.io.compress.SnappyCompressor'};
> >> >
> >> > CREATE TABLE pns_nonreg_bench.cf1 (
> >> >     ise text PRIMARY KEY,
> >> >     int_col int,
> >> >     text_col text,
> >> >     ts_col timestamp,
> >> >     uuid_col uuid
> >> > ) WITH bloom_filter_fp_chance = 0.01
> >> >     AND compaction = {'class':
> >> >         'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> >> >     AND compression = {'sstable_compression':
> >> >         'org.apache.cassandra.io.compress.SnappyCompressor'};
> >> >
> >> > table cf1
> >> >         Space used (live): 665.7 MB
> >> > table cf2
> >> >         Space used (live): 697.03 MB
> >> >
> >> > When I run repair -inc -par on these tables, cf2 hits a peak of 3k
> >> > SSTables. After the repair finishes, it takes 30 minutes or more for
> >> > all the compactions to complete and get back down to 6 SSTables.
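> >> >
> >> > To watch the peak and the drain I just poll the table stats and the
> >> > compaction backlog (standard nodetool commands; cf2 as above):
> >> >
> >> >     nodetool cfstats pns_nonreg_bench.cf2
> >> >     nodetool compactionstats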
> >> >
> >> > I am a little concerned that this will also happen in production. Is
> >> > this normal?
> >> >
> >> > Regards
> >> >
> >> > Jean Carlo
> >> >
> >> > "The best way to predict the future is to invent it" Alan Kay
> >
> >
>
