Could it be that the app is inserting _duplicate_ keys?

-- Brice

On Tue, Apr 21, 2015 at 1:52 PM, Marcus Eriksson <krum...@gmail.com> wrote:

> nope, but you can correlate, I guess - tools/bin/sstablemetadata gives you
> the sstable level information.
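> For instance, something like this should print the level of each sstable
> (the path is just an example of a typical data directory layout, and the
> exact label in the output may vary by version):
>
>     tools/bin/sstablemetadata /var/lib/cassandra/data/test/test_bits/*-Data.db | grep "SSTable Level"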
>
> and since you get so many L0 sstables, it is also likely that you will be
> doing size-tiered compaction in L0 for a while.
>
> On Tue, Apr 21, 2015 at 1:40 PM, Anishek Agarwal <anis...@gmail.com>
> wrote:
>
>> @Marcus I did look, and that is where I got the above, but it doesn't show
>> any detail about moving from L0 -> L1. Any specific arguments I should try
>> with?
>>
>> On Tue, Apr 21, 2015 at 4:52 PM, Marcus Eriksson <krum...@gmail.com>
>> wrote:
>>
>>> you need to look at nodetool compactionstats - there is probably a big
>>> L0 -> L1 compaction going on that blocks other compactions from starting
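>>> e.g. on each node, to see whether a large L0 -> L1 compaction is running
>>> and how many tasks are pending (plain shell; "watch" is just a convenience,
>>> not part of Cassandra):
>>>
>>>     nodetool compactionstats
>>>     watch -n 10 nodetool compactionstats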
>>>
>>> On Tue, Apr 21, 2015 at 1:06 PM, Anishek Agarwal <anis...@gmail.com>
>>> wrote:
>>>
>>>> the "some_bits" column has about 14-15 bytes of data per key.
>>>>
>>>> On Tue, Apr 21, 2015 at 4:34 PM, Anishek Agarwal <anis...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I am inserting about 100 million entries via the DataStax Java driver into
>>>>> a Cassandra cluster of 3 nodes.
>>>>>
>>>>> The table structure is as follows:
>>>>>
>>>>> create keyspace test with replication = {'class':
>>>>> 'NetworkTopologyStrategy', 'DC' : 3};
>>>>>
>>>>> CREATE TABLE test_bits(id bigint primary key , some_bits text) with
>>>>> gc_grace_seconds=0 and compaction = {'class': 'LeveledCompactionStrategy'}
>>>>> and compression={'sstable_compression' : ''};
>>>>>
>>>>> I have 75 threads inserting data into the above table, with each thread
>>>>> having non-overlapping keys (a rough sketch of a writer follows below).
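>>>>>
>>>>> (For illustration only, a minimal sketch of what such a writer could look
>>>>> like with the DataStax Java driver; the contact point, key split and payload
>>>>> string are placeholders, not the real application code:)
>>>>>
>>>>> import com.datastax.driver.core.Cluster;
>>>>> import com.datastax.driver.core.PreparedStatement;
>>>>> import com.datastax.driver.core.Session;
>>>>> import java.util.concurrent.ExecutorService;
>>>>> import java.util.concurrent.Executors;
>>>>> import java.util.concurrent.TimeUnit;
>>>>>
>>>>> public class BulkLoader {
>>>>>     public static void main(String[] args) throws Exception {
>>>>>         Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
>>>>>         final Session session = cluster.connect("test");
>>>>>         final PreparedStatement insert =
>>>>>             session.prepare("INSERT INTO test_bits (id, some_bits) VALUES (?, ?)");
>>>>>
>>>>>         final int threads = 75;
>>>>>         final long keysPerThread = 100_000_000L / threads; // non-overlapping key ranges
>>>>>         ExecutorService pool = Executors.newFixedThreadPool(threads);
>>>>>         for (int t = 0; t < threads; t++) {
>>>>>             final long start = t * keysPerThread;
>>>>>             pool.submit(() -> {
>>>>>                 for (long id = start; id < start + keysPerThread; id++) {
>>>>>                     // ~14-15 bytes of payload per key, as mentioned above
>>>>>                     session.execute(insert.bind(id, "00000000000000"));
>>>>>                 }
>>>>>             });
>>>>>         }
>>>>>         pool.shutdown();
>>>>>         pool.awaitTermination(1, TimeUnit.DAYS);
>>>>>         cluster.close();
>>>>>     }
>>>>> }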
>>>>>
>>>>> I see that the number of pending tasks via "nodetool compactionstats"
>>>>> keeps increasing, and "nodetool cfstats test.test_bits" shows the SSTable
>>>>> levels as [154/4, 8, 0, 0, 0, 0, 0, 0, 0].
>>>>>
>>>>> Why is compaction not kicking in?
>>>>>
>>>>> thanks
>>>>> anishek
>>>>>
>>>>
>>>>
>>>
>>
>
