Thanks to CASSANDRA-11206, I think we can have much larger partitions than
before 3.6.
(In his presentation, Robert said he could safely handle ten 15 GB partitions:
https://www.youtube.com/watch?v=N3mGxgnUiRY)
But is the 2 billion columns limit per partition still in the Cassandra code?
If so, out of curiosity, I'd like
Sounds like there is a row limit too, not only columns?
>>
>> If I am reading this correctly, "10 15GB partitions" means 10 partitions
>> (like 10 row keys; that's too few) with each partition 15 GB in size
>> (that's roughly 10 million columns where each column can h
>>>>> - compaction taking long time --> heap pressure --> long GC pauses
>>>>> --> nodes flapping
>>>>> - repair & over-streaming, repair session failure in the middle that
>>>>> forces you to re-send the whole big partition
Hi. We ran into a similar situation after upgrading from 2.1.14 to 3.11 in
our production environment.
Have you already tried G1GC instead of CMS? Our timeouts were mitigated
after replacing CMS with G1GC.
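In case it helps, here is a sketch of the G1 settings we would try first. The file is jvm.options on Cassandra 3.x (cassandra-env.sh on 2.1), and the 16G heap size below is only an assumption; adapt it to your nodes:

```
# jvm.options (Cassandra 3.x) - comment out the CMS flags first
-XX:+UseG1GC
# target pause time; G1 sizes the young gen itself, so do not pin -Xmn
-XX:MaxGCPauseMillis=500
# heap size is an assumption; keep it at or below ~31 GB to retain
# compressed oops, and set -Xms equal to -Xmx to avoid resizing
-Xms16G
-Xmx16G
```

With CMS the flapping usually comes from long full GCs under compaction pressure; G1's incremental collection tends to keep pauses closer to the configured target.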
Thanks.
2017-09-25 20:01 GMT+09:00 Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com>:
> Hello,
Hi, I'm modeling some queries in CQL3.
I'd like to query just the first column for each partition key.
For example:
> create table posts(
>   author ascii,
>   created_at timeuuid,
>   entry text,
>   primary key(author, created_at)
> );
> insert into posts(author, created_at, entry) values
> ('john',m
;
>>
>> On 2014/05/16 23:54, Jonathan Lacefield wrote:
>>
>> Hello,
>>
>> Have you looked at using the CLUSTERING ORDER BY and LIMIT features of
>> CQL3?
>>
>> These may help you achieve your goals.
>>
>>
>> http://www.data
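Building on that suggestion, a sketch of what the table and queries could look like. Note that PER PARTITION LIMIT did not exist at the time of this 2014 thread; it arrived in Cassandra 3.6 (CASSANDRA-7017), so on older versions you would issue one LIMIT 1 query per author instead:

```sql
-- order each partition newest-first so LIMIT returns the latest entry
CREATE TABLE posts (
    author     ascii,
    created_at timeuuid,
    entry      text,
    PRIMARY KEY (author, created_at)
) WITH CLUSTERING ORDER BY (created_at DESC);

-- Cassandra 3.6+: first row of every partition in a single query
SELECT author, created_at, entry FROM posts PER PARTITION LIMIT 1;

-- pre-3.6 alternative: one query per known partition key
SELECT author, created_at, entry FROM posts WHERE author = 'john' LIMIT 1;
```

Because the clustering order is DESC, "first row" here means the most recent post per author; with the default ASC order it would be the oldest.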