Hi DuyHai,

I am trying to see what we could possibly do to get around this
limitation.

1. Would this https://issues.apache.org/jira/browse/CASSANDRA-7447 help at
all?
2. Could we build Merkle trees for groups of rows within a partition, so
that we stream only the groups whose hashes differ? (A rough sketch of
what I mean is below.)
3. It would be interesting to see if we can spread a partition across nodes.
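
To make (2) a bit more concrete, here is a rough, purely illustrative
sketch (the class name, GROUP_SIZE, and everything else below are made up
by me; this is not how Cassandra's repair trees work today): hash
fixed-size groups of clustering-ordered rows inside a partition, build a
Merkle tree over those group hashes, and during repair stream only the
groups whose hashes differ rather than the entire partition.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class RowGroupMerkle
{
    // Rows per leaf; a real implementation would likely pick something much
    // larger, or split on byte size rather than row count.
    private static final int GROUP_SIZE = 4;

    // Hash each group of GROUP_SIZE serialized rows into one leaf digest.
    static List<byte[]> leafHashes(List<String> serializedRows) throws NoSuchAlgorithmException
    {
        List<byte[]> leaves = new ArrayList<>();
        for (int i = 0; i < serializedRows.size(); i += GROUP_SIZE)
        {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (int j = i; j < Math.min(i + GROUP_SIZE, serializedRows.size()); j++)
                md.update(serializedRows.get(j).getBytes(StandardCharsets.UTF_8));
            leaves.add(md.digest());
        }
        return leaves;
    }

    // Combine digests pairwise until a single root remains.
    static byte[] root(List<byte[]> level) throws NoSuchAlgorithmException
    {
        while (level.size() > 1)
        {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2)
            {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(level.get(i));
                if (i + 1 < level.size())
                    md.update(level.get(i + 1));
                next.add(md.digest());
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException
    {
        // Two replicas of the same wide partition that differ in exactly one row.
        List<String> replicaA = new ArrayList<>();
        List<String> replicaB = new ArrayList<>();
        for (int i = 0; i < 20; i++)
        {
            replicaA.add("row-" + i + ":v1");
            replicaB.add("row-" + i + (i == 13 ? ":v2" : ":v1"));
        }

        List<byte[]> leavesA = leafHashes(replicaA);
        List<byte[]> leavesB = leafHashes(replicaB);

        // Only the divergent group would need to be streamed, not the whole partition.
        for (int g = 0; g < leavesA.size(); g++)
            if (!MessageDigest.isEqual(leavesA.get(g), leavesB.get(g)))
                System.out.println("group " + g + " differs -> stream rows "
                                   + (g * GROUP_SIZE) + ".." + (g * GROUP_SIZE + GROUP_SIZE - 1));

        System.out.println("roots differ: " + !MessageDigest.isEqual(root(leavesA), root(leavesB)));
    }
}

In reality the groups would probably be bounded by byte size rather than
row count, and the leaf digests would have to be maintained or recomputed
per sstable, but hopefully it illustrates the idea.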

I am just trying to validate some ideas that could potentially help get
past this 100MB limitation, since we may not always fit into a time-series model.

Thanks!

On Thu, May 11, 2017 at 12:37 AM, DuyHai Doan <doanduy...@gmail.com> wrote:

> Yes, the recommendation still applies.
>
> Wide partitions have a huge impact on repair (over-streaming), compaction,
> and bootstrap.
>
> On 10 May 2017 at 23:54, "Kant Kodali" <k...@peernova.com> wrote:
>
> Hi All,
>
> The Cassandra community has always recommended 100MB per partition as a
> sweet spot; however, does this limitation still exist given that there is a
> B-tree implementation to identify rows inside a partition?
>
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/rows/BTreeRow.java
>
> Thanks!
>
