Hi all,

Another query around data modelling.

We have an existing table with the below structure:
Table(PK, CK, col1, col2, col3, col4, col5)

Each PK here has 1k - 10k clustering keys, and each partition is 10 MB
to 80 MB in size. Overall we have 100+ million partitions. We have also
set levelled compaction (LCS) on this table to get better read response times.

We are currently on Cassandra 3.11.x. When the weekly repair and
compaction jobs run, this model consumes heavy CPU and impacts database
performance, because levelled compaction is occupied up to level 3.
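As a rough sanity check on why the data reaches level 3, here is a back-of-envelope sketch. It assumes the LCS defaults (160 MB target sstable size, fanout of 10, so level N holds roughly 10^N sstables per node) and ignores L0; the per-node data sizes fed in at the bottom are purely illustrative:

```python
SSTABLE_MB = 160  # default LCS target sstable size (assumption)
FANOUT = 10       # default LCS level fanout (assumption)

def lcs_level_capacity_mb(level: int) -> int:
    """Approximate max data (MB) that LCS level `level` can hold per node."""
    return SSTABLE_MB * FANOUT ** level

def deepest_level(per_node_data_mb: int) -> int:
    """Deepest LCS level needed to hold this much table data on one node."""
    level, remaining = 1, per_node_data_mb
    while True:
        remaining -= lcs_level_capacity_mb(level)
        if remaining <= 0:
            return level
        level += 1

# L1 ~ 1.6 GB, L2 ~ 16 GB, L3 ~ 160 GB per table per node.
# A table holding ~100 GB per node spills into L3; a tenth of
# that (~10 GB) fits within L2.
print(deepest_level(100_000))  # -> 3
print(deepest_level(10_000))   # -> 2
```

So the split-into-10 idea is plausible arithmetically: dividing the per-table data by 10 moves each table down roughly one level.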

Now, what if we split this table into 10 tables, each containing 1/10 of
the partitions? Each table's levelled compaction would then only reach
level 2. I think this would ease both reads and the compaction task.
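One thing to plan for if you split: every client must route a given partition key to the same table deterministically. A minimal sketch, where the shard count, the md5-based hash, and the table-naming scheme are all illustrative assumptions, not an established convention:

```python
import hashlib

NUM_SHARDS = 10  # number of split tables (assumption from the proposal)

def shard_for(pk: str) -> int:
    """Stable shard index for a partition key.

    Uses md5 rather than Python's built-in hash(), which is
    randomized per process and would break cross-client routing.
    """
    digest = hashlib.md5(pk.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def table_for(pk: str) -> str:
    # Hypothetical naming scheme: table_0 .. table_9.
    return f"table_{shard_for(pk)}"

# The same key always lands on the same table:
assert table_for("user:42") == table_for("user:42")
```

The trade-off is that this routing logic now lives in every client, and cross-partition operations (repairs, backups, schema changes) have to be repeated 10 times.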

What is your opinion on this?
Even if we upgrade to version 4.0, is the second model still OK?
