Hello,

I'm trying to use Spark with Cassandra, and it was oddly generating several
Spark jobs because Spark follows the split guidelines derived from the
partitions_count and mean_partition_size columns of system.size_estimates.
The problem is that I have a very small table (300 MB) with only 16 distinct
partition keys, running on a single C* node.
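
As I understand it (this is my reading of the docs, not the connector's
actual code), the connector multiplies those two columns per token range to
estimate the table size, and then cuts the scan into roughly one split per
spark.cassandra.input.split.size_in_mb. A rough sketch of that arithmetic in
Scala; all the names here are my own, not the connector's internals:

    object SplitEstimate {
      // Per token range, system.size_estimates reports a partition count and
      // a mean partition size; the estimated table size is the sum of their
      // products across ranges.
      def estimatedTableSizeBytes(estimates: Seq[(Long, Long)]): Long =
        estimates.map { case (partitionsCount, meanPartitionSize) =>
          partitionsCount * meanPartitionSize
        }.sum

      // The connector then targets roughly one Spark partition per
      // spark.cassandra.input.split.size_in_mb (64 MB by default, I believe).
      def numSplits(tableBytes: Long, splitSizeMB: Long = 64L): Long =
        math.max(1L, tableBytes / (splitSizeMB * 1024L * 1024L))

      def main(args: Array[String]): Unit = {
        // The estimates I actually get back (see below):
        // 91959383 partitions * 256 bytes each.
        val bytes = estimatedTableSizeBytes(Seq((91959383L, 256L)))
        println(s"estimated $bytes bytes -> ${numSplits(bytes)} splits")
      }
    }

That works out to roughly 23.5 GB and ~350 splits for a table that is really
only 300 MB, which would explain the behaviour I am seeing.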

Even so, when I query partitions_count and mean_partition_size I get
91959383 and 256 respectively. That partition count is way higher than what
I actually have, and "nodetool cfstats" correctly shows the table as much
smaller.

Can someone explain why this happens or, if it is an error, how to fix it?

Note: I'm using Cassandra 2.2.6.

Thanks in advance,
Alexandre Santana
