On Mon, Jul 20, 2015 at 6:32 AM, Pierre Devops <pierredev...@gmail.com>
wrote:

> Some of my partitions may have ~20 million rows, while others may have
> only a few hundred rows. This may grow to 300 million rows per partition
> in the near future, leading to very large partitions (50~80 GB, I think).
>
>

Partitions this large are strongly suggestive of improper schema design.
Yes, Cassandra partitions "can" contain up to 2 billion cells. No, you
probably shouldn't get anywhere near that in normal operation.
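
For what it's worth, the usual way to keep partitions bounded is to fold a
bucket into the partition key so writes spread across many partitions. A
rough sketch with the DataStax Python driver (the keyspace, table, and
column names here are invented for illustration, not taken from your
schema):

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])      # assumed contact point
    session = cluster.connect('my_ks')    # hypothetical keyspace

    # Add a bucket column to the partition key so no single partition
    # accumulates hundreds of millions of rows, e.g. one bucket per day.
    session.execute("""
        CREATE TABLE IF NOT EXISTS events (
            source_id text,
            bucket    int,        -- e.g. 20150720, one bucket per day
            ts        timestamp,
            payload   blob,
            PRIMARY KEY ((source_id, bucket), ts)
        )
    """)

Readers then query one (or a few) buckets at a time instead of one giant
partition.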


> I create my SSTables manually when I have a large number of rows, so I
> don't go through the commitlog/memtables.
>
> I want to know what kind of issues I may encounter with big partitions
> containing this many rows. For now, I don't see problems on the database
> when I insert or query these big partitions. However, I haven't let it
> run for a long time, so maybe problems will occur during compaction?
>

Certain operations are possible on Really Big Partitions. Compaction is
heavyweight but probably won't OOM.

Requesting the whole partition, however, probably will OOM...
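
If you do have to read a large partition, driver-side paging at least keeps
memory bounded on the client: fetch_size caps how many rows come back per
round trip. A minimal sketch, reusing the invented names from the schema
example above:

    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_ks')

    # The driver pages through the result transparently, holding only
    # fetch_size rows in memory at a time instead of the whole partition.
    query = SimpleStatement(
        "SELECT ts, payload FROM events "
        "WHERE source_id = %s AND bucket = %s",
        fetch_size=5000,
    )
    for row in session.execute(query, ("sensor-42", 20150720)):
        handle(row)   # placeholder for whatever per-row processing you do

Note that this only bounds memory on the client; the server still pays the
cost of serving a huge partition.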

=Rob
