[
https://issues.apache.org/jira/browse/CASSANDRA-16?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12855002#action_12855002
]
Jeremy commented on CASSANDRA-16:
---------------------------------
When this is finally resolved, do you see it as a fix that mostly protects
against OOM cases, where a user still shouldn't intentionally design for very
large (ever-growing) CFs because of the performance penalties? Or is it a fix
that (given sufficient disk and a modest amount of memory) handles this case
just fine, so a user shouldn't have to worry about it anymore?
> Memory efficient compactions
> -----------------------------
>
> Key: CASSANDRA-16
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Environment: All
> Reporter: Sandeep Tata
> Fix For: 0.7
>
>
> The basic idea is to allow rows to get large enough that they don't have to
> fit in memory entirely, but can easily fit on a disk. The compaction
> algorithm today de-serializes the entire row in memory before writing out the
> compacted SSTable (see ColumnFamilyStore.doCompaction() and associated
> methods).
> The requirement is to have a compaction method with a lower memory
> requirement so we can support rows larger than available main memory. To
> re-use the old FB example, if we stored a user's inbox in a row, we'd want
> the inbox to grow bigger than memory so long as it fit on disk.
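To make the idea concrete, below is a minimal, purely illustrative sketch (not
Cassandra's actual compaction code) of merging one row's columns from several
input SSTables as already-sorted streams, so that memory use during compaction
is bounded by one pending column per input rather than by the full row size.
The Column record, compactRow(), and the sample data are all assumptions made
up for this example.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.function.Consumer;

// Illustrative sketch only: stream-merge the columns of one row from several
// SSTables, keeping at most one column per input in memory, instead of
// deserializing the entire row the way the existing doCompaction() path does.
// Column, Head, and compactRow() are hypothetical names, not Cassandra APIs.
public class StreamingRowCompactionSketch {

    // Minimal stand-in for a column: name, value, and write timestamp.
    record Column(String name, String value, long timestamp) {}

    // Pairs a pending column with the iterator (SSTable stream) it came from.
    private record Head(Column column, Iterator<Column> source) {}

    static void compactRow(List<Iterator<Column>> inputs, Consumer<Column> output) {
        // Heap ordered by column name so copies of the same column from
        // different SSTables surface together.
        PriorityQueue<Head> heap = new PriorityQueue<>(
                Comparator.comparing((Head h) -> h.column().name()));
        for (Iterator<Column> it : inputs) {
            if (it.hasNext()) heap.add(new Head(it.next(), it));
        }

        while (!heap.isEmpty()) {
            Head first = heap.poll();
            Column winner = first.column();
            refill(heap, first.source());

            // Reconcile duplicates of the same column: newest timestamp wins.
            while (!heap.isEmpty()
                    && heap.peek().column().name().equals(winner.name())) {
                Head dup = heap.poll();
                if (dup.column().timestamp() > winner.timestamp()) {
                    winner = dup.column();
                }
                refill(heap, dup.source());
            }

            // Emit the merged column immediately instead of buffering the row.
            output.accept(winner);
        }
    }

    private static void refill(PriorityQueue<Head> heap, Iterator<Column> source) {
        if (source.hasNext()) heap.add(new Head(source.next(), source));
    }

    public static void main(String[] args) {
        // Two "SSTables" holding overlapping fragments of the same row,
        // each already sorted by column name.
        List<Column> sstable1 = List.of(
                new Column("msg:001", "hello", 10),
                new Column("msg:002", "old body", 11));
        List<Column> sstable2 = List.of(
                new Column("msg:002", "edited body", 20),
                new Column("msg:003", "bye", 21));

        List<Iterator<Column>> inputs = new ArrayList<>();
        inputs.add(sstable1.iterator());
        inputs.add(sstable2.iterator());

        // "Write" each compacted column as soon as it is produced.
        compactRow(inputs, c -> System.out.println(c.name() + " -> " + c.value()));
    }
}
{code}

Because SSTables store a row's columns in sorted order, this k-way merge never
needs the whole row in memory, which is the property the inbox-bigger-than-RAM
example above is asking for.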