Thank you both!
Robert, I read through the bug -- it sounds like this behavior has been fixed
(or its impact reduced) in 2.1, but given that our data is pretty uniform
(with no overlap between rows/values), it doesn't look like we'll suffer
from it. At least, that's what I understood from the bug.
Moving to Leveled compaction resolved the same problem for us. As Robert
mentioned, use it carefully.
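For reference, the switch is a one-line schema change (the table name here is hypothetical):

```sql
-- Hypothetical table; switching from size-tiered to leveled compaction.
-- sstable_size_in_mb is the LCS target SSTable size (160 MB default).
ALTER TABLE audit_events
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 160};
```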
Size-tiered compaction requires 50% free disk space (according to the
DataStax documentation).
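The reasoning behind that guideline, as a quick sketch: in the worst case, size-tiered compaction rewrites every participating SSTable into a single output file before the inputs can be deleted, so the node temporarily needs free space roughly equal to the combined input size:

```python
def stcs_worst_case_free_space_gb(sstable_sizes_gb):
    # Worst case for size-tiered compaction: all input SSTables are
    # rewritten into one output file before the inputs can be deleted,
    # so free space up to their combined size is needed temporarily.
    return sum(sstable_sizes_gb)

# A node holding 900 GB in four similarly sized SSTables could need
# up to another ~900 GB free during the worst-case compaction --
# hence the ~50% free-disk guideline for size-tiered compaction.
print(stcs_worst_case_free_space_gb([225, 225, 225, 225]))  # 900
```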
Pavel
On Wed, Jul 9, 2014 at 8:39 PM, Robert Coli wrote:
On Wed, Jul 9, 2014 at 4:27 PM, Andrew wrote:
> What kind of overhead should I expect for compaction, in terms of size?
> In this use case, the primary use for compaction is more or less to clean
> up tombstones for expired TTLs.
>
Compaction can result in output files >100% of the input, if co
The problem here is the size and scope of the data: it's basically a primary
key based on the ID and the date, with several large pieces of information
associated with it. The main issues with the various key/value stores are a)
the inability to do range queries, and b) the size limitations.
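As a rough sketch of what that model looks like in CQL (all names hypothetical), a table keyed by the ID with the date as a clustering column supports per-ID range queries, and a table-level TTL leaves tombstone cleanup to compaction:

```sql
-- Hypothetical schema: partition by id, cluster by day so that
-- per-id date-range queries work; rows expire via TTL and the
-- resulting tombstones are only purged during compaction.
CREATE TABLE audit_events (
    id      uuid,
    day     timestamp,
    payload blob,
    PRIMARY KEY (id, day)
) WITH default_time_to_live = 2592000   -- 30 days (assumed)
  AND gc_grace_seconds = 864000;        -- Cassandra's 10-day default
```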
On Mon, Jul 7, 2014 at 9:52 AM, Redmumba wrote:
> Would adjusting the maximum sstables before a compaction is performed help
> this situation? I am currently using the default values provided by
> SizeTieredCompactionStrategy in C* 2.0.6. Or is there a better option for
> a continuous-write operation?
I am having an issue on multiple machines where it's simply filling up the
disk space during what I can only assume is a compaction. For example, the
average node cluster-wide is around 900GB according to DSE
OpsCenter--however, after coming in after the three day weekend, I noticed
that there wer