I believe that's the decompressed data size, so if your data is heavily
compressed it might be perfectly logical for you to be doing such large
compactions. Worth checking what SSTables are included in the compaction.
If you've been running STCS for a while, you probably just have a few very
large SSTables.
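If it helps, one way to compare on-disk size against the uncompressed size
Cassandra reports (the keyspace/table and data paths below are just
placeholders for yours):

    # Compression ratio as Cassandra sees it for the table
    $ nodetool tablestats perfectsearch.cxml | grep -i 'compression ratio'

    # Per-SSTable view; point it at the Data.db files of the table in question
    $ sstablemetadata /var/lib/cassandra/data/perfectsearch/cxml-*/mc-*-big-Data.db | grep -i 'compression ratio'

A ratio well below 1.0 would mean the "total" shown for a compaction can
legitimately be far larger than what ends up on disk.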
We started seeing this behavior before we even discovered that it was
possible to run manual compactions or cancel compactions by ID.
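(For reference, cancelling by ID looks roughly like this; the UUID is the one
from the compactionstats output later in the thread:)

    # Stop a single running compaction by its ID
    $ nodetool stop -id a7d1b130-b04c-11e7-bfc8-79870a3c4039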
On Fri, Oct 13, 2017 at 5:58 PM, Jeff Jirsa wrote:
> Is it possible someone/something is running 'nodetool compact' explicitly?
> That would cause the behavior you're seeing.
Is it possible someone/something is running 'nodetool compact' explicitly?
That would cause the behavior you're seeing.
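A couple of things worth checking, assuming standard cron locations (adjust
for whatever scheduler you actually use):

    # Look for scheduled manual compactions on each node
    $ grep -r 'nodetool compact' /etc/cron* /var/spool/cron 2>/dev/null

    # Recent history can also hint at unusually large merges
    $ nodetool compactionhistory | head -20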
On Fri, Oct 13, 2017 at 4:24 PM, Bruce Tietjen <bruce.tiet...@imatsolutions.com> wrote:
>
> We are new to Cassandra and have built a test cluster and loaded some data
> into the cluster.
The default -- size tiered.
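You can confirm it per table from cqlsh (using the keyspace/table from your
compactionstats output as an example):

    $ cqlsh -e "DESCRIBE TABLE perfectsearch.cxml" | grep -i compaction

which should show compaction = {'class': '...SizeTieredCompactionStrategy', ...}
if it's still on the default.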
https://issues.apache.org/jira/browse/CASSANDRA-12979 mentioned
checkAvailableDiskSpace -- does this function compute a total that includes
all volumes on the system, rather than just the ones available to Cassandra?
There is also a system volume with 2.3 T, so if that were counted toward the
available space, it could explain why the check lets these compactions through.
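A quick way to see which volumes the data directories actually sit on (the
paths below are assumptions; compare against data_file_directories in
cassandra.yaml):

    # Which filesystem backs each data directory?
    $ df -h /var/lib/cassandra/data*

    # Versus everything the OS sees, including the system volume
    $ df -h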
What compaction strategy are you using? Leveled compaction or size-tiered
compaction?
On Fri, Oct 13, 2017 at 4:31 PM, Bruce Tietjen <bruce.tiet...@imatsolutions.com> wrote:
> I hadn't noticed that it is now attempting two impossible compactions:
I hadn't noticed that it is now attempting two impossible compactions:
id                                     compaction type   keyspace        table   completed   total      unit    progress
a7d1b130-b04c-11e7-bfc8-79870a3c4039   Compaction        perfectsearch   cxml    1.73 TiB    5.04 TiB   bytes   34.36%
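(For scale: the 5.04 TiB total is uncompressed. If the data compressed at,
say, a 0.3 ratio, this compaction would write roughly 5.04 × 0.3 ≈ 1.5 TiB on
disk, which fits on a 3.6 T disk; at a ratio near 1.0 it clearly would not.
The 0.3 is only an illustrative figure.)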
Can you paste the output of nodetool compactionstats?
What you’re describing should not happen. There’s a check that drops sstables
out of a compaction task if there isn't enough available disk space; see
https://issues.apache.org/jira/browse/CASSANDRA-12979
We are new to Cassandra and have built a test cluster and loaded some data
into the cluster.
We are seeing compaction behavior that seems to contradict what we have read
about how compaction is supposed to work.
Our cluster is configured as JBOD with three 3.6T disks. Those disks
currently have the following