We started seeing this behavior before we even discovered that it was
possible to run manual compactions or cancel compactions by ID.
On Fri, Oct 13, 2017 at 5:58 PM, Jeff Jirsa wrote:
> Is it possible someone/something is running 'nodetool compact' explicitly?
> That would cause the behavior you're seeing.
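The commands being referred to look roughly like this (a usage sketch against a running node; the keyspace, table, and UUID are placeholders, and the `stop -id` form is only present in newer nodetool versions):

```shell
# Trigger a manual (major) compaction on one table -- placeholder names
nodetool compact mykeyspace mytable

# List currently running compactions and their ids
nodetool compactionstats

# Cancel a specific compaction by its id -- placeholder UUID
nodetool stop -id a7d1b130-b04c-11e7-bfc8-79870a3c4039
```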
Is it possible someone/something is running 'nodetool compact' explicitly?
That would cause the behavior you're seeing.
On Fri, Oct 13, 2017 at 4:24 PM, Bruce Tietjen <
bruce.tiet...@imatsolutions.com> wrote:
>
> We are new to Cassandra and have built a test cluster and loaded some data
> into
The default -- size tiered.
https://issues.apache.org/jira/browse/CASSANDRA-12979 mentioned
checkAvailableDiskSpace -- does this function compute a total that includes
all volumes on the system, rather than just the ones available to Cassandra?
There is also a system volume that has 2.3 T, so if it
What compaction strategy are you using? Leveled compaction or size-tiered
compaction?
On Fri, Oct 13, 2017 at 4:31 PM, Bruce Tietjen <
bruce.tiet...@imatsolutions.com> wrote:
> I hadn't noticed that it is now attempting two impossible compactions:
I hadn't noticed that it is now attempting two impossible compactions:

id                                    compaction type  keyspace       table  completed  total     unit   progress
a7d1b130-b04c-11e7-bfc8-79870a3c4039  Compaction       perfectsearch  cxml   1.73 TiB   5.04 TiB  bytes  34.36%
Can you paste the output of nodetool compactionstats?
What you’re describing should not happen. There’s a check that drops sstables
out of a compaction task if there isn’t enough available disk space, see
https://issues.apache.org/jira/browse/CASSANDRA-12979
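The idea behind that check can be sketched as follows (an illustration of the described behavior, not Cassandra's actual Java implementation; the function name and interface here are made up): while the estimated output of the compaction task exceeds the available space, the largest input sstable is dropped from the task and the estimate is recomputed.

```shell
# Sketch (not Cassandra's real code) of "drop sstables out of a compaction
# task until the estimated output fits the available disk space".
# Arguments: free bytes, then one size (in bytes) per input sstable.
reduce_scope() {
  free=$1; shift
  # Sort input sizes largest-first so the biggest sstables are dropped first.
  set -- $(printf '%s\n' "$@" | sort -rn)
  total=0
  for s in "$@"; do total=$((total + s)); done
  while [ $# -gt 0 ] && [ "$total" -gt "$free" ]; do
    total=$((total - $1))   # drop the largest remaining sstable
    shift
  done
  echo "$total"
}

# e.g. with 100 bytes free and sstables of 60, 50 and 30 bytes,
# the 60-byte sstable is dropped and the task compacts 80 bytes.
reduce_scope 100 60 50 30
```

Note that for this to work as intended, "free" has to be measured on the data directories Cassandra actually writes to, which is what the question above about counting all volumes is getting at.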
We are new to Cassandra and have built a test cluster and loaded some data
into the cluster.
We are seeing compaction behavior that seems to violate what we read about
its behavior.
Our cluster is configured as JBOD with three 3.6T disks. Those disks
currently have, respectively, the following
As far as I know, nodetool stopdaemon does a "kill -9".
Or did it change?
2017-10-12 23:49 GMT-03:00 Anshu Vajpayee :
> Why are you killing it when we have nodetool stopdaemon?
>
> On Fri, Oct 13, 2017 at 1:49 AM, Javier Canillas <
> javier.canil...@gmail.com>
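For comparison, the usual gentler alternative to a plain "kill -9" is to drain the node first and then send a normal SIGTERM (a common sequence shown as a sketch; the pid lookup is illustrative and assumes the standard daemon class name):

```shell
# Flush memtables and stop accepting client/streaming traffic first
nodetool drain

# Then a plain SIGTERM (not -9), so the JVM can shut down cleanly
kill "$(pgrep -f CassandraDaemon)"
```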
Another small update: at the same time, I see the number of pending tasks
stuck (in this case at 1847); restarting the node doesn't help, so I can't
really force the node to "digest" all those compactions. In the meantime,
the disk space used is already twice the average load I have on the other nodes.
I have been trying to add another node to the cluster (after upgrading to
3.0.15) and I just noticed through "nodetool netstats" that all nodes have
been streaming to the joining node approximately 1/3 of their SSTables,
basically their whole primary range (using RF=3).
Is this expected/normal?
I was
Hello cassandra folks.
So I want to ask how to migrate a keyspace in production to another, smaller
cluster without service downtime.
I was thinking that I don't want to use sstableloader and pay for the
compactions and streaming. I actually want to use the procedure where both
clusters