>
>
> - Heap size is set to 8GB
> - Using G1GC
> - I tried moving the memtable out of the heap. It helped but I still got
> an OOM last night
> - Concurrent compactors is set to 1 but it still happens, and I also tried
> setting the throughput between 16 and 128 with no change.
>
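(For reference, the quoted setup corresponds roughly to the entries below in
stock cassandra.yaml and cassandra-env.sh; this is a sketch of the settings
being described, not the poster's actual files.)

    # cassandra.yaml
    memtable_allocation_type: offheap_objects   # memtable moved off the heap
    concurrent_compactors: 1
    compaction_throughput_mb_per_sec: 16        # values from 16 to 128 were tried

    # cassandra-env.sh (G1 may instead be set in jvm.options, depending on version)
    MAX_HEAP_SIZE="8G"
    JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"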
That heap size is way too
Dear community members,
I have just upgraded my Cassandra from version 3.11.1 to 3.11.2. I kept my
previous configuration files: cassandra.yaml and cassandra-env.sh. However,
when I started the cassandra service, I couldn't connect via JMX (tried to
do it with a Java program, with JConsole and a
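(A hedged sanity check for this situation, assuming the stock cassandra-env.sh
variables survived the upgrade:)

    # cassandra-env.sh defaults to binding JMX to localhost only:
    #   JMX_PORT="7199"
    #   LOCAL_JMX=yes
    # so a quick connectivity test from the node itself is:
    nodetool -h 127.0.0.1 -p 7199 status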
On 04/05/2018 09:04 AM, Faraz Mateen wrote:
>
> For example, if the table is *data_main_bim_dn_10*, its data directory
> is named data_main_bim_dn_10-a73202c02bf311e8b5106b13f463f8b9. I created
> a new table with the same name through cqlsh. This resulted in the
> creation of another directory with
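(The hex suffix on those directories is the table's internal id; on 3.x it can
be looked up like this — the keyspace name is a placeholder:)

    cqlsh -e "SELECT id FROM system_schema.tables \
     WHERE keyspace_name='<your_keyspace>' AND table_name='data_main_bim_dn_10';"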
Yeah, they are pretty much unique but it's only a few requests per day so
hitting all the nodes would be fine for now.
2018-04-05 15:43 GMT+02:00 Evelyn Smith:
> Not sure if it differs for SASI Secondary Indexes but my understanding is
> it’s a bad idea to use high
Hi all,
I have been spending the last few days trying to move my C* cluster on
Gcloud (3 nodes, 700GB) into a DC/OS deployment. This, as you people might
know, was not trivial.
I have finally found a time-efficient way to do this migration (we
evaluated bulk loading and sstableloader,
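(For anyone comparing, a typical sstableloader invocation looks like the
sketch below; hosts and path are placeholders:)

    # stream the sstables under /path/to/<keyspace>/<table> to the target cluster
    sstableloader -d 10.0.0.1,10.0.0.2,10.0.0.3 /path/to/<keyspace>/<table>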
Hi, Evelyn!
I've found the following messages:
INFO RepairRunnable.java Starting repair command #41, repairing keyspace
XXX with repair options (parallelism: parallel, primary range: false,
incremental: false, job threads: 1, ColumnFamilies: [YYY], dataCenters: [],
hosts: [], # of ranges: 768)
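(Those options correspond roughly to an invocation like this; XXX and YYY are
the placeholders from the log line:)

    # parallelism: parallel -> -par; incremental: false -> -full;
    # primary range: false is the default (no -pr)
    nodetool repair -full -par XXX YYY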
Not sure if it differs for SASI Secondary Indexes but my understanding is it’s
a bad idea to use high cardinality columns for Secondary Indexes.
Not sure what your data model looks like but I’d assume UUID would have very
high cardinality.
If that’s the case it pretty much guarantees any query
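(To make the concern concrete, a hypothetical high-cardinality SASI index;
keyspace, table and column names are made up:)

    cqlsh -e "CREATE CUSTOM INDEX events_trace_idx ON myks.events (trace_id)
     USING 'org.apache.cassandra.index.sasi.SASIIndex';"
    # every distinct trace_id gets its own index entry, so a lookup cannot be
    # served from a small set of replicas and fans out across the cluster:
    cqlsh -e "SELECT * FROM myks.events
     WHERE trace_id = 123e4567-e89b-12d3-a456-426614174000;"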
Tried both (although with the biggest table) and the result is the same.
I stumbled upon this JIRA issue:
https://issues.apache.org/jira/browse/CASSANDRA-12662
Since the SASI indexes I use are only helping in debugging (for now) I
dropped them and it seems the tables get compacted now (at least
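(The cleanup amounts to something like this; the index name is a placeholder:)

    cqlsh -e "DROP INDEX myks.events_trace_idx;"
    nodetool compactionstats    # watch the pending count drain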
Oh and second, are you attempting a major compact while you have all those
pending compactions?
Try letting the cluster catch up on compactions. Having that many pending is
bad.
If you have a replication factor of 3 and quorum you could go node by node
and disable binary, raise concurrent
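(A sketch of that node-by-node routine with standard nodetool commands; newer
versions also have setconcurrentcompactors, if yours supports it:)

    nodetool disablebinary               # stop serving client traffic
    nodetool setcompactionthroughput 0   # unthrottle compaction
    nodetool compactionstats             # wait for pending to drain
    nodetool enablebinary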
Probably a dumb question but it’s good to clarify.
Are you compacting the whole keyspace or are you compacting tables one at a
time?
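(Concretely, the two variants; names are placeholders:)

    nodetool compact myks            # major-compacts every table in the keyspace
    nodetool compact myks mytable    # compacts a single table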
> On 5 Apr 2018, at 9:47 pm, Zsolt Pálmai wrote:
>
> Hi!
>
> I have a setup with 4 AWS nodes (m4.xlarge: 4 CPUs, 16 GB RAM, 1 TB SSD each)
>
It might not be what caused it here, but check your logs for anti-compactions.
> On 5 Apr 2018, at 8:35 pm, Dmitry Simonov wrote:
>
> Thank you!
> I'll check this out.
>
> 2018-04-05 15:00 GMT+05:00 Alexander Dejanovski
Hi!
I have a setup with 4 AWS nodes (m4.xlarge: 4 CPUs, 16 GB RAM, 1 TB SSD
each) and when running the nodetool compact command on any of the servers I
get an out-of-memory exception after a while.
- Before calling compact, I first did a repair, and before that there was a
bigger update on a lot of
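(While the major compaction runs, these standard commands show where it
stands; a sketch of what I would watch:)

    nodetool compactionstats -H    # what is compacting and how much is pending
    nodetool tpstats               # blocked/pending CompactionExecutor tasks
    nodetool info | grep -i heap   # heap usage on the node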
Thank you!
I'll check this out.
2018-04-05 15:00 GMT+05:00 Alexander Dejanovski:
> 40 pending compactions is pretty high and you should have way less than
> that most of the time, otherwise it means that compaction is not keeping up
> with your write rate.
>
> If you
40 pending compactions is pretty high and you should have way less than
that most of the time, otherwise it means that compaction is not keeping up
with your write rate.
If you indeed have SSDs for data storage, increase your compaction
throughput to 100 or 200 (depending on how the CPUs handle
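(Raising it on a live node, plus the cassandra.yaml equivalent that persists
across restarts; the value follows the advice above:)

    nodetool setcompactionthroughput 200
    # cassandra.yaml:
    #   compaction_throughput_mb_per_sec: 200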
Hi, Alexander!
SizeTieredCompactionStrategy is used for all CFs in the problematic keyspace.
Current compaction throughput is 16 MB/s (default value).
We always have about 40 pending and 2 active "CompactionExecutor" tasks in
"tpstats".
Mostly because of another (bigger) keyspace in this cluster.
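(Those figures were presumably gathered with something like:)

    nodetool tpstats | grep CompactionExecutor
    nodetool getcompactionthroughput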
Hi Dmitry,
could you tell us which compaction strategy that table is currently using?
Also, what is the compaction max throughput, and is auto-compaction
correctly enabled on that node?
Did you recently run repair?
Thanks,
On Thu, Apr 5, 2018 at 10:53 AM Dmitry Simonov
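(One way to answer those three questions on a 2.2-era node; keyspace and
table names are placeholders:)

    cqlsh -e "DESCRIBE TABLE myks.mytable;"     # shows the compaction strategy
    nodetool getcompactionthroughput            # current max throughput
    nodetool enableautocompaction myks mytable  # (re-)enable auto-compaction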
Hello!
Could you please give some ideas on the following problem?
We have a cluster with 3 nodes, running Cassandra 2.2.11.
We've recently discovered high CPU usage on one cluster node; after some
investigation we found that the number of SSTables for one CF on it is very
big: 5800 SSTables, on
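(The per-CF SSTable count can be confirmed like this; on 2.2 the command is
still called cfstats, and the names are placeholders:)

    nodetool cfstats myks.mytable | grep "SSTable count"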